[Elsnet-list] Call for Participation SEMEVAL 2015 Task 2 -- Semantic Textual Similarity
e.agirre at ehu.es
Wed Aug 13 02:08:43 CEST 2014
[Apologies for duplication]
Call for Participation SEMEVAL 2015 Task 2 -- Semantic Textual Similarity
Semantic textual similarity (STS) has received an increasing amount of
attention in recent years, culminating in the SemEval/*SEM tasks
organized in 2012, 2013, and 2014, which brought together more than 60
participating teams. Please see http://ixa2.si.ehu.es/stswiki/ for
details on the previous tasks.
Given two sentences of text, s1 and s2, the systems participating in
this task should compute how similar s1 and s2 are, returning a
similarity score and an optional confidence score. Both the annotations
and the system outputs use a scale from 0 (no relation) to 5 (semantic
equivalence) to indicate the similarity between the two sentences.
Participating systems will be evaluated with the metric traditionally
employed in the evaluation of STS systems, also used in the previous
SemEval/*SEM STS evaluations: the mean Pearson correlation between the
system output and the gold-standard annotations.
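As an illustration, the Pearson correlation underlying the evaluation can be computed as in the minimal sketch below; the gold and system scores shown are invented for this example and are not drawn from the task data:

```python
# Minimal sketch of the evaluation metric: Pearson correlation between
# system similarity scores and gold-standard annotations on a 0-5 scale.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical gold annotations and system output for five sentence pairs.
gold = [5.0, 4.2, 3.0, 1.5, 0.0]
system = [4.8, 4.0, 2.5, 1.0, 0.2]
print(round(pearson(gold, system), 3))
```

A system whose scores track the gold annotations closely, even on a shifted scale, still attains a high correlation, since Pearson's r is invariant to linear rescaling of the scores.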
In 2015 we will continue to evaluate STS systems on the following subtasks:
- *English STS*, with sentence pairs drawn from news headlines, image
captions, student answers, answers to questions in public forums, and
sentences expressing committed belief.
- *Spanish STS*, with sentence pairs extracted from encyclopedic content
and newswire, and text snippet pairs obtained from news headlines.
- *NEW* for 2015, we have devised a *pilot subtask on interpretable STS*.
With this pilot task we want to explore whether participating systems
are able to explain WHY they think the two sentences are related /
unrelated, adding an explanatory layer to the similarity score. As a
first step in this direction, participating systems will need to *align
the segments* in one sentence of the pair to the segments in the other
sentence, describing what kind of *relation* holds between each pair of
aligned segments. This pilot subtask will provide its own trial and
training data.
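To make the idea concrete, here is a purely hypothetical illustration of a segment alignment; the sentences, relation labels, and scores are invented for this sketch and do not reflect the official annotation format of the pilot subtask:

```python
# Hypothetical illustration of interpretable STS: each alignment links
# a segment from s1 to a segment from s2 with a relation label (invented
# names, not the official tag set) and a 0-5 similarity score.
s1 = "A man is playing a guitar."
s2 = "A person plays an acoustic guitar."

alignments = [
    # (segment from s1, segment from s2, relation, score)
    ("A man", "A person", "SIMILAR", 4),
    ("is playing", "plays", "EQUIVALENT", 5),
    ("a guitar", "an acoustic guitar", "MORE-SPECIFIC", 4),
]

for seg1, seg2, relation, score in alignments:
    print(f"{seg1!r} <-> {seg2!r} [{relation}, score={score}]")
```

The point is that the overall sentence similarity is decomposed into per-segment judgments, each carrying its own relation type and score, which together explain the sentence-level verdict.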
Please join the mailing list for updates at
http://groups.google.com/group/STS-semeval. Check out the task's webpage
at http://alt.qcri.org/semeval2015/task2/ for more details.
Important dates:
Evaluation start: December 5, 2014 [updated]
Evaluation end: December 20, 2014 [updated]
Paper submission due: January 30, 2015
Paper reviews due: February 28, 2015
Camera ready due: March 30, 2015
SemEval workshop: Summer 2015
Organizers:
* STS English: Eneko Agirre, Daniel Cer, Mona Diab, Aitor
Gonzalez-Agirre, Weiwei Guo, and German Rigau.
* STS Spanish: Carmen Banea, Claire Cardie, Rada Mihalcea, and Janyce Wiebe.
* STS pilot on interpretability and segment alignment: Eneko Agirre,
Aitor Gonzalez-Agirre, Iñigo Lopez-Gazpio, Montse Maritxalar, and German
Rigau.
References:
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab,
Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and
Janyce Wiebe. SemEval-2014 Task 10: Multilingual Semantic Textual
Similarity. Proceedings of SemEval 2014.
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei
Guo. *SEM 2013 shared task: Semantic Textual Similarity. Proceedings of
*SEM 2013. [pdf <http://aclweb.org/anthology//S/S13/S13-1004.pdf>]
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre. SemEval-2012
Task 6: A Pilot on Semantic Textual Similarity. Proceedings of SemEval
2012. [pdf <http://aclweb.org/anthology-new/S/S12/S12-1051.pdf>]