[Elsnet-list] 2nd CFP: LREC 2010 Workshop on Multimodal Corpora +++ Extended Deadline

Patrizia Paggio patrizia at cst.dk
Thu Feb 4 13:58:59 CET 2010

Apologies for multiple postings!


Patrizia Paggio

Senior Researcher
University of Copenhagen
Center for Sprogteknologi (CST)
Njalsgade 140-142, DK-2300 CPH S
phone: + 45 35329072
fax:   + 45 35329089
email: paggio at hum.ku.dk
www: cst.dk/patrizia

                     *** 2nd Call for Papers ***

                        LREC 2010 Workshop on
 Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality

                      *** 18 May 2010, Malta ***


          +++ EXTENDED SUBMISSION DEADLINE: 19 Feb 2010 +++

A "Multimodal Corpus" involves the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, etc. As many research areas are moving from focused but single modality research to fully-fledged multimodality research, multimodal corpora are becoming a core research asset and an opportunity for interdisciplinary exchange of ideas, concepts and data. 

This workshop follows similar events held at LREC 2000, 2002, 2004, 2006 and 2008. There is increasing interest in multimodal communication and multimodal corpora, as evidenced by European Networks of Excellence and integrated projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet. Furthermore, the success of recent conferences and workshops dedicated to multimodal communication (ICMI-MLMI, IVA, Gesture, PIT, the Nordic Symposium on Multimodal Communication, Embodied Language Processing) and the creation of the Journal on Multimodal User Interfaces also testify to the growing interest in this area and to the general need for data on multimodal behaviours.

The 2010 full-day workshop is intended to result in a significant follow-up publication, similar to previous post-workshop publications such as the 2008 special issue of the journal Language Resources and Evaluation and the 2009 state-of-the-art book published by Springer.


In 2010, we are aiming for a wide cross-section of the field, with contributions on collection efforts, coding, validation and analysis methods, as well as actual tools and applications of multimodal corpora. We particularly want to emphasize that significant advances in capture technology now make highly accurate data available to the broader research community, for example the tracking of face, gaze, hands and body, and the recording of articulated full-body motion using motion capture. These data are much more accurate and complete than the simple video recordings traditionally used in the field and will therefore have a lasting impact on multimodality research. At the same time, the richness of the signals and the complexity of the recording process urgently call for an exchange of state-of-the-art information on recording and coding practices, new visualization and coding tools, and advances in the automatic coding and analysis of corpora.


This LREC 2010 workshop on multimodal corpora will feature a special session on databases built with motion capture, trackers, inertial sensors, biometric devices and image processing. Other topics to be addressed include, but are not limited to:

     * Multimodal corpus collection activities (e.g. direction-giving dialogues, emotional behaviour, human-avatar interaction, human-robot interaction, etc.) and descriptions of existing multimodal resources

     * Relations between modalities in natural (human) interaction and in human-computer interaction

     * Multimodal interaction in specific scenarios, e.g. group interaction in meetings

     * Coding schemes for the annotation of multimodal corpora

     * Evaluation and validation of multimodal annotations

     * Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora

     * Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization)

     * Collaborative coding

     * Metadata descriptions of multimodal corpora

     * Automatic annotation, based e.g. on motion capture or image processing, and its integration with manual annotations

     * Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities either in input (Virtual Reality, motion capture, etc.) or in output (virtual characters)

     * Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze, gesture, facial expressions)

     * Machine learning applied to multimodal data

     * Multimodal dialogue modelling


Important dates:

* Deadline for paper submission:    19 February 2010
* Notification of acceptance:       10 March 2010
* Final version of accepted paper:  19 March 2010
* Final program:                    21 March 2010
* Final proceedings:                28 March 2010
* Workshop:                         18 May 2010


The workshop will consist primarily of paper presentations and discussion/working sessions. Submissions should be 4 pages long, must be in English, and must follow the submission guidelines available at http://multimodal-corpora.org/mmc10.html

Submit your paper here: https://www.softconf.com/lrec2010/MMC2010

Demonstrations of multimodal corpora and related tools are also encouraged (a demonstration outline of 2 pages can be submitted).


When submitting a paper through the START page, authors will be asked to provide relevant information about the resources that were used for the work described in their paper or that are an outcome of their research. For further information on this new initiative, please refer to


Organising Committee:

Michael Kipp, DFKI, Germany
Jean-Claude Martin, LIMSI-CNRS, France
Patrizia Paggio, University of Copenhagen, Denmark
Dirk Heylen, University of Twente, The Netherlands
