[Elsnet-list] [CFP] ICMI Workshop on Multimodal Corpora and Machine Learning

Dirk Heylen heylen at ewi.utwente.nl
Wed Jun 1 11:26:13 CEST 2011


ICMI WORKSHOP ON MULTIMODAL CORPORA FOR MACHINE LEARNING
Taking Stock and Roadmapping the Future
Alicante, Spain, November 18, 2011

Organizers: Dirk Heylen, Patrizia Paggio, Michael Kipp

Scope 
A multimodal corpus involves the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression and body posture. As many research areas move from focused single-modality research to fully fledged multimodality research, multimodal corpora are becoming a core research asset and an opportunity for interdisciplinary exchange of ideas, concepts and data. The number of publicly available multimodal corpora is growing steadily, as is the interest in studying multimodal communication by machines and through machines. This does not mean, however, that it is becoming easier to find an appropriate corpus for a given task: constructing a corpus consumes substantial resources, in both time and money, and most corpora are built for a specific application in a specific project.

This workshop follows similar events held at LREC 00, 02, 04, 06, 08 and 10. This year's special topic is taking stock and roadmapping the future. The workshop addresses producers and users of multimodal corpora - the ICMI community in particular - and brings them together to take stock of the current state of affairs in the use and production of multimodal corpora (in particular for machine learning) and to identify needs and requirements for improvements and for new initiatives. Questions include: What are the major limitations of current corpora? What are best practices? What are the major obstacles to progress and how can they be overcome? How do technologies that aid in constructing corpora (annotation software, capturing hardware and software) need to be improved? Are there problems with the dissemination and exploitation of corpora and, if so, how can they be solved?

Other topics to be addressed include but are not limited to:
* Machine learning applied to multimodal data
* Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora, including crowd-sourcing
* Multimodal corpus collection activities (e.g. emotional behaviour, human-avatar interaction, human-robot interaction, etc.) and descriptions of existing multimodal resources
* Relations between modalities in natural (human) interaction and in human-computer interaction
* Multimodal interaction in specific scenarios, e.g. group interaction in meetings
* Coding schemes for the annotation of multimodal corpora
* Collaborative coding
* Evaluation and validation of multimodal annotations
* Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization) 
* Metadata descriptions of multimodal corpora
* Automatic annotation, based e.g. on motion capture or image processing, and the integration with manual annotations
* Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities either in input (Virtual Reality, motion capture, etc.) or in output (virtual characters)
* Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze, gesture, facial expressions)

Important dates:
Submissions due: July 8, 2011
Notification of Acceptance: August 1, 2011
Camera ready paper due: August 28, 2011
Workshop: November 18, 2011

Submissions
Submissions should be sent to: heylen at ewi.utwente.nl
Papers should be 4 to 6 pages long.
The paper format is the same as for ICMI. The workshop papers will be distributed on a USB stick. The workshop is planned to result in a follow-up publication.

Workshop website: http://www.multimodal-corpora.org/
ICMI website: http://www.acm.org/icmi/2011/


