[Elsnet-list] CFP: ICMI WORKSHOP ON MULTIMODAL CORPORA FOR MACHINE LEARNING

Patrizia Paggio paggio at hum.ku.dk
Mon Jun 6 10:44:59 CEST 2011


[apologies for multiple copies]

ICMI WORKSHOP ON MULTIMODAL CORPORA FOR MACHINE LEARNING
Taking Stock and Roadmapping the Future
Alicante, Spain, November 18, 2011

Organizers Dirk Heylen, Patrizia Paggio, Michael Kipp

Scope
A multimodal corpus involves the recording, annotation and analysis of 
several communication modalities such as speech, hand gestures, facial 
expressions, body posture, etc. As many research areas are moving from 
focused but single modality research to fully-fledged multimodality 
research, multimodal corpora are becoming a core research asset and an 
opportunity for an interdisciplinary exchange of ideas, concepts and 
data. The number of publicly available multimodal corpora is constantly 
growing as is the interest in studying multimodal communication by 
machines and through machines. However, this does not make it any 
easier to find an appropriate corpus for a given task. Corpus 
construction consumes considerable resources (both time and money), 
and most corpora are built for a specific application in a specific 
project.

This workshop follows similar events held at LREC 00, 02, 04, 06, 08 and 
10. This year's special topic is taking stock 
and roadmapping the future. The workshop addresses producers and users 
of multimodal corpora - the ICMI community in particular - and brings 
them together to take stock of the current state of affairs in the use 
and production of multimodal corpora (in particular for machine 
learning) and to identify needs and requirements for improvements and 
for new initiatives. Questions include: What are the major limitations 
of current corpora? What are best practices? What are the major 
obstacles to progress, and how can they be circumvented? How do 
technologies that aid in constructing corpora (annotation software, 
capturing hardware and software) need to be improved? Are there problems 
with the dissemination and exploitation of corpora and if so, how can 
they be solved?

Other topics to be addressed include but are not limited to:
* Machine learning applied to multimodal data
* Methods, tools, and best practices for the acquisition, creation, 
management, access, distribution, and use of multimedia and multimodal 
corpora, including crowd-sourcing
* Multimodal corpus collection activities (e.g. emotional behaviour, 
human-avatar interaction, human-robot interaction, etc.) and 
descriptions of existing multimodal resources
* Relations between modalities in natural (human) interaction and in 
human-computer interaction
* Multimodal interaction in specific scenarios, e.g. group interaction 
in meetings
* Coding schemes for the annotation of multimodal corpora
* Collaborative coding
* Evaluation and validation of multimodal annotations
* Interoperability between multimodal annotation tools (exchange 
formats, conversion tools, standardization)
* Metadata descriptions of multimodal corpora
* Automatic annotation, based e.g. on motion capture or image 
processing, and the integration with manual annotations
* Corpus-based design of multimodal and multimedia systems, in 
particular systems that involve human-like modalities either in input 
(Virtual Reality, motion capture, etc.) or in output (virtual characters)
* Automated multimodal fusion and/or generation (e.g., coordinated 
speech, gaze, gesture, facial expressions)

Important dates:
Submissions due: July 8, 2011
Notification of Acceptance: August 1, 2011
Camera ready paper due: August 28, 2011
Workshop: November 18, 2011

Submissions
Submissions should be sent to: heylen at ewi.utwente.nl
Papers should be 4 to 6 pages long, in the same format as ICMI papers. 
The workshop papers will be distributed on a USB stick. The workshop is 
planned to result in a follow-up publication.

Workshop website: http://www.multimodal-corpora.org/
ICMI website: http://www.acm.org/icmi/2011/



-- 
_________________________________

Patrizia Paggio

Senior Researcher
University of Copenhagen
Center for Sprogteknologi (CST)
Njalsgade 140-142, DK-2300 CPH S
phone: + 45 35329072
fax:   + 45 35329089
email: paggio at hum.ku.dk
www: cst.dk/patrizia
__________________________________
