[Elsnet-list] Deadline extension and Final CFP: Multimodal Corpora 2014
D.K.J.Heylen at utwente.nl
Fri Feb 21 16:44:40 CET 2014
Please find below the final call for the 10th Workshop on Multimodal Corpora (MMC2014), which is collocated with LREC in Reykjavik.
Due to several technical problems with the submission system, which have also led to miscommunication regarding the submission dates, we have extended the submission deadline. We ask, however, that prospective authors submit a title and an author list as soon as possible; revisions will be permitted until February 28th.
The submission site, now fully functional, is http://www.softconf.com/lrec2014/MMC2014/
Hope to see you there,
With best regards,
*** Final Call for Papers DEADLINE EXTENDED! ***
LREC 2014 Workshop
10th Workshop on Multimodal Corpora (MMC2014):
Combining applied and basic research targets
*** 27 May 2014, Harpa Conference Centre, Reykjavik, Iceland ***
The creation of a multimodal corpus typically involves the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, etc. An increasing number of research areas have moved, or are in the process of moving, from focused single-modality research to full-fledged multimodality research, and multimodal corpora are becoming a core research asset and an opportunity for interdisciplinary exchange of ideas, concepts and data.
We are pleased to announce that in 2014, the 10th Workshop on Multimodal Corpora will return home and again be collocated with LREC.
As always, we aim for a wide cross-section of the field, with contributions ranging from collection efforts, coding, validation and analysis methods, to tools and applications of multimodal corpora. Given that LREC this year emphasizes the use of corpora to solve language technology problems and develop useful applications and services, we would like this workshop also to focus on the usefulness of multimodal corpora to applied research as well as basic research. Many of the unimodal speech corpora collected over the past decades have served a double purpose: on the one hand, they have enlightened our view on the basic research question of how speech works and how it is used; on the other hand, they have forwarded the applied research goal of developing better speech technology applications. This reflects the dual nature of speech technology, where funding demands often require researchers to follow research agendas that target applied and basic research goals in parallel. Multimodal corpora are potentially more complex than unimodal corpora, and their design poses an even greater challenge. Yet the benefits to be gained from designing with a view to both applied and basic research remain equally desirable. Against this background, the theme for this instalment of Multimodal Corpora is how multimodal corpora can be designed to serve this double purpose. Success stories of corpora that have provided insights into both applied and basic research are welcome, as are design discussions, methods and tools that help towards this dual goal.
This workshop follows similar events held at LREC 2000, 2002, 2004, 2006, 2008 and 2010, ICMI 2011, LREC 2012, and IVA 2013. All workshops are documented at www.multimodal-corpora.org and complemented by a special issue of the Journal of Language Resources and Evaluation published in 2008, a state-of-the-art book published by Springer in 2009, and a special issue of the Journal of Multimodal User Interfaces currently in press. There is an increasing interest in multimodal communication and multimodal corpora, as evidenced by European Networks of Excellence and integrated projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet. Furthermore, the success of recent conferences and workshops dedicated to multimodal communication (ICMI-MLMI, IVA, Gesture, PIT, the Nordic Symposium on Multimodal Communication, Embodied Language Processing) and the creation of the Journal of Multimodal User Interfaces also testify to the growing interest in this area, and to the general need for data on multimodal behaviours.
The LREC'2014 workshop on multimodal corpora will feature a special session on the design and use of multimodal corpora for applications and multimodal technology development.
Other topics to be addressed include, but are not limited to:
• Multimodal corpus collection activities (e.g. direction-giving dialogues, emotional behaviour, human-avatar and human-robot interaction, etc.) and descriptions of existing multimodal resources
• Relations between modalities in human-human interaction and in human-computer interaction
• Multimodal interaction in specific scenarios, e.g. group interaction in meetings or games
• Coding schemes for the annotation of multimodal corpora
• Evaluation and validation of multimodal annotations
• Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora
• Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization)
• Collaborative coding
• Metadata descriptions of multimodal corpora
• Automatic annotation, based e.g. on motion capture or image processing, and its integration with manual annotations
• Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities either in input (Virtual Reality, motion capture, etc.) or in output (virtual characters)
• Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze, gesture, facial expressions)
• Machine learning applied to multimodal data
• Multimodal dialogue modelling
Important dates
• Deadline for revisions of submitted papers (full paper): 28 February
• Notification of acceptance: 10 March
• Final version of accepted paper: 17 March
• Final program and proceedings: 6 April
• Workshop: 27 May
The workshop will consist primarily of paper presentations and discussion/working sessions. Submissions should be 4 pages long, must be in English, and follow the LREC submission guidelines. LREC’s author toolkit is available at http://lrec2014.lrec-conf.org/en/submission/authors-kit/
Demonstrations of multimodal corpora and related tools are encouraged as well (a demonstration outline of 2 pages can be submitted).
Submissions are made through the START V2 conference manager: http://www.softconf.com/lrec2014/MMC2014/
Time schedule and registration fee
The workshop will consist of a morning session and an afternoon session. There will be time for collective discussions.
The registration fee is specified on the LREC 2014 website (http://lrec2014.lrec-conf.org/en/registration/registration-fees/).
Organisers
Jens Edlund, KTH Royal Institute of Technology, Sweden
Dirk Heylen, University of Twente, The Netherlands
Patrizia Paggio, University of Copenhagen, Denmark/University of Malta, Malta