[Elsnet-list] Call for Participation: Workshop on W3C's Multimodal
Architecture and Interfaces
ashimura at w3.org
Mon Sep 3 22:26:36 CEST 2007
W3C is holding a Workshop on Multimodal Architecture and Interfaces.
The Workshop will be held at Keio University in Fujisawa, Japan.
The Call for Participation is available at:
Important dates and deadlines for this Workshop are:
Workshop dates: 16 and 17 November 2007
Position papers due: 5 October 2007
Final agenda: 20 October 2007
Registration closes: 3 November 2007
Registration details and information about expected audience are in
the Call for Participation.
Please note that:
- There will be a limit of 30 participants.
- Attendance is open to everyone, including non-W3C Members, but each
  organization or individual wishing to participate must submit a
  position paper.
- To ensure maximum diversity among participants, only two
participants may attend per organization.
- There is no registration fee.
The Workshop will be chaired by Deborah Dahl and Kazuyuki Ashimura.
Scope of the Workshop
The scope of this workshop is restricted in order to make the best use
of participants' time. In general, discussion at the workshop and in
the position papers should stay focused on the workshop goal: identify
and prioritize requirements for extensions and additions to the MMI
Architecture that will improve the use of the MMI Architecture to
better support speech, GUI and Ink interfaces on multimodal
devices. Descriptions of new requirements with usage scenarios and
clear explanations of the problems to be solved are the top priority
for the workshop, while examples of MMI Architecture syntax extensions
are of secondary priority.
The goal of the workshop is to identify and prioritize requirements
for changes, extensions and additions to the MMI Architecture to
better support speech, GUI, Ink and other Modality Components.
Attendees SHOULD be familiar with the MMI Architecture. The main
focus of the workshop is requirements for the interfaces between the
Runtime Framework and various Modality Components (e.g., voice, pen
and ink) within the MMI Architecture. Specifically, we ask the
participants (browser vendors, device vendors, application vendors,
etc.) to clarify:
- How can specific modality components, e.g., ink and voice, be
  integrated into the MMI Architecture?
- What are the limitations of the MMI Architecture?
- What should be done in the MMI Architecture to enable applications
to adapt to different modality combinations?
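To make the interface in question concrete, the sketch below models the
kind of life-cycle event exchange the MMI Architecture describes between
a Runtime Framework and its Modality Components (a StartRequest answered
by a StartResponse). This is a minimal illustration, not code from the
specification: all class names, fields and the dispatch logic here are
hypothetical, loosely patterned on the spec's life-cycle event names.

```python
# Hypothetical sketch of an MMI-style life-cycle event exchange between
# a Runtime Framework and registered Modality Components. Names are
# illustrative, not taken from the W3C specification.
from dataclasses import dataclass, field


@dataclass
class LifeCycleEvent:
    name: str          # e.g. "StartRequest", "StartResponse"
    context: str       # context identifier tying the modalities together
    source: str        # sender of the event
    target: str        # intended recipient
    data: dict = field(default_factory=dict)


class ModalityComponent:
    """Illustrative modality component (e.g., voice or ink)."""

    def __init__(self, name: str):
        self.name = name

    def handle(self, event: LifeCycleEvent) -> LifeCycleEvent:
        # Answer a StartRequest with a successful StartResponse.
        return LifeCycleEvent("StartResponse", event.context,
                              source=self.name, target=event.source,
                              data={"status": "success"})


class RuntimeFramework:
    """Illustrative framework routing events to registered components."""

    def __init__(self):
        self.components: dict[str, ModalityComponent] = {}

    def register(self, component: ModalityComponent) -> None:
        self.components[component.name] = component

    def start(self, target: str, context: str) -> LifeCycleEvent:
        request = LifeCycleEvent("StartRequest", context,
                                 source="framework", target=target)
        return self.components[target].handle(request)


framework = RuntimeFramework()
framework.register(ModalityComponent("voice"))
framework.register(ModalityComponent("ink"))
response = framework.start("voice", context="ctx-1")
print(response.name, response.data["status"])
```

Questions like those above amount to asking where this simple
request/response picture breaks down, e.g. when an application must
renegotiate which modality combinations are available at run time.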
The W3C contact is Kazuyuki Ashimura.
email: ashimura at w3.org
Kazuyuki Ashimura / W3C Multimodal & Voice Activity Lead
mailto: ashimura at w3.org
voice: +81.466.49.1170 / fax: +81.466.49.1171