Note Published: Registration and Discovery of Multimodal Modality Components in Multimodal Systems: Use Cases and Requirements


The Multimodal Interaction Working Group has published a Group Note of Registration and Discovery of Multimodal Modality Components in Multimodal Systems: Use Cases and Requirements. Users of mobile phones, personal computers, tablets and other electronic devices increasingly interact with their devices in a variety of ways: touch screen, voice, stylus, keypad, and more. Today, users, vendors, operators and broadcasters can produce and use many kinds of media and devices that support multiple modes of input or output. Tools for authoring, editing and distributing media are well documented for application developers, but there is a lack of powerful tools or practices for richer integration and semantic synchronization of all these media. To the best of our knowledge, there is no standardized way to build a web application that can dynamically combine and control discovered modalities by querying a registry based on user-experience data and modality states. This document describes the design requirements that the Multimodal Architecture and Interfaces specification needs to cover in order to address this problem. Learn more about the Multimodal Interaction Activity.
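As a rough illustration of the gap the Note describes, the sketch below shows what querying such a modality registry might look like from a web application. All names here (ModalityDescriptor, ModalityRegistry, discover, chooseInput) are hypothetical and invented for this example; the Note defines use cases and requirements, not an API.

```typescript
// Hypothetical sketch only: the Note specifies requirements, not an API.
// Every type and function name below is invented for illustration.

/** Descriptor a modality component might publish when it registers. */
interface ModalityDescriptor {
  id: string;                                   // e.g. "speech-recognizer-01"
  mode: "voice" | "touch" | "stylus" | "keypad";
  direction: "input" | "output";
  state: "idle" | "busy" | "unavailable";       // current modality state
}

/** A registry the application could query for available modalities. */
interface ModalityRegistry {
  discover(query: Partial<ModalityDescriptor>): Promise<ModalityDescriptor[]>;
}

/** Pick an idle voice-input component, falling back to touch input. */
async function chooseInput(
  registry: ModalityRegistry
): Promise<ModalityDescriptor | undefined> {
  const voice = await registry.discover({
    mode: "voice",
    direction: "input",
    state: "idle",
  });
  if (voice.length > 0) return voice[0];

  const touch = await registry.discover({
    mode: "touch",
    direction: "input",
    state: "idle",
  });
  return touch[0]; // may be undefined if no modality is available
}
```

In this sketch the application filters components by mode, direction and state at discovery time; standardizing that kind of query, and the descriptors behind it, is the sort of requirement the Note puts in scope for the Multimodal Architecture and Interfaces specification.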
