Copyright © 2006 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document describes a loosely coupled architecture for multimodal user interfaces, which allows for co-resident and distributed implementations, and focuses on the role of markup and scripting, and the use of well defined interfaces between its constituents.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This document is the third Public Working Draft for review by W3C Members and other interested parties, and has been developed by the Multimodal Interaction Working Group (W3C Members Only) of the W3C Multimodal Interaction Activity. The main difference from the second draft is a more detailed specification of the events sent between the Runtime Framework and the Modality Components. Future versions of this document will further refine the event definitions, while related documents will address the issue of markup for multimodal applications. In particular, those related documents will address markup for the Interaction Manager, either adopting and adapting existing languages or defining new ones for the purpose.
Comments for this specification are welcomed and should have a subject starting with the prefix '[ARCH]'. Please send them to <www-multimodal@w3.org>, the public email list for issues related to Multimodal. This list is archived and acceptance of this archiving policy is requested automatically upon first post. To subscribe to this list send an email to <www-multimodal-request@w3.org> with the word subscribe in the subject line.
For more information about the Multimodal Interaction Activity, please see the Multimodal Interaction Activity statement.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
1 Abstract
2 Overview
3 Design versus Run-Time considerations
3.1 Markup and The Design-Time View
3.2 Software Constituents and The Run-Time View
3.3 Relationship to Compound Document Formats
4 Overview of Constituents
4.1 Run-Time Architecture Diagram
5 The Constituents
5.1 The Runtime Framework
5.1.1 The Interaction Manager
5.1.2 The Delivery Context Component
5.1.3 The Data Component
5.2 Modality Components
5.3 Examples
6 Interface between the Runtime Framework and the Modality Components
6.1 Event Delivery Mechanism
6.2 Standard Life Cycle Events
6.2.1 NewContextRequest
6.2.1.1 NewContextRequest Properties
6.2.2 NewContextResponse
6.2.2.1 NewContextResponse Properties
6.2.3 Prepare
6.2.3.1 Prepare Properties
6.2.4 PrepareResponse
6.2.4.1 PrepareResponse Properties
6.2.5 Start
6.2.5.1 Start Properties
6.2.6 StartResponse
6.2.6.1 StartResponse Properties
6.2.7 Done
6.2.7.1 Done Properties
6.2.8 Cancel
6.2.8.1 Cancel Properties
6.2.9 CancelResponse
6.2.9.1 CancelResponse Properties
6.2.10 Pause
6.2.10.1 Pause Properties
6.2.11 PauseResponse
6.2.11.1 PauseResponse Properties
6.2.12 Resume
6.2.12.1 Resume Properties
6.2.13 ResumeResponse
6.2.13.1 ResumeResponse Properties
6.2.14 Data
6.2.14.1 Data Properties
6.2.15 ClearContext
6.2.15.1 ClearContext Properties
6.2.16 StatusRequest
6.2.16.1 StatusRequest Properties
6.2.17 StatusResponse
6.2.17.1 StatusResponse Properties
7 Open Issues
This document describes a loosely coupled architecture for multimodal user interfaces, which allows for co-resident and distributed implementations, and focuses on the role of markup and scripting, and the use of well defined interfaces between its constituents.
This document describes the architecture of the Multimodal Interaction (MMI) framework [MMIF] and the interfaces between its constituents. The MMI Working Group is aware that multimodal interfaces are an area of active research and that commercial implementations are only beginning to emerge. Therefore we do not view our goal as standardizing a hypothetical existing common practice, but rather providing a platform to facilitate innovation and technical development. Thus the aim of this design is to provide a general and flexible framework providing interoperability among modality-specific components from different vendors - for example, speech recognition from one vendor and handwriting recognition from another. This framework places very few restrictions on the individual components or on their interactions with each other, but instead focuses on providing a general means for allowing them to communicate with each other, plus basic infrastructure for application control and platform services.
Our framework is motivated by several basic design goals:
Even though multimodal interfaces are not yet common, the software industry as a whole has considerable experience with architectures that can accomplish these goals. Since the 1980s, for example, distributed message-based systems have been common. They have been used for a wide range of tasks, including in particular high-end telephony systems. In this paradigm, the overall system is divided up into individual components which communicate by sending messages over the network. Since the messages are the only means of communication, the internals of components are hidden and the system may be deployed in a variety of topologies, either distributed or co-located. One specific instance of this type of system is the DARPA Hub Architecture, also known as the Galaxy Communicator Software Infrastructure [Galaxy]. This is a distributed, message-based, hub-and-spoke infrastructure designed for constructing spoken dialogue systems. It was developed in the late 1990's and early 2000's under funding from DARPA. This infrastructure includes a program called the Hub, together with servers which provide functions such as speech recognition, natural language processing, and dialogue management. The servers communicate with the Hub and with each other using key-value structures called frames.
Another recent architecture that is relevant to our concerns is the model-view-controller (MVC) paradigm. This is a well known design pattern for user interfaces in object oriented programming languages, and has been widely used with languages such as Java, Smalltalk, C, and C++. The design pattern proposes three main parts: a Data Model that represents the underlying logical structure of the data and associated integrity constraints, one or more Views which correspond to the objects that the user directly interacts with, and a Controller which sits between the data model and the views. The separation between data and user interface provides considerable flexibility in how the data is presented and how the user interacts with that data. While the MVC paradigm has been traditionally applied to graphical user interfaces, it lends itself to the broader context of multimodal interaction where the user is able to use a combination of visual, aural and tactile modalities.
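As an informal illustration (not part of the specification), the following Python sketch shows the three MVC roles described above and how a controller mediates between a data model and its views. All class and method names here are invented for illustration.

    class DataModel:
        """Holds the underlying data and enforces a simple integrity constraint."""
        def __init__(self):
            self._destination = None

        def set_destination(self, value):
            if not value:
                raise ValueError("destination must be non-empty")
            self._destination = value

        @property
        def destination(self):
            return self._destination

    class View:
        """An object the user directly interacts with; renders the model."""
        def __init__(self, name):
            self.name = name

        def render(self, model):
            print(f"[{self.name}] destination is now: {model.destination}")

    class Controller:
        """Sits between the model and the views, applying input and refreshing views."""
        def __init__(self, model, views):
            self.model = model
            self.views = views

        def handle_user_input(self, value):
            self.model.set_destination(value)
            for view in self.views:
                view.render(self.model)

    # A graphical view and a voice view presenting the same underlying data.
    controller = Controller(DataModel(), [View("graphical"), View("voice")])
    controller.handle_user_input("123 Main Street")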
In discussing the design of MMI systems, it is important to keep in mind the distinction between the design-time view (i.e., the markup) and the run-time view (the software that executes the markup). At the design level, we assume that multimodal applications will take the form of multiple documents from different namespaces. In many cases, the different namespaces and markup languages will correspond to different modalities, but we do not require this. A single language may cover multiple modalities and there may be multiple languages for a single modality.
At runtime, the MMI architecture features loosely coupled software constituents that may be either co-resident on a device or distributed across a network. In keeping with the loosely-coupled nature of the architecture, the constituents do not share context and communicate only by exchanging events. The nature of these constituents and the APIs between them is discussed in more detail in Sections 3-5, below. Though nothing in the MMI architecture requires that there be any particular correspondence between the design-time and run-time views, in many cases there will be a specific software component responsible for each different markup language (namespace).
At the markup level, an application consists of multiple documents. A single document may contain markup from different namespaces if the interaction of those namespaces has been defined (e.g., as part of the Compound Document Formats Activity [CDF]). By the principle of encapsulation, however, the internal structure of documents is invisible at the MMI level, which defines only how the different documents communicate. One document has a special status, namely the Root or Controller Document, which contains markup defining the interaction between the other documents. Such markup is called Interaction Manager markup. The other documents are called Presentation Documents, since they contain markup to interact directly with the user. The Controller Document may consist solely of Interaction Manager markup (for example a state machine defined in CCXML [CCXML] or SCXML [SCXML]) or it may contain Interaction Manager markup combined with presentation or other markup. As an example of the latter design, consider a multimodal application in which a CCXML document provides call control functionality as well as the flow control for the various Presentation Documents. Similarly, an SCXML flow control document could contain embedded presentation markup in addition to its native Interaction Management markup.
These relationships are recursive, so that any Presentation Document may serve as the Controller Document for another set of documents. This nested structure is similar to the 'Russian Doll' model of Modality Components, described below in 3.2 Software Constituents and The Run-Time View.
The different documents are loosely coupled and co-exist without interacting directly. Note in particular that there are no shared variables that could be used to pass information between them. Instead, all runtime communication is handled by events, as described below in 6.2 Standard Life Cycle Events.
Furthermore, it is important to note that the asynchronicity of the underlying communication mechanism does not impose the requirement that the markup languages present a purely asynchronous programming model to the developer. Given the principle of encapsulation, markup languages are not required to reflect directly the architecture and APIs defined here. As an example, consider an implementation containing a Modality Component providing Text-to-Speech (TTS) functionality. This Component must communicate with the Runtime Framework via asynchronous events (see 3.2 Software Constituents and The Run-Time View). In a typical implementation, there would likely be events to start a TTS play and to report the end of the play, etc. However, the markup and scripts that were used to author this system might well offer only a synchronous "play TTS" call, it being the job of the underlying implementation to convert that synchronous call into the appropriate sequence of asynchronous events. In fact, there is no requirement that the TTS resource be individually accessible at all. It would be quite possible for the markup to present only a single "play TTS and do speech recognition" call, which the underlying implementation would realize as a series of asynchronous events involving multiple Components.
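For instance, a synchronous "play TTS" call of the kind described above could be layered over the asynchronous event exchange roughly as follows. This is only a sketch: the dict-based event encoding and the queue transport are assumptions made for illustration, not part of this specification.

    import queue

    def play_tts(text, to_framework, from_framework, timeout=30.0):
        """Blocking 'play TTS' call layered over an asynchronous Start/Done exchange.

        to_framework / from_framework are queues standing in for whatever event
        delivery mechanism the implementation actually uses."""
        to_framework.put({"event": "Start", "content": text})   # asynchronous request
        while True:
            event = from_framework.get(timeout=timeout)          # wait for events
            if event.get("event") == "Done":                     # end of the play
                return event.get("status")
            # other intermediate events are simply ignored by this wrapper

    # Minimal demonstration with a stubbed-out framework.
    to_fw, from_fw = queue.Queue(), queue.Queue()
    from_fw.put({"event": "Done", "status": "Success"})
    print(play_tts("Turn left in 200 meters", to_fw, from_fw))   # prints: Success

The author of the markup sees only the blocking call; the conversion to and from asynchronous events is entirely the implementation's concern.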
Existing languages such as XHTML may be used as either the Controller Documents or as Presentation Documents. Further examples of potential markup components are given in 5.3 Examples
At the core of the MMI runtime architecture is the distinction between the Runtime Framework and the Components, which is similar to the distinction between the Controller Document and the Presentation Documents. The Runtime Framework interprets the Controller Document and provides the basic infrastructure which the various Modality Components plug into. Individual Modality Components are responsible for specific tasks, particularly handling input and output in the various modalities, such as speech, pen, video, etc. Modality Components are black boxes, required only to implement the Modality Component Interface API which is described below. This API allows the Modality Components to communicate with the Framework and hence with each other, since the Framework is responsible for delivering events/messages among the Components.
Since the internals of a Component are hidden, it is possible for a Runtime Framework and a set of Components to present themselves as a Component to a higher-level Framework. All that is required is that the Framework implement the Component API. The result is a "Russian Doll" model in which Components may be nested inside other Components to an arbitrary depth.
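The nesting can be pictured with the following sketch, in which a framework that hosts its own components also implements a (hypothetical) component interface and can therefore itself be plugged into a higher-level framework. The class and method names are illustrative only.

    class Component:
        """Minimal stand-in for the Modality Component interface."""
        def handle_event(self, event):
            raise NotImplementedError

    class VoiceComponent(Component):
        def handle_event(self, event):
            print("voice component received:", event)

    class NestedFramework(Component):
        """A Runtime Framework plus its components, exposed as a single Component."""
        def __init__(self, children):
            self.children = children

        def handle_event(self, event):
            # From the outside this looks like one black-box component;
            # internally it routes the event to its own children.
            for child in self.children:
                child.handle_event(event)

    # A framework containing a voice component is itself usable as a component.
    inner = NestedFramework([VoiceComponent()])
    top_level = NestedFramework([inner])
    top_level.handle_event({"event": "Start"})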
The Runtime Framework is itself divided up into sub-components. One important sub-component is the Interaction Manager (IM), which executes the Interaction Manager markup. The IM receives all the events that the various Modality Components generate. Those events may be commands or replies to commands, and it is up to the Interaction Manager to decide what to do with them, i.e., what events to generate in response to them. In general, the MMI architecture follows a 'targetless' event model. That is, the Component that raises an event does not specify its destination. Rather, it passes it up to the Runtime Framework, which will pass it to the Interaction Manager. The IM, in turn, decides whether to forward the event to other Components, or to generate a different event, etc. The other sub-components of the Runtime Framework are the Delivery Context Component, which provides information about device capabilities and user preferences, and the Data Component, which stores the Data Model for the application. We do not currently specify the interfaces for the IM and the Data Component, so they represent only the logical structure of the functionality that the Runtime Framework provides. The interface to the Delivery Context Component is specified in [DCI].
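The 'targetless' routing might be sketched as follows: components raise events without naming a destination, and the Interaction Manager decides what to forward or generate. All names here are invented for illustration, and unhandled events are simply ignored, matching the default behavior described later in this section.

    class InteractionManager:
        """Receives every event and decides what, if anything, to do with it."""
        def __init__(self):
            self.components = {}     # name -> component
            self.handlers = {}       # event name -> handler function

        def register(self, name, component):
            self.components[name] = component

        def on(self, event_name, handler):
            self.handlers[event_name] = handler

        def raise_event(self, event):
            # Components do not address each other; everything comes here first.
            handler = self.handlers.get(event["event"])
            if handler is None:
                return                       # default: ignore unhandled events
            handler(self, event)

        def send(self, target, event):
            self.components[target].handle_event(event)

    class DisplayComponent:
        def handle_event(self, event):
            print("display shows:", event["data"])

    im = InteractionManager()
    im.register("display", DisplayComponent())
    # When a recognition result arrives, forward it to the display component.
    im.on("recognition-result",
          lambda im_, ev: im_.send("display", {"event": "Data", "data": ev["data"]}))
    im.raise_event({"event": "recognition-result", "data": "Starbucks on Main St"})
    im.raise_event({"event": "unknown"})     # no handler registered: ignored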
Because we are using the term 'Component' to refer to a specific set of entities in our architecture, we will use the term 'Constituent' as a cover term for all the elements in our architecture which might normally be called 'software components'.
The W3C Compound Document Formats Activity [CDF] is also concerned with the execution of user interfaces written in multiple languages. However, the CDF group focuses on defining the interactions of specific sets of languages within a single document, which may be defined by inclusion or by reference. The MMI architecture, on the other hand, defines the interaction of arbitrary sets of languages in multiple documents. From the MMI point of view, mixed markup documents defined by CDF specifications are treated like any other documents, and may be either Controller or Presentation Documents. Finally, note that the tightly coupled languages handled by CDF will usually share data and scripting contexts, while the MMI architecture focuses on a looser coupling, without shared context. The lack of shared context makes it easier to distribute applications across a network and also places minimal constraints on the languages in the various documents. As a result, authors will have the option of building multimodal applications in a wide variety of languages for a wide variety of deployment scenarios. We believe that this flexibility is important for the further development of the industry.
Here is a list of the Constituents of the MMI architecture. They are discussed in more detail in the next section.
The Runtime Framework is responsible for starting the application and interpreting the Controller Document. More specifically, the Runtime Framework must:
The need for mapping between synchronous and asynchronous APIs can be seen by considering the case where a Modality Component wants to query the Delivery Context Interface [DCI]. The DCI API provides synchronous access to property values whereas the Modality Component API, presented below in 6.2 Standard Life Cycle Events, is purely asynchronous and event-based. The Modality Component will therefore generate an event requesting the value of a certain property. The DCI cannot handle this event directly, so the Runtime Framework must catch the event, make the corresponding function call into the DCI API, and then generate a response event back to the Modality Component. Note that even though it is globally the Runtime Framework's responsibility to do this mapping, most of the Runtime Framework's behavior is asynchronous. It may therefore make sense to factor out the mapping into a separate Adapter, allowing the Runtime Framework proper to have a fully asynchronous architecture. For the moment, we will leave this as an implementation decision, but we may make the Adapter a formal part of the architecture at a later date.
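One possible shape for such an Adapter is sketched below. The synchronous get_property call on the DCI, the event names, and the dict-based event format are assumptions made for illustration; they are not defined by this document or by [DCI].

    class FakeDCI:
        """Stands in for a Delivery Context Interface with synchronous property access."""
        def __init__(self, properties):
            self._properties = properties

        def get_property(self, name):             # synchronous call
            return self._properties[name]

    class DCIAdapter:
        """Maps an asynchronous property-request event onto the synchronous DCI API."""
        def __init__(self, dci, send_event):
            self.dci = dci
            self.send_event = send_event           # callback used to deliver the reply

        def handle_event(self, event):
            if event["event"] == "dci-property-request":
                value = self.dci.get_property(event["property"])   # blocking call
                self.send_event({"event": "dci-property-response",
                                 "property": event["property"],
                                 "value": value,
                                 "requestID": event["requestID"]})

    # Demonstration: the reply event is simply printed.
    adapter = DCIAdapter(FakeDCI({"preferredOutputModality": "voice"}), print)
    adapter.handle_event({"event": "dci-property-request",
                          "property": "preferredOutputModality",
                          "requestID": "r1"})

Factoring the blocking call into the Adapter keeps the rest of the Runtime Framework purely event-driven.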
The Runtime Framework's main purpose is to provide the infrastructure, rather than to interact with the user. Thus it implements the basic event loop, which the Components use to communicate with one another, but is not expected to handle by itself any events other than lifecycle events. However, if the Controller Document markup section of the application provides presentation markup as well as Interaction Management, the Runtime Framework will execute it just as the Modality Components do. Note, however, that the execution of such presentation markup is internal to the Runtime Framework and need not rely on the Modality Component API.
The Interaction Manager (IM) is the sub-component of the Runtime Framework that is responsible for handling all events that the other Components generate. Normally there will be specific markup associated with the IM instructing it how to respond to events. This markup will thus contain a lot of the most basic interaction logic of an application. Existing languages such as SMIL, CCXML, SCXML, or ECMAScript can be used for IM markup as an alternative to defining special-purpose languages aimed specifically at multimodal applications.
Due to the Russian Doll model, Components may contain their own Interaction Managers to handle their internal events. However these Interaction Managers are not visible to the top level Runtime Framework or Interaction Manager.
If the Interaction Manager does not contain an explicit handler for an event, any default behavior that has been established for the event will be respected. If there is no default behavior, the event will be ignored. (In effect, the Interaction Manager's default handler for all events is to ignore them.)
The Delivery Context [DCI] is intended to provide a platform-abstraction layer enabling dynamic adaptation to user preferences, environmental conditions, device configuration and capabilities. It allows Constituents and applications to:
Note that some device properties, such as screen brightness, are run-time settable, while others, such as whether there is a screen, are not. The term 'property' is also used for characteristics that may be more properly thought of as user preferences, such as preferred output modality or default speaking volume.
The Data Component is a sub-component of the Runtime Framework which is responsible for storing application-level data. The Interaction Manager must be able to access and update the Data Component as part of its control flow logic, but Modality Components do not have direct access to it. Since Modality Components are black boxes, they may have their own internal Data Components and may interact directly with back-end servers. However, the only way that Modality Components can share data among themselves and maintain consistency is via the Interaction Manager. It is therefore good application design practice to divide data into two logical classes: private data, which is of interest only to a given Modality Component, and public data, which is of interest to the Interaction Manager or to more than one Modality Component. Private data may be managed as the Modality Component sees fit, but all modification of public data, including submission to back-end servers, should be entrusted to the Interaction Manager.
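The division between private and public data can be illustrated with the following sketch, in which a modality component keeps its own scratch data but routes every public update through the Interaction Manager. The class and method names are assumptions for illustration only.

    class InteractionManagerData:
        """Owns the application-level (public) data on behalf of all components."""
        def __init__(self):
            self.public_data = {}

        def update_public(self, key, value):
            self.public_data[key] = value
            # In a real application the IM might also submit the change to a
            # back-end server and notify other modality components here.

    class SpeechComponent:
        def __init__(self, im_data):
            self.im_data = im_data
            self._private = {"last_audio_level": None}     # of interest to no one else

        def on_recognition(self, utterance, audio_level):
            self._private["last_audio_level"] = audio_level        # stays private
            self.im_data.update_public("destination", utterance)   # shared via the IM

    im_data = InteractionManagerData()
    SpeechComponent(im_data).on_recognition("123 Main Street", 0.7)
    print(im_data.public_data)   # {'destination': '123 Main Street'}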
For the initial version of this specification, we will not specify a data access language, but will assume that the Interaction Manager language provides sufficient data access capabilities, including submission to back end servers. However, at some point in the future, we may require support for a specific data access language, independent of the Interaction Manager.
Modality Components, as their name would indicate, are responsible for controlling the various input and output modalities on the device. They are therefore responsible for handling all interaction with the user(s). Their only responsibility is to implement the interface defined in section 4.1, below. Any further definition of their responsibilities must be highly domain- and application-specific. In particular we do not define a set of standard modalities or the events that they should generate or handle. Platform providers are allowed to define new Modality Components and are allowed to place into a single Component functionality that might logically seem to belong to two different modalities. Thus a platform could provide a handwriting-and-speech Modality Component that would accept simultaneous voice and pen input. Such combined Components permit a much tighter coupling between the two modalities than the loose interface defined here. Furthermore, Modality Components may be used to perform general processing functions not directly associated with any specific interface modality, for example, dialog flow control or natural language processing.
In most cases, there will be specific markup in the application corresponding to a given modality, specifying how the interaction with the user should be carried out. However, we do not require this and specifically allow for a markup-free modality component whose behavior is hard-coded into its software.
For the sake of concreteness, here are some examples of components that could be implemented using existing languages. Note that we are mixing the design-time and run-time views here, since it is the implementation of the language (the browser) that serves as the run-time component.
The most important interface in this architecture is the one between the Modality Components and the Runtime Framework. Modality Components communicate with the Framework and with each other via asynchronous events. Components must be able to raise events and to handle events that are delivered to them asynchronously. It is not required that components use these events internally, since the implementation of a given Component is a black box to the rest of the system. In general, it is expected that Components will raise events both automatically (i.e., as part of their implementation) and under mark-up control. The disposition of events is the responsibility of the Runtime Framework layer. That is, the Component that raises an event does not specify which Component it should be delivered to, or even whether it should be delivered to any Component at all. Rather, that determination is left up to the Framework and Interaction Manager.
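In programming-language terms the obligation on a Modality Component is small: it must accept events delivered to it and must be able to raise events of its own toward the Runtime Framework, without naming a destination. A minimal sketch of such an interface follows; the names are illustrative, not normative.

    from abc import ABC, abstractmethod

    class ModalityComponent(ABC):
        """What the architecture asks of a component: handle and raise events."""
        def __init__(self):
            self._raise = None      # callback installed by the Runtime Framework

        def attach(self, raise_event):
            self._raise = raise_event

        def raise_event(self, event):
            # Targetless: the component never says which component should receive this.
            self._raise(event)

        @abstractmethod
        def handle_event(self, event):
            """Process an event delivered asynchronously by the Runtime Framework."""

    class InkComponent(ModalityComponent):
        def handle_event(self, event):
            if event.get("event") == "Start":
                # ... collect pen input, then report completion ...
                self.raise_event({"event": "Done", "status": "Success"})

    component = InkComponent()
    component.attach(print)              # the framework would install a real router here
    component.handle_event({"event": "Start"})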
We do not currently specify the mechanism used to deliver events between the Modality Components and the Runtime Framework, but we may do so in the future. We do place the following requirements on it:
Multiple protocols may be necessary to implement these requirements. For example, TCP/IP and HTTP provide reliable event delivery, and thus meet requirements 1 and 2, but additional protocols such as TLS or HTTPS would be required to meet security requirements 3 through 5.
The Multimodal Architecture defines the following basic life-cycle events, which must be supported by all modality components. These events allow the Runtime Framework to invoke modality components and receive results from them. They thus form the basic interface between the Runtime Framework and the Modality Components. Note that the 'data' event offers extensibility since it contains arbitrary XML content and can be raised by either the Runtime Framework or the Modality Components at any time once the context has been established. For example, an application relying on speech recognition could use the 'data' event to communicate recognition results or the fact that speech had started, etc.
The concept of 'context' is basic to the events described below. A context represents a single extended interaction with one (or possibly more) users. In a simple unimodal case, a context can be as simple as a phone call or SSL session. Multimodal cases are more complex, however, since the various modalities may not all be used at the same time. For example, in a voice-plus-web interaction, e.g., web sharing with an associated VoIP call, it would be possible to terminate the web sharing and continue the voice call, or to drop the voice call and continue via web chat. In these cases, a single context persists across various modality configurations. In general, we intend for 'context' to cover the longest period of interaction over which it would make sense for components to store state or information.
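To make the flow of these events concrete, the following sketch walks one context through a typical exchange. The dict-based encoding, the field spellings, and the queue transport are assumptions made for illustration, not a normative wire format; the individual events and their properties are defined in the subsections below.

    import itertools, queue

    _ids = itertools.count(1)
    framework_out = queue.Queue()    # Runtime Framework -> Modality Component
    component_out = queue.Queue()    # Modality Component -> Runtime Framework

    # 1. The component asks for a new context (optional).
    component_out.put({"event": "NewContextRequest", "requestID": f"req-{next(_ids)}"})

    # 2. The framework answers with a context identifier...
    request = component_out.get()
    context = "urn:example:context:1"
    framework_out.put({"event": "NewContextResponse", "requestID": request["requestID"],
                       "status": "Success", "context": context})

    # 3. ...then prepares and starts the component within that context.
    framework_out.put({"event": "Prepare", "context": context,
                       "contentURL": "http://example.com/directions.vxml"})
    framework_out.put({"event": "Start", "context": context})

    # 4. The component acknowledges and eventually reports completion.
    component_out.put({"event": "PrepareResponse", "context": context, "status": "Success"})
    component_out.put({"event": "StartResponse", "context": context, "status": "Success"})
    component_out.put({"event": "Done", "context": context, "status": "Success"})

    while not component_out.empty():
        print(component_out.get())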
Optional event that a Modality Component may send to the Runtime Framework to request that a new context be created. If this event is sent, the Runtime Framework must respond with the NewContextResponse event.
RequestID
An arbitrary identifier generated by the Modality Component, used to identify this request.

Media
One or more valid media types indicating the media to be associated with the context.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

Sent by the Runtime Framework in response to the NewContextRequest message.
RequestID
Matches the RequestID in the NewContextRequest event.

Status
An enumeration of Success or Failure. If the value is Success, the NewContextRequest has been accepted and a new context identifier will be included (see below). If the value is Failure, no context identifier will be included and further information will be included in the Errorinfo field.

Context
A URI identifying the new context. Empty if Status is Failure.

Media
One or more valid media types indicating the media to be associated with the context. Note that these do not have to be identical to the ones contained in the NewContextRequest.

Errorinfo
If Status equals Failure, this field holds further information.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

An optional event that the Runtime Framework may send to allow the Modality Components to pre-load markup and prepare to run. Modality Components are not required to take any particular action in response to this event, but they must return a PrepareResponse event.
Cookie
An optional cookie. Note that the Runtime Framework may send the same cookie to multiple Modality Components.

Context
A unique URI designating this context. Note that the Runtime Framework may re-use the same context value in successive calls to Start if they are all within the same session/call.

ContentURL
Optional URL of the content (in this case, VoiceXML) that the Modality Component should execute. Includes standard HTTP fetch parameters such as max-age, max-stale, fetchtimeout, etc. Incompatible with Content.

Content
Optional inline markup for the Modality Component to execute. Incompatible with ContentURL. Note that it is legal for both ContentURL and Content to be empty. In such a case, the Modality Component will revert to its default hard-coded behavior, which could consist of returning an error event or of running a preconfigured or hard-coded script.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

Sent by the Modality Component in response to the Prepare event. Modality Components that return a PrepareResponse event with Status of 'Success' should be ready to run with close to zero delay upon receipt of the Start event.
Context
Must match the value in the Prepare event.

Status
Enumeration: Success or Failure.

Errorinfo
If Status equals Failure, this field holds further information (examples: NotAuthorized, BadFormat, MissingURI, MissingField).

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

The Runtime Framework sends this event to invoke a Modality Component. The Modality Component must return a StartResponse event in response. If the Runtime Framework has sent a previous Prepare event, it may leave the ContentURL and Content fields empty, and the Modality Component will use the values from the Prepare event. If the Runtime Framework includes new values for these fields, the values in the Start event override those in the Prepare event.
Context
A unique URI designating this context. Note that the Runtime Framework may re-use the same context value in successive calls to Start if they are all within the same session/call.

ContentURL
Optional URL of the content (in this case, VoiceXML) that the Modality Component should execute. Includes standard HTTP fetch parameters such as max-age, max-stale, fetchtimeout, etc. Incompatible with Content.

Content
Optional inline markup for the Modality Component to execute. Incompatible with ContentURL. Note that it is legal for both ContentURL and Content to be empty. In such a case, the Modality Component will either use the values provided in the preceding Prepare event, if one was sent, or revert to its default hard-coded behavior, which could consist of returning an error event or of running a preconfigured or hard-coded script.

Cookie
An optional cookie.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

The Modality Component must send this event in response to the Start event.
Context
Must match the value in the Start event.

Status
Enumeration: Success or Failure.

Errorinfo
If Status equals Failure, this field holds further information (examples: NotAuthorized, BadFormat, MissingURI, MissingField).

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

Returned by the Modality Component to indicate that it has reached the end of its processing.
Context
Must match the value in the Start event.

Status
Enumeration: Success or Failure.

Errorinfo
If Status equals Failure, this field holds further information.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

Sent by the Runtime Framework to stop processing in the Modality Component. The Modality Component must return CancelResponse.
Returned by the Modality Component in response to the Cancel command.
Context
Must match the value in the Start event.

Status
Enumeration: Success or Failure.

Errorinfo
If Status equals Failure, this field holds further information.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

Sent by the Runtime Framework to suspend processing by the Modality Component. Implementations may ignore this command if they are unable to pause, but they must return PauseResponse.
Returned by the Modality Component in response to the Pause command.
Context
Must match the value in the Start event.

Status
Enumeration: Success or Failure.

Errorinfo
If Status equals Failure, this field holds further information.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

Sent by the Runtime Framework to resume paused processing by the Modality Component. Implementations may ignore this command if they are unable to pause, but they must return ResumeResponse.
Returned by the Modality Component in response to the Resume command.
Context
Must match the value in the Start event.

Status
Enumeration: Success or Failure.

Errorinfo
If Status equals Failure, this field holds further information.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

This event may be generated by either the Runtime Framework or the Modality Component and is used to communicate (presumably changed) data values to the other component.
Sent by the Runtime Framework to indicate that the specified context is no longer active and that any resources associated with it may be freed. (More specifically, the next time that the Runtime Framework uses the specified context ID, it should be understood as referring to a new context.)
The StatusRequest message and the corresponding StatusResponse are intended to provide keep-alive functionality, informing the Runtime Framework about the presence of the various Modality Components. Note that neither of these messages is tied to any Context, so they may be sent independently of any user interaction.
The StatusRequest message is sent from the Runtime Framework to a Modality Component. By waiting for an implementation dependent period of time for a StatusResponse message, the Runtime Framework may determine if the Modality Component is active.
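A sketch of such a keep-alive check, assuming hypothetical helper names and a fixed waiting period, is shown below; the properties of both messages are listed after it.

    import queue

    def component_is_alive(send_to_component, responses, wait_seconds=5.0):
        """Send a StatusRequest and report whether a StatusResponse arrives in time."""
        send_to_component({"event": "StatusRequest", "requestAutomaticUpdate": False})
        try:
            reply = responses.get(timeout=wait_seconds)    # implementation-dependent wait
        except queue.Empty:
            return False                                   # treat silence as 'not active'
        return reply.get("event") == "StatusResponse" and reply.get("status") == "Alive"

    # Demonstration with a component stub that answers immediately.
    responses = queue.Queue()
    stub = lambda event: responses.put({"event": "StatusResponse", "status": "Alive"})
    print(component_is_alive(stub, responses))   # True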
RequestAutomaticUpdate
A boolean value indicating whether the Modality Component should send ongoing StatusResponse messages without waiting for additional StatusRequest messages from the Runtime Framework.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

Sent by the Modality Component to the Runtime Framework. If automatic updates are enabled, the Modality Component may send multiple StatusResponse messages in response to a single StatusRequest message.
AutomaticUpdate
A boolean indicating whether the Modality Component will keep sending StatusResponse messages in the future without waiting for another StatusRequest message.

Status
An enumeration of 'Alive' or 'Dead'. If the status is 'Alive', the Modality Component is able to handle subsequent Prepare and Start messages. If the status is 'Dead', it is not able to handle such requests; thus a status of 'Dead' indicates that the Modality Component is going off-line. If the Runtime Framework receives a StatusResponse message with a status of 'Dead', it may continue to send StatusRequest messages, but it may not receive a response to them until the Modality Component comes back online.

Data
Optional semi-colon separated list of data items. Each item consists of a name followed by a space and an XML tree value.

Issue (confidential event data):
We are considering adding a field to lifecycle events indicating that the event contains confidential data (such as bank account numbers or PINs) which should not be implicitly logged by the platform or made potentially available to third parties in any way. Note that this is a separate requirement from the security requirements placed on the event transport protocol in 6.1 Event Delivery Mechanism. We would like feedback from potential implementers and users of this standard as to whether such a feature would be useful and how it should be defined.
Resolution:
None recorded.
The following people contributed to the development of this specification.
Brad Porter
T.V. Raman
This section presents a detailed example of an implementation of this architecture. For the sake of concreteness, it specifies a number of details that are not included in this document. It is based on the MMI use case document [MMIUse], specifically the second use case, which presents a multimodal in-car application for giving driving directions. Three languages are involved in the design view:
The remainder of the discussion involves the run-time view. The numbered items are taken from the "User Action/External Input" field of the event table. The appended comments are based on the working group's discussion of the use case.
Recognition can be done locally, remotely (on the server) or distributed between the device and the server. By default, the location of event handling is determined by the markup. If there is a local handler for an event specified in the document, the event is handled locally. If not, the event is forwarded to the server. Thus if the markup specifies a speech-started event handler, that event will be consumed locally. Otherwise it will be forwarded to the server. However, remote ASR requires more than simply forwarding the speech-started event to the server because the audio channel must be established. This level of configuration is handled by the device profile, but can be overridden by the markup. Note that the remote server might contain a full VoiceXML interpreter as well as ASR capabilities. In that case, the relevant markup would be sent to the server along with the audio. The protocol used to control the remote recognizer and ship it audio is not part of the MMI specification (but may well be MRCP.)
Open Issue: The previous paragraph about local vs remote event handling is retained from an earlier draft. Since the Modality Component is a black box to the Runtime Framework, the local vs remote distinction should be internal to it. Therefore the event handlers would have to be specified in the VoiceXML markup. But no such possibility exists in VoiceXML 2.0. One option would be to make the local vs remote distinction vendor-specific, so that each Modality Component provider would decide whether to support remote operations and, if so, how to configure them. Alternatively, we could define the DCI properties for remote recognition, but make it optional that vendors support them. In either case, it would be up to the VoiceXML Modality Component to communicate with the remote server, etc. Newer languages, such as VoiceXML 3.0, could be designed to allow explicit markup control of local vs remote operations. Note that in the most complex case, there could be multiple simultaneous recognitions, some of which were local and some remote. This level of control is most easily achieved via markup, by attaching properties to individual grammars. DCI properties are more suitable for setting global defaults.
When the IM receives the recognition result event, it parses it, retrieves the user's preferences from the DCI component, and dispatches them to the Modality Components, which adjust their displays, output, default grammars, etc., accordingly. In VoiceXML 2.0, each of the multiple voice Modality Components will receive the corresponding event.
This particular step in the use case shows the usefulness of the Interaction Manager. One can imagine an architecture lacking an IM in which the Modality Components communicate with each other directly. In this case, all Modality Components would have to handle the location update events separately. This would mean considerable duplication of markup and calculation. Consider in particular the case of a VoiceXML 2.0 Form which is supposed to warn the driver when he goes off course. If there is an IM, this Form will simply contain the off-course dialog and will be triggered by an appropriate event from the IM. In the absence of the IM, however, the Form will have to be invoked on each location update event. The Form itself will have to calculate whether the user is off-course, exiting without saying anything if he is not. In parallel, the HTML Modality Component will be performing a similar calculation to determine whether to update its display. The overall application is simpler and more modular if the location calculation and other application logic is placed in the IM, which will then invoke the individual Modality Components only when it is time to interact with the user.
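The difference can be sketched as follows: with an Interaction Manager, the off-course test lives in one place, and the modality components are contacted only when there is something to present. The names and the toy route representation are illustrative only.

    class VoiceOffCourseForm:
        def run(self):
            print("voice: 'You appear to be off course. Recalculating...'")

    class HtmlDisplay:
        def show_warning(self):
            print("display: off-course banner shown")

    class InteractionManager:
        """Owns the route logic; components are invoked only to interact with the user."""
        def __init__(self, route, voice, display):
            self.route, self.voice, self.display = route, voice, display

        def on_location_update(self, position):
            if position not in self.route:       # the single off-course calculation
                self.voice.run()
                self.display.show_warning()
            # on-course updates need not disturb any modality component

    im = InteractionManager(route={(0, 0), (0, 1), (0, 2)},
                            voice=VoiceOffCourseForm(), display=HtmlDisplay())
    im.on_location_update((0, 1))   # on course: nothing happens
    im.on_location_update((5, 5))   # off course: both components are invoked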
Note on the GPS. We assume that the GPS raises four types of events: On-Course Updates, Off-Course Alerts, Loss-of-Signal Alerts, and Recovery of Signal Notifications. The Off-Course Alert is covered below. The Loss-of-Signal Alert is important since the system must know if its position and course information is reliable. At the very least, we would assume that the graphical display would be modified when the signal was lost. An audio earcon would also be appropriate. Similarly, the Recovery of Signal Notification would cause a change in the display and possibly an audio notification. This event would also contain an indication of the number of satellites detected, since this determines the accuracy of the signal: three satellites are necessary to provide x and y coordinates, while a fourth satellite allows the determination of height as well. Finally, note that the GPS can assume that the car's location does not change while the engine is off. Thus when it starts up it will assume that it is at its last recorded location. This should make the initialization process quicker.
When the IM is satisfied with the confidence levels, it ships the n-best list off to a remote server, which adds graphical information for at least the first choice. The server may also need to modify the n-best list, since items that are linguistically unambiguous may turn out to be ambiguous in the database (e.g., "Starbucks"). Now the IM instructs the HTML component to display the hypothesized destination (first item on n-best list) on the screen and instructs the speech component to start a confirmation dialog. Note that the submission to the remote server should be similar to the <data> tag in VoiceXML 2.1 in that it does not require a document transition. (That is, the remote server should not have to generate a new IM document/state machine just to add graphical information to the n-best list.)