I am interested in multimodal interaction for web browsing and device control. Not only
information appliances such as PCs but also office appliances (copiers, printers) and
consumer electronics are going to be networked. We need an environment that enables us
to access the web from various kinds of devices; remote device control over the network
will also become possible through web-based methods in which interaction logic is
written in XML.
If web content or interaction logic is described on the assumption of specific
modalities, it cannot be accessed from devices with other kinds of modalities.
Device-independent multimodal interaction requires a description of the modalities of
the client device and a description of interaction logic that is easily adaptable to
those modalities. Consider the case where the web content has a selection element with
three choices. If the client device has a rich GUI, a radio button or pull-down menu is
a suitable modality for that selection element. If it has a speech recognizer, speech
selection can be used as well. If the main modalities of the device are ten physical
keys, the keys '1', '2', and '3' can be bound to the choices respectively ('7', '8',
and '9' might be a better choice if the user prefers them). In order to realize this
kind of device-independent interaction, the client device needs to tell the server what
kind of modalities it has.
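The adaptation described above can be sketched in a few lines of Python; this is an illustrative sketch, and the modality names ("gui", "speech", "keys") and rendering rules are my own assumptions, not part of any standard:

```python
# Sketch: render one device-independent selection element under
# different client modalities. Modality names are illustrative.

def render_selection(prompt, choices, modality):
    """Adapt a selection element to the client device's modality."""
    if modality == "gui":
        # Rich GUI: present the choices as a radio-button group.
        items = [f"( ) {c}" for c in choices]
        return prompt + "\n" + "\n".join(items)
    elif modality == "speech":
        # Speech recognizer: offer the choices as a spoken menu.
        return "Say one of: " + ", ".join(choices)
    elif modality == "keys":
        # Keypad device: bind keys '1', '2', '3' to the choices.
        items = [f"[{i + 1}] {c}" for i, c in enumerate(choices)]
        return prompt + "\n" + "\n".join(items)
    raise ValueError(f"unsupported modality: {modality}")

print(render_selection("Choose a drink:", ["coffee", "tea", "water"], "keys"))
```

The same abstract element (prompt plus choices) is kept modality-neutral, and only the final rendering step consults the device's capabilities.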
Modality is a device attribute, and CC/PP, a framework for describing device
attributes, can be used to describe modalities. CC/PP itself is just a description
framework; it leaves the vocabulary definition of specific attributes to the
application. So we need to define vocabularies for CC/PP-based modality description.
Furthermore, it would be better to also define vocabularies that describe the user's
preferences among modalities.
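As a rough sketch of what such a vocabulary might look like, the RDF/XML fragment below describes a device's modalities and the user's preference within a CC/PP-style profile. The mm: namespace and the property names mm:modality and mm:preferredModality are hypothetical, invented here for illustration rather than taken from any existing vocabulary:

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ccpp="http://www.w3.org/2002/11/08-ccpp-schema#"
         xmlns:mm="http://example.org/modality#">
  <!-- Hypothetical CC/PP component listing the client's modalities -->
  <rdf:Description rdf:about="http://example.org/profile#Interaction">
    <ccpp:component>
      <rdf:Description rdf:about="http://example.org/profile#Modalities">
        <!-- The device offers speech input and a ten-key keypad -->
        <mm:modality>speech</mm:modality>
        <mm:modality>keypad</mm:modality>
        <!-- The user prefers keys over speech -->
        <mm:preferredModality>keypad</mm:preferredModality>
      </rdf:Description>
    </ccpp:component>
  </rdf:Description>
</rdf:RDF>
```

A server receiving such a profile could then choose the key-binding rendering of a selection element for this client rather than a speech grammar.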
From this point of view, in order to realize device-independent multimodal interaction, I hope that this workshop will discuss vocabularies for describing device modalities and the user's preferences among them.