Approach
- Text of the spoken language (in our case, English) is marked up to disambiguate words with multiple senses. This step is largely automated, but 10% to 15% of words must be disambiguated manually.
- Using a graphical user interface (GUI), an author/interpreter constructs ASL sentences from the signs identified in the English source.
- The GUI allows the interpreter to rearrange the syntax of the sentence, inflect signs as appropriate, and include additional facial expressions and other non-manual aspects of signing.
- This process creates an XML file that describes the animated sign language. This XML file can also be published as a more compact "Signing Avatar Script" that is embedded in Web pages using a JavaScript function.
- A "Signing Avatar" accessibility agent provides a pop-up animated character that interprets any sign-enabled Web page.
- The animation and visual rendering components of this accessibility agent use the emerging Extensible 3D (X3D) and Humanoid Animation (H-Anim) specifications being developed by the Web 3D Consortium.
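The actual "Signing Avatar Script" format and its embedding function are proprietary and not documented here. The following sketch only illustrates the general pattern the text describes: a compact script derived from the authored XML is stored in the page, and a JavaScript function hands it to the avatar player. All names (`toSigningScript`, `embedSnippet`, `playSigningAvatar`, the token syntax) are hypothetical.

```javascript
// Hypothetical model of a sign sequence: each entry is a sign gloss with an
// optional non-manual marker (e.g. a facial expression), as authored in the GUI.

// Serialize a sequence of signs into a compact script string.
// (Illustrative token syntax only; the real format is not public.)
function toSigningScript(signs) {
  return signs
    .map(s => (s.nonManual ? `${s.gloss}/${s.nonManual}` : s.gloss))
    .join(';');
}

// Produce the markup an authoring tool might emit for a Web page: the compact
// script is stored in a data attribute, and a player function (assumed to be
// provided by the accessibility agent) is invoked on the element.
function embedSnippet(script) {
  return `<div class="signing-avatar" data-script="${script}"></div>\n` +
         `<script>playSigningAvatar(document.querySelector('.signing-avatar'));</scr` + `ipt>`;
}

// Example: a short authored sentence with one non-manual marker.
const script = toSigningScript([
  { gloss: 'STORE' },
  { gloss: 'I' },
  { gloss: 'GO', nonManual: 'brow-raise' },
]);
// script is "STORE;I;GO/brow-raise"
```

The point of the compact form is that a page author never touches the full XML description: the page carries only the short script, and the accessibility agent's pop-up character reconstructs the animation from it.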