Ed Sims, Ph.D., CTO, Vcom3D, Inc.
eds@vcom3d.com
To date, much of the focus in Web accessibility has been on providing text equivalents for multimedia objects. These text equivalents allow persons using screen readers, Braille displays, and other assistive devices to access information that they cannot see. However, for Deaf persons whose native method of communication is sign language, there have been no accepted text equivalents, and no devices for converting textual information into sign. In recent years, several organizations, including Vcom3D, have been developing text- or XML-based representations for sign languages that can be used to synthesize visually rendered signing. A discussion of this research, with demonstrations, may be viewed at www.vcom3d.com. Although still in development, sign language synthesis has progressed to a point where discussion of recommended practices for Web representation is important.
Children who are pre-lingually deaf (i.e., who are deaf at birth or become deaf before developing spoken language skills) face significant challenges in learning to read. Over 90% of these children are born to hearing parents, and most of these parents never learn to sign. Thus, although most deaf children are fully capable of communicating in sign by the age of 10 months, they have no opportunity to develop these language skills. By the time they reach school age, they have missed opportunities to develop either spoken or sign language skills, and are therefore ill-prepared to learn to read. They have also missed the crucial years in which the human mind is best adapted to learning basic language skills. As a result, most pre-lingually deaf persons are language-delayed and never develop good written language skills. The average graduate of a K-12 School for the Deaf in the U.S. reads at no better than a third-grade level. It has been estimated that 83 to 87% of all deaf persons in the U.S. are illiterate in English, according to common definitions of that term. These persons are unable to access important information on the Web. However, the need for sign language access to the Web is frequently overlooked because (1) there is a common misconception that deafness should not be an impediment to reading text, since "deaf people can see, can't they?" and (2) there are no universally accepted methods for writing sign language.
Web-based representation of sign language can be achieved at several levels. At one extreme, video files, with appropriate identifying metadata and time markers, could be used. This approach is the most straightforward, but carries high costs of production, storage, editing, and distribution. A second possibility is to provide animation descriptions, which define the motions of the joints of the body and require much lower bandwidth. Vcom3D, the University of East Anglia, and others have successfully demonstrated sign language representation using the Humanoid Animation ("H-Anim"), VRML, and X3D specifications of the Web3D Consortium. The same organizations have also demonstrated the use of XML-based gesture description languages for this representation. Finally, recent research shows promise for using markup of the written form of spoken languages to simplify client-side translation into sign language. Such an "Accessibility Markup Language" would use a tag set to disambiguate the sense of words (e.g., can="be able" vs. can="metal container") or to resolve the referents of pronouns; a sketch of such markup appears below. This final approach has the most synergy with current WAI guidelines, and has the added benefit that the same method could increase accessibility for other (non-deaf) individuals with limited literacy in the language of the Web page, or improve the accuracy of automated translation into another spoken language.
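To make the markup idea concrete, the following is a minimal sketch, in Python, of how a client-side agent might consume such sense-disambiguating tags. The element and attribute names (a "w" element carrying a "sense" attribute) are hypothetical illustrations, not part of any existing standard; a real tag set would be defined by the kind of guidelines this paper proposes.

    # Minimal sketch: reading hypothetical sense-disambiguation markup.
    # The <w> element and its "sense" attribute are illustrative
    # assumptions, not part of any published specification.
    import xml.etree.ElementTree as ET

    SAMPLE = """<sentence>
      <w sense="be_able">Can</w> you open this
      <w sense="metal_container">can</w>?
    </sentence>"""

    def extract_senses(xml_text):
        """Return (word, sense) pairs for each tagged word."""
        root = ET.fromstring(xml_text)
        return [(w.text.strip(), w.get("sense")) for w in root.iter("w")]

    for word, sense in extract_senses(SAMPLE):
        # A sign language synthesizer would map each (word, sense) pair
        # to a specific sign before rendering the animation.
        print(word, "->", sense)

The key design point is that the author, who knows the intended meaning, resolves each ambiguity once at authoring time, so that every downstream consumer (a sign language synthesizer, a simplified-text renderer, or a machine translation system) can reuse the same annotation.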
With this paper, we wish to initiate consideration of the need for sign language accessibility guidelines for the Web, and to provoke discussion regarding the form, or forms, these guidelines might take.