User Agent Accessibility Guidelines 1.0
W3C Working Draft 31 March 2001
- This version:
  http://www.w3.org/WAI/UA/WD-UAAG10-20010331/
  (Formats: plain text, gzip PostScript, gzip PDF, gzip tar file of HTML, zip archive of HTML)
- Latest version:
  http://www.w3.org/WAI/UA/UAAG10/
- Previous version:
  http://www.w3.org/WAI/UA/WD-UAAG10-20010323
- Editors:
  Ian Jacobs, W3C
  Jon Gunderson, University of Illinois at Urbana-Champaign
  Eric Hansen, Educational Testing Service
- Authors and Contributors:
  See acknowledgements.
Copyright ©1999 - 2001 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C
liability, trademark, document use and software licensing rules apply.
This document provides guidelines for designing user
agents that lower barriers to Web accessibility for people with
disabilities (visual, hearing, physical, and cognitive). User agents include
HTML browsers and other types of software that retrieve and render Web content. A user agent that conforms to these guidelines will promote accessibility
through its own user interface and through other internal facilities, including
its ability to communicate with other technologies (especially assistive
technologies). Furthermore, all users, not just users with
disabilities, are expected to find conforming user agents to be more
usable.
In addition to helping developers of HTML browsers, media players, etc.,
this document will also benefit developers of assistive technologies because it
explains what types of information and control an assistive technology may
expect from a conforming user agent. Technologies not addressed directly by
this document (e.g., technologies for braille rendering) will be essential to
ensuring Web access for some users with disabilities.
This section describes the status of this document at the time of its
publication. Other documents may supersede this document. The latest status of
this document series is maintained at the W3C.
This version of the document takes into account Working Group resolutions to
all issues raised during the second last call review. At their 29 March
2001 teleconference, the User Agent Accessibility Guidelines Working Group
decided to return to last call. The last call document is only expected to
differ from the current document editorially.
Publication as a Working Draft does not imply endorsement by the W3C
Membership. This is a draft document and may be updated, replaced or obsoleted
by other documents at any time. It is inappropriate to cite W3C Working Drafts
as other than "work in progress."
This document is part of a series of accessibility documents published by
the Web Accessibility Initiative (WAI) of the World Wide Web
Consortium (W3C). WAI
Accessibility Guidelines are produced as part of the WAI Technical Activity. The
goals of the User Agent Accessibility
Guidelines Working Group are described in the charter.
A list of current W3C Recommendations and
other technical documents can be found at the W3C Web site.
A separate document, entitled "Techniques for User Agent Accessibility
Guidelines 1.0" [UAAG10-TECHS], provides
suggestions and examples of how each checkpoint might be satisfied. It also
includes references to other accessibility resources (such as platform-specific
software accessibility guidelines) that provide additional information on how a
user agent may satisfy each checkpoint. The techniques provided in "Techniques
for User Agent Accessibility Guidelines 1.0" are
informative examples only, and other strategies may be used or
required to satisfy the checkpoints. The Techniques document is expected to be
updated more frequently than the current guidelines.
The Web Accessibility Initiative provides other resources and educational materials
to promote Web accessibility. Resources include information about accessibility
policies, links to translations of WAI materials into languages other than
English, information about specialized user agents and other tools,
accessibility training resources, and more.
This document specifies requirements that, if satisfied by user agent developers, will lower barriers to
accessibility. This introduction (section 1) provides context for understanding
the guidelines listed in section 2. Section 1
explains the relationship of this document to other accessibility guidelines
published by the Web Accessibility Initiative, identifies which user agents are
expected to conform, notes known limitations of this document, and describes
the relationship of this document to other software design guidelines. Section
3 explains how to make claims that software conforms
to these guidelines and details about the applicability of the requirements for
different kinds of user agents.
"User Agent Accessibility Guidelines 1.0" (UAAG 1.0) is
part of a series of accessibility guidelines published by the Web Accessibility Initiative (WAI). The documents in this
series reflect an accessibility model in which Web content authors, format
designers, and software developers have roles in ensuring that users with
disabilities have access to the Web. These roles intersect and complement each
other as follows:
- Protocol (e.g., HTTP) and content format (e.g., HTML, XML, SVG, etc.)
specifications allow communication on the Web. These specifications include
features that authors should use to create accessible content, and features
that user agents should support through an accessible user interface.
- Authors make use of the accessibility features of different format
specifications, use markup appropriately, write in clear and simple language,
organize a Web site consistently, etc. The "Web Content Accessibility
Guidelines 1.0"
[WCAG10] explains the responsibilities of authors in meeting the
needs of users with disabilities. In User Agent Accessibility Guidelines 1.0,
WCAG 1.0 is considered the reference for what defines accessible Web content.
The "Authoring Tool Accessibility Guidelines 1.0" [ATAG10] explains the
responsibilities of authoring tool developers. An accessible authoring tool
facilitates the creation of accessible Web content and may be operated by users
with disabilities.
- User agent developers design software that conforms to specifications
(including implementation of their accessibility features), provides an
accessible user interface and accessible documentation, and communicates with
other software (notably assistive
technologies).
This document explains the responsibilities of user agents in meeting the
needs of users with disabilities. The requirements of this document interact
with those of the "Web Content Accessibility Guidelines 1.0" [WCAG10] in a
number of ways:
- UAAG 1.0
checkpoint 8.1 requires implementation of the accessibility features of all
implemented specifications. Accessibility features are those identified as such in the specifications and those
that satisfy all of the requirements of WCAG 1.0 [WCAG10].
- UAAG 1.0 checkpoint 12.1
requires conformance to WCAG 1.0 for user agent documentation.
- UAAG 1.0 also incorporates some terms and concepts from WCAG 1.0, a natural
consequence of the fact that the documents were designed to complement one
another.
Formats, authors, and designers all have limitations. No format allows
authors to encode all of their knowledge in a way that a user agent can recognize. A format may lack features required
for accessibility. An author may not make use of the accessibility features of
a format or may misuse a format (which can cause problems for user agents). A
user agent designer may not implement a format specification correctly or
completely. Some requirements of this document take these limitations into
account.
- UAAG 1.0 includes requirements to satisfy the expectations set by WCAG 1.0
"until user agent" clauses. These clauses make additional requirements of
authors in order to compensate for some limitations of deployed user
agents.
- UAAG 1.0 includes several
repair requirements (e.g., checkpoint 2.7 and checkpoint 2.10) for cases where content does not conform to
WCAG 1.0. Furthermore, this document includes some requirements to address
certain widespread authoring practices that are discouraged because they may
cause accessibility or usability problems (e.g., some uses of HTML
frames).
- Except for the indicated repair checkpoints, UAAG 1.0 only requires user
agents to handle what may be
recognized through protocols and formats. For example, user agents
are not expected to recognize that the author has used "clear and simple"
language to express ideas. Please see the section on
checkpoint applicability for more information about what the user agent is
expected to recognize.
This document was designed specifically to improve the accessibility of
mainstream user agents with multimedia capabilities for users with one or more
disabilities (including visual, hearing, physical, and cognitive). In this
context, a mainstream user agent is one designed for the general public to
handle general-purpose content in ordinary operating conditions. It is expected
that a conforming user agent will typically consist
of a Web browser, one or more media players, and possibly other components.
A user agent that conforms to these guidelines will enable access through
its own user interface and through other internal facilities, including its
ability to communicate with other technologies (especially assistive
technologies). Technologies not addressed directly by this document
(e.g., those for braille rendering) will be essential to ensuring Web access
for some users with disabilities. Note that the ability of conforming user
agents to communicate well with assistive technologies will depend in part on
the willingness of assistive technology developers to follow the same standards
and conventions for communication.
This document allows a certain amount of flexibility in the features a user
agent must support in order to conform. For example, some user agents may
conform even though they do not support certain
content types (such as video or audio) or
input modalities (such as mouse or voice). See the section on conformance for more information.
People with (or without) disabilities access the Web with widely varying
sets of capabilities, software, and hardware. Some users with disabilities:
- May not be able to see, hear, move, speak, or may not be able to process
some types of information easily or at all.
- May have difficulty reading or comprehending text.
- May not have or be able to use a keyboard or pointing device.
This document does not include requirements to meet all known accessibility
needs. Some known limitations of this document include the following:
- Braille. This document does not address braille rendering.
- Synthesized speech. This document includes few checkpoints related to
synthesized speech output (in
guideline 5).
- Size and color of non-text content. This document includes some checkpoints
to ensure that the user is able to control the size and color of visually
rendered text content (checkpoints 4.1 and
4.3). This document does not in general address control of the size and
color of visually rendered non-text
content. Note: Resizing capabilities may be
required for conformance to other specifications (e.g., SVG [SVG]).
- Background image interference. The requirement of checkpoint 3.1 to
allow the user to turn off rendering of background images does not extend to
multi-layered rendering.
- Input modalities. This document only includes requirements for keyboard,
pointing device, and voice input modalities. This document includes several
checkpoints related to voice input as part of general input requirements (e.g.,
the checkpoints of guideline 7 and
guideline 11) but does not
otherwise address voice-based navigation or control. Note: The
UAWG intends to coordinate further work on the topics of
voice input and synthesized speech output with groups in W3C's Voice Browser Activity.
- User control of every user interface component. This document includes some
requirements for user control of user interface components that may be changed
through content (see guideline
5). However, these requirements do not account for every user interface
component that the author may affect (e.g., the author might supply a script
that causes text to scroll in the status bar). User agents are required to
follow software usability guidelines (see checkpoint 7.3), which are also expected to include
requirements for user control over user interface behavior.
- Time. This document includes requirements for control of some time
parameters (including
checkpoint 2.4, checkpoint
4.4, checkpoint 4.5,
and checkpoint 4.11).
The requirements are for time parameters that the user agent recognizes and
controls. This document does not include requirements for control of time
parameters managed on the server.
- Security. This document does not address security issues that may arise as
a result of these requirements. For instance, requirements that software be
able to read and write content and user interface information through APIs
raise security issues. See the section on
restricted functionality and conformance.
- Intellectual property. This document does not address intellectual property
issues that may arise as a result of these requirements.
Considerable effort has been made to ensure that the requirements of this
document are compatible with other good software design practices. However,
this document does not purport to be a complete guide to good software design.
For instance, the general topic of user interface design for computer software
exceeds the scope of this document, though some user interface requirements
have been included because of their importance to accessibility. The
"Techniques for User Agent Accessibility Guidelines 1.0"
[UAAG10-TECHS] includes some references to general software design
guidelines and platform-specific accessibility guidelines (see checkpoint 7.3). Involving
people with disabilities in the design and testing of software will generally
improve the accessibility of the software.
Installation is an important aspect of both accessibility and general
software usability. On platforms where a user can install a user agent, the
installation (and update) procedures need to be accessible. This document does
not include a checkpoint requiring that installation procedures be accessible.
Since this document considers installation to be part of software usage, the
different aspects of installation (user interface, documentation,
operating environment conventions, etc.) are already covered by the
complete set of checkpoints.
Many users without disabilities are likely to benefit from the requirements
developed to benefit users with disabilities. For example, users without
disabilities:
- may have a text-only screen, a small screen, or a slow Internet connection
(e.g., via a mobile phone browser). These users are likely to benefit from the
same features that provide access to people with low vision or blindness.
- may be in a situation where their eyes, ears, or hands are busy or
interfered with (e.g., driving to work, working in a noisy environment, etc.).
These users are likely to benefit from the same features that provide access to
people who cannot use a mouse or keyboard due to a visual, hearing, or physical
disability.
- may not understand fluently the natural language of spoken content. These
users are likely to benefit from the same visual rendering of text equivalents that make spoken language
accessible to people with a hearing disability.
Software that satisfies the requirements of this document is expected to be
more flexible, manageable, extensible, and beneficial to all users. For
example, a user agent architecture that allows programmatic access to content and the
user interface will encourage software modularity and reuse, and
will enable operation by scripting tools and automated test engines in addition
to assistive technologies.
The twelve guidelines in this document state general principles for the
development of accessible user agents. Each guideline includes:
- The guideline number.
- The statement of the guideline.
- The rationale behind the guideline and identification of some groups of
users who benefit from it.
- A list of checkpoint definitions. This list may be split into groups of
related checkpoints. For instance, the list might be split into one group of
"checkpoints for visually rendered text" and second group of "checkpoints for
audio volume control"." Within each group, checkpoints are ordered according to
their priority, e.g., Priority 1 before Priority 2.
Within a guideline, checkpoint groupings and checkpoint order have no bearing
on conformance.
Each checkpoint definition includes:
- The checkpoint number.
- The statement of the checkpoint. The statement of the checkpoint is one or
more requirements that must be satisfied by the user agent (i.e., the "subject of the claim") for the purposes of conformance. The "user agent" may consist of more than
one software component, as explained in the section on well-formed conformance claims.
- The priority of the checkpoint.
- A "content or user agent or both" label
indicating whether the requirements of the checkpoint must be satisfied for content only, for user agent features only, or
for both.
- Content type labels (zero or more).
Content type labels are explained in the section on
conformance.
-
Informative notes about the checkpoint (beginning with the word
"Note"). These notes do not state
requirements that must be satisfied as part of conformance; they are
informative only. They are meant to clarify the scope of the checkpoint through
further description, examples, cross references, and commentary.
Note: Some checkpoints in this document are more general than others,
and some may overlap in scope. Therefore, a checkpoint may be identified as a
"special case" or an "important special case" of one or more other
checkpoints.
- A link to
informative rationale, implementation details, references, and more
information in "Techniques for User Agent Accessibility Guidelines 1.0"
[UAAG10-TECHS].
Each checkpoint is intended to express one or more minimal requirements
clearly, so that someone evaluating a user agent may verify that it satisfies
the requirements. User agent developers are encouraged to surpass the minimal
requirements expressed by the checkpoints. Indeed, for some requirements, it is
expected that developers will find it easier or less costly to implement a
solution that is more general than one that would only satisfy the minimal
requirements of a checkpoint. Both this document and "Techniques for User Agent
Accessibility Guidelines 1.0" [UAAG10-TECHS] suggest
techniques to help user agent developers meet or surpass the minimal
requirements. Note: In some cases, though the requirement of a
checkpoint may be clear, without documentation from vendors (e.g., about
implemented APIs), it may be difficult to verify that the
subject of a conformance claim has satisfied the
requirement. Some checkpoints (e.g., those requiring developers to follow
conventions or implement specifications defined outside this document) are
inherently more subject to interpretation than others.
Note: The requirements of certain checkpoints in this
document bear upon the requirements of others. For instance, the keyboard
requirements of checkpoint
1.1 influence the input device requirements of the other checkpoints. As a
result, these other checkpoints need only state "Allow configuration" instead
of "Allow configuration through the keyboard." Because the document "factors
out" requirements in this manner, first-time readers are encouraged to read the
context provided for each checkpoint, including the guideline prose, the
surrounding checkpoints (since nearby checkpoints are often related), notes
after checkpoints, and associated techniques (in the Techniques document
[UAAG10-TECHS]). The checklist [UAAG10-CHECKLIST] is
also a useful tool (e.g., for evaluating a user agent for conformance), but
does not provide the same contextual support.
Each checkpoint in this document is assigned a priority that indicates its
importance for users with disabilities.
- [Priority
1]
- This checkpoint must be satisfied by user agents;
otherwise, one or more groups of users with disabilities will find it impossible
to access the Web. Satisfying this checkpoint is a basic requirement for
enabling some people to access the Web.
- [Priority
2]
- This checkpoint should be satisfied by user agents;
otherwise, one or more groups of users with disabilities will find it difficult
to access the Web. Satisfying this checkpoint will remove significant barriers
to Web access for some people.
- [Priority
3]
- This checkpoint may be satisfied by user agents to make it
easier for one or more groups of users with disabilities to access information.
Satisfying this checkpoint will improve access to the Web for some people.
Since people use a variety of devices for input and output, user agent
developers need to ensure redundancy in the
user interface. The user may have to operate the user interface with
a variety of input devices (keyboard, pointing device, speech input,
etc.) and output devices (graphical
display, speech output, braille display, etc.).
Though it may seem contradictory, enabling full user agent operation through
the keyboard is an important part of promoting device-independence given
today's user agents. In addition to the fact that some form of keyboard is
supported by most platforms, there are several reasons for this:
- For some users (e.g., users with blindness or physical disabilities),
operating a user agent with a pointing device may be difficult or impossible
since it requires tracking the pointing device position in a two-dimensional
visual space. Keyboard operation does not generally require as much movement
"through space".
- Some assistive technologies that support a diversity of input and output
mechanisms use keyboard APIs for communication with some
user agents; see checkpoint
6.7. People who cannot or do not use a pointing device may interact with
the user interface with the keyboard, through voice input, a head wand, touch
screen, or other device.
While this document only requires keyboard operation for conformance, it promotes device-independence by also
allowing people to claim conformance for full pointing device support or full
voice support.
As a way to promote output device independence, this guideline requires
support for text messages in the user interface because text may be rendered
visually, as synthesized speech, and as braille.
The API requirements of
guideline 6 also promote device independence by ensuring communication with
specialized software.
Checkpoints
-
1.1 Ensure that the user can operate the user agent fully through
keyboard input alone. [Priority 1] Both
content and user agent.
- Note: For example, ensure that the user can interact with
enabled
elements, select content, navigate viewports, configure the user
agent, access documentation, install the user agent, operate controls of the
user interface, etc., all entirely through keyboard input. It is also possible
to claim conformance to this document for full
support through pointing device input and voice input. See the section on input modality labels.
-
Techniques for checkpoint 1.1
-
1.2 For the element with content
focus, allow the user to activate
any explicitly associated input device
event handlers through keyboard input alone. [Priority 1] Content only.
- Note: The requirements for this
checkpoint refer to any explicitly associated input device
event handlers associated with an
element, independent of the input
modalities for which the user agent conforms. For example, suppose that an
element has an explicitly associated handler for pointing device events. Even
when the user agent only conforms for keyboard input (and does not conform for
the pointing device, for example), this checkpoint requires the user agent to
allow the user to activate that handler with the keyboard. This checkpoint is
an important special case of checkpoint 1.1. Please refer to the checkpoints of guideline 9 for more information about
focus requirements.
-
Techniques for checkpoint 1.2
- 1.3 Ensure that every message (e.g., prompt, alert,
notification, etc.) that is a non-text
element and is part of the user
agent user interface has a text
equivalent. [Priority 1] User
agent only.
- Note: For example, if the user is
alerted of an event by an audio cue, a visually-rendered text equivalent in the
status bar would satisfy this checkpoint. Per checkpoint 6.4, a text equivalent for each such message must
be available through a standard API. See also checkpoint 6.5 for requirements for programmatic alert of
changes to the user interface.
- Techniques for
checkpoint 1.3
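To make checkpoint 1.2 more concrete, the following is a minimal, script-level
sketch of how a pointing-device handler explicitly associated with the focused
element might be activated from the keyboard. The guidelines do not prescribe
any implementation technique; the Enter key binding and the synthetic event
used here are illustrative assumptions only.

    // Illustrative sketch only (checkpoint 1.2): let keyboard input trigger an
    // explicitly associated pointing-device handler. The Enter binding and the
    // synthetic "click" event are assumptions, not requirements of this document.
    document.addEventListener("keydown", (event: KeyboardEvent) => {
      if (event.key !== "Enter") {
        return;
      }
      const focused = document.activeElement; // element with content focus
      if (focused instanceof HTMLElement && typeof focused.onclick === "function") {
        // Activate the author-supplied handler through keyboard input alone.
        focused.dispatchEvent(new MouseEvent("click", { bubbles: true, cancelable: true }));
      }
    });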
The checkpoints in this section require the user agent to provide access to
all content through a series of complementary mechanisms designed so that if
one fails, another will provide some access. The following preferences are
embodied in the checkpoints:
- Not all content is rendered at all times. Automatic decision by the user
agent about when and where to render conditional content is preferred, but manual
choice by the user may be necessary for access.
- Structure is preferred (both the author's specified preferences and the
user's structured access), but unstructured access may be necessary for access
to all content.
- Rendering according to specification is preferred, but a source view of
text content may be necessary for access (e.g., because of user-side error
conditions, authoring errors, inadequate specification, or incorrect user agent
implementation). For example, the user may have to look at
URIs for information, HTML comments,
XML element names, or script data. The user agent should respect
authoring synchronization cues for content that changes over time, but also
needs to allow the user to control the time intervals when user input is
possible.
- Local control of rendering is important; global configuration of rendering
preferences is convenient for access.
Ensuring access to
conditional content benefits all users since some users may not have
access to some content due to a technological limitation (e.g., their mobile
browser cannot display graphics) or simply a configuration preference (e.g.,
they have a slow Internet connection and prefer not to download movies or
images).
Checkpoints
- 2.1 For all format specifications
that the user agent
implements, make content
available through the rendering processes described by those specifications.
[Priority 1] Content only.
- Note: This includes format-defined
interactions between author preferences and user preferences/capabilities
(e.g., when to render the "alt" attribute in HTML [HTML4], the rendering order
of nested OBJECT elements in HTML, test attributes in SMIL [SMIL], and the
cascade in CSS2 [CSS2]). If a conforming user agent does not render a content type,
it should allow the user to choose a way to handle that content (e.g., by
launching another application, by saving it to disk, etc.). This checkpoint
does not require that all content be available through each viewport.
-
Techniques for checkpoint 2.1
- 2.2 For all text formats that the user
agent
implements, provide a view of the text source. Text formats include at least the
following: (1) all media objects given an Internet media type of "text" (e.g.,
text/plain, text/HTML, or text/*), and (2) all SGML and XML applications,
regardless of Internet media type (e.g., HTML 4.01, XHTML 1.1, SMIL, SVG,
etc.). [Priority 1] Content only.
- Note: Refer to [RFC2046],
section 4.1 for information about the "text" Internet media type. A user agent
would also satisfy this checkpoint by providing a source view for any text
format, not just implemented text formats.
- Techniques
for checkpoint 2.2
- 2.3 Allow global
configuration so that, for each piece of unrendered
conditional content "C", the user agent
alerts the user to the existence of the content and provides access
to it. Provide access to this content according to format specifications or,
where unspecified, as follows. If C has a close relationship (e.g., C is a
summary, title, alternative, description, expansion, etc.) with another piece
of rendered content D, do at least one of the following: (1a) render C in place
of D, (2a) render C in addition to D, (3a) provide access to C by querying D,
or (4a) allow the user to follow a link to C from the context of D. If C does
not have a close relationship to other content (i.e., a relationship other than
just a document tree relationship), do at least one of the following: (1b)
render a
placeholder for C, (2b) provide access to C by query (e.g., allow
the user to query an element for its
attributes), or (3b) allow the user to follow a link in context to
C. [Priority 1] Content only.
- Note: The configuration requirement of
this checkpoint is global; the user agent is only required to provide one
switch that turns on or off these alert and access mechanisms. To satisfy this
checkpoint, the user agent may provide access on an element-by-element basis
(e.g., by allowing the user to query individual elements) or for all elements
(e.g., by offering a configuration to render conditional content all the time).
For instance, an HTML user agent might allow users to query each element for
access to conditional content supplied for the "alt", "title", and "longdesc"
attributes. Or, the user agent might allow configuration so that the value of
the "alt" attribute is rendered in place of all IMG elements (while other
conditional content might be made available through another mechanism).
-
Techniques for checkpoint 2.3
- 2.4 For
content where user input is only possible within a finite time
interval controlled by the user agent, allow
configuration to make the time interval "infinite". Do this by
pausing automatically at the end of each time interval where user input is
possible, and resuming automatically after the user has explicitly completed
input. In this configuration, alert the user when the session has been paused
and which enabled
elements are time-sensitive.
[Priority 1] Content only.
- Note: In this configuration, the user
agent may have to pause the presentation more than once if there is more than
one opportunity for time-sensitive input. In SMIL 1.0 [SMIL], for example, the
"
begin
", "end
", and "dur
" attributes synchronize presentation components.
The user may explicitly complete input in many different ways (e.g., by
following a link that replaces the current time-sensitive resource with a
different resource). This checkpoint does not apply
when the user agent cannot recognize
the time interval in the presentation format, or when the user agent cannot
control the timing (e.g., because it is controlled by the server).
- Techniques
for checkpoint 2.4
-
2.5 Allow
configuration or control so
that text
transcripts,
collated text transcripts, captions,
and auditory
descriptions are rendered at the same time as the associated audio tracks and
visual tracks. [Priority 1]
Content only.
- Content type labels:
Video,
Audio.
- Note: This checkpoint is an important
special case of checkpoint
2.1.
-
Techniques for checkpoint 2.5
- 2.6 Respect synchronization cues
during rendering. [Priority 1] Content
only.
- Content type labels:
Video,
Audio.
- Note: This checkpoint is an important
special case of checkpoint
2.1.
- Techniques
for checkpoint 2.6
- 2.7 Allow
configuration to generate repair
text when the user agent
recognizes that the author has failed to provide
conditional content that was required by the format specification.
The user agent may satisfy this checkpoint by basing the repair text on any of
the following available sources of information: URI reference, content type, or
element type. [Priority 2] Content
only.
- Note: Some markup languages (such as HTML 4 [HTML4] and SMIL
1.0 [SMIL]) require the author to provide conditional content for some elements
(e.g., the "alt" attribute on the IMG element). Repair text based on URI
reference, content type, or element type is sufficient to satisfy the
checkpoint, but may not result in the most effective repair. Information that
may be recognized as relevant to repair might not be "near" the missing
conditional content in the document object. For instance, instead of generating
repair text from a simple URI reference, the user agent might look for helpful
information near a different instance of the URI reference in the same document
object, or might retrieve useful information (e.g., a title) from the resource
designated by the URI reference. (A sketch of one such repair strategy appears
after this list of checkpoints.)
- Techniques for
checkpoint 2.7
- 2.8 Allow
configuration so that when the user agent
recognizes that
conditional content required by the format specification is present
but empty (e.g., the empty string), the user agent either (1) generates no repair text, or (2) generates repair text as
described in checkpoint 2.7.
[Priority 3] Content only.
- Note: In some authoring scenarios, an empty string of text
(e.g., "
alt=''
") may be considered to be an appropriate text equivalent (for instance, when some non-text
content has no other function than pure decoration, or an image is
part of a "mosaic" of several images and doesn't make sense out of the mosaic).
Please refer to the Web Content Accessibility Guidelines 1.0 [WCAG10] for
more information about text equivalents.
- Techniques for
checkpoint 2.8
- 2.9 Allow configuration to render all
conditional content automatically. Provide access to this content
according to format specifications or where unspecified, by applying one of the
following techniques described in checkpoint 2.3: 1a, 2a, or 1b.
[Priority 3] Content only.
- Note: The user agent satisfies this checkpoint if it
satisfies checkpoint 2.3
by applying techniques 1a, 2a, or 1b. For instance, an HTML user agent might
allow configuration so that the value of the "alt" attribute is rendered in
place of all IMG elements (while other conditional content might be made available
through another mechanism).
-
Techniques for checkpoint 2.9
-
2.10 Allow
configuration not to render content in unsupported natural
languages. Indicate to the user in context that author-supplied
content has not been rendered. [Priority 3]
Content only.
- Note: For example, use a text
substitute or accessible graphical icon to indicate that content in a
particular language has not been rendered. This checkpoint does not require the
user agent to allow different configurations for different natural
languages.
-
Techniques for checkpoint 2.10
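As noted in checkpoint 2.7 above, here is a minimal sketch of one repair
strategy: deriving repair text for a missing "alt" value from the URI
reference, and falling back to the element type. The heuristics, the function
name, and the example URI are assumptions made for illustration; they are not
requirements of this document.

    // Illustrative sketch only (checkpoint 2.7): generate repair text when
    // required conditional content (e.g., "alt" on IMG) is missing.
    function repairTextFor(elementType: string, uriReference: string | null): string {
      if (uriReference) {
        // Derive a label from the last path segment of the URI reference,
        // e.g. "board-meeting.png" becomes "board-meeting".
        const lastSegment = uriReference.split("?")[0].split("/").pop() ?? "";
        const label = decodeURIComponent(lastSegment.replace(/\.[a-z0-9]+$/i, ""));
        if (label) {
          return label;
        }
      }
      return `[${elementType}]`; // fall back to the element type, e.g. "[IMG]"
    }

    // Hypothetical example: an IMG element whose required "alt" text is absent.
    console.log(repairTextFor("IMG", "http://example.org/images/board-meeting.png"));
    // -> "board-meeting"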
Some content or behavior specified by the author may make the user agent
unusable or may obscure information. For instance, flashing content may trigger
seizures in people with photosensitive epilepsy, or may make a Web page too
distracting to be usable by someone with a cognitive disability. Blinking text
can affect screen reader users, since screen readers (in conjunction with
speech synthesizers or braille displays) may re-render the text every time it
blinks. Distracting background images, colors, or sounds may make it impossible
for users to see or hear other content. Dynamically changing Web content may
cause problems for some assistive
technologies. Scripts that cause unanticipated changes (viewports that open, automatically redirected
or refreshed pages, etc.) may disorient some users with cognitive
disabilities.
This guideline requires the user agent to allow configuration so that, when
loading Web
resources, the user agent does not render content in a manner that
may pose accessibility problems. Requirements for interactive control of rendered content are part of guideline 4.
Checkpoints
-
3.1 Allow
configuration not to render background images. In this
configuration, provide an option to alert the user when a background image is
available (but has not been rendered).
[Priority 1] Content only.
- Content type labels:
Image.
- Note: This checkpoint only requires control of background
images for "two-layered renderings", i.e., one rendered background image with
all other content rendered "above it". When background images are not rendered,
user agents should render a solid background color instead (see checkpoint 4.3). In this
configuration, the user agent is not required to retrieve background images
from the Web.
-
Techniques for checkpoint 3.1
- 3.2
Allow
configuration not to render audio, video,
or animated images except on explicit request from the user. In this
configuration, provide an option to render a
placeholder in context for each unrendered source of audio, video,
or animated image. When placeholders are rendered, allow the user to view the
original author-supplied content associated with each placeholder. [Priority 1] Content only.
- Content type labels:
Animation, Video,
Audio.
- Note: This checkpoint requires configuration for content
rendered without any user interaction (including content rendered on load or as
the result of a script), as well as content rendered as the result of user
interaction that is not an explicit request (e.g., when the user activates a
link). When configured not to render content except on explicit user request,
the user agent is not required to retrieve the audio, video, or animated image
from the Web until requested by the user. See also checkpoint 3.8, checkpoint 4.5, checkpoint 4.9, and checkpoint
4.10.
-
Techniques for checkpoint 3.2
- 3.3
Allow
configuration to render animated or
blinking text as motionless, unblinking text.
[Priority 1] Content only.
- Content type labels:
VisualText.
- Note: A "stock quote ticker" is an example of animated
text. This checkpoint does not apply for blinking and
animation effects that are caused by mechanisms
that the user agent cannot
recognize. This checkpoint requires configuration because blinking
effects may be disorienting to some users but useful to others, for example
users who are deaf or hard of hearing.
-
Techniques for checkpoint 3.3
- 3.4 Allow
configuration not to execute any executable
content (e.g., scripts and applets). In this configuration, provide
an option to alert the user when executable content is available (but has not
been executed). [Priority 1] Content
only.
- Note: Scripts and applets may provide
very useful functionality, not all of which causes accessibility problems.
Developers should not consider that the user's ability to turn off scripts is
an effective way to improve content accessibility; turning off scripts means
losing the benefits they offer. Instead, developers should provide users with
finer control over user agent or content behavior known to raise accessibility
barriers. The user should only have to turn off scripts as a last resort.
- Techniques
for checkpoint 3.4
-
3.5 Allow
configuration so that client-side content refreshes (i.e., those
initiated by the user agent, not the server) do not change content except on explicit user request. Allow the user to
request the new content on demand (e.g., by following a link or confirming a
prompt). Alert the user, according to the schedule specified by the author,
whenever fresh content is available (to be obtained on explicit user request).
[Priority 1] Content only.
-
Techniques for checkpoint 3.5
-
3.6 Allow
configuration so that a "client-side redirect" (i.e., one initiated
by the user agent, not the server) does not change
content except on explicit
user request. Allow the user to access the new content on demand
(e.g., by following a link or confirming a prompt). The user agent is not
required to provide these functionalities for client-side redirects that occur
instantaneously (i.e., when there is no delay before the new content is
retrieved). [Priority 2] Content only.
-
Techniques for checkpoint 3.6
- 3.7 Allow
configuration not to render images. In this configuration, provide
an option to render a
placeholder in context for each unrendered image. When placeholders
are rendered, allow the user to view the original author-supplied content
associated with each placeholder.
[Priority 2] Content only.
- Content type labels:
Image.
- Note: See also checkpoint
3.8.
- Techniques for
checkpoint 3.7
- 3.8
Once the user has viewed the original author-supplied content associated with a
placeholder, allow the user to turn off
the rendering of the author-supplied content.
[Priority 3] Content only.
- Content type labels:
Animation, Video,
Image, Audio.
- Note: For example, if the user agent
substitutes the author-supplied content for the placeholder in context, allow
the user to "toggle" between placeholder and the associated content. Or, if the
user agent renders the author-supplied content in a separate viewport, allow
the user to close that viewport. See checkpoint 3.2 and checkpoint 3.7.
-
Techniques for checkpoint 3.8
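As one illustration of checkpoint 3.5 above, a user agent (or an extension
standing in for one) could recognize an author-specified client-side refresh
and, in the configuration required by the checkpoint, describe it to the user
rather than changing content automatically. The selector, the message text, and
the script-level approach below are assumptions made purely for illustration.

    // Illustrative sketch only (checkpoint 3.5): detect a client-side refresh
    // declared with a META element and describe it to the user; a conforming
    // user agent in this configuration would not act on it automatically.
    const refreshMeta = document.querySelector<HTMLMetaElement>('meta[http-equiv="refresh" i]');
    if (refreshMeta) {
      const [delay, urlPart] = refreshMeta.content.split(";");
      const targetUri = urlPart ? urlPart.replace(/^\s*url\s*=\s*/i, "") : location.href;
      // The new content would only be fetched on explicit user request; here
      // the alert is simply logged.
      console.info(`Fresh content available (after ${delay.trim()}s) at ${targetUri}; not loaded automatically.`);
    }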
Providing access to content (see guideline 2) includes enabling users to configure and
control its rendering. Users with low vision may require that text
be rendered at a size larger than the size specified by the author or by the
user agent's default rendering. Users with color blindness may need to impose
or prevent certain color combinations.
For dynamic presentations such as synchronized multimedia presentations
created with SMIL 1.0
[SMIL], users with cognitive, hearing, visual, and physical
disabilities may not be able to interact with a presentation within the time
frame assumed by the author. To make the presentation accessible to these
users, user agents rendering multimedia content (audio, video, and other animations) have to allow the user to control
the playback rate of this content, and also to stop, start, pause, reverse, and
advance it manually. User agents rendering audio have to allow the user to
control the audio volume globally and to allow the user to control
independently distinguishable audio tracks.
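At the level of a scripting API such as HTMLMediaElement (an API this document
does not refer to, used here only to make the idea concrete), slowing and
pausing a recognized video track might look like the hedged sketch below; the
50% figure is simply one allowed value in the 40%-60% range required by
checkpoint 4.4.

    // Illustrative sketch only: user-initiated rate control and pausing of a
    // recognized video track (cf. checkpoints 4.4 and 4.5). The values and the
    // use of HTMLMediaElement are assumptions for illustration.
    const video = document.querySelector("video");
    if (video) {
      video.playbackRate = 0.5; // one setting between 40% and 60% of original speed
      video.pause();            // stop/pause/resume controls remain with the user
      // video.play();          // resume on explicit user request
    }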
User agents with speech synthesis capabilities need to allow users to
control various speech output parameters. For instance, users who are blind and
hard of hearing may not be able to make use of high or low frequencies; these
users have to be able to configure their speech synthesizers to use suitable
frequencies.
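To make these speech parameters concrete, the sketch below uses the Web Speech
API, an API this document does not reference; the particular rate and pitch
values are illustrative assumptions only (see checkpoints 4.11 and 4.14).

    // Illustrative sketch only: configuring synthesized speech rate and pitch
    // (cf. checkpoints 4.11 and 4.14). The API and the values are assumptions.
    const utterance = new SpeechSynthesisUtterance("Example sentence.");
    utterance.rate = 0.8;  // slower than the synthesizer's default rate of 1
    utterance.pitch = 1.2; // a slightly higher average speaking frequency
    window.speechSynthesis.speak(utterance);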
Note: The checkpoints in this guideline apply to all
content, including
conditional content when it is rendered.
Checkpoints for visually rendered text
- 4.1
Allow global
configuration and control
over the reference size of rendered text, with an option to override reference
sizes specified by the author or user agent defaults. Allow the user to choose
from among the full range of font sizes supported by the
operating environment.
[Priority 1] Content only.
- Content type labels:
VisualText.
- Note: The reference size of rendered
text corresponds to the default value of the CSS2 'font-size' property, which
is 'medium' (refer to CSS2 [CSS2], section 15.2.4). For example,
in HTML, this might be paragraph text. The default reference size of rendered
text may vary among user agents. User agents may offer different mechanisms to
allow control of the size of rendered text (e.g., font size control, zoom,
magnification, etc.). Refer, for example, to the Scalable Vector Graphics
specification
[SVG] for information about scalable rendering.
-
Techniques for checkpoint 4.1
-
4.2 Allow global
configuration of the font family of all rendered text, with an
option to override font families specified by the author or by user agent
defaults. Allow the user to choose from among the full range of font families
supported by the
operating environment.
[Priority 1] Content only.
- Content type labels:
VisualText.
- Note: For example, allow the user to
specify that all
text is to be rendered in a particular
sans-serif font family. For text that
cannot be rendered properly using the user's preferred font family, the user
agent may substitute an alternative font family.
-
Techniques for checkpoint 4.2
- 4.3
Allow global
configuration of the foreground and background color of all rendered
text, with an option to override foreground and background colors specified by
the author or user agent defaults. Allow the user to choose from among the full
range of colors supported by the operating environment.
[Priority 1] Content only.
- Content type labels:
ColorText.
- Note: User configuration of foreground
and background colors may inadvertently lead to the inability to distinguish
ordinary text from selected text, focused text, etc. See checkpoint 10.3 for more
information about highlight styles.
-
Techniques for checkpoint 4.3
Checkpoints for multimedia presentations and other
presentations that change continuously over time
- 4.4 Allow the user to slow the
presentation rate of audio and
animations (including video and animated images). For a
visual track, provide at least one
setting between 40% and 60% of the original speed. For a prerecorded audio track including
audio-only presentations, provide at least one setting between 75%
and 80% of the original speed. When the user agent allows the user to slow the
visual track of a synchronized multimedia presentation to between 100% and 80%
of its original speed, synchronize the visual and audio tracks. Below 80%, the
user agent is not required to render the
audio track. The user agent is not required to satisfy this
checkpoint for audio and animations whose
recognized role is to create a purely stylistic effect. [Priority 1] Content only.
- Content type labels:
Animation, Audio.
- Note: Purely stylistic effects include background sounds,
decorative animated images, and effects caused by style sheets. The style
exception of this checkpoint is based on the assumption that authors have
satisfied the requirements of the "Web Content Accessibility Guidelines 1.0"
[WCAG10] not to convey information through style alone (e.g.,
through color alone or style sheets alone). See checkpoint 2.6 and checkpoint 4.7.
- Techniques
for checkpoint 4.4
- 4.5 Allow the user to stop, pause,
resume, fast advance, and fast reverse audio and
animations (including video and animated images) that last three or
more seconds at their default playback rate. The user agent is not required to
satisfy this checkpoint for audio and animations whose
recognized role is to create a purely stylistic effect. The user
agent is not required to play synchronized audio during fast advance or reverse
of animations (though doing so may help orient the user). [Priority 1] Content only.
- Content type labels:
Animation, Audio.
- Note: See
checkpoint 4.4 for more information about the exception for purely
stylistic effects. This checkpoint applies to content that is either rendered
automatically or on request from the user. The requirement of this checkpoint
is for control of each source of audio and animation that is recognized as distinct. Respect synchronization
cues per checkpoint
2.6.
-
Techniques for checkpoint 4.5
- 4.6 For
graphical viewports, allow the user to position text
transcripts,
collated text transcripts, and
captions in the viewport. Allow the user to choose from among at
least the range of positions available to the author (e.g., the range of
positions allowed by the markup or style language).
[Priority 1] Content only.
-
Techniques for checkpoint 4.6
-
4.7 Allow the user to slow the presentation rate of audio and animations (including video and animated
images) not covered by
checkpoint 4.4. The same speed percentage requirements of checkpoint 4.4 apply. [Priority 2] Content only.
- Content type labels:
Animation, Audio.
- Note: User agents automatically satisfy
this checkpoint if they satisfy
checkpoint 4.4 for all audio and animations.
-
Techniques for checkpoint 4.7
-
4.8 Allow the user to stop, pause, resume, fast advance, and fast
reverse audio and
animations (including video and animated images) not covered by checkpoint 4.5. [Priority 2] Content only.
- Content type labels:
Animation, Audio.
- Note: User agents automatically satisfy
this checkpoint if they satisfy checkpoint 4.5 for all audio and animations.
-
Techniques for checkpoint 4.8
Checkpoints for audio volume control
-
4.9 Allow global
configuration and control of
the volume of all audio, with an option to override audio volumes specified by
the author or user agent defaults. The user must be able to choose zero volume
(i.e., silent).
[Priority 1] Content only.
- Content type labels:
Audio.
- Note: User agents should allow
configuration and control of volume through available
operating environment controls.
-
Techniques for checkpoint 4.9
-
4.10 Allow independent control of
the volumes of distinct audio sources synchronized to play simultaneously.
[Priority 1] Content only.
- Content type labels:
Audio.
- Note: Sounds that play at different times are
distinguishable and therefore independent control of their volumes is not
required by this checkpoint (since volume control required by checkpoint 4.9
suffices). The user agent may satisfy this checkpoint by allowing the user to
control independently the volumes of all distinct audio sources. The user
control required by this checkpoint includes the ability to override
author-specified volumes for the relevant sources of audio. See also checkpoint
4.12.
-
Techniques for checkpoint 4.10
Checkpoints for synthesized speech
-
4.11 Allow
configuration and control of
the synthesized speech rate, according to the full range offered by the speech
synthesizer. [Priority 1] Content
only.
- Content type labels:
Speech.
- Note: The range of speech rates offered
by the speech synthesizer may depend on natural language.
-
Techniques for checkpoint 4.11
-
4.12 Allow control of
the synthesized speech volume, independent of other sources of audio. [Priority 1] Content only.
- Content type labels:
Speech.
- Note: The user control required by this checkpoint
includes the ability to override author-specified speech volume. See also checkpoint
4.10.
-
Techniques for checkpoint 4.12
- 4.13 Allow configuration of speech characteristics
according to the full range of values offered by the speech synthesizer. [Priority 1] Content only.
- Note: Some speech synthesizers allow
users to choose values for speech characteristics at a higher abstraction
layer, i.e., by choosing from preset options that group several
characteristics. Some typical options one might encounter include: "adult male
voice", "female child voice", "robot voice", "pitch", "stress", etc. Ranges for
values may vary among speech synthesizers.
-
Techniques for checkpoint 4.13
-
4.14 Allow
configuration of the following speech characteristics: pitch, pitch
range, stress, richness. Pitch refers to the average frequency of the speaking
voice. Pitch range specifies a variation in average frequency. Stress refers to
the height of "local peaks" in the intonation contour of the voice. Richness
refers to the richness or brightness of the voice.
[Priority 2] Content only.
- Note: This checkpoint is more specific
than checkpoint
4.13: it requires support for the voice characteristics listed. Definitions
for these characteristics are taken from section 19 of the Cascading Style
Sheets Level 2 Recommendation [CSS2]; please refer to that
specification for additional
informative descriptions. Some speech synthesizers allow users to
choose values for speech characteristics at a higher abstraction layer, i.e.,
by choosing from preset options distinguished by "gender", "age", "accent",
etc. Ranges of values may vary among speech synthesizers.
- Content type labels:
Speech.
-
Techniques for checkpoint 4.14
-
4.15 Provide support for
user-defined extensions to the speech dictionary, as well as the following
functionalities: spell-out (spell text one character at a time or according to
language-dependent pronunciation rules), speak-numeral (speak a numeral as
individual digits or as a full number), and speak-punctuation (speak
punctuation literally or render as natural pauses).
[Priority 2] Content only.
- Note: Definitions for the
functionalities listed are taken from section 19 of the Cascading Style Sheets
Level 2 Recommendation [CSS2]; please refer to that
specification for additional
informative descriptions.
-
Techniques for checkpoint 4.15
Checkpoints related to style sheets
- 4.16
For user agents that support
style sheets, allow the user to choose from (and apply) available
author and
user style sheets or to ignore them.
[Priority 1] Both content and user agent.
- Note: By definition, the user agent's default style sheet is always
present, but may be overridden by author or user styles. Developers should not
consider that the user's ability to turn off author and user style sheets is an
effective way to improve content accessibility; turning off style sheet support
means losing the many benefits they offer. Instead, developers should provide
users with finer control over user agent or content behavior known to raise
accessibility barriers. The user should only have to turn off author and user
style sheets as a last resort.
-
Techniques for checkpoint 4.16
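One way a user agent can realize the global text configuration of checkpoints
4.1, 4.2, and 4.3 is through a user style sheet that overrides author styles
(see checkpoint 4.16 above). The sketch below applies such a style sheet at
script level; the specific values, the universal selector, and the use of
"!important" are illustrative assumptions, not requirements of this document.

    // Illustrative sketch only: a user style sheet overriding author-specified
    // text size, font family, and colors (cf. checkpoints 4.1-4.3 and 4.16).
    const userPrefs = {
      fontSizePx: 24,
      fontFamily: "sans-serif",
      foreground: "#000000",
      background: "#ffffe0",
    };

    const userStyle = document.createElement("style");
    userStyle.textContent = `
      * {
        font-size: ${userPrefs.fontSizePx}px !important;
        font-family: ${userPrefs.fontFamily} !important;
        color: ${userPrefs.foreground} !important;
        background-color: ${userPrefs.background} !important;
      }
    `;
    document.head.appendChild(userStyle);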
Control of viewport
behavior is important to accessibility. For people with visual disabilities or
certain types of learning disabilities, it is important that the point of
regard -- what the user is presumed to be viewing -- remain as
stable as possible. Unexpected changes may cause users to lose track of how
many
viewports are open, which viewport has
the current
focus, etc. This guideline includes requirements for control of
opening and closing viewports, the relative position of graphical viewports,
changes to focus, and inadvertent form submissions and micropayments.
Checkpoints
-
5.1 Allow
configuration so that the
current focus does not move automatically to
viewports that open without explicit user request. Configuration is not
required if the current focus can only ever be moved by explicit user request.
[Priority 2] Both content and user
agent.
- Note: For example, allow configuration
so that neither the current
focus nor the pointing device jump automatically to a viewport that
opens without explicit user request.
-
Techniques for checkpoint 5.1
- 5.2 For graphical user interfaces, allow
configuration so that the viewport with the
current focus remains "on top" of all
other viewports with which it overlaps.
[Priority 2] Both content and user agent.
- Techniques
for checkpoint 5.2
- 5.3 Allow
configuration so that viewports only open on explicit
user request. In this configuration, instead of opening a viewport
automatically, alert the user and allow the user to open it on demand (e.g., by
following a link or confirming a prompt). Allow the user to close viewports. If
a viewport (e.g., a frame set) contains other viewports, these requirements
only apply to the outermost container viewport.
[Priority 2] Both content and user agent.
- Note: User creation of a new viewport (e.g., empty or with
a new resource loaded) through the user agent's user interface constitutes an
explicit user request. See also checkpoint 5.1 (for control over changes of focus when a
viewport opens) and checkpoint 6.5
(for programmatic alert of changes to the user interface).
- Techniques
for checkpoint 5.3
- 5.4 Allow
configuration to prompt the
user to confirm (or cancel) any form submission that is not caused by an explicit
user request to activate a
form submit control. [Priority 2] Content
only.
- Note: For example, do not submit a form
automatically when a menu option is selected, when all fields of a form have
been filled out, or when a "mouseover" or "change" event occurs. The user agent may satisfy this
checkpoint by prompting
the user to confirm all form submissions. (A sketch of such a prompt appears
after this list of checkpoints.)
- Techniques
for checkpoint 5.4
- 5.5 Allow
configuration to prompt the
user to confirm (or cancel) any payment that results from activation of a fee
link. [Priority 2] Content
only.
- Techniques for
checkpoint 5.5
- 5.6
Allow
configuration to
prompt the user to confirm (or cancel) closing any viewport that
starts to close without explicit
user request. [Priority 3] Both
content and user agent.
-
Techniques for checkpoint 5.6
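As noted in checkpoint 5.4 above, a minimal sketch of the confirmation prompt:
any form submission that does not come from an explicit submit control is
confirmed with the user first. Treating the presence of a submitter element as
the mark of an explicit request is an assumption made for illustration.

    // Illustrative sketch only (checkpoint 5.4): confirm form submissions that
    // were not caused by explicit activation of a submit control. The test used
    // here is an assumption, not a definition taken from this document.
    document.addEventListener("submit", (event: SubmitEvent) => {
      const explicitRequest = event.submitter !== null;
      if (!explicitRequest && !window.confirm("Submit this form?")) {
        event.preventDefault(); // cancel the automatic submission
      }
    });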
Part of user agent accessibility involves communication with other software
in the user's operating environment. Using interoperable application
programming interfaces (APIs) increases predictability for users and
for developers of assistive
technologies. This guideline includes checkpoints for:
- exchanging information about content and
user
agent user interface controls, selection, and focus with other
software, especially with assistive
technologies.
- notifying other software of changes to content, user agent user interface
controls, selection, and focus.
- using standard communication channels for this exchange.
- ensuring the exchange takes place in a timely manner. Otherwise, assistive
technology rendering or response may lag behind that of the "source" user
agent, which can disorient the user. Timely exchange is necessary for proper
synchronization of alternative renderings.
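For instance, checkpoint 6.1 below requires read access to HTML and XML content
through the DOM Level 2 Core module; from the point of view of an assistive
technology, that access looks roughly like the hedged sketch below. The
traversal strategy and the choice of the "alt" attribute are illustrative
assumptions.

    // Illustrative sketch only: the kind of read access to content that DOM
    // Level 2 Core exposes (cf. checkpoint 6.1 below). Only Core-style calls
    // (getElementsByTagName, getAttribute) are used; the traversal is an
    // assumption for illustration.
    function collectAltTexts(doc: Document): string[] {
      const altTexts: string[] = [];
      const images = doc.getElementsByTagName("img");
      for (let i = 0; i < images.length; i++) {
        const alt = images[i].getAttribute("alt"); // null when no alt is supplied
        if (alt !== null) {
          altTexts.push(alt);
        }
      }
      return altTexts;
    }

    console.log(collectAltTexts(document));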
Checkpoints
- 6.1 Provide programmatic read access to
HTML and XML
content by conforming to the following modules of the W3C Document
Object Model DOM Level 2 Core
Specification [DOM2CORE] and exporting the
interfaces they define: (1) the Core module for HTML; (2) the Core and XML
modules for XML. [Priority 1] Content
only.
- Note: Please refer to the "Document
Object Model (DOM) Level 2 Core Specification" [DOM2CORE]
for information about HTML and XML
versions covered.
- Techniques
for checkpoint 6.1
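As an informative illustration only, the following TypeScript sketch shows the kind of read access the DOM Core interfaces provide: a recursive walk over the document tree using only Node-level properties, similar to what an assistive technology might perform through an exported DOM.

// Informative sketch: read-only traversal of content through DOM Core
// interfaces (Document, Node), collecting the text content of the tree.
function collectText(node: Node, out: string[]): void {
  if (node.nodeType === Node.TEXT_NODE && node.nodeValue !== null) {
    out.push(node.nodeValue);
  }
  for (let child = node.firstChild; child !== null; child = child.nextSibling) {
    collectText(child, out);
  }
}

const fragments: string[] = [];
collectText(document.documentElement, fragments);
// "fragments" now holds the document's text in document order.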
- 6.2 If the user can modify
HTML and XML content
through the user
interface, provide the same functionality programmatically by
conforming to the following modules of the W3C Document Object Model DOM Level 2 Core Specification
[DOM2CORE] and exporting the interfaces they define: (1) the Core
module for HTML; (2) the Core and XML modules for XML.
[Priority 1] Content only.
- Note: For example, if the user
interface allows users to complete HTML forms, this must
also be possible through the required DOM
APIs. Please refer to the "Document
Object Model (DOM) Level 2 Core Specification" [DOM2CORE]
for information about HTML and XML
versions covered.
- Techniques
for checkpoint 6.2
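The following informative TypeScript sketch shows form completion performed through the DOM rather than through the user interface, as described in the note above; the element chosen and the value written are illustrative only.

// Informative sketch: complete an HTML form field programmatically,
// mirroring what the user could do through the user interface.
const inputs = document.getElementsByTagName("input");
const field = inputs.item(0);
if (field !== null && field.type === "text") {
  field.value = "example value";   // equivalent to the user typing the value
}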
- 6.3 For markup languages other than
HTML and XML, provide programmatic access
to content using standard
APIs (e.g., platform-independent APIs and
standard APIs for the operating environment). If standard APIs do not exist, provide programmatic access
through publicly documented APIs.
[Priority 1] Content only.
- Note: This checkpoint addresses content
not covered by checkpoint 6.1 and
checkpoint 6.2.
-
Techniques for checkpoint 6.3
- 6.4 Provide programmatic read and write
access to user
agent user interface controls using standard
APIs. If standard
APIs do not exist, provide programmatic
access through publicly documented APIs.
[Priority 1] User agent only.
- Note: Per checkpoint 6.6, provide
programmatic access through standard APIs (e.g.,
platform-independent APIs such as the W3C DOM; standard
APIs defined for a specific operating system; and
conventions for programming languages, plug-ins, virtual machine environments,
etc.). This checkpoint requires user agents to provide programmatic access even
in the absence of a standard API for doing so.
- Techniques for
checkpoint 6.4
- 6.5 Using standard
APIs, provide programmatic alert of
changes to
content,
user interface controls,
selection, content
focus, and user
interface focus. If standard APIs do not exist, provide programmatic alert
through publicly documented APIs.
[Priority 1] Both content and user agent.
- Note: For instance, when user
interaction in one frame causes automatic changes to content in another,
provide programmatic alert through standard APIs. Use the standard APIs required by the checkpoints of guideline 6.
- Techniques for
checkpoint 6.5
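The following informative TypeScript sketch uses the MutationObserver interface (a standard API that postdates this draft; DOM Level 2 Events defined MutationEvents for the same purpose) to show how changes to content can be detected and relayed, for example to an assistive technology.

// Informative sketch: observe changes to content so they can be reported
// to other software (e.g., an assistive technology) in a timely manner.
const observer = new MutationObserver((records) => {
  for (const record of records) {
    // A user agent would relay these notifications through its
    // accessibility or DOM APIs rather than the console.
    console.log("content changed:", record.type, record.target.nodeName);
  }
});
observer.observe(document.documentElement, {
  childList: true,
  characterData: true,
  attributes: true,
  subtree: true,
});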
- 6.6
Implement standard accessibility APIs (e.g., of the
operating environment). Where these APIs do not enable the user
agent to satisfy the requirements of this document, use the
standard input and output APIs of the
operating environment. [Priority 1] Both
content and user agent.
- Note: Accessibility APIs enable
assistive technologies to monitor input and output events. As part of
satisfying this checkpoint, the user agent needs to ensure that text content is available as text through these
APIs (and not, for example, as a series of strokes drawn on the screen).
-
Techniques for checkpoint 6.6
- 6.7 Implement
the
operating environment's standard
APIs for the keyboard. If standard APIs for the keyboard do not exist, implement
publicly documented APIs for the keyboard.
[Priority 1] User agent only.
- Note: An operating environment may define more than one
standard API for the keyboard. For instance, for Japanese and Chinese, input
may be processed in two stages, with an API for each.
- Techniques for
checkpoint 6.7
- 6.8 For an API implemented to satisfy
requirements of this document, support the character encodings required for that API.
[Priority 1] Both content and user
agent.
- Note: Support for character encodings is important so that
text is not "broken" when communicated to assistive technologies. For example,
the DOM Level 2 Core Specification [DOM2CORE], section 1.1.5
requires that the
DOMString
type be encoded using UTF-16. This
checkpoint is an important special case of the other API
requirements of this document.
- Techniques
for checkpoint 6.8
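A brief, informative TypeScript illustration of the note above: because DOMString is UTF-16, a single character outside the Basic Multilingual Plane occupies two code units, and software must avoid splitting such pairs when passing text to assistive technologies.

// Informative sketch: UTF-16 code units versus characters in a DOMString.
const clef = "\u{1D11E}";          // MUSICAL SYMBOL G CLEF, U+1D11E
console.log(clef.length);          // 2 -- two UTF-16 code units (a surrogate pair)
console.log(clef.codePointAt(0));  // 119070 (0x1D11E), the single code point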
- 6.9 For user agents that implement Cascading Style Sheets
(CSS), provide programmatic access to those style sheets by
conforming to the CSS module of the W3C Document Object Model (DOM) Level 2 Style Specification
[DOM2STYLE] and exporting the interfaces it defines. [Priority 2] Content only.
- Note: As of the publication of this
document, Cascading Style Sheets (CSS) are defined by CSS
Level 1
[CSS1] and CSS Level 2 [CSS2]. Please refer to the "Document
Object Model (DOM) Level 2 Style Specification"
[DOM2STYLE] for information about CSS versions
covered.
- Techniques
for checkpoint 6.9
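The following informative TypeScript sketch reads style information through the DOM Level 2 Style interfaces (StyleSheetList, CSSStyleSheet, CSSRuleList); same-origin style sheets are assumed.

// Informative sketch: read access to style sheets through DOM Style APIs.
for (let i = 0; i < document.styleSheets.length; i++) {
  const sheet = document.styleSheets.item(i);
  if (sheet === null) {
    continue;
  }
  const rules = sheet.cssRules;    // may throw for cross-origin style sheets
  for (let j = 0; j < rules.length; j++) {
    const rule = rules.item(j);
    if (rule !== null) {
      console.log(rule.cssText);   // e.g., "p { color: navy; }"
    }
  }
}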
- 6.10 Ensure that programmatic exchanges
proceed in a timely manner. [Priority 2]
Both content and user agent.
- Note: For example, the programmatic
exchange of information required by other checkpoints in this document should
be efficient enough to prevent information loss, a risk when changes to content
or user interface occur more quickly than the communication of those changes.
The techniques for this checkpoint explain how developers can reduce
communication delays. This will help ensure that assistive technologies have
timely access to the document
object model and other information that is important for providing
access.
- Techniques
for checkpoint 6.10
Part of user agent accessibility involves following the conventions of the
user's operating environment. This includes:
Following
operating environment conventions increases predictability for users
and for developers of assistive
technologies. Platform guidelines explain what users will expect
from the look and feel of the user interface, keyboard conventions,
documentation, etc. Platform guidelines also include information about
accessibility features that the user agent should adopt rather than
reimplement.
Checkpoints
-
7.1 Follow
operating environment conventions that benefit accessibility when implementing the
selection, content
focus, and user
interface focus. [Priority 1]
User agent only.
- Note: This checkpoint is an important special case of checkpoint 7.3. See also checkpoint 9.1.
-
Techniques for checkpoint 7.1
-
7.2 Ensure that default input configurations do not interfere with
operating environment accessibility conventions. [Priority 1] User agent only.
- Note: In particular, default
configurations should not interfere with operating conventions for keyboard
accessibility. Information about operating environment accessibility
conventions is available in the Techniques document
[UAAG10-TECHS].
See also checkpoint 11.5.
-
Techniques for checkpoint 7.2
- 7.3 Follow
operating environment conventions that benefit accessibility. In
particular, follow conventions that benefit accessibility for
user interface design, keyboard
configuration, product installation, and
documentation. [Priority 2] User
agent only.
- Note: Operating environment conventions that benefit
accessibility are those described in this document and in platform-specific
accessibility guidelines. Some of these conventions (e.g., sticky keys, mouse
keys, show sounds, etc.) are discussed in the Techniques document
[UAAG10-TECHS].
- Techniques
for checkpoint 7.3
-
7.4 Follow
operating environment conventions to indicate the input
configuration. [Priority 2] User
agent only.
- Note: For example, in some operating environments,
developers may specify which command sequence will activate a functionality so
that the standard user interface components display that binding. For instance,
if a functionality is available from a menu, the letter of the activating key
may be underlined in the menu. This checkpoint is an important special case of
checkpoint 7.3. See also checkpoint
11.5.
-
Techniques for checkpoint 7.4
Developers should implement open specifications. Conformance to open
specifications benefits interoperability and accessibility by making it easier
to design assistive
technologies (also discussed in guideline 6).
While developers should implement the accessibility features of any
specification, this document recommends conformance to W3C specifications in
particular for several reasons:
- W3C specifications include "built-in" accessibility features.
- W3C specifications undergo early review to ensure that accessibility issues
are considered during the design phase. This review includes review from
stakeholders in accessibility.
- W3C specifications are developed in a consensus process (refer to the
process defined by the W3C Process Document [W3CPROCESS]). W3C encourages
the public to review and comment on these specifications (public Working
Drafts, Candidate Recommendations, and Proposed Recommendations). For
information about how specifications become W3C Recommendations, refer to the
W3C Recommendation track ([W3CPROCESS], section 6.2). W3C
Recommendations (and other technical
reports) are published at the W3C Web site.
Checkpoints
-
8.1 Implement the accessibility features of all implemented specifications (markup languages,
style sheet languages, metadata languages, graphics formats, etc.). The
accessibility features of a specification are those identified as such and
those that satisfy all of the requirements of the "Web Content
Accessibility Guidelines 1.0" [WCAG10].
[Priority 1] Content only.
- Note: This checkpoint applies to both W3C-developed and
non-W3C specifications. The Techniques document
[UAAG10-TECHS] provides information about the accessibility features
of some specifications, including W3C specifications.
-
Techniques for checkpoint 8.1
- 8.2 Use and
conform to either (1) W3C Recommendations when they are available
and appropriate for a task, or (2) non-W3C specifications that enable the
creation of content that conforms to the Web Content Accessibility Guidelines
1.0
[WCAG10] at any conformance level.
[Priority 2] Content only.
- Note: For instance, for markup, the
user agent may conform to
HTML 4 [HTML4], XHTML 1.0 [XHTML10], or
XML 1.0 [XML]. For style sheets, the user
agent may conform to CSS ([CSS1], [CSS2]). For mathematics, the user
agent may conform to MathML 2.0 [MATHML20]. For synchronized
multimedia, the user agent may conform to SMIL 1.0 [SMIL]. A
specification is considered "available" if it is published (e.g., as a W3C
Recommendation) in time for integration into a user agent's development
cycle.
-
Techniques for checkpoint 8.2
Users should be able to navigate to important pieces of content within a
configurable view, identify the type of object they have navigated to, interact
with that object easily (if it is an enabled element), and review the surrounding
context (to orient themselves). Providing a variety of navigation and search
mechanisms helps users with disabilities (and all users) access content more
efficiently. Navigation and searching are particularly important to users who
access content serially (e.g., as synthesized speech or braille).
Sequential navigation (e.g., line scrolling, page scrolling, sequential
navigation through enabled elements, etc.) means advancing (or rewinding)
through rendered content in well-defined steps (line by line, screen by screen,
link by link, etc.). Sequential navigation can provide context, but can be
time-consuming. Sequential navigation is important to users who cannot scan a
page visually for context and also benefits users unfamiliar with a page.
Sequential access may be based on element type (e.g., links only), content
structure (e.g., navigation from heading to heading), or other criteria.
Direct navigation (e.g., to a particular link or paragraph) is faster than
sequential navigation, but generally requires familiarity with the content.
Direct navigation is important to users with some physical disabilities (who
may have little or no manual dexterity and/or increased tendency to push
unwanted buttons or keys), to users with visual disabilities, and also benefits
"power users." Direct navigation may be possible with the pointing device or
the keyboard (e.g., keyboard shortcuts).
Structured navigation mechanisms offer both context and speed. User agents
should allow users to navigate to content known to be structurally important:
blocks of content, headers and sections, tables, forms and form elements,
enabled elements, navigation mechanisms, containers, etc. For information about
programmatic access to document structure, see guideline 6.
User agents should allow users to configure navigation mechanisms (e.g., to
allow navigation of links only, or links and headings, or tables and forms,
etc.).
Checkpoints
- 9.1 Allow the user to make the selection and
focus of each viewport
(including frames) the current
selection and current
focus, respectively.
[Priority 1] User agent only.
- Note: For example, when all frames of a
frameset are displayed side-by-side, allow the user to move the focus among them with the keyboard.
- Techniques for
checkpoint 9.1
- 9.2 Allow the user to move the
content focus to any enabled
element in the viewport.
If the author has not specified a navigation order, allow at least forward
sequential navigation to each element, in document order. The user agent may
also include disabled
elements in the navigation order.
[Priority 1] Content only.
- Note: In addition to forward sequential
navigation, the user agent should also allow reverse sequential navigation.
This checkpoint is an important special case of checkpoint 9.8.
- Techniques for
checkpoint 9.2
-
9.3 For each state in a viewport's browsing history, maintain
information about the point of
regard, content
focus, user
interface focus, and
selection. When the user returns to any state in the viewport
history, restore the saved values for all four of these state variables. [Priority 1] User agent only.
- Note: For example, when the user uses
the "back" functionality, restore the four state variables.
-
Techniques for checkpoint 9.3
- 9.4 For the element with
content focus, make available the list of
input device event
handlers explicitly associated with the element. [Priority 2] Content only.
- Note: For example, allow the user to
query the element with content focus for the list of input device event
handlers, or add them directly to the serial navigation order. See checkpoint 1.2 for
information about activation of event handlers associated with the element with
focus.
- Techniques
for checkpoint 9.4
-
9.5 Allow
configuration so that moving the
content focus to an enabled
element does not automatically
activate any explicitly associated input device
event handlers.
[Priority 2] Content only.
- Note: In this configuration, user
agents should still apply any stylistic changes (e.g.,
highlighting) that may occur when there is a change in
content focus.
-
Techniques for checkpoint 9.5
- 9.6 Allow the user to move the
content focus to any enabled
element in the viewport.
If the author has not specified a navigation order, allow at least forward and
reverse sequential navigation to each element, in document order. The user
agent must not include disabled
elements in the navigation order.
[Priority 2] Content only.
- Note: This checkpoint is a special case
of checkpoint 9.2.
- Techniques
for checkpoint 9.6
- 9.7 Allow the user to search within rendered text
content for a sequence of characters from the
document character set. Allow the user to start a forward search (in
document order) from any selected or focused location in content. When there is
a match (1) move the viewport so that the matched text content is within it,
and (2) allow the user to search for the next instance of the text from the
location of the match. Alert the user when there is no match, when the search
reaches the end of content, and prior to any wrapping. Provide a
case-insensitive search option for text in scripts
(i.e., writing systems) where case is significant.
[Priority 2] Content only.
- Note: If the user has not indicated a start position for
the search, the search should start from the beginning of content. Use
operating environment conventions for indicating the result of a
search (e.g., selection
or content
focus). A wrapping search is one that restarts automatically at the
beginning of content once the end of content has been reached.
- Techniques for
checkpoint 9.7
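The following informative TypeScript sketch reduces the checkpoint to its core string operation: a forward, case-insensitive search over rendered text that alerts before wrapping. How a user agent maps matches back to the viewport, selection, or focus is not shown, and the lowercase comparison is a simplification.

// Informative sketch: forward, case-insensitive search with an alert
// before wrapping to the beginning of content.
function findNext(renderedText: string, query: string, from: number): number {
  const haystack = renderedText.toLowerCase();
  const needle = query.toLowerCase();
  let index = haystack.indexOf(needle, from);
  if (index === -1) {
    console.log("End of content reached; wrapping to the beginning.");
    index = haystack.indexOf(needle, 0);
  }
  if (index === -1) {
    console.log("No match for: " + query);
  }
  return index;   // position of the match, or -1 if there is none
}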
- 9.8 Allow the user to navigate efficiently
to and among important structural elements. Allow forward and backward
sequential navigation to important structural elements. [Priority 2] Content only.
- Note: This specification intentionally
does not identify which "important elements" must be navigable as this will
vary according to markup language. What constitutes "efficient navigation" may
depend on a number of factors as well, including the "shape" of content (e.g.,
serial navigation of long lists is not efficient) and desired granularity
(e.g., among tables, then among the cells of a given table). Refer to the
Techniques document [UAAG10-TECHS] for
information about identifying and navigating important elements.
- Techniques for
checkpoint 9.8
- 9.9
Allow
configuration and
control of the set of important elements required by checkpoint 9.8 and checkpoint 10.5. Allow the
user to include and exclude element types in the set of elements. [Priority 3] Content only.
- Note: For example, allow the user to navigate only
paragraphs, or only headings and paragraphs, etc. See also checkpoint 6.4.
-
Techniques for checkpoint 9.9
All users require clues to help them understand their "location" when
browsing: where they are, how they got there, where they can go, what's nearby,
etc. Some mechanisms that provide such clues include:
- information about the current state of the user's interaction with content:
where the viewport is in content (shown, for example, through proportional
scroll bars), which viewport has the
current focus, where the user has selected content, a history
mechanism, the title of the current document or frame, etc. These clues need to
be available to the user in a device-independent manner;
- information about specific elements, such as the dimensions of a table, the
length of an audio clip, the structure of a form, whether following a link will
involve a fee, etc.
- information about relationships among elements, such as between table cells
and related table headers.
- information about the structure of content. For instance, a navigable
outline view can accelerate access to content while preserving context.
Orientation mechanisms such as these are especially important to users who
view content serially (e.g., when rendered as speech or braille). For
instance, these users cannot "scan" a graphically displayed table with their
eyes for information about a table cell's headers, neighboring cells, etc. User
agents need to provide other means for users to understand table cell
relationships, frame relationships (what relationship does the graphical layout
convey?), form context (have I filled out the form completely?), link
information (have I already visited this link?), etc.
This guideline also includes requirements to allow the user to control some
user agent behavior (form submission and activation of fee links) that, if
carried out automatically, might go unnoticed by some users (e.g., users who
are blind) or might disorient others (e.g., users with cognitive
disabilities).
Checkpoints
- 10.1 Make available to the user the
purpose of each table and the relationships among the table cells and headers.
[Priority 1] Content only.
- Note: This checkpoint refers only to table information
that the user can
recognize. Depending on the table, some techniques may be more
efficient than others for conveying data relationships. For many tables, user
agents rendering in two dimensions may satisfy this checkpoint by rendering a
table as a grid and by ensuring that users can find headers associated with
cells. However, for large tables or small viewports, allowing the user to query
cells for information about related headers may improve access. This checkpoint
is an important special case of checkpoint 2.1.
- Techniques for
checkpoint 10.1
-
10.2 Provide a mechanism for
highlighting the selection
and content
focus. Allow the user to configure the highlight styles. The
highlight mechanism must not rely on color alone. For graphical viewports, if
the highlight mechanism involves colors or text decorations, allow the user to choose from
among the full range of colors or text decorations supported by the
operating environment.
[Priority 1] Content only.
- Note: Examples of highlight mechanisms include foreground
and background color variations, underlining, distinctive voice pitches,
rectangular boxes, etc. Because the selection and focus change frequently, user
agents should not highlight them using mechanisms (e.g., font size variations)
that cause content to reflow as this may disorient the user. See also checkpoint
7.1.
-
Techniques for checkpoint 10.2
- 10.3 Ensure that all of the default highlight styles for the selection,
content focus, enabled
elements, recently visited links, and fee
links (1) do not rely on color alone, and (2) differ from each
other, and not by color alone. [Priority 1]
Content only.
- Note: For instance, by default a
graphical user agent may present the selection using color and a dotted
outline, the focus using a solid outline, enabled elements as underlined in
blue, recently visited links as dotted underlined in purple, and fee links
using a special icon or flag to draw the user's attention.
- Techniques
for checkpoint 10.3
- 10.4 Provide a mechanism for highlighting all enabled elements, recently visited links, and
fee links. Allow the user to configure the
highlight styles. The highlight mechanism must not rely on color alone. For
graphical viewports, if the highlight mechanism involves colors, fonts, or text
decorations, allow the user to choose from among the full range of
colors, fonts, or text decorations supported by the
operating environment. For an image map, the user agent must
highlight the image map as a whole and should allow configuration to highlight
each enabled region. [Priority 2] Content
only.
- Note: Examples of highlight mechanisms include foreground
and background color variations, font variations, underlining, distinctive
voice pitches, rectangular boxes, etc.
- Techniques
for checkpoint 10.4
- 10.5
Make available to the user an "outline" view of
content, composed of labels for important structural elements (e.g.,
heading text, table titles, form titles, etc.).
[Priority 2] Content only.
- Note: This checkpoint is meant to
provide the user with a simplified view of content (e.g., a table of contents).
What constitutes a label is defined by each markup language specification. For
example, in HTML, a heading (H1-H6) is a label for the section that follows it,
a CAPTION is a label for a table, the "title" attribute is a label for its
element, etc. A label is not
required to be
text only. For important elements that do
not have associated labels, user agents may generate labels for the outline
view. For information about what constitutes the set of important structural
elements, please see the Note following checkpoint 9.8. By making the outline view navigable, it is
possible to satisfy this checkpoint and checkpoint 9.8 together: Allow users to navigate among the
important elements of the outline view, and to navigate from a position in the
outline view to the corresponding position in a full view of content. See also checkpoint
9.9.
-
Techniques for checkpoint 10.5
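The following informative TypeScript sketch builds a simple outline for HTML content from heading and CAPTION labels, in document order; the use of querySelectorAll is a later DOM addition, used here only for brevity.

// Informative sketch: an "outline" view made of labels for important
// structural elements (headings and table captions), in document order.
const labels: string[] = [];
const nodes = document.querySelectorAll("h1, h2, h3, h4, h5, h6, caption");
nodes.forEach((node) => {
  const text = (node.textContent || "").trim();
  labels.push(node.tagName + ": " + text);
});
console.log(labels.join("\n"));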
- 10.6 To help the user decide whether to
traverse a link, make available the following information about it: link
element content, link title, whether the link is internal to the resource
(e.g., the link is to a target in the same Web page), whether the user has
traversed the link recently, whether traversing it may involve a fee, and
information about the type, size, and natural language of linked Web resources.
The user agent is not required to compute or make available information that
requires retrieval of linked Web
resources. [Priority 3] Content
only.
- Techniques for
checkpoint 10.6
Checkpoints for the user interface
- 10.7 Provide a mechanism for highlighting the viewport with the
current focus. For graphical viewports,
the default highlight mechanism must not rely on color alone. [Priority 1] User agent only.
- Note: This includes highlighting and
identifying frames. This checkpoint is an important special case of checkpoint 1.1. See also
checkpoint 7.3.
-
Techniques for checkpoint 10.7
-
10.8 Ensure that when a viewport's
selection or content
focus changes, it is in the viewport
after the change. [Priority 2] User agent
only.
- Note: For example, if users navigating
links move to a portion of the document outside a graphical viewport, the
viewport should scroll to include the new location of the focus. Or, for users
of audio viewports, allow configuration to render the selection or focus
immediately after the change.
-
Techniques for checkpoint 10.8
- 10.9 Indicate the relative position of
the
viewport in rendered
content (e.g., the proportion of an audio or video clip that has
been played, the proportion of a Web page that has been viewed, etc.). [Priority 3] User agent only.
- Note: The user agent may calculate the
relative position according to content focus position, selection position, or
viewport position, depending on how the user has been browsing. The user agent
may indicate the proportion of content viewed in a number of ways, including as
a percentage, as a relative size in bytes, etc. For two-dimensional renderings,
relative position includes both vertical and horizontal positions.
- Techniques
for checkpoint 10.9
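An informative TypeScript sketch of one way a graphical user agent might compute the vertical position of the viewport in rendered content, expressed as a percentage; horizontal position and positions within audio or video clips would be computed analogously.

// Informative sketch: relative vertical position of a graphical viewport.
function verticalPositionPercent(): number {
  const root = document.documentElement;
  const scrollable = root.scrollHeight - root.clientHeight;
  if (scrollable <= 0) {
    return 100;   // all content fits within the viewport
  }
  return Math.round((root.scrollTop / scrollable) * 100);
}
console.log(verticalPositionPercent() + "% of the page has been scrolled past");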
Web users have a wide range of capabilities and need to be able to configure the user agent according to their
preferences for styles, graphical user interface configuration, keyboard
configuration, etc. Most of the checkpoints in this guideline pertain to the
input configuration: how user agent behavior is controlled through keyboard
input, pointing device input, and voice input.
Checkpoints
-
11.1 Provide information to the user about current user preferences
for input
configurations. [Priority 1]
User agent only.
- Note: To satisfy this checkpoint, the
user agent may make available binding information in a centralized fashion
(e.g., a list of bindings) or a distributed fashion (e.g., by listing keyboard
shortcuts in user interface menus).
-
Techniques for checkpoint 11.1
-
11.2 Provide a centralized view of the current author-specified input
configuration bindings.
[Priority 2] Content only.
- Note: For example, for HTML documents,
provide a view of keyboard bindings specified by the author through the
"
accesskey
" attribute. The intent of this checkpoint is to
centralize information about author-specified bindings so that the user does
not have to read the entire content first to find out what bindings are
available. The user agent may satisfy this checkpoint by providing different
views for different input modalities (keyboard, pointing device, voice,
etc.).
-
Techniques for checkpoint 11.2
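The following informative TypeScript sketch gathers author-specified "accesskey" bindings into one list through the DOM, so the user does not have to read the whole document to discover them; how the list is presented is left to the user agent.

// Informative sketch: a centralized listing of author-specified
// "accesskey" bindings, collected through the DOM.
const elements = document.getElementsByTagName("*");
for (let i = 0; i < elements.length; i++) {
  const element = elements.item(i);
  if (element !== null && element.hasAttribute("accesskey")) {
    const label = (element.textContent || element.tagName).trim();
    console.log(element.getAttribute("accesskey") + " -> " + label);
  }
}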
- 11.3 Allow the user to override any
binding that is part of the user agent default input
configuration. The user agent is not required to allow the user to
override standard bindings for the operating environment (e.g., for access to
help). [Priority 2] User agent only.
- Note: The override requirement only
applies to bindings for the same input modality (e.g., the user must be able to
override a keyboard binding with another keyboard binding). See also checkpoint 11.5, checkpoint 11.7, and checkpoint 12.3.
- Techniques
for checkpoint 11.3
- 11.4 Allow the user to override any binding
in the default keyboard configuration with a binding to either a key plus
modifier keys or to a single key. For each functionality in the set required by
checkpoint 11.5, allow
the user to configure
a single-key binding (i.e., one key press performs the task, with zero modifier
keys). If the number of physical keys on the keyboard is less than the number
of functionalities required by checkpoint 11.5, allow single-key bindings for as many of
those functionalities as possible. The user agent is not required to allow the
user to override standard bindings for the operating environment (e.g., for access to
help). [Priority 2] User agent only.
- Note: In this checkpoint, "key" refers
to a physical key of the keyboard (rather than, say, a character of the
document character set). Because single-key access is so important
to some users with physical disabilities, user agents should ensure that (1)
most keys of the physical keyboard may be configured for single-key bindings,
and (2) most functionalities of the user agent may be configured for single-key
bindings. This checkpoint does not require single physical key bindings for
character input, only for the activation of user agent functionalities. For
information about access to user agent functionality through a keyboard API,
see checkpoint 6.7.
- Techniques for
checkpoint 11.4
- 11.5
Ensure that the default input
configuration includes bindings for the following functionalities
required by other checkpoints in this document: move focus to next enabled
element; move focus to previous enabled element; activate focused
link; search for text; search again for same text; increase size of rendered
text; decrease size of rendered text; increase global volume; decrease global
volume; (each of) stop, pause, resume, fast advance, and fast reverse selected
audio and
animations (including video and animated images). If the user agent
supports the following functionalities, the
default input configuration must also include bindings for them: next history
state (forward); previous history state (back); enter URI for new resource; add
to favorites (i.e., bookmarked resources); view favorites; stop loading
resource; reload resource; refresh rendering; forward one viewport; back one
viewport; next line; previous line.
[Priority 2] User agent only.
- Note: This checkpoint does not make any
requirements about the ease of use of default input configurations, though
clearly the default configuration should include single-key bindings and allow
easy operation. Ease of use is ensured by the configuration requirements of checkpoint 11.3.
-
Techniques for checkpoint 11.5
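The following informative TypeScript sketch models a default input configuration as a table of key bindings that the user can override (checkpoint 11.3) and that favors single-key bindings (checkpoint 11.4). The particular keys and functionality names shown are illustrative assumptions, not requirements of this document.

// Informative sketch: a user-configurable input configuration.
type Functionality =
  "nextEnabledElement" | "previousEnabledElement" | "activateFocusedLink" |
  "searchText" | "searchAgainForSameText" |
  "increaseTextSize" | "decreaseTextSize";

// Illustrative defaults; keys are written as "Modifier+Key" strings.
const bindings = new Map<string, Functionality>([
  ["Tab", "nextEnabledElement"],
  ["Shift+Tab", "previousEnabledElement"],
  ["Enter", "activateFocusedLink"],
  ["F3", "searchAgainForSameText"],
]);

// Checkpoint 11.3: allow the user to override any default binding with
// another binding in the same input modality.
function overrideBinding(key: string, task: Functionality): void {
  bindings.set(key, task);
}

overrideBinding("F", "searchText");   // a single-key binding chosen by the user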
- 11.6 For the configuration requirements of
this document, allow the user to save user preferences in at least one user profile. Allow users to choose from among
available profiles or no profile (i.e., the user agent default settings). [Priority 2] User agent only.
- Note: The configuration requirements of
the checkpoints in this document involve user preferences for styles,
presentation rates, input
configurations, navigation, viewport behavior, and user agent
prompts and alerts.
- Techniques for
checkpoint 11.6
- 11.7 For graphical user interfaces,
allow the user to configure
the position of controls on tool bars of the user agent user interface, to add or remove
controls for the user interface from a predefined set, and to restore the
default user interface. [Priority 3] User
agent only.
- Note: This checkpoint is a special case of checkpoint 11.3.
-
Techniques for checkpoint 11.7
User agent documentation is especially important to users with disabilities
who may not understand a complex graphical user interface, who may not be using
part of it (e.g., audio cues), or may not be using it at all. Without
documentation, users with disabilities may have a very difficult time knowing
what the user agent is capable of and how to operate it.
Features that support accessibility need to be clearly documented so that
users with disabilities can learn to operate the user agent efficiently.
Documentation of keyboard accessibility is particularly important to users with
visual disabilities and some types of physical disabilities. Without this
documentation, a user with a disability (or multiple disabilities) may not
think that a particular task can be performed. Or the user may try to use a
much less efficient technique to perform a task, such as using a mouse, or
using an assistive technology's mouse emulation through key strokes.
Documentation includes anything that explains how to install, get
help for, use, or configure the product.
Refer to checkpoint 7.3
for information about following system conventions for documentation.
Checkpoints
- 12.1 Ensure that at least one version of
the product
documentation conforms to at least Level Double-A of the Web Content
Accessibility Guidelines 1.0 [WCAG10].
[Priority 1] User agent only.
- Techniques
for checkpoint 12.1
-
12.2
Document all user agent features that benefit accessibility. [Priority 1] User agent only.
- Note: For example, review the
documentation or help system to ensure that it includes information about the
functions and capabilities of the user agent that are required by WAI Accessibility Guidelines,
platform-specific accessibility guidelines, etc. The documentation of
accessibility features should be integrated into the documentation as a
whole.
-
Techniques for checkpoint 12.2
-
12.3
Document the default input configuration (e.g., the default keyboard
bindings). [Priority 1] User agent
only.
- Note: If the default input
configuration is inconsistent with conventions of the operating environment,
the documentation should alert the user.
-
Techniques for checkpoint 12.3
-
12.4 In a dedicated section of the
documentation, describe all features of the user agent that benefit
accessibility. [Priority 2] User agent
only.
- Note: This is a more specific
requirement than
checkpoint 12.2.
-
Techniques for checkpoint 12.4
- 12.5 In each software release,
document all changes that affect
accessibility. [Priority 2] User agent
only.
- Note: Features that affect accessibility are those
required by WAI
Accessibility Guidelines, platform-specific accessibility guidelines, etc.
- Techniques
for checkpoint 12.5
This normative
section defines what it means to conform to this document and explains how to
make a valid conformance claim. Please note
that this document includes both conformance requirements and conformance claim
requirements.
Here is a sample claim (expressed in
HTML):
<p>On 31 March
2001, Project X (version 2.3) running on MyOperatingSystem (version 4.2)
conforms to <abbr title="the World Wide Web
Consortium">W3C</abbr>'s "User Agent Accessibility Guidelines 1.0",
http://www.w3.org/WAI/UA/WD-UAAG10-20010331, level Double-A. Unsupported
content types: Video, Speech. Unsupported input modalities: Voice. (see section
3.1 of the UAAG 1.0). The <a href="http://example.com/checkpoints">list
of checkpoints that do not apply</a> is available
online.</p>
In this document (notably in the checkpoints and in this section on
conformance), the terms "must", "should", and "may" (and related terms) are
used in accordance with RFC 2119 [RFC2119]. This section uses the
expression "subject of a claim"
to refer to a user
agent about which someone wishes to claim some level of conformance
to this document. The subject of a claim may be one or more software components
(e.g., a browser plus additional software).
Note: Conformance to the requirements of this document is
expected to be a strong indicator of accessibility, but it is neither a
necessary nor sufficient condition for ensuring the accessibility of software.
Some software may not conform to this document but still be accessible to some
users with disabilities. Conversely, some software may conform to this document
but still be inaccessible to some users with disabilities. Please refer to the
section on known limitations of this document.
The conformance model of this document has been designed to allow different
types of user agents with different input and output capabilities to conform.
At the same time, the model is designed so that:
- people reading claims can determine whether a conforming user agent is
likely to meet their accessibility needs, and
- people can compare claims about different user agents with relative ease.
Note: The checklist [UAAG10-CHECKLIST] may be
used when evaluating a user agent for conformance.
The conformance model works as follows:
- A user agent conforms to this document by satisfying a set of requirements.
Conformance requirements only come from the checkpoint
statements. Each checkpoint statement includes one or more requirements.
- The "default" set of
requirements for conformance consists of all the requirements of all of the
checkpoints. A user agent conforms unconditionally to this document if it
satisfies all of these requirements.
- Different user agents may conform to different sets of checkpoints. A user
agent conforms conditionally if it
satisfies any set of requirements that results from carrying out the four steps
below. Since these steps may produce very different sets of checkpoints for
different user agents, a valid conformance
claim must indicate how the set of requirements chosen for the claim
differs from the default set.
A user agent conforms conditionally if it satisfies any set of
requirements that results from following these steps:
- Choose a conformance level. Each
conformance level corresponds to a set of checkpoints (and thus a set of
requirements).
- Remove the requirements associated with any unsupported content type labels. In order to conform
conditionally, a user agent must satisfy the requirements of at least one
content type label.
- Remove the requirements of any checkpoints or parts of checkpoints that do
not apply.
- Add requirements associated with any supported input modality label. Note:
In the default set of requirements, the only input device requirements relate
to keyboard input.
The following example illustrates how to apply the above steps to determine
which requirements must be satisfied for conformance, and what would be
required as part of a well-formed conformance claim. This informative example does not illustrate a
complete user agent evaluation.
Consider a user agent with these capabilities:
- it supports keyboard and pointing device input;
- it renders text (in color) and implements:
- one audio format,
- two image formats,
- two other animation formats (besides video, which is considered an
animation format in this document);
- it hands off video to be rendered by a plug-in;
- it doesn't support speech output.
The claimant wishes to conform at level Double-A. This establishes a set of
requirements consisting of all of the requirements of all the priority 1 and 2
checkpoints.
The claimant wishes to claim conformance for the user agent's support of
text, images, audio, and video. The claimant does not wish to claim conformance
for other animation
formats.
The following content type labels are therefore relevant: VisualText,
ColorText, Image, Animation, Video, and Audio. This means that:
- the claimant must remove the set of requirements associated with the Speech
content type label.
- the claimant must satisfy the requirements associated with the other
content type labels.
Consider checkpoint 4.4, for example, which is associated with both the
Audio and Animation content type labels:
4.4 Allow the user to slow the presentation rate of audio and animations (including video and animated
images). For a visual
track, provide at least one setting between 40% and 60% of the
original speed. For a prerecorded audio
track including audio-only presentations, provide at least one
setting between 75% and 80% of the original speed. When the user agent allows
the user to slow the visual track of a synchronized multimedia presentation to
between 100% and 80% of its original speed, synchronize the visual and audio
tracks. Below 80%, the user agent is not required to render the audio track. The user agent is not required to
satisfy this checkpoint for audio and animations whose
recognized role is to create a purely stylistic effect. [Priority 1]
Suppose that:
- The claimant wishes to claim support for the two image formats, the one
audio format, and the one video format;
- The claimant does not wish to claim support for the other two animation
formats (e.g., because the user agent doesn't satisfy the requirements of checkpoint 4.4 for those
animation formats);
- The user agent does not implement any synchronized multimedia formats.
The resulting applicable requirements from this checkpoint would be:
- For the audio format: Allow the user to slow the presentation rate of
audio. For a prerecorded audio track including audio-only presentations,
provide at least one setting between 75% and 80% of the original speed.
- For the video format: Allow the user to slow the presentation rate of
video. For a visual track, provide at least one setting between 40% and 60% of
the original speed.
- For the image formats: None, since the Image content type label does not
include checkpoint
4.4.
- Limitation of scope for any format: The user agent is not required to
satisfy the requirements of this checkpoint for audio and animations whose
recognized role is to create a purely stylistic effect.
The following requirements would not apply:
- When the user agent allows the user to slow the visual track of a
synchronized multimedia presentation to between 100% and 80% of its original
speed, synchronize the visual and audio tracks. Below 80%, the user agent is
not required to render the audio track. Note: The relevant applicability provision is provision three: control of a
content property that the subject cannot recognize. In this case, no format
implemented by the user agent supports synchronized multimedia.
In this example, the claimant does not wish to claim conformance for
complete operation for pointing device or voice input, so no requirements are
added.
The following information is an excerpt of that required for a well-formed
claim:
- Conformance level satisfied: Double-A
- Information about the subject. Both the "main" user agent and the plug-in
used to support video must be identified in the claim (since the plug-in is the
component used to satisfy the requirements for video).
The user agent does not conform
unconditionally, therefore, the claim must also include the following
information (excerpted from a complete claim):
- A general statement about lack of support for the Speech content type
label: "This user agent does not support the requirements of the Speech content
type label."
- A specific statement about content type support for checkpoint 4.4: "This user
agent satisfies the requirements of the Animation content type label for the
audio format A and the video format V. It does not satisfy the Animation
requirements for animation formats Y and Z."
- A specific statement about applicability for checkpoint 4.4: "The
synchronized multimedia requirements of checkpoint 4.4 do not apply
because the user agent does not implement any formats that support synchronized
multimedia."
Each conformance level defines a set of requirements, based on priority.
Note: Conformance levels are spelled out in text (e.g.,
"Double-A" rather than "AA") so they may be understood when rendered as
speech.
Each content type label defines a set of requirements related to support for
images, video, animations generally, visually displayed text (in color), and
synthesized speech.
- VisualText
- This content type label refers to all of the requirements related to the
visual rendering of text for the following checkpoints: 3.3, 4.1, and 4.2.
- ColorText
- This content type label refers to all of the requirements related to text
foreground and background color for the following checkpoint: 10.4.
-
Image
- This content type label refers to all of the requirements related to images
(excluding animated images) for the following checkpoints: 3.1, 3.7, and 3.8. To conform, the user
agent must implement
at least one image format.
- Animation
- This content type label refers to all of the requirements related to
animations (including video and animated images) for the following checkpoints:
3.2, 3.8, 4.4, 4.5, 4.7, and 4.8. To conform, the
user agent must implement
at least one animation format.
-
Video
- This content type label refers to all of the requirements related to video
for the following checkpoints: 2.5,
2.6, 3.2, and 3.8. To conform, the user
agent must implement
at least one video format.
-
Audio
- This content type label refers to all of the requirements related to audio
for the following checkpoints: 2.5,
2.6, 3.2, 3.8, 4.4, 4.5, 4.7, 4.8, 4.9, and 4.10. To conform,
the user agent must implement
at least one audio format.
- Speech
- This content type label refers to all of the requirements related to
synthesized speech for the following checkpoints: 4.11, 4.12, 4.13, 4.14, and 4.15. To conform, the
user agent must support
synthesized speech.
Note: Some of the labels above require implementation of at
least one format (e.g., for images). This document does not require
implementation of specific formats (e.g., PNG [PNG] versus
SVG
[SVG] for images). However, please see the requirements of checkpoint 8.2.
Each input modality label defines a set of requirements related to support
for pointing device and voice input. Input device requirements in this document
are either stated generically (e.g., "input configuration" requirements) or as
keyboard-specific requirements (e.g., "keyboard API").
-
Pointer
- This input modality label refers to all of the input device requirements of
this document, applied to pointing device input. For keyboard-specific
requirements, substitute "pointing device input" for "keyboard." The set of
pointing device input requirements does not include the requirements of checkpoint 11.4.
-
Voice
- This input modality label refers to all of the input device requirements of
this document, applied to voice input. For keyboard-specific requirements,
substitute "voice input" for "keyboard." The set of voice input requirements
does not include the requirements of
checkpoint 11.4.
Note: Developers are encouraged to design user agents that
are at least partially operable through all three input modalities.
A checkpoint (or part of a checkpoint) applies unless any one of the
following conditions is met:
- The checkpoint makes requirements for graphical user interfaces or
graphical viewports and the subject of the claim only has audio or tactile user
interfaces or viewports.
- The checkpoint refers to a role of content (e.g., transcript, captions,
associated
conditional content, fee link,
synchronization cue, client-side redirect, purpose of a table, etc.) that the
subject of the claim cannot recognize
because of how the content has been encoded in a particular format. For
instance, HTML user agents can recognize "alt", OBJECT content, or NOFRAMES
content as specified mechanisms for conditional content. HTML user agents are
not expected to recognize
that a nearby paragraph is a text
equivalent for the image (though not marked up as such).
- The checkpoint requires control of a content property that the subject
cannot
recognize because of how the content has
been encoded in a particular format. Some examples of this include:
- captioning information that is "burned" into a video presentation and
cannot be recognized as captions in the presentation format;
- streamed content that cannot be fast advanced or reversed;
- information or relationships encoded in
scripts in a manner that cannot be recognized. For instance, the
requirements of checkpoint
3.3 would not apply for animation effects unrecognized in a script. Some
input device behavior may be controlled by scripts in a manner that the user
agent cannot recognize. For example, when the author uses
event bubbling to dispatch events, the
user agent is not likely to recognize the full set of elements that may receive
those events; the user agent is expected to recognize which element has the explicitly associated event handler.
A claim is well-formed if it meets the following conditions.
Condition 1: The claim must include the following information:
- The date of the claim.
- The guidelines title/version: "User Agent Accessibility Guidelines
1.0".
- The URI of the guidelines:
http://www.w3.org/WAI/UA/WD-UAAG10-20010331.
- The conformance level satisfied: "A",
"Double-A", or "Triple-A".
- Information about the subject. The subject of the claim may consist of one
or more software components (e.g., a browser plus a multimedia player plus a
plug-in). For each component, the claim must include the following:
- The product name and version information (version number, minor release
number, and any required patches or updates). The claim must also include the
vendor name if it is required to identify the product.
- The
operating environment (e.g., operating system) name and version
number.
Condition 2: The claim must include the following information if the user
agent conforms conditionally:
- Content type labels. Content type labels
are used in assertions that the subject either (1) does not satisfy the
requirements associated with the label (e.g., for a specific checkpoint, for
any checkpoint, etc.), or (2) does satisfy the requirements associated with the
label (e.g., for a particular format when satisfying the requirements of a
checkpoint). In order to conform conditionally, a user agent must satisfy the
requirements of at least one content type label.
- Input modality labels. Each input
modality label ("Pointer" or "Voice") is an assertion that the user agent
satisfies the requirements associated with the label.
- A list of requirements (checkpoints or portions of checkpoints) that the
claim asserts do not apply. A
well-formed claim should include rationale for why a requirement doesn't
apply.
Condition 3: If the claim is on the Web, it must conform to the "Web Content
Accessibility Guidelines 1.0" [WCAG10], level A.
This specification imposes no restrictions on the format used to make a well-formed claim. For instance,
the claim may be marked up using HTML (see sample
claim), or expressed in the Resource Description Framework
(RDF) [RDF10].
A conformance claim is valid if the following conditions are met:
- The claim is well-formed.
- It is verified that the user agent satisfies the default set of requirements, in addition to (or
except) those requirements added (or exempted) by the allowable mechanisms: conformance levels, content type labels, input modality labels, and applicability.
It is not currently possible to validate a claim entirely automatically.
Note: The subject of the claim may consist of more than one
software component, and taken together they must satisfy all requirements
that are not excluded through the claim. This includes assistive technologies
and
operating environment features that are part of a claim. Some
components may not have to satisfy some requirements as long as the subject
as a whole satisfies them. For instance, a particular component of the
subject may not have to conform to the DOM APIs required by guideline 6 as long as the
subject of the claim as a whole makes all content available through
those APIs.
Note: Ideally, the standard (or, default) user agent
installation procedure should provide and install all components that are part
of a conformance claim. This is because the more software components the user
must install in order to construct a conforming user agent, the higher the risk
of failure. Failure may be due to inaccessible mechanisms for downloading and
installing plug-ins, or lack of installation access privileges for a computer
in a public space, etc.
Each checkpoint definition indicates whether the requirements of the
checkpoint must be satisfied for:
- content only, i.e., the document
object, generally provided by the author.
- user agent features only, i.e., everything besides the document object
(e.g., components of the user
agent user interface, the user agent
documentation, focus and selection mechanisms, etc.).
- both content and user agent features.
The user agent must satisfy all requirements involving user interaction
(both user input and output to the user) through the
user interface of the subject of the claim. This includes not only the
requirements that directly refer to user control, configuration, etc., but
also requirements that indirectly involve the user interface (e.g., system
conventions pertaining to the user interface).
To satisfy the requirements of this document, developers are encouraged to
adopt
operating environment conventions and features that benefit
accessibility. When an operating environment feature (e.g., the operating
system's audio control feature) is adopted to satisfy the requirements of this
document, it is part of the subject of the
claim.
Developers may provide access through the user agent's user interface to
operating environment features adopted to satisfy the requirements of this
document. For example, if the user agent adopts the operating system's audio
control feature to satisfy checkpoint 4.9, the user agent may (but is not required to)
include those controls in its own user interface.
There may be scenarios where a content provider wishes to limit the user's
full access to content. For instance, a content provider may wish to limit
access to content through an API (e.g., to protect intellectual property
rights, or for security reasons), or to provide a "read-only" view (allowing no
user interaction). A valid conformance claim remains valid even when the
functionality of a conforming user agent is restricted in a particular setting.
The validity of a conformance claim will be seriously jeopardized if a user
agent does not meet the requirements of this document for general-purpose
content.
Note: The User Agent Accessibility Guidelines Working Group
recognizes that further work is necessary in the area of digital rights
management as it relates to accessibility.
This specification imposes no restrictions about:
- who may make a claim (e.g., vendors about their own products, third parties
about those products, journalists about products, etc.), or
- where claims may be published (e.g., on the Web or in paper product
documentation).
Claimants (or relevant assuring parties) are solely responsible for the
validity of their claims, keeping claims up to date, and proper use of the conformance icons. As of the publication of this
document, W3C does not act as an assuring party, but it may do so in the
future, or it may establish recommendations for assuring parties.
Claimants are expected to modify or retract a claim if it may be
demonstrated that the claim is not valid. Claimants are encouraged to claim
conformance to the most recent User Agent Accessibility Guidelines
Recommendation available.
As part of a conformance claim, people may use a conformance icon (or,
"conformance logo") on a Web site, on product packaging, in documentation, etc.
Each conformance icon (chosen according to the appropriate conformance level) used on the Web must link to
the W3C explanation of the icon. The appearance of a conformance icon does not
imply that W3C has reviewed or validated the claim. An icon must be accompanied
by a well-formed claim.
Draft Note: In the event this document becomes a W3C
Recommendation this document will link to the W3C Web site for additional
information about the icons and how to use them.
Note: In this document, glossary terms generally link to
the corresponding entries in this section. These terms are also highlighted
through style sheets and identified as glossary terms through markup.
-
Activate
- In this document, the verb "to activate" means (depending
on context) either:
The effect of activation depends on the type of enabled element or user
interface control. For instance, when a link is activated, the user agent
generally retrieves the linked Web
resource. When a form control is activated, it may change state
(e.g., check boxes) or may take user input (e.g., a text entry field).
-
Alert
- In this document, "to alert" means to make the user aware
of some event, without requiring acknowledgement. For example, the user agent
may alert the user that new content is available on the server by displaying a
text message in the user agent's status bar.
See checkpoint 1.3 for
requirements about alerts.
-
Animation
- In this document, the term "animation" refers to any
visual movement effect created automatically (i.e., without manual user
interaction). This definition of animation includes video and animated images.
Animation techniques include:
- graphically displaying a sequence of snapshots within the same region
(e.g., as is done for video and animated images). The series of snapshots may
be provided by a single resource (e.g., an animated GIF image) or from distinct
resources (e.g., a series of images downloaded continuously by the user
agent).
- scrolling text (e.g., achieved through markup or style sheets).
- displacing graphical objects around the viewport (e.g., a picture of a ball
that is moved around the viewport giving the impression that it is bouncing off
of the viewport edges). For instance, the SMIL 2.0 [SMIL20] animation modules explain
how to create such animation effects in a declarative manner (i.e., not by
composition of successive snapshots).
- Application
Programming Interface (API), standard input/output/device
API
- An application programming interface (API) defines how
communication may take place between applications.
As part of encouraging interoperability, this document recommends using
standard APIs where possible, although this document does
not define in all cases how those APIs are standardized (i.e., whether they are
defined by specifications such as W3C Recommendations, defined by an
operating environment vendor, de facto
standards, etc.). Implementing APIs that are independent of a particular
operating environment (as are the W3C DOM Level 2 specifications) may reduce
implementation costs for multi-platform user agents and promote the development
of multi-platform assistive technologies. Implementing standard APIs defined
for a particular operating environment may reduce implementation costs for
assistive technology developers who wish to interoperate with more than one
piece of software running on that operating environment.
A "device API" defines how communication may take place
with an input or output device such as a keyboard, mouse, video card, etc. A
"standard device API" is one that is considered standard for that particular
device on a given operating or windowing system.
In this document, an "input/output API" defines how
applications or devices communicate with a user agent. As used in this
document, input and output APIs include, but are not limited to, device APIs.
Input and output APIs also include more abstract communication interfaces than
those specified by device APIs. A "standard input/output API" is one that is
expected to be implemented by software running on a particular operating
environment. Standard input/output APIs may vary from system to system. For
example, on desktop computers today, the standard input APIs
are for the mouse and keyboard. For touch screen devices or mobile devices,
standard input APIs may include stylus, buttons, voice, etc.
The graphical display and sound card are considered standard output devices for
a graphical desktop computer environment, and each has a standard
API.
- Assistive technology
- In the context of this document, an assistive technology
is a
user agent that:
- relies on services (such as retrieving
Web resources, parsing markup, etc.) provided by one or more other
"host" user agents. Assistive technologies communicate data and messages with
host user agents by using and monitoring APIs.
- provides services beyond those offered by the host user agents to meet the
requirements of users with disabilities. Additional services include
alternative renderings (e.g., as synthesized speech or magnified content),
alternative input methods (e.g., voice), additional navigation or orientation
mechanisms, content transformations (e.g., to make tables more accessible),
etc.
For example, screen reader software is an assistive technology because it
relies on browsers or other software to enable Web access, particularly for
people with visual and learning disabilities.
Examples of assistive technologies that are important in the context of this
document include the following:
- screen magnifiers, which are used by people with visual disabilities to
enlarge and change colors on the screen to improve the visual readability of
rendered text and images.
- screen readers, which are used by people who are blind or have reading
disabilities to read textual information through synthesized speech or braille
displays.
- speech recognition software, which may be used by people who have some
physical disabilities.
- alternative keyboards, which are used by people with certain physical
disabilities to simulate the keyboard.
- alternative pointing devices, which are used by people with certain
physical disabilities to simulate mouse pointing and button activations.
- Beyond this document, assistive technologies consist of
software or hardware that has been specifically designed to assist people with
disabilities in carrying out daily activities, e.g., wheelchairs, reading
machines, devices for grasping, text telephones, vibrating pagers, etc. For
example, the following very general definition of "assistive technology device"
comes from the (U.S.) Assistive Technology Act of 1998 [AT1998]:
Any item, piece of equipment, or product system, whether acquired
commercially, modified, or customized, that is used to increase, maintain, or
improve functional capabilities of individuals with disabilities.
-
Attribute
- This document uses the term "attribute" in the XML sense:
an element may have a set of attribute specifications (refer to the XML 1.0
specification
[XML] section 3).
- Audio-only
presentation
- An audio-only presentation is content consisting
exclusively of one or more audio
tracks presented concurrently or in series. Examples of an
audio-only presentation include a musical performance, a radio-style news
broadcast, and a book reading.
-
Audio track
- An audio object is content rendered as sound through an
audio
viewport. An audio track is an audio
object that is intended as a whole or partial presentation. An audio track may,
but is not required to, correspond to a single audio channel (left or right
audio channel).
- Auditory description
- An auditory description (sometimes, "audio description")
is either a prerecorded human voice or a synthesized voice (recorded or
generated dynamically) describing the key visual elements of a movie or other
animation. The auditory description is
synchronized with the audio
track of the presentation, usually during natural pauses in the audio track. Auditory descriptions include
information about actions, body language, graphics, and scene changes.
-
Author styles
- Author styles are style property values that come from a
document, or from its associated style sheets, or that are generated by the
server.
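For example (an illustrative fragment with hypothetical file and property
values), author styles may reach the user agent through a linked style sheet,
a STYLE element, or a "style" attribute:
<head>
  <link rel="stylesheet" type="text/css" href="site.css">
  <style type="text/css">
    h1 { color: navy }
  </style>
</head>
<body>
  <p style="font-size: 120%">...</p>
</body>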
-
Captions
- Captions (sometimes, "closed captions") are text
transcripts that are
synchronized with other audio
tracks or visual
tracks. Captions convey information about spoken words and
non-spoken sounds such as sound effects. They benefit people who are deaf or
hard-of-hearing, and anyone who cannot hear the audio (e.g., someone in a noisy
environment). Captions are generally rendered
graphically above, below, or superimposed over video.
Note: Other terms that include the word "caption" may have different
meanings in this document. For instance, a "table caption" is a title for the
table, often positioned graphically above or below the table. In this document,
the intended meaning of "caption" will be clear from context.
- Character encoding
- A "character encoding" is a mapping from a character set
definition to the actual code units used to represent the data. Please refer to
the Unicode specification [UNICODE] for more information
about character encodings. Refer to "Character Model for the World Wide Web"
[CHARMOD] for additional information about characters and character
encodings.
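For instance (an illustrative fragment, not a requirement of this document),
an HTML author may declare the character encoding used for a document with a
META declaration; the user agent uses this information to map the received
bytes back to characters:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">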
- Collated text
transcript
- A collated text transcript is a text equivalent of a movie or other animation.
More specifically, it is the combination of the text transcript of the audio track and the text equivalent of the
visual track. For example, a collated
text transcript typically includes segments of spoken dialogue interspersed
with text descriptions of the key visual elements of a presentation (actions,
body language, graphics, and scene changes). See also the definitions of text
transcript and auditory
description. Collated text transcripts are essential for individuals
who are deaf-blind.
- Conditional content
- Conditional content is content that, by specification,
should be made available to users through the user interface, generally under
certain conditions (e.g., user preferences or operating environment
limitations). Some examples of conditional content mechanisms include:
- The "
alt
" attribute of the IMG
element in HTML 4.
According to
section 13.2 of the HTML 4 specification ([HTML4]): "User agents must render
alternate text when they cannot support images, they cannot support a certain
image type or when they are configured not to display images.
OBJECT
elements in HTML 4.
Section 13.3.1 of the HTML 4 specification ([HTML4]) explains the conditional
rendering rules of (nested) OBJECT
elements.
- The switch element and test attributes in SMIL 1.0. Sections 4.3 and 4.4,
respectively, of SMIL 1.0 [SMIL] explain the conditional rendering rules of
these features.
- SVG 1.0 [SVG] also includes a switch element and several attributes for
conditional processing.
- The NOSCRIPT and NOFRAMES elements in HTML 4 [HTML4] allow the author to
provide content under conditions when the user agent does not support scripts
or frames, or the user has turned off support for scripts or frames.
Specifications vary in how completely they define how and when to render
conditional content. For instance, the HTML 4 specification includes the
rendering conditions for the "alt
" attribute, but not for the
"title
" attribute. The HTML 4 specification does indicate that the
"title
" attribute should be available to users through the user
interface ("Values of the title attribute may be rendered by user agents in a
variety of ways...").
Note: The Web Content Accessibility Guidelines 1.0 requires
that authors provide text equivalents for non-text content. This is generally
done by using the conditional content mechanisms of a markup language. Since
conditional content may not be rendered by default, the current document
requires the user agent to provide access to unrendered conditional content (checkpoint 2.3 and checkpoint 2.9)
as it may have been provided to promote accessibility.
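The following illustrative HTML 4 fragment (hypothetical file names and text)
combines two of the mechanisms listed above; the "alt" text and the NOSCRIPT
content are rendered only under the conditions described in the HTML 4
specification:
<img src="chart.png" alt="Sales rose 10% between 1999 and 2000">
<script type="text/javascript" src="menu.js"></script>
<noscript>
  <p><a href="sitemap.html">Text-only site map</a></p>
</noscript>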
-
Configure,
Control
- In the context of this document, the verbs "to control"
and "to configure" share in common the idea of governance such as a user may
exercise over interface layout, user agent behavior, rendering style, and other
parameters required by this document. Generally, the difference in the terms
centers on the idea of persistence. When a user makes a change by
"controlling" a setting, that change usually does not persist beyond that user
session. On the other hand, when a user "configures" a setting, that setting
typically persists into later user sessions. Furthermore, the term "control"
typically means that the change can be made easily (such as through a keyboard
shortcut) and that the results of the change occur immediately, whereas the
term "configure" typically means that making the change requires more time and
effort (such as making the change via a series of menus leading to a dialog
box, via style sheets or scripts, etc.) and that the results of the change may
not take effect immediately (e.g., due to time spent reinitializing the system,
initiating a new session, rebooting the system). In order to be able to
configure and control the user agent, the user needs to be able to "read" as
well as "write" values for these parameters. Configuration settings may be
stored in a profile.
The range and granularity of the changes that can be controlled or configured
by the user may depend on limitations of the
operating environment or hardware.
Both configuration and control may apply at different "levels": across
Web resources (i.e., at the user agent
level, or inherited from the operating environment), to the entirety of a
Web resource, or to components of a Web resource (e.g., on a per-element
basis). In this document, the term global configuration is
used to emphasize when a configuration applies across Web resources. For
example, users may configure the user agent to apply the same font family
across Web resources, so that all text is
displayed by default using that font family. On the other hand, the user may
wish to configure the rendering of a particular element type, which may be done
through style sheets. Or, the user may wish to control the text size
dynamically (zooming in and out) for a given document, without having to
reconfigure the user agent. Or, the user may wish to control the text size
dynamically for a given element, e.g., by navigating to the element and zooming
in on it.
User agents may allow users to choose configurations based on various
parameters, such as hardware capabilities, natural language, etc.
Note: In this document, the noun "control" means "user
interface component" or "form component".
-
Content
- In this specification, the noun "content" is used in three
ways:
- It is used to mean the document
object as a whole or in parts.
- It is used to mean the content of an HTML or XML element, in the sense
employed by the XML 1.0 specification ([XML], section 3.1): "The text between
the start-tag and end-tag is called the element's content." Context should
indicate that the term content is being used in this sense.
- It is used in the context of the phrases non-text content and
text content.
-
Device-independence
- Device-independence refers to the ability to make use of
software with any supported input or output device.
-
Document Object,
Document Object Model
- In general usage, the term "document object" refers to the user agent's
representation of data (e.g., a document). This data generally comes from the
document
source, but may also be generated (from style sheets, scripts,
transformations, etc.), produced as a result of preferences set within the user
agent, added as the result of a repair performed automatically by the user
agent, etc. Some data that is part of the document object is routinely
rendered (e.g., in HTML, what appears between the start and end tags
of elements and the values of attributes such as "alt", "title", and
"summary"). Other parts of the document object are generally processed by the
user agent without user awareness, such as DTD-defined names of element types
and attributes, and other attribute values such as "href", "id", etc. These
guidelines require that users have access to both types of data through the
user interface. Most of the requirements of this document apply to the document
object after its construction. However, a few checkpoints (e.g., checkpoint 2.7 and checkpoint 2.10) may
affect the construction of the document object.
A "document object model" is the abstraction that governs the construction
of the user agent's document object. The document object model employed by
different user agents may vary in implementation and sometimes in scope. This
specification requires that user agents implement the
APIs defined in Document Object Model
(DOM) Level 2 Specifications ([DOM2CORE] and
[DOM2STYLE]) for access to HTML,
XML, and CSS content. These DOM APIs allow authors
to access and modify the content via a scripting language (e.g., JavaScript) in
a consistent manner across different scripting languages. As a standard
interface, the DOM APIs make it easier not just for authors, but for assistive
technology developers to extract information and render it in ways most suited
to the needs of particular users.
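As a sketch only (hypothetical use, not a normative technique), a script
embedded in an HTML document might use DOM Level 2 Core methods to read
conditional content such as "alt" values; an assistive technology could obtain
the same information through the user agent's DOM implementation:
<script type="text/javascript">
  // Sketch: enumerate IMG elements and read their "alt" attributes
  // using DOM Level 2 Core methods.
  var images = document.getElementsByTagName("IMG");
  for (var i = 0; i < images.length; i++) {
    var altText = images.item(i).getAttribute("alt");
    // ... altText could, for example, be passed to a speech synthesizer ...
  }
</script>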
-
Document character set
- A document character set (a concept taken from SGML) is a
sequence of abstract characters that may appear in Web content represented in a
particular format (such as HTML, XML, etc.). A document character set consists
of:
- a "repertoire", A set of abstract characters, such as the Latin letter "A",
the Cyrillic letter "I", the Chinese character meaning "water", etc.
- Code positions: A set of integer references to characters in the
repertoire.
For instance, the character set required by the HTML 4 specification [HTML4] is defined
in the Unicode specification [UNICODE]. Refer to "Character
Model for the World Wide Web" [CHARMOD] for more information
about document character sets.
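For instance (illustrative markup), in HTML 4 a numeric character reference
designates a character by its code position in the document character set,
independently of the character encoding used to transmit the document:
<p>The character &#x6C34; is the Chinese character meaning "water".</p>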
-
Document source, Document source view
- In this document, the term "document source" refers to the
data that the user agent receives as the direct result of a request for a
Web resource (e.g., as the result of an
HTTP/1.1
[RFC2616] "GET", or as the result of viewing a resource on the local
file system). The document source is generally a subset of the
document object (e.g., since the document
object may include repair
content).
-
Documentation
- Documentation refers to information that supports the use
of a product. This information may be found in product manuals, installation
instructions, the help system, tutorials, etc. Documentation may be distributed
(e.g., some parts may be delivered on CD-ROM, others on the Web). Refer to guideline 12 for information about
documentation requirements.
-
Element
- This document uses the term "element" both in the XML
sense (an element is a syntactic construct as described in the XML 1.0
specification
[XML], section 3) and more generally to mean a type of content (such
as video or sound) or a logical construct (such as a header or list).
-
Enabled element, disabled element
- An enabled element is a piece of
content that is subject to user activation through the user
interface or indirectly through an API. The set of elements that a user agent
enables is generally derived from, but is not limited to, the set of
interactive elements defined by implemented markup languages.
A disabled element is a piece of content
that is not an enabled element.
Some elements may only be enabled elements for part of a user session. For
instance, an element may be disabled by a script as the result of user
interaction. Or, an element may only be enabled during a given time period
(e.g., during part of a SMIL 1.0 [SMIL] presentation). Or, the user
may be viewing content in "read-only" mode, which may disable some
elements.
For the requirements of this document, user
selection does not constitute user interaction with enabled
elements. See the definition of content
focus.
Note: Enabled and disabled elements come from content; they
are not part of the user
agent user interface.
Note: The term "active element" is not used in this
document since it may suggest several different concepts, including:
interactive element, enabled element, an element "in the process of being
activated" (which is the meaning of ':active' in CSS2 [CSS2], for example).
-
Equivalent (for content)
- The term "equivalent" is used in this document as it is
used in the Web Content Accessibility Guidelines 1.0 [WCAG10]:
Content is "equivalent" to other content when both fulfill essentially the
same function or purpose upon presentation to the user. In the context of this
document, the equivalent must fulfill essentially the same function for the
person with a disability (at least insofar as is feasible, given the nature of
the disability and the state of technology), as the primary content does for
the person without any disability.
Equivalents include text
equivalents (e.g., text equivalents for images; text transcripts for
audio tracks; collated text transcripts for multimedia presentations and
animations) and non-text
equivalents (e.g., a prerecorded auditory description of a
visual track of a movie, or a sign
language video rendition of a written text, etc.).
Each markup language defines its own mechanisms for specifying
conditional content, and these mechanisms may be used by authors to
provide text equivalents. For instance, in HTML 4 [HTML4] or SMIL 1.0 [SMIL], authors may
use the "alt
" attribute to specify a text equivalent for some
elements. In HTML 4, authors may provide equivalents (or portions of
equivalents) in attribute values (e.g., the "summary" attribute for the
TABLE
element), in element content (e.g., OBJECT
for
external content it specifies, NOFRAMES
for frame equivalents, and
NOSCRIPT
for script equivalents), and in prose. Please consult the
Web Content Accessibility Guidelines 1.0 [WCAG10] and its associated
Techniques document [WCAG10-TECHS] for more
information about equivalents.
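For example (an illustrative fragment with hypothetical content), an author
might provide one text equivalent in an attribute value and another in element
content:
<table summary="Quarterly sales; one row per region, one column per quarter">
  ...
</table>
<object data="logo.svg" type="image/svg+xml">
  Company logo: a blue globe
</object>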
- Events and
scripting, event handler
- User agents often perform a task when an event occurs that
is due to user interaction (e.g., document loading, mouse motion or a key
press), a request from the
operating environment, etc. Some markup languages allow authors to
specify that a script, called an event handler, be executed when
the event occurs. An event handler is "explicitly associated with an
element" when the event handler is associated with that element
through markup or the DOM. The term "event
bubbling" describes a programming style where a single event
handler dispatches events to more than one element. In this case, the event
handlers are not explicitly associated with the elements receiving the events
(except for the single element that dispatches the events).
Note: The combination of HTML, style sheets, the Document
Object Model (DOM) and scripting is commonly referred to as
"Dynamic HTML" or DHTML. However, as there is no W3C specification that
formally defines DHTML, this document only refers to event handlers and
scripts.
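For instance (hypothetical function name), an event handler explicitly
associated with an element through markup might look like the following; the
same association could also be established through the DOM:
<input type="button" value="Check order" onclick="validateOrder(this.form)">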
-
Explicit user request
- In several checkpoints in this document, the term "explicit user request"
is used to mean any user interaction
recognized with certainty to be for a specific purpose. For
instance, when the user selects "New viewport" in the user agent's user
interface, this is an explicit user request for a new viewport. On the other
hand, it is not an explicit request when the user activates a link and that
link has been marked up by the author to open a new viewport (since the user
may not know that a new viewport will open). Nor is it an explicit user request
even if the link text states "will open a new viewport". Some other examples of
explicit user requests include "yes" responses to prompts from the user agent,
configuration through the user agent's user interface, activation of known form
submit controls, and link activation (which should not be assumed to mean more
than "get this linked resource", even if the link text or title or role
indicates more). Some examples of behaviors that happen without explicit user
request include changes due to scripts.
Note: Users do make mistakes. For example, a user may
submit a form inadvertently by activating a known form submit control. In this
document, this type of mistake is still considered an explicit user
request.
- Fee
link
- For the purpose of this document, the term "fee link"
refers to a link that when activated, debits the user's electronic "wallet"
(generally, a "micropayment"). The link's role as a fee link is identified
through markup (in a manner that the user agent can
recognize). This definition of fee link excludes payment mechanisms
(e.g., some form-based credit card transactions) that cannot be recognized by
the user agent as causing payments. For more information about fee links, refer
to "Common Markup for micropayment per-fee-links"
[MICROPAYMENT].
-
Focus,
content focus,
user interface focus, current focus
- Focus is a user interface mechanism that has the following
properties:
- It designates a location of potential user interaction. In most user agents
today, the focus is only sensitive to keyboard input, but this could be
generalized to other input devices. The focus is
highlighted in the viewport so that it stands out.
- The focus has state (more so than the transient pointing device), so the
user may use it as a kind of "bookmark." For instance, the user may set the
focus (through the user interface or programmatically), review other content
(e.g., by scrolling the viewport or otherwise moving the point of
regard), and then return to the focus having decided to activate the
designated enabled element.
User agents generally implement two types of focus:
- The "content focus" designates an enabled element. A viewport has at most one
content focus. The content focus may be empty (i.e., refer to no location in
content).
- The "user interface focus" designates a control of the user agent's user interface (e.g., a radio
button, text box, menu, etc.). A viewport has at most one user interface
focus.
In this document, the unmodified term "focus" means both "content focus" and
"user interface focus".
When several viewports
coexist, at most one viewport's content focus or user
interface focus receives input events; this is called the current focus.
-
Graphical
- In this document, the term "graphical" refers to
information (text, colors, graphics, images, animations, etc.) rendered for
visual consumption.
-
Highlight
- In this document, "to highlight" means to emphasize
through the user interface. For example, user agents highlight which content is
selected or focused. Graphical highlight mechanisms include dotted boxes,
underlining, and reverse video. Synthesized speech highlight mechanisms include
alterations of voice pitch and volume.
- Input configuration
- An input configuration is the mapping of user agent
functionalities to some user
interface input mechanisms (e.g., menus, buttons, keyboard keys,
voice commands, etc.). The default input configuration is the mapping the user
finds after installation of the software; it must be documented (per checkpoint 12.3). Input
configurations may be affected by author-specified bindings (e.g., through the
"accesskey" attribute of HTML 4 [HTML4]).
- Interactive element
- An interactive element is a piece of content that is
intended, by specification, to be an enabled element in some user sessions. For
instance, the interactive elements of HTML 4 [HTML4] include:
links, image maps, form controls, elements with a value for the "longdesc"
attribute, and elements with event
handlers explicitly associated with them (e.g., through the various
"on" attributes). The role of an element as an interactive element is subject
to applicability.
-
Natural language
- Natural language is spoken, written, or signed human
language such as French, Japanese, and American Sign Language. On the Web, the
natural language of content may
be specified by markup or HTTP headers. Some examples include the
"lang" attribute in HTML 4 ([HTML4] section 8.1), the "xml:lang"
attribute in XML 1.0 ([XML], section 2.12), the
HTML 4 "hreflang" attribute for links in HTML 4
([HTML4],
section 12.1.5), the HTTP Content-Language header ([RFC2616], section 14.12) and the
Accept-Language request header ([RFC2616], section 14.4). See also
the definition of script.
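For example (hypothetical URI), natural language information might be encoded
as follows:
<p lang="fr">Bonjour, le monde.</p>
<a href="http://example.org/doc.ja.html" hreflang="ja">Japanese version</a>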
-
Normative,
informative
- As used in this document, the term "normative" refers to
"that on which the requirements of this document depend for their most precise
statement." What is normative is required for
conformance (though the conformance scheme of this document allows
claimants to exempt certain normative provisions as long as the claim discloses
the exemption). What is identified as "informative" (sometimes,
"non-normative") is never required for conformance.
- Operating
environment
- The term "operating environment" refers to the environment
that governs the user agent's operation, whether it is an operating system or a
programming language environment such as Java.
-
Placeholder
- A placeholder is content generated by the user agent to
replace author-supplied content. A placeholder may be generated as the result
of a user preference (e.g., to not render images) or as
repair content (e.g., when an image
cannot be found). Placeholders can be any type of content, including text and
images.
This document includes requirements that the user be able to view the
original author-supplied content associated with a placeholder. To satisfy
these requirements, the user agent might render the content in place of the
placeholder or in a separate viewport (leaving the placeholder as is). A
request to view the original content associated with a placeholder is considered an explicit user
request to render that content.
This document does not require user agents to include placeholders in the document
object. A placeholder that is inserted in the document object should
conform to the Web Content Accessibility Guidelines 1.0 [WCAG10]. If a
placeholder is not part of the document object, it is part of the user
interface only (and subject, for example, to checkpoint 1.3).
-
Point of regard
- The point of regard is a position in rendered
content that the user is presumed to be viewing. The dimensions of
the point of regard may vary. For example, it may be a point (e.g., a moment in
an audio rendering or a cursor in a graphical rendering), or a range of text
(e.g., focused text), or a two-dimensional area (e.g., content rendered through
a two-dimensional graphical viewport). The point of regard is almost always
within a viewport (though the dimensions of the point of regard could exceed
those of the viewport). The point of regard may also refer to a particular
moment in time for content that changes over time (e.g., an
audio-only presentation). User agents may determine the point of
regard in a number of ways, including based on viewport position in content,
content focus,
selection, etc. A user agent should not change the point of regard
unexpectedly as this may disorient the user.
-
Profile
- A profile is a named and persistent representation of user preferences that
may be used to configure a user agent. Preferences include input
configurations, style preferences, natural language preferences, etc. In
operating environments with distinct user accounts, profiles enable
users to reconfigure software quickly when they log on, and profiles may be
shared by several users. Platform-independent profiles are useful for those who
use the same user agent on different platforms.
-
Prompt
- In this document, "to prompt" means to require input from
the user. The user agent should allow users to
configure how they wish to be prompted. For instance, for a user
agent functionality X, configurations might include: always do X without
prompting me, never do X without prompting me, never do X but tell me when you
could have, never do X and never tell me that you could have, etc.
- Properties, values, and
defaults
- A user agent renders a document by applying formatting
algorithms and style information to the document's elements. Formatting depends
on a number of factors, including where the document is rendered: on screen, on
paper, through loudspeakers, on a braille display, on a mobile device, etc.
Style information (e.g., fonts, colors, speech prosody, etc.) may come from the
elements themselves (e.g., certain font and phrase elements in HTML), from
style sheets, or from user agent settings. For the purposes of these
guidelines, each formatting or style option is governed by a property and each
property may take one value from a set of legal values. Generally in this
document, the term "property"
has the meaning defined in CSS 2 ([CSS2], section 3). A reference to
"styles" in this document means a set of style-related properties.
- The value given to a property by a user agent when it is
installed is called the property's default value.
-
Recognize
- Authors encode information in markup languages, style
sheet languages, scripting languages, protocols, etc. When the information is
encoded in a manner that allows the user agent to process it with certainty,
the user agent can "recognize" the information. For instance, HTML allows
authors to specify a heading with the H1 element, so a user agent that
implements HTML can recognize that content as a heading. If the author creates
headings using a visual effect alone (e.g., by increasing the font size), then
the author has encoded the heading in a manner that does not allow the user
agent to recognize it as a heading.
Some requirements of this document depend on content roles, content
relationships, timing relationships, and other information supplied by the
author. These requirements only apply when the author
has encoded that information in a manner that the user agent can recognize. See
the section on conformance for more information
about applicability.
In practice, user agents will rely heavily on information that the author
has encoded in a markup language or style sheet language. On the other hand,
behaviors, style, and meaning encoded in a script may not be recognized by the
user agent as easily. For instance, a user agent is not expected to recognize
that, when executed, a script will calculate a factorial. The user agent will
be able to recognize some information in a script by virtue of implementing the
scripting language or a known program library (e.g., the user agent is expected
to recognize when a script will open a viewport or retrieve a resource from the
Web). The Techniques document [UAAG10-TECHS] lists some
markup known to affect accessibility that user agents can recognize.
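For example (illustrative markup), the first of the following fragments
encodes a heading that a user agent implementing HTML can recognize; the
second produces a similar visual effect but does not allow the user agent to
recognize a heading:
<h1>Introduction</h1>
<span style="font-size: 200%; font-weight: bold">Introduction</span>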
-
Rendered content, rendered text
- Rendered content is the part of
content capable of being perceived by a user through a given
viewport (whether visual, auditory, or tactile). Some rendered content may lie
"outside" of a viewport at some times (e.g., when the user can only view a
portion of a large document through a small graphical viewport, when audio
content has already been played, etc.). By changing the viewport's position,
the user can view the remaining rendered content.
- Note: In the context of this document, "invisible content" is
content that influences graphical rendering of other content but is not
rendered itself. Similarly,
"silent content" is content that influences audio rendering of other
content but is not rendered itself. Neither invisible nor silent content is
considered rendered content.
-
Repair content, repair text
- In this document, the term "repair content" refers to
content generated by the user agent in order to correct an error condition.
"Repair text" means repair content consisting only of
text. Some error conditions that may lead to the generation of
repair content include:
- Erroneous or incomplete content (e.g., ill-formed markup, invalid markup,
missing
conditional content that is required by specification, etc.);
- Missing resources for handling or rendering content (e.g., the user agent
lacks a font family to display some characters, the user agent doesn't
implement a particular scripting language, etc.);
This document does not require user agents to include repair content in the
document
object. Repair content inserted in the document object should
conform to the Web Content Accessibility Guidelines 1.0 [WCAG10]. For
more information about repair techniques for Web content and software, refer to
"Techniques for Authoring Tool Accessibility Guidelines 1.0"
[ATAG10-TECHS].
-
Script
- In this document, the term "script" almost always refers to a scripting
(programming) language used to create dynamic Web content. However, in
checkpoints referring to the written (natural) language of content, the term
"script" is used as in Unicode [UNICODE] to mean "A collection of
symbols used to represent textual information in one or more writing
systems."
- Selection, current
selection
- The selection generally identifies a range of content
(e.g., text, images, etc.) in a document. This range may be empty. The selection may be structured (based on the
document tree) or unstructured (e.g., text-based). Content may be selected
through user interaction, scripts, etc. The selection may be used for a variety
of purposes: for cut and paste operations, to designate a specific element in a
document for the purposes of a query, to identify what a screen reader should
read, etc. The selection may be set by the user (e.g., by a pointing device or
the keyboard) or through an application programming interface (API).
A viewport has at most one selection (though the selection may be rendered
graphically as discontinuous text fragments).
When several viewports
coexist, at most one viewport's selection receives input events; this is called
the current selection.
On the screen, the selection may be
highlighted using colors, fonts, graphics, magnification, etc. The
selection may also be rendered through changes in speech prosody, for
example.
- Support, implement, conform
- In this document, the terms "support", "implement", and
"conform" all refer to what a developer has designed a user agent to do, but
they represent different degrees of specificity. A user agent "supports"
general classes of objects, such as "images" or "Japanese". A user agent
"implements" a specification (e.g., the PNG and SVG image format
specifications, a particular scripting language, etc.) or an
API (e.g., the DOM API) when it has been
programmed to follow all or part of a specification. A user agent "conforms to"
a specification when it implements the specification and satisfies its
conformance criteria. This document includes some explicit conformance
requirements (e.g., to a particular level of the "Web Content Accessibility
Guidelines 1.0"
[WCAG10]).
- Synchronize
- In this document, "to synchronize" refers to the
time-coordination of two or more presentation components (e.g., in a multimedia
presentation, a visual
track with captions). For Web content developers, the requirement to
synchronize means to provide the data that will permit sensible
time-coordinated rendering by a user agent. For example, Web content developers
can ensure that the segments of caption text are neither too long nor too
short, and that they map to segments of the visual track that are appropriate
in length. For user agent developers, the requirement to synchronize means to
present the content in a sensible time-coordinated fashion under a wide range
of circumstances including technology constraints (e.g., small text-only
displays), user limitations (slow reading speeds, large font sizes, high need
for review or repeat functions), and content that is sub-optimal in terms of
accessibility.
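As a sketch only (hypothetical file names; consult the SMIL 1.0 specification
[SMIL] for the definitive syntax), a SMIL 1.0 presentation might synchronize a
visual track with captions by grouping media objects in a "par" element:
<par>
  <video src="lecture.mpg"/>
  <textstream src="lecture-captions.rt" system-captions="on"/>
</par>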
-
Text
- In this document, the term "text" used by itself refers to
a sequence of characters from a markup language's document character set. Refer to the "Character
Model for the World Wide Web " [CHARMOD] for more information
about text and characters. Note: This document makes use of
other terms that include the word "text" that have highly specialized meanings:
collated text transcript, non-text content,
text content, non-text
element, text
element, text
equivalent, and text
transcript.
-
Text content, non-text content, text
element, non-text element, text equivalent, non-text
equivalent
- As used in this document, a "text element" adds text characters to either content or the
user interface. Both in the Web Content Accessibility Guidelines 1.0
[WCAG10] and in this document, text elements are presumed to produce
text that can be understood when rendered visually, as speech, or as Braille.
Such text elements benefit at least these three groups of users:
- visually-displayed text benefits users who are deaf and adept in reading
visually-displayed text;
- synthesized speech benefits users who are blind and adept in use of
synthesized speech;
- braille benefits users who are deaf-blind and adept at reading
braille.
A text element may consist of both text and non-text data. For instance, a
text element may contain markup for style (e.g., font size or color), structure
(e.g., heading levels), and other semantics. The essential function of the text
element should be retained even if style information happens to be lost in
rendering.
A user agent may have to process a text element in order to have access to
the text characters. For instance, a text element may consist of markup, it may
be encrypted or compressed, or it may include embedded text in a binary format
(e.g., JPEG).
"Text content" is content that is composed of one or more text elements. A
"text equivalent" (whether in content or the user interface) is an
equivalent composed of one or more text elements. Authors generally
provide text equivalents for content by using the
conditional content mechanisms of a specification.
A "non-text element" is an element (in content or the user interface) that
does not have the qualities of a text element. "Non-text content" is composed
of one or more non-text elements. A "non-text equivalent" (whether in content
or the user interface) is an
equivalent composed of one or more non-text elements.
Note that the terms "text element" and "non-text element" are defined by the
characteristics of their output (e.g., rendering) rather than those of their
input (e.g., information sources) or their internals (e.g., format). Both text
elements and non-text elements should be understood as "pre-rendering" content
in contrast to the "post-rendering" content that they produce.
-
Text decoration
- In this document, a "text decoration" is any stylistic effect that the user
agent may apply to visually rendered text that does not affect the layout of
the document (i.e., does not require reformatting when applied or removed).
Text decoration mechanisms include underline, overline, and
strike-through.
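For instance (illustrative style rule with a hypothetical class name), the
following CSS applies a text decoration that does not affect document layout:
<style type="text/css">
  .new-term { text-decoration: underline }
</style>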
-
Text transcript
- A text transcript is a text equivalent of audio
information (e.g., an
audio-only presentation or the
audio track of a movie or other animation). It provides text for
both spoken words and non-spoken sounds such as sound effects. Text transcripts
make audio information accessible to people who have hearing disabilities and
to people who cannot play the audio. Text transcripts are usually pre-written
but may be generated on the fly (e.g., by speech-to-text converters). See also
the definitions of captions
and
collated text transcripts.
- User
agent
- In this document, the term "user agent" is used in two
ways:
- Any software that retrieves and renders Web content for users. This may
include Web browsers, media players, plug-ins, and other programs -- including
assistive
technologies -- that help in retrieving and rendering Web
content.
- The subject of a
conformance claim to this document. This is the most common use of the term
in this document and is the usage in the checkpoints.
-
User agent default styles
- User agent default styles are style property values applied in the absence of
any author or user styles. Some markup languages specify a default rendering
for documents in that markup language. Other specifications may not specify
default styles. For example, XML 1.0 [XML] does not specify
default styles for XML documents. HTML 4 [HTML4] does not
specify default styles for HTML documents, but the CSS 2 [CSS2] specification
suggests a
sample default style sheet for HTML 4 based on current practice.
-
User interface
- For the purposes of this document, user interface includes
both:
- the "user agent user
interface", i.e., the controls and mechanisms offered by the user
agent for user interaction, such as menus, buttons, keyboard access, etc.
- the "content user interface", i.e., the enabled elements that are part of content, such
as form controls, links, applets, etc.
The document distinguishes them only where required for clarity.
-
User styles
- User styles are style property values that come from user
interface settings, user style sheets, or other user interactions.
- Visual-only
presentation
- A visual-only presentation is content consisting
exclusively of one or more visual
tracks presented concurrently or in series. An example of a
visual-only presentation is a silent movie.
-
Visual track
- A visual object is content rendered through a graphical viewport. Visual objects include graphics,
text, and visual portions of movies and other animations. A visual track is a
visual object that is intended as a whole or partial presentation. A visual
track does not necessarily correspond to a single physical object or software
object. A visual track may be text-based or graphic. A visual track may be
static or involve
animation.
-
Views,
viewports
- User agents may handle different types of content: markup language, sound, video, etc.
The user views rendered
content through a viewport. Viewports include windows, frames,
pieces of paper, loudspeakers, virtual magnifying glasses, etc. A viewport may
contain another viewport (e.g., nested frames). User interface controls such as
prompts, menus, alerts, etc. are not viewports. When the dimensions (spatial or
temporal) of a viewport exceed the dimensions of rendered content, the viewport
includes mechanisms such as scroll bars and advance and rewind functionalities
to provide access to the content.
When several viewports
coexist, only one has the current
focus at a given moment. This viewport is
highlighted to make it stand out.
User agents may render the same content in a variety of ways; each rendering
is called a view. For
instance, a user agent may allow users to view an entire document or just a
list of the document's headers. These are two different views of the
document.
-
Voice browser
- From "Introduction and Overview of W3C Speech Interface Framework"
[VOICEBROWSER]: "A voice browser is a device (hardware and software)
that interprets voice markup languages to generate voice output, interpret
voice input, and possibly accept and produce other modalities of input and
output."
-
Web resource
- The term "Web resource" is used in this document in
accordance with Web Characterization Terminology and Definitions Sheet [WEBCHAR] to
mean anything that can be identified by a Uniform Resource Identifier
(URI); refer to RFC 2396 [RFC2396].
For the latest version of any W3C specification please
consult the list of W3C Technical Reports at
http://www.w3.org/TR/. Some documents listed below may have been superseded
since the publication of this document.
Note: In this document, bracketed labels such as "[HTML4]"
link to the corresponding entries in this section. These labels are also
identified as references through markup.
There are two recommended ways to refer to the "User Agent Accessibility
Guidelines 1.0" (and to W3C documents in general):
- References to a specific version of "User Agent Accessibility Guidelines
1.0". For example, use the "this version" URI to refer to
the current document: http://www.w3.org/WAI/UA/WD-UAAG10-20010331.
- References to the latest version of "User Agent Accessibility Guidelines
1.0". Use the "latest version" URI to refer to the most recently published
document in the series: http://www.w3.org/WAI/UA/UAAG10.
In almost all cases, references (either by name or by link) should be to a
specific version of the document. W3C will make every effort to make this
document indefinitely available at its original address in its original form.
The top of this document includes the relevant catalog metadata for specific
references (including title, publication date, "this version"
URI, editors' names, and copyright information).
An XHTML 1.0
[XHTML10] paragraph including a reference to this specific document
might be written:
<p>
<cite><a href="http://www.w3.org/WAI/UA/WD-UAAG10-20010331/">
"User Agent Accessibility Guidelines 1.0"</a></cite>,
I. Jacobs, J. Gunderson, E. Hansen, eds.,
W3C Working Draft, 31 March 2001.
The <a href="http://www.w3.org/WAI/UA/UAAG10/">latest
version</a> of this document is available at
http://www.w3.org/WAI/UA/UAAG10/.</p>
For very general references to this document (where stability of content,
anchors, etc., is not required), it may be appropriate to refer to the latest
version of this document. In this case, please use the "latest version" URI at
the top of this document.
See also information about making conformance
claims to this document.
- [DOM2CORE]
-
"Document Object Model (DOM) Level 2 Core Specification", A. Le
Hors, P. Le Hégaret, L. Wood, G. Nicol, J. Robie, M. Champion, S. Byrne,
eds., 13 November 2000. This W3C Recommendation is
http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/.
- [DOM2STYLE]
-
"Document Object Model (DOM) Level 2 Style Specification", V.
Apparao, P. Le Hégaret, C. Wilson, eds., 13 November 2000. This W3C
Recommendation is
http://www.w3.org/TR/2000/REC-DOM-Level-2-Style-20001113/.
- [RFC2046]
- "Multipurpose Internet Mail
Extensions (MIME) Part Two: Media Types", N. Freed, N. Borenstein,
November 1996.
- [RFC2119]
- "Key words for use in
RFCs to Indicate Requirement Levels", S. Bradner, March 1997.
- [WCAG10]
- "Web
Content Accessibility Guidelines 1.0", W. Chisholm, G. Vanderheiden,
and I. Jacobs, eds., 5 May 1999. This W3C Recommendation is
http://www.w3.org/TR/1999/WAI-WEBCONTENT-19990505/.
- [AT1998]
- The Assistive
Technology Act of 1998, 13 November 1998, United States P.L.
105-394.
- [ATAG10]
- "Authoring
Tool Accessibility Guidelines 1.0", J. Treviranus, C.
McCathieNevile, I. Jacobs, and J. Richards, eds., 3 February 2000. This W3C
Recommendation is http://www.w3.org/TR/2000/REC-ATAG10-20000203/.
-
[ATAG10-TECHS]
-
"Techniques for Authoring Tool Accessibility Guidelines 1.0", J.
Treviranus, C. McCathieNevile, I. Jacobs, and J. Richards, eds., 4 May 2000.
This W3C Note is http://www.w3.org/TR/2000/NOTE-ATAG10-TECHS-20000504/.
- [CHARMOD]
- "Character
Model for the World Wide Web", M. Dürst and F. Yergeau, eds.,
29 November 1999. This W3C Working Draft is
http://www.w3.org/TR/1999/WD-charmod-19991129/
- [CSS1]
- "CSS, level 1
Recommendation", B. Bos, H. Wium Lie, eds., 17 December 1996,
revised 11 January 1999. This W3C Recommendation is
http://www.w3.org/TR/1999/REC-CSS1-19990111.
- [CSS2]
- "CSS, level 2
Recommendation", B. Bos, H. Wium Lie, C. Lilley, and I. Jacobs,
eds., 12 May 1998. This W3C Recommendation is
http://www.w3.org/TR/1998/REC-CSS2-19980512/.
- [HTML4]
- "HTML 4.01
Recommendation", D. Raggett, A. Le Hors, and I. Jacobs, eds., 24
December 1999. This W3C Recommendation is
http://www.w3.org/TR/1999/REC-html401-19991224/.
- [MATHML20]
-
"Mathematical Markup Language (MathML) Version 2.0", D. Carlisle, P.
Ion, R. Miner, N. Poppelier, et al., 21 February 2001. This W3C Recommendation
is http://www.w3.org/TR/2001/REC-MathML2-20010221/.
-
[MICROPAYMENT]
-
"Common Markup for micropayment per-fee-links", T. Michel, ed., 25
August 1999. This W3C Working Draft is
http://www.w3.org/TR/1999/WD-Micropayment-Markup-19990825/.
- [PNG]
- "PNG
(Portable Network Graphics) Specification 1.0", T. Boutell, ed., 1
October 1996. This W3C Recommendation is http://www.w3.org/TR/REC-png.
- [RDF10]
-
"Resource Description Framework (RDF) Model and Syntax
Specification", O. Lassila, R. Swick, eds., 22 February 1999. This
W3C Recommendation is http://www.w3.org/TR/1999/REC-rdf-syntax-19990222/.
- [RFC2396]
- "Uniform Resource
Identifiers (URI): Generic Syntax", T. Berners-Lee, R. Fielding, L.
Masinter, August 1998.
- [RFC2616]
- "Hypertext Transfer
Protocol -- HTTP/1.1", J. Gettys, J. Mogul, H. Frystyk, L. Masinter,
P. Leach, T. Berners-Lee, June 1999.
- [SMIL]
- "Synchronized
Multimedia Integration Language (SMIL) 1.0 Specification", P.
Hoschka, ed., 15 June 1998. This W3C Recommendation is
http://www.w3.org/TR/1998/REC-smil-19980615/.
- [SMIL20]
- Synchronized
Multimedia Integration Language (SMIL 2.0) Specification, J. Ayars,
et al., eds., 1 March 2001. This W3C Working Draft is
http://www.w3.org/TR/2001/WD-smil20-20010301/. The latest version of SMIL 2.0
is available at http://www.w3.org/TR/smil20.
- [SVG]
- "Scalable Vector
Graphics (SVG) 1.0 Specification", J. Ferraiolo, ed., 2 August 2000.
This W3C Candidate Recommendation is
http://www.w3.org/TR/2000/CR-SVG-20000802/.
-
[UAAG10-CHECKLIST]
- An appendix to this document lists all of the checkpoints, sorted by
priority. The checklist is available in either tabular
form or list
form.
-
[UAAG10-SUMMARY]
- An appendix to this document provides a summary of the goals and structure of User Agent
Accessibility Guidelines 1.0.
-
[UAAG10-TECHS]
- "Techniques for User
Agent Accessibility Guidelines 1.0", I. Jacobs, J. Gunderson, E.
Hansen, eds. The latest draft of the techniques document is available at
http://www.w3.org/WAI/UA/UAAG10-TECHS/.
- [UNICODE]
- "The
Unicode Standard, Version 3.0", The Unicode Consortium, Reading, MA,
Addison-Wesley Developers Press, 2000. ISBN 0-201-61633-5. Refer also to
http://www.unicode.org/unicode/standard/versions/. For information about character
encodings, refer to Unicode Technical
Report #17 "Character Encoding Model".
-
[VOICEBROWSER]
- "Voice
Browsers: An introduction and glossary for the requirements drafts",
M. Robin, J. Larson, 23 December 1999. This document is
http://www.w3.org/TR/1999/WD-voice-intro-19991223/. This document includes
references to additional W3C specifications about voice browser
technology.
- [W3CPROCESS]
-
"World Wide Web Consortium Process Document", I. Jacobs ed. The 11
November 1999 version of the Process Document is
http://www.w3.org/Consortium/Process/Process-19991111/.
-
[WCAG10-TECHS]
-
"Techniques for Web Content Accessibility Guidelines 1.0", W.
Chisholm, G. Vanderheiden, and I. Jacobs, eds. This W3C Note is
http://www.w3.org/TR/1999/WAI-WEBCONTENT-TECHS-19990505/.
- [WEBCHAR]
- "Web
Characterization Terminology and Definitions Sheet", B. Lavoie, H.
F. Nielsen, eds., 24 May 1999. This is a W3C Working Draft that defines some
terms to establish a common understanding about key Web concepts. This W3C
Working Draft is http://www.w3.org/1999/05/WCA-terms/01.
- [XHTML10]
- "XHTML[tm]
1.0: The Extensible HyperText Markup Language", S. Pemberton, et
al., 26 January 2000. This W3C Recommendation is
http://www.w3.org/TR/2000/REC-xhtml1-20000126/.
- [XML]
- "Extensible
Markup Language (XML) 1.0", T. Bray, J. Paoli, C.M.
Sperberg-McQueen, eds., 10 February 1998. This W3C Recommendation is
http://www.w3.org/TR/1998/REC-xml-19980210.
The active participants of the User Agent Accessibility Guidelines Working
Group who authored this document were: James Allan, Denis Anson (College
Misericordia), Harvey Bingham, Al Gilman, Jon Gunderson (Chair of the Working
Group, University of Illinois, Urbana-Champaign), Eric Hansen (Educational
Testing Service), Ian Jacobs (Team Contact, W3C), Tim Lacy (Microsoft), Charles
McCathieNevile (W3C), David Poehlman, Mickey Quenzer, Gregory Rosmaita
(Visually Impaired Computer Users Group of New York City), and Rich
Schwerdtfeger (IBM).
Many thanks to the following people who have contributed through review and
past participation in the Working Group: Paul Adelson, Kitch Barnicle, Olivier
Borius, Judy Brewer, Dick Brown, Bryan Campbell, Kevin Carey, Tantek
Çelik, Wendy Chisholm, David Clark, Chetz Colwell, Wilson Craig, Nir
Dagan, Daniel Dardailler, B. K. Delong, Neal Ewers, Geoff Freed, John Gardner,
Larry Goldberg, Glen Gordon, John Grotting, Markku Hakkinen, Earle Harrison,
Chris Hasser, Kathy Hewitt, Philipp Hoschka, Masayasu Ishikawa, Phill Jenkins,
Earl Johnson, Jan Kärrman (for help with html2ps), Leonard Kasday,
George Kerscher, Marja-Riitta Koivunen, Peter Korn, Josh Krieger, Catherine
Laws, Aaron Leventhal, Greg Lowney, Susan Lesch, Scott Luebking, William
Loughborough, Napoleon Maou, Peter Meijer, Karen Moses, Masafumi Nakane, Mark
Novak, Charles Oppermann, Mike Paciello, David Pawson, Michael Pederson, Helen
Petrie, Michael Pieper, Jan Richards, Hans Riesebos, Joe Roeder, Lakespur L.
Roca, Madeleine Rothberg, Lloyd Rutledge, Liam Quinn, T.V. Raman, Robert
Savellis, Constantine Stephanidis, Jim Thatcher, Jutta Treviranus, Claus
Thogersen, Steve Tyler, Gregg Vanderheiden, Jaap van Lelieveld, Jon S. von
Tetzchner, Willie Walker, Ben Weiss, Evan Wies, Chris Wilson, Henk Wittingen,
and Tom Wlodkowski.