Techniques for User Agent Accessibility Guidelines 1.0
31 July 2001
This section lists each checkpoint of "User Agent Accessibility Guidelines
1.0" [UAAG10] along with some possible
techniques for satisfying it. Each checkpoint definition includes a link to the
checkpoint definition in "User Agent Accessibility Guidelines 1.0". Each
checkpoint definition is followed by one or more of the following:
- Notes and rationale: Additional rationale and explanation
of the checkpoint;
- Who benefits: Which users with disabilities are expected
to benefit from user agents that satisfy the checkpoint;
- Example techniques: Some techniques to illustrate how a
user agent might satisfy the requirements of the checkpoint. Screen shots and
other information about deployed user agents have been included as sample
techniques. References to products are not endorsements of those products by
W3C;
- Doing more: Techniques to achieve more than what is
required by the checkpoint;
- Related techniques: Links to other techniques in section
3. The accessibility topics of section 3 generally apply to more than one
checkpoint.
- References: References to other guidelines,
specifications, or resources.
Note: Most of the techniques in this document are designed
for graphical browsers and multimedia players running on desktop computers.
However, some of them also make sense for assistive technologies and other user
agents. In particular, techniques about communication between user agents will
benefit assistive technologies. Refer, for example, to the appendix on loading assistive technologies
for access to the document object model.
Each checkpoint in this document is assigned a priority that indicates its
importance for users with disabilities.
- Priority 1
(P1)
- This checkpoint must be satisfied by user agents,
otherwise one or more groups of users with disabilities will find it impossible
to access the Web. Satisfying this checkpoint is a basic requirement for
enabling some people to access the Web.
- Priority 2
(P2)
- This checkpoint should be satisfied by user agents,
otherwise one or more groups of users with disabilities will find it difficult
to access the Web. Satisfying this checkpoint will remove significant barriers
to Web access for some people.
- Priority 3
(P3)
- This checkpoint may be satisfied by user agents to make it
easier for one or more groups of users with disabilities to access information.
Satisfying this checkpoint will improve access to the Web for some people.
Note: This information about checkpoint priorities is
included for convenience only. For detailed information about conformance to
"User Agent Accessibility Guidelines 1.0"
[UAAG10], please refer to that document.
Checkpoints
1.1 Full keyboard access. (P1)
- Ensure that the user can operate through keyboard input alone any user
agent functionality available through the user
interface.
For both content and user agent.
Checkpoint 1.1
Note: User agents may support at least two types of
keyboard access to functionalities: direct access (where user awareness of a
location "in space" is not required, as is the case with keyboard shortcuts and
navigation of user agent menus) and spatial access (where the user moves the
pointing device "in space" via the keyboard). To satisfy this checkpoint, user
agents are expected to provide a mix of both types of keyboard access. User
agents should allow direct keyboard access where possible, and this may be
redundant with spatial input techniques. Furthermore, the user agent should
satisfy this requirement by offering a combination of keyboard-operable user
interface controls (e.g., keyboard operable print menus and settings) and
direct keyboard operation of user agent functionalities (e.g., a shortcut to
print the current page). As examples of functionalities, ensure that the user
can interact with enabled
elements, select content, navigate viewports, configure the user
agent, access documentation, install the user agent, operate controls of the
user interface, etc., all entirely through keyboard input. It is also possible
to claim
conformance to User Agent Accessibility Guidelines 1.0
[UAAG10] for full support through pointing device input and voice
input. See the section on
input modality labels in UAAG 1.0.
Notes and rationale:
- It is up to the user agent developer to decide which functionalities are
best served by direct keyboard access and which are best served by spatial
access through the keyboard (or pointing device). The UAAG 1.0 does not
discourage a pointing device interface, but it does require redundancy through
the keyboard. In most cases, developers can allow operation of the user agent
without relying on motion "through space"; this includes text selection (a text
caret may be used to establish the start and end of the selection), region
selection (allow the user to describe the coordinates or position of the
region, e.g., relative to the viewport), drag-and-drop (allow the user to
designate start and end points and then say "go"), etc.
- For instance, the user must be able to do the following through the
keyboard alone (or pointing device alone or voice alone):
- Select
content and operate on it. For example, if the user can select rendered text
with the mouse and make it the content of a new link by pushing a button, they
also need to be able to do so through the keyboard and other supported devices.
Other operations include cut, copy, and paste.
- Set the focus on
viewports and on enabled elements.
- Install, configure, uninstall, and update the user agent software.
- Use the graphical user
interface menus. Some users may wish to use the graphical user
interface even if they cannot use or do not wish to use the pointing
device.
- Fill out forms.
- Access documentation.
- Suppose a user agent does not allow complete operation through the
keyboard alone. It is still possible to claim
conformance for the user agent in conjunction with a special module
designed to "fill in the gap".
Who benefits:
- Users with blindness are most likely to benefit from direct access through
the keyboard, including navigation of user interface controls; this is a
logical navigation, not a spatial navigation.
- Users with physical disabilities are most likely to benefit from a
combination of direct access and spatial access through the keyboard. For some
users with physical disabilities, moving the pointing device through a physical
mouse may be significantly more difficult than moving the pointing device with
arrow keys, for example.
- This checkpoint will also benefit users of many other alternative input
devices (which make use of the keyboard API) and also anyone without a
mouse.
- While keyboard operation is expected to improve access for many users,
operation by keyboard shortcuts alone may reduce accessibility (and usability)
by requiring users to memorize a long list of shortcuts. Developers should
provide mechanisms for contextual access to user agent functionalities
(including keyboard-operable cascading mechanisms, context-sensitive help,
keyboard operable configuration tabs, etc.) as well as direct access to those
functionalities. See also
checkpoint 11.5.
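The mix of direct keyboard access described above can be sketched as a simple command registry that maps key combinations to user agent functionalities. The following is a minimal, hypothetical sketch; the command names and key bindings are illustrative only and not taken from any deployed user agent:

```javascript
// Sketch of direct keyboard access: every user agent command is
// reachable through a keyboard binding, independent of any
// pointing-device path to the same functionality.
var commands = {
  "print": function (state) { state.printed = true; },
  "nextViewport": function (state) { state.viewport += 1; }
};

// Keyboard bindings providing direct access to each command.
var keymap = {
  "Ctrl+P": "print",
  "F6": "nextViewport"
};

function handleKey(combo, state) {
  var name = keymap[combo];
  if (!name) {
    return false; // combo not bound; fall through to spatial navigation
  }
  commands[name](state);
  return true;
}

var state = { printed: false, viewport: 0 };
handleKey("Ctrl+P", state); // direct access to the print functionality
handleKey("F6", state);     // direct access to viewport navigation
```

Unbound key combinations fall through, so the same keyboard can still drive spatial navigation (e.g., moving the pointer with arrow keys) redundantly with direct access.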
1.2 Activate event handlers.
(P1)
- For the element with
content focus, allow the user to activate
any explicitly associated input device event
handlers through keyboard input alone.
- The user agent is not required to allow activation of event handlers
associated with a given device (e.g., the pointing device) in any order other
than what the device itself allows.
Checkpoint 1.2
Note: The requirements for this checkpoint refer to
any explicitly associated input device event
handlers associated with an element, independent of the
input modalities for which the user agent conforms. For example, suppose
that an element has an explicitly associated handler for pointing device
events. Even when the user agent only conforms for keyboard input (and does not
conform for the pointing device, for example), this checkpoint requires the
user agent to allow the user to activate that handler with the keyboard. This
checkpoint is an important special case of checkpoint 1.1. Please
refer to the checkpoints of guideline
9 for more information about focus requirements.
Notes and rationale:
- For example, users without a pointing device need to be able to activate
form controls
and links (including the links in a client-side image map).
- Events triggered by a particular device generally follow a set pattern, and
often in pairs: start/end, down/up, in/out. One would not expect a "key down"
event for a given key to be followed by another "key down" event without an
intervening "key up" event.
Who benefits:
- Users with blindness or some users with a physical disability, and anyone
without a pointing device.
Example techniques:
- To preserve the expected order of events, provide a dynamically changing
menu of available handlers. For example, an initial menu of handlers might only
allow the user to trigger a "mousedown" event. Once triggered, the menu would
not allow "mousedown" but would allow "mouseup" and "mouseover", etc.
- For example, in HTML 4 [HTML4], input device event handlers
are described in section 18.2.3. They are: onclick, ondblclick, onmousedown,
onmouseup, onmouseover, onmousemove, onmouseout, onfocus, onblur, onkeypress,
onkeydown, and onkeyup.
- In "Document Object Model (DOM) Level 2 Events Specification"
[DOM2EVENTS], focus and activation event types are discussed in
section 1.6.1. They are: DOMFocusIn, DOMFocusOut, and DOMActivate. These
events are specified independent of a particular input device type.
- In "Document Object Model (DOM) Level 2 Events Specification"
[DOM2EVENTS], mouse event types are discussed in section 1.6.2. They are:
click, mousedown, mouseup, mouseover, mousemove, and mouseout.
- The DOM Level 2 Event specification does not provide a key event
module.
- Sequential technique: Add each input device event handler to the serial
navigation order (refer to checkpoint
9.3). Alert the user when the user has navigated to an event handler, and
allow activation. For example, a link that also has onMouseOver and
onMouseOut event handlers defined might generate three "stops" in the
navigation order: one for the link and two for the event handlers. If this
technique is used, allow configuration so that input device event handlers are
not inserted in the navigation order.
- Query technique: Allow the user to query the element with content focus for
a menu of input device event handlers.
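The query technique can be sketched as follows. The element representation and handler names below are simplified stand-ins for a real document object, not an actual user agent API; the pairing table only encodes the one ordering constraint shown:

```javascript
// Sketch of the query technique: given the element with content
// focus, build a menu of its explicitly associated input device
// event handlers so the user can activate any of them from the
// keyboard. Paired events are offered in their expected order
// (e.g., "mouseup" only after "mousedown" has been triggered).
var PAIRS = { "onmouseup": "onmousedown" }; // event that must precede

function handlerMenu(element, alreadyTriggered) {
  var menu = [];
  for (var name in element.handlers) {
    var prerequisite = PAIRS[name];
    if (prerequisite && alreadyTriggered.indexOf(prerequisite) === -1) {
      continue; // preserve the expected order of paired events
    }
    menu.push(name);
  }
  return menu.sort();
}

// Hypothetical element with two explicitly associated handlers.
var link = {
  handlers: {
    "onmousedown": function () {},
    "onmouseup": function () {}
  }
};
```

With nothing yet triggered, the menu offers only "onmousedown"; after the user activates it, "onmouseup" becomes available, matching the dynamically changing menu technique described earlier.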
- Descriptive information about handlers can allow assistive technologies to
choose the most important functions for activation. This is possible in the
Java Accessibility API [JAVAAPI], which provides an
AccessibleAction Java interface. This interface provides a list of actions and
descriptions that enable selective activation. See also checkpoint
6.3.
- Using MSAA [MSAA] on the Windows platform:
- Retrieve the node in the document object that has current focus.
- Call the IHTMLDocument4::fireEvent method on that node.
Related techniques:
- See image map
techniques.
References:
- For example,
section 16.5 of the SVG 1.0 Candidate Recommendation [SVG]
specifies processing order for user interface events.
1.3 Provide text messages. (P1)
- Ensure that every message (e.g.,
prompt, alert,
notification, etc.) that is a non-text element and is part of the user
agent user interface has a text equivalent.
Checkpoint 1.3
Note: For example, if the user is alerted of an event by an
audio cue, a visually-rendered text equivalent in the status bar could satisfy
this checkpoint. Per checkpoint
6.4, a text equivalent for each such message must be available through an
API. See also checkpoint 6.5 for requirements for programmatic alert of
changes to the user interface.
Notes and rationale:
- User agents should use modality-specific messages in the user interface
(e.g., graphical scroll bars, beeps, and flashes) as long as redundant
mechanisms are available or possible. These redundant mechanisms will benefit
all users, not just users with disabilities.
Who benefits:
- Users with blindness or deafness, and users who are hard of hearing. Mechanisms that
are redundant to audio will benefit individuals who are deaf, hard of hearing,
or operating the user agent in a noisy or silent environment where the use of
sound is not practical.
Example techniques:
- Render text messages on the status bar of the graphical user interface.
Allow users to query the viewport for this status information (in addition to
having access through graphical rendering).
- Make available information in a manner that allows other software to
present it according to the user's preferences. For instance, if the graphical
user agent uses proportional scroll bars to indicate the position of the
viewport in content, make available this same information in text form. For
instance, this will allow other software to render the proportion of content
viewed as synthesized speech or as braille.
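The scroll bar example above can be sketched as follows. The percentage arithmetic is the substance; the message wording is illustrative only:

```javascript
// Sketch: derive a text equivalent for a proportional scroll bar,
// so the viewport position within content can also be rendered by
// other software as synthesized speech or as braille.
function scrollStatusText(topOffset, viewportHeight, contentHeight) {
  var start = Math.round(100 * topOffset / contentHeight);
  var end = Math.round(100 * (topOffset + viewportHeight) / contentHeight);
  return "Viewing " + start + "% to " + Math.min(end, 100) + "% of content";
}
```

For example, a 200-pixel viewport at the top of 1000 pixels of content yields "Viewing 0% to 20% of content", the same information the graphical scroll bar conveys visually.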
Doing more:
- Allow configuration to render or not render status information (e.g., allow
the user to hide the status bar).
Checkpoints
2.1 Render content according to
specification. (P1)
- Render content
according to format specification (e.g., for a markup language or style
sheet).
- When a rendering requirement of another specification contradicts a
requirement of the current document, the user agent may disregard the rendering
requirement of the other specification and still satisfy this checkpoint.
- Rendering requirements include format-defined interactions between author
preferences and user preferences/capabilities (e.g., when to render the "alt"
attribute in HTML, the rendering order of nested OBJECT elements in HTML,
test attributes in SMIL, and the cascade in CSS2).
Checkpoint 2.1
Note: If a conforming user agent does not render a content
type, it should allow the user to choose a way to handle that content (e.g., by
launching another application, by saving it to disk, etc.). The user agent is
not required to satisfy this checkpoint for all implemented specifications; see
the section on
conformance and implementing specifications for more information.
Notes and rationale:
- The right to disregard only applies when the rendering requirement of
another specification contradicts the requirements of the current document; no
exemption is granted if the other specification is consistent with or silent
about a requirement made by the current document.
Who benefits:
- Users with disabilities when specifications include features that promote
accessibility (e.g., scalable graphics benefit users with low vision, style
sheets allow users to override author and user style sheets).
Example techniques:
- Provide access to attribute values (one at a time, not as a group). For
instance, allow the user to select an element and read values for all
attributes set for that element. For many attributes, this type of inspection
should be significantly more usable than a view of the text source.
- When content changes dynamically (e.g., due to embedded scripts or content
refresh), users need to have access to the content before and after the
change.
- Make available information about abbreviation and acronym expansions. For
instance, in HTML, look for abbreviations specified by the ABBR and ACRONYM
elements. The expansion may be given with the "title" attribute (refer to the
Web Content Accessibility Guidelines 1.0
[WCAG10], checkpoint 4.2). To provide expansion information, user
agents may:
- Allow the user to configure that the expansions be used in place of the
abbreviations,
- Provide a list of all abbreviations in the document, with their expansions
(a generated glossary of sorts)
- Generate a link from an abbreviation to its expansion.
- Allow the user to query the expansion of a selected or input
abbreviation.
- If an acronym has no expansion in one location, look for another occurrence
in content that does. User agents may also look for possible expansions (e.g.,
in parentheses) in surrounding context, though that is a less reliable
technique.
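The generated-glossary technique listed above can be sketched as follows. The input is a simplified list of element records rather than a real parse tree, and the record fields are illustrative:

```javascript
// Sketch: build a generated glossary of abbreviation and acronym
// expansions from ABBR/ACRONYM elements whose "title" attribute
// carries the expansion.
function buildGlossary(elements) {
  var glossary = {};
  for (var i = 0; i < elements.length; i++) {
    var el = elements[i];
    if ((el.name === "ABBR" || el.name === "ACRONYM") && el.title) {
      glossary[el.text] = el.title; // occurrences without a title are skipped
    }
  }
  return glossary;
}

var sample = [
  { name: "ACRONYM", title: "World Wide Web Consortium", text: "W3C" },
  { name: "ABBR", title: "User Agent Accessibility Guidelines", text: "UAAG" },
  { name: "ACRONYM", title: null, text: "W3C" } // no expansion at this location
];
```

Because the glossary is keyed by the abbreviated text, an occurrence with no expansion in one location is covered by any other occurrence that does have one, as suggested above.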
Related techniques:
- See the sections on access to
content, link techniques, table techniques, frame techniques, and form techniques.
Doing more:
- If the requirements of the current document contradict the rendering
requirements of another specification, the user agent may offer a configuration
to allow conformance to one or the other specification.
References:
- Sections 10.4 ("Client Error 4xx") and 10.5 ("Server Error 5xx") of the
HTTP/1.1 specification [RFC2616] state that user agents
should have the following behavior in case of these error conditions:
Except when responding to a HEAD request, the server SHOULD include an
entity containing an explanation of the error situation, and whether it is a
temporary or permanent condition. These status codes are applicable to any
request method. User agents SHOULD display any included entity to the user.
2.2 Provide text view. (P1)
- For content
authored in text formats, provide a
view of the text
source. For the purposes of this document, text formats are defined
to be:
- all media objects given an Internet media type of "text" (e.g., text/plain,
text/html, or text/*) as defined in RFC 2046
[RFC2046], section 4.1.
- all SGML and XML applications, regardless of Internet media type (e.g.,
HTML 4.01, XHTML 1.1, SMIL, SVG, etc.).
Checkpoint 2.2
Note: A user agent would also satisfy this checkpoint by
providing a source view for any text format, not just implemented text formats.
The user agent is not required to satisfy this checkpoint for all implemented
specifications; see the section on
conformance and implementing specifications for more information.
Notes and rationale:
- In general, user agent developers should not rely on a "source view" to
convey information to users, most of whom are not familiar with markup
languages. A source view is still important as a "last resort" to some users as
content might not otherwise be accessible at all.
Who benefits:
- Users with blindness, low vision, or deafness, users who are hard of
hearing, and any user who requires the text source to understand the content.
Example techniques:
- Make the text view useful. For instance, enable links (i.e.,
URIs), allowing searching and other navigation within the view.
- A source view is an easily-implementable view that will help users inspect
some types of content, such as style sheet fragments or scripts. This does not
mean, however, that a source view of style sheets is the best user
interface for reading or changing style sheets.
Doing more:
- Provide a source view for any text format, not just implemented text
formats.
2.3
Render conditional content. (P1)
- Allow
configuration to provide access to each piece of unrendered
conditional content "C".
- The configuration may be a switch that, for all content, turns on or off
the access mechanisms described in the next provision.
- When a specification does not explain how to provide access to this
content, do so as follows:
- If C is a summary, title, alternative, description, or expansion of another
piece of content D, provide access through at least one of the following
mechanisms:
- (1a) render C in place of D;
- (2a) render C in addition to D;
- (3a) provide access to C by querying D. In this case, the user agent must
also alert the user, on a per-element basis, to the existence of C (so that the
user knows to query D);
- (4a) allow the user to follow a link to C from the context of D.
- Otherwise, provide access to C through at least one of the following
mechanisms:
- (1b) render a
placeholder for C, and allow the user to view the original
author-supplied content associated with each placeholder;
- (2b) provide access to C by query (e.g., allow the user to query an element
for its
attributes). In this case, the user agent must also alert the user,
on a per-element basis, to the existence of C;
- (3b) allow the user to follow a link in context to C.
- To satisfy this checkpoint, the user agent may provide access on a
per-element basis (e.g., by allowing the user to query individual elements) or
for all elements (e.g., by offering a configuration to render conditional
content all the time).
For all content.
Checkpoint 2.3
Note: For instance, an HTML user agent might allow users to
query each element for access to conditional content supplied for the
"alt", "title", and "longdesc" attributes. Or, the user agent might allow
configuration so that the value of the "alt" attribute is rendered in place
of all IMG elements (while other conditional content might be made available
through another mechanism). See
checkpoint 2.10 for additional placeholder requirements.
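The "render alt in place of all IMG elements" configuration mentioned in the note can be sketched as follows; the content representation is a simplified stand-in for a document tree, and the "[image]" placeholder text is illustrative:

```javascript
// Sketch: when the configuration switch is on, replace each IMG
// node by the text of its "alt" attribute (conditional content
// rendered in place of the element, mechanism 1a); otherwise
// render a placeholder (mechanism 1b).
function renderNodes(nodes, altInPlace) {
  var out = [];
  for (var i = 0; i < nodes.length; i++) {
    var node = nodes[i];
    if (node.name === "IMG") {
      out.push(altInPlace ? node.alt : "[image]");
    } else {
      out.push(node.text);
    }
  }
  return out.join(" ");
}

var content = [
  { name: "P", text: "W3C home page:" },
  { name: "IMG", alt: "W3C logo" }
];
```

A single switch like this satisfies the "for all content" form of the configuration; per-element query access would be layered on top of the same data.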
Notes and rationale:
- There may be more than one piece of conditional content associated with
another piece of content (e.g., multiple captions tracks associated with the
visual track of a presentation).
- Please note that the alert requirement of this checkpoint is per-element. A
single resource-level alert (e.g., "there is conditional content somewhere
here") does not satisfy the checkpoint, but may be part of a solution for
satisfying this checkpoint. For example, the user agent might indicate the
presence of conditional content "somewhere" with a menu in the toolbar. The
menu items could provide both per-element alert and access to the content
(e.g., by opening a viewport with the conditional content rendered).
Who benefits:
- Any user for whom the author has provided conditional content for
accessibility purposes. This includes text equivalents for users with
blindness or low vision, or users who are deaf-blind, and captions for users
with deafness or who are hard of hearing.
Example techniques:
- Allow users to choose more than one piece of
conditional content at a given time. For instance, users with low
vision may want to view images (even imperfectly) but require a text
equivalent for the image; the text may be rendered with a large font or as
synthesized speech.
- In HTML 4 [HTML4], conditional content
mechanisms include the following:
- Allow the user to
configure how the user agent renders a long description (e.g., "longdesc"
in HTML 4 [HTML4]). Some possibilities
include:
- Render the long description in a separate view.
- Render the long description in place of the associated element.
- Do not render the long description, but allow the user to query whether an
element has an associated long description (e.g., with a context-sensitive
menu) and provide access to it.
- Use an icon (with a text
equivalent) to indicate the presence of a long description.
- Use an audio cue to indicate the presence of a long description when the
user navigates to the element.
- For an object (e.g., an image) with an author-specified geometry that the
user agent does not render, allow the user to configure how the conditional
content should be rendered. For example, within the specified geometry, by
ignoring the specified geometry altogether, etc.
- For multimedia presentations with several alternative tracks, ensure access
to all tracks and allow the user to select individual tracks. The QuickTime
player [QUICKTIME] allows users to turn
on and off any number of tracks separately. For example, construct a list of
all available tracks from short descriptions provided by the author (e.g.,
through the "title" attribute).
- For multimedia presentations with several alternative tracks, allow users
to choose tracks based on natural language preferences. SMIL 1.0
[SMIL] allows users to specify
captions in different natural languages. By setting language
preferences in the SMIL player (e.g., the G2 player [G2]),
users may access captions (or audio) in different languages. Allow users to
specify different languages for different content types (e.g., English audio
and Spanish captions).
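Choosing tracks by natural language preference, as described above, can be sketched as follows. The track records and language tags are illustrative stand-ins for whatever the presentation format provides:

```javascript
// Sketch: pick, for each content type (audio, captions, etc.), the
// first available track matching the user's ordered language
// preferences, allowing different languages per content type
// (e.g., English audio and Spanish captions).
function chooseTrack(tracks, type, preferredLangs) {
  for (var i = 0; i < preferredLangs.length; i++) {
    for (var j = 0; j < tracks.length; j++) {
      if (tracks[j].type === type && tracks[j].lang === preferredLangs[i]) {
        return tracks[j];
      }
    }
  }
  return null; // no matching track of this type
}

var tracks = [
  { type: "audio", lang: "en" },
  { type: "captions", lang: "en" },
  { type: "captions", lang: "es" }
];
```

Running the selection once per content type with separate preference lists is what allows the English-audio/Spanish-captions combination mentioned above.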
- If a multimedia presentation has several captions
(or subtitles) available, allow the user to choose from among them. Captions
might differ in level of detail, reading levels, natural
language, etc. Multilingual audiences may wish to have captions in
different natural
languages on the screen at the same time. Users may wish to use both
captions and auditory descriptions concurrently as well.
- Make apparent through the user
agent user interface which
audio tracks are meant to be played separately.
Related techniques:
- See the section on access to
content.
Doing more:
- Make information available with different levels of detail. For example,
for a voice browser, offer two options for HTML IMG elements:
- Speak only "alt" text by default, but allow the user to hear "longdesc"
text on an image by image basis.
- Speak "alt" text and "longdesc" for all images.
- Allow the user to configure different natural
language preferences for different types of
conditional content (e.g., captions and auditory descriptions).
Users with disabilities may need to choose the language they are most familiar
with in order to understand a presentation for which supplementary tracks are
not all available in all desired languages. In addition, some users may prefer
to hear the program audio in its original language while reading captions in
another, fulfilling the function of subtitles or to improve foreign language
comprehension. In classrooms, teachers may wish to configure the language of
various multimedia elements to achieve specific educational goals.
2.4 Allow time-independent interaction.
(P1)
- For rendered
content where user input is only possible within a finite time
interval controlled by the user agent, allow
configuration to provide a view where user interaction is
time-independent. For example, if a presentation includes time-dependent user
input opportunities, pause automatically to allow for user input, and resume on
explicit user request. Or, offer a time-independent ("static") view of the
presentation in a different viewport that preserves the order and flow of the
presentation.
- If the user agent satisfies this checkpoint by pausing content
automatically, pause at the end of each time interval where user input is
possible. In the paused state:
- Alert the user that the rendered content has been paused (e.g.,
highlight the "pause" button in a multimedia player's control panel).
- Highlight which enabled
elements are time-sensitive.
- Allow the user to interact with the enabled
elements.
- Allow the user to resume on explicit user request (e.g., by pressing the
"play" button in a multimedia player's control panel; see also checkpoint 4.5).
- When satisfying this checkpoint for a real-time presentation, the user
agent may discard packets that continue to arrive after the construction of the
time-independent view (e.g., when paused or after the construction of a static
view).
Checkpoint 2.4
Note: If the user agent satisfies this checkpoint by
pausing automatically, it may be necessary to pause more than once when there
are multiple opportunities for time-sensitive user interaction. When pausing,
pause synchronized content as well (whether rendered in the same or different
viewports) per checkpoint
2.6. In SMIL 1.0 [SMIL], for example, the "begin", "end", and "dur"
attributes synchronize presentation components. This checkpoint does not
apply when the user agent cannot
recognize the time interval in the presentation format, or when the user
agent cannot control the timing (e.g., because it is controlled by the server).
See also checkpoint 3.5, which involves client-driven content
refresh.
Notes and rationale:
- The user agent could satisfy this checkpoint by allowing the user to step
through an entire presentation manually (as one might advance frame by frame
through a movie). However, this is likely to be tedious and lead to information
loss, so the user agent should preserve as much of the flow and order of the
original presentation as possible.
- The requirement to pause at the end (rather than at the beginning)
of a time-interval is to allow the user to review content that may change
during the elapse of this time.
- The configuration option is important because techniques used to satisfy
this checkpoint may lead to information loss for some types of content (e.g.,
highly interactive real-time presentations).
- When different streams of time-sensitive content are not synchronized (and
rendered in the same or different viewports), the user agent is not required to
pause the pieces all at once. The assumption is that both streams of content
will be available at another time.
Who benefits:
- Some users with a physical disability who may not have the time to interact
with the content. Also, users who may be accessing the content serially (e.g.,
users with blindness or some users with a physical disability) and who require
more time to reach the timed content.
Example techniques:
- Some HTML user agents recognize time intervals specified through the
META
element, although this usage is not defined in HTML 4
[HTML4].
- Render time-dependent links as a static list that occupies the same screen
real estate; authors may create such documents in SMIL 1.0
[SMIL]. Include temporal context in the list of links. For example,
provide the time at which the link appeared along with a way to easily jump to
that portion of the presentation.
- For a presentation that is not "live", allow the user to choose from a menu
of available time-sensitive links (essentially making them
time-independent).
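The menu of time-sensitive links described above can be sketched as follows. The link records (start time in seconds, label) are illustrative:

```javascript
// Sketch: turn time-dependent links into a time-independent list,
// preserving temporal context by sorting on, and displaying, each
// link's start time so the user can jump to that portion later.
function staticLinkList(timedLinks) {
  var sorted = timedLinks.slice().sort(function (a, b) {
    return a.start - b.start;
  });
  var lines = [];
  for (var i = 0; i < sorted.length; i++) {
    lines.push(sorted[i].start + "s: " + sorted[i].label);
  }
  return lines;
}

var links = [
  { start: 40, label: "Order form" },
  { start: 5, label: "Table of contents" }
];
```

Each entry keeps the time at which the link appeared, preserving the order and flow of the original presentation in the static view.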
Doing more:
- Provide a view where time intervals are lengthened, but not infinitely
(e.g., allow the user to multiply time intervals by 3, 5, or 10). Or, allow
the user to add extra time (e.g., 10 seconds) to each time interval.
- Allow the user to view a list of all media elements or links of the
presentation, sorted by start or end time or alphabetically.
- Alert the user whenever pausing the user agent may lead to packet
loss.
References:
- Refer to section
4.2.4 of SMIL 1.0 [SMIL] for information about the SMIL
time model.
2.5 Make captions, transcripts available.
(P1)
- Allow
configuration or control to
render text
transcripts, collated text transcripts, captions,
and auditory
descriptions at the same time as the associated audio
tracks and visual
tracks.
For all content.
Checkpoint 2.5
Note: This checkpoint is an important special case of checkpoint 2.1.
Notes and rationale:
- Users may wish to read a transcript at the same time as a related visual or
audio track and pause the visual or audio track while reading; see checkpoint 4.5.
Who benefits:
- Users with blindness or low vision (auditory descriptions and text
captions, etc.) and users with deafness or who are hard of hearing.
Example techniques:
- Allow users to turn on and off auditory descriptions and captions.
- For the purpose of applying this clause, SMIL 1.0
[SMIL] user agents should recognize as captions any media object
whose reference from SMIL is guarded by the 'system-captions' test
attribute.
- SMIL user agents should allow users to configure whether they want to view
captions, and this user interface switch should be bound to the
'system-captions' test attribute. Users should be able to indicate
a preference for receiving available auditory descriptions, but SMIL 1.0
[SMIL] does not include a mechanism analogous to 'system-captions'
for auditory descriptions, though [SMIL20] is expected to.
- Another SMIL 1.0 test attribute, 'system-overdub-or-captions',
allows users to choose between subtitles and overdubs in multilingual
presentations. User agents should not interpret a value of
'caption' for this test attribute as meaning that the user prefers
accessibility captions; that is the purpose of the
'system-captions' test attribute. When subtitles and accessibility
captions are both available, users who are deaf may prefer to view captions, as
they generally contain information not in subtitles: information on music,
sound effects, who is speaking, etc.
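For illustration, a SMIL 1.0 media object reference guarded by the 'system-captions' test attribute might look like the following sketch (the file names are placeholders):

```xml
<!-- Sketch: the text stream is evaluated only when the user's
     player setting corresponds to system-captions="on". -->
<par>
  <video src="movie.mpg"/>
  <audio src="movie-audio.rm"/>
  <textstream src="movie-captions.rt" system-captions="on"/>
</par>
```

Binding the user agent's "view captions" switch to this test attribute, as recommended above, turns the text stream on and off without any change to the author's markup.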
- User agents that play QuickTime movies should allow the user to turn on and
off the different tracks embedded in the movie. Authors may use these
alternative tracks to provide content for accessibility purposes. The Apple
QuickTime player provides this feature through the menu item "Enable
Tracks."
- User agents that play Microsoft Windows Media Object presentations should
provide support for Synchronized Accessible Media Interchange (SAMI
[SAMI], a protocol for creating and displaying captions) and should
allow users to configure how captions are viewed. In addition, user agents that
play Microsoft Windows Media Object presentations should allow users to turn on
and off other
conditional content, including auditory description and alternative
visual
tracks.
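As an illustration of the 'system-captions' binding described above, a SMIL 1.0 presentation might guard its caption stream as follows (the file names are hypothetical); a user agent's captions switch would then control whether this text stream is rendered:

```xml
<!-- Hypothetical SMIL 1.0 fragment: the text stream is rendered only
     when the user's captions preference ('system-captions') is "on". -->
<par>
  <video src="movie.rm"/>
  <audio src="movie-audio.rm"/>
  <textstream src="captions.rt" system-captions="on"/>
</par>
```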
References:
- User agents that implement SMIL 1.0
[SMIL] should implement the "Accessibility Features of SMIL"
[SMIL-ACCESS].
2.6 Respect synchronization cues.
(P1)
- Respect synchronization cues (e.g., in markup) during rendering.
Checkpoint 2.6
Note: This checkpoint is an important special case of checkpoint 2.1.
Notes and rationale:
- The term "synchronization cues" refers to pieces of information that may
affect synchronization, such as the size and expected duration of tracks and
their segments, the type of element and how much those elements can be sped up
or slowed down (both from technological and intelligibility standpoints).
- Captions
and auditory
descriptions may not make sense unless rendered synchronously with
related video or audio content. For instance, if someone with a hearing
disability is watching a video presentation and reading associated captions,
the captions should be
synchronized with the audio so that the individual can use any
residual hearing. For auditory descriptions, it is crucial that an audio
track and an auditory description track be synchronized to avoid
having them both play at once, which would reduce the clarity of the
presentation.
Who benefits:
- Users with deafness or who are hard of hearing (e.g., for captions and
audio tracks), users with blindness or low vision (e.g., for auditory
descriptions and audio tracks), and some users with a cognitive
disability.
Example techniques:
- For synchronization in SMIL 2.0 [SMIL20], refer to section
10, the timing and synchronization module.
- The idea of "sensible time-coordination" of components in the definition of
synchronize centers on the idea of simultaneity of presentation, but
also encompasses strategies for handling deviations from simultaneity resulting
from a variety of causes. Consider how deviations might be handled for captions
for a multimedia presentation such as a movie clip. Captions consist of a text
equivalent of the audio track that is synchronized with the visual
track. Typically, a segment of the captions appears visually near
the video for several seconds while the person reads the text. As the visual
track continues, a new segment of the captions is presented. However, a problem
arises if the captions are longer than can fit in the display space. This can
be particularly difficult if, due to a visual disability, the font size has been
enlarged, thus reducing the amount of rendered caption text that can be
presented. The user agent needs to respond sensibly to such problems, for
example by ensuring that the user has the opportunity to navigate (e.g., scroll
down or page down) through the caption segment before proceeding with the
visual presentation and presenting the next segment.
- Developers of user agents need to determine how they will handle other
synchronization challenges, such as:
- Under what circumstances will the presentation automatically pause? Some
circumstances where this might occur include:
- the segment of rendered caption text is more than can fit on the visual
display
- the user wishes more time to read captions or the collated text
transcript
- the auditory description is of longer duration than the natural pause in
the audio.
- Once the presentation has paused, then under what circumstances will it
resume (e.g., only when the user signals it to resume, or based on a predefined
pause length)?
- If the user agent allows the user to jump to a location in a presentation
by activating a link, then how will related tracks behave? Will they jump as
well? Will the user be able to return to a previous location or undo the
action?
- Developers of user agents need to anticipate many of the challenges that
may arise in synchronization of diverse tracks.
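The auto-pause circumstances listed above can be sketched as a simple decision function. This is only an illustration; every name below is invented, and a real player would work from its own media and display model:

```javascript
// Hedged sketch (all names illustrative, not from the specification):
// decide whether a multimedia player should pause automatically before
// rendering the next caption segment.
function shouldAutoPause(segment, display, user) {
  // The rendered caption text is more than can fit on the visual display.
  if (segment.renderedHeight > display.captionRegionHeight) return true;
  // The user wishes more time to read captions or the collated transcript.
  if (user.wantsMoreReadingTime) return true;
  // The auditory description outlasts the natural pause in the audio.
  if (segment.descriptionDuration > segment.naturalPauseDuration) return true;
  return false;
}
```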
2.7 Repair missing content. (P2)
- Allow
configuration to generate
repair text when the user agent
recognizes that the author has failed to provide
conditional content that was required by the format
specification.
- The user agent may satisfy this checkpoint by basing the repair text on any
of the following available sources of information: URI reference, content type,
or element type.
For all content.
Checkpoint 2.7
Note: Some markup languages (such as HTML 4
[HTML4] and SMIL 1.0 [SMIL]) require the author to provide
conditional content for some elements (e.g., the "alt" attribute
on the IMG element). Repair text based on URI reference, content
type, or element type is sufficient to satisfy the checkpoint, but may not
result in the most effective repair. Information that may be
recognized as relevant to repair might not be "near" the missing
conditional content in the document object. For instance, instead of
generating repair text on a simple URI reference, the user agent might look for
helpful information near a different instance of the URI reference in the same
document object, or might retrieve useful information (e.g., a title) from the
resource designated by the URI reference.
Notes and rationale:
- Some examples of missing conditional content that is required by format
specification:
- in HTML 4 [HTML4], "alt" is
required for the IMG and AREA elements (for
validation). In SMIL 1.0 [SMIL], on the other hand,
"alt" is not required on media objects.
- whatever the format, text equivalents for non-text content are required by
the Web Content Accessibility Guidelines 1.0
[WCAG10].
- Conditional content may come from markup, inside images (e.g., refer to
"Describing and retrieving photos using RDF and HTTP"
[PHOTO-RDF]), etc.
Who benefits:
- Users with blindness or low vision.
Example techniques:
- When HTTP is used, HTTP headers provide information about the URI of the Web
resource ("Content-Location") and its type ("Content-Type"). Refer
to the HTTP/1.1 specification [RFC2616], sections 14.14 and
14.17, respectively. Refer to "Uniform Resource Identifiers (URI): Generic
Syntax" ([RFC2396], section 4) for
information about URI references, as well as the HTTP/1.1 specification
[RFC2616], section 3.2.1.
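A minimal sketch of generating repair text from a URI reference and content type follows. The heuristic and all names are illustrative; the checkpoint only requires that repair text be based on one of the listed sources:

```javascript
// Hedged sketch: derive repair text for content whose author-supplied
// conditional content (e.g., "alt") is missing, using only the URI
// reference and the HTTP content type.
function repairText(uri, contentType) {
  // Use the last path segment of the URI as a human-readable hint.
  var segments = uri.split("?")[0].split("/");
  var name = segments[segments.length - 1] || uri;
  // Classify the resource coarsely from its media type.
  var kind = contentType && contentType.indexOf("image/") === 0
    ? "image" : "object";
  return "[" + kind + ": " + name + "]";
}
```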
Related techniques:
- See content repair techniques,
and cell header repair strategies.
Doing more:
- When configured to generate text, also inform the user (e.g., in the
generated text itself) that this content was not provided by the author as a
text equivalent.
References:
- The "Altifier Tool" [ALTIFIER] illustrates smart
techniques for generating text
equivalents (for images, etc.) when the author has not specified
any.
2.8 No repair text.
(P3)
- Allow at least two
configurations for when the user agent
recognizes that conditional content required by the format
specification is present but
empty:
For all content. Checkpoint
2.8
Note: In some authoring scenarios, empty content (e.g., a
string of zero characters) may make an appropriate text
equivalent, such as when non-text
content has no other function than pure decoration, or when an image
is part of a "mosaic" of several images and doesn't make sense out of the
mosaic. Please refer to the Web Content Accessibility Guidelines 1.0
[WCAG10] for more information about text equivalents.
Notes and rationale:
- User agents should render nothing in this case because the author may
specify an empty
text equivalent for content that has no function in the page other than as
decoration.
Who benefits:
- Users with blindness or low vision.
Example techniques:
- The user agent should not render generic labels such as "[INLINE]" or
"[GRAPHIC]" in the face of
empty conditional content (unless configured to do so).
- If no captioning information is available and captioning is turned on,
render "no captioning information available" in the captioning region of the
viewport (unless configured not to generate repair content).
Doing more:
- Labels (e.g., "[INLINE]" or "[GRAPHIC]") may be useful in some situations,
so the user agent may allow configuration to render "No author text" (or
similar) instead of empty conditional content.
2.9 Render conditional
content automatically. (P3)
- Allow
configuration to render all
conditional content automatically. The user agent is not required to
render all conditional content at the same time in a single viewport.
- Provide access to this content according to format specifications or where
unspecified, by applying one of the techniques described in checkpoint 2.3: 1a, 2a, or
1b.
For all content.
Checkpoint 2.9
Note: For instance, an HTML user agent might allow
configuration so that the value of the "alt" attribute
is rendered in place of all IMG elements (while other conditional
content might be made available through another mechanism). The user agent may
offer multiple configurations (e.g., a first configuration to render one type
of conditional content automatically, a second to render another type,
etc.).
Who benefits:
- Any user who may have difficulties with navigation and manual access to
content, including some users with a physical disability and users with
blindness or low vision.
Example techniques:
- Provide a "conditional content view", where all content that is not
rendered by default is rendered in place of associated content. For example,
Amaya
[AMAYA] offers a "Show alternate" view that accomplishes this. Note,
however, cases where an element has more than one piece of associated
conditional content (e.g., render them all as a list, or as a list of links,
etc.). For long conditional content, instead of rendering in place, link to the
content.
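As a rough sketch of the in-place substitution that such a "conditional content view" performs, the following function replaces each image with its "alt" text in a simplified content model (the model and names are invented for illustration):

```javascript
// Hedged sketch over a simplified content model: replace each IMG
// element that carries an "alt" attribute with that text, leaving all
// other content untouched.
function renderAltInPlace(elements) {
  return elements.map(function (el) {
    return (el.tag === "IMG" && el.alt !== undefined) ? el.alt : el;
  });
}
```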
2.10
Toggle placeholders. (P3)
- Once the user has viewed the original author-supplied content associated
with a
placeholder, allow the user to turn off the rendering of the
author-supplied content.
Checkpoint 2.10
Note: For example, if the user agent substitutes the
author-supplied content for the placeholder in context, allow the user to
"toggle" between placeholder and the associated content. Or, if the user agent
renders the author-supplied content in a separate viewport, allow the user to
close that viewport. Note: See checkpoint 2.3, provision
(1b) for placeholder requirements.
Who benefits:
- Some users with a cognitive disability may find it difficult to access
content once too many images (for example) have been rendered one by one.
Example techniques:
- Allow the user to designate a placeholder and request to view the
associated content in a separate viewport (e.g., through the context menu),
leaving the placeholder in context. Per checkpoint 5.3, users are able to close the new
viewport.
2.11 Alert unsupported language.
(P3)
- Allow
configuration not to render content in unsupported natural
languages, when that content would otherwise be rendered. Content
"in a natural language" includes pre-recorded spoken language and text in a given
script, i.e., writing system.
- Indicate to the user in context that author-supplied content has not been
rendered.
- This checkpoint does not require the user agent to allow different
configurations for different natural languages.
Checkpoint 2.11
Note: For example, use a text substitute or accessible
graphical icon to indicate that content in a particular language has not been
rendered.
Notes and rationale:
- A script is a means of supporting the visual rendering of content in a
particular natural language. So, for user agents that render content visually,
a user agent might not recognize "the Cyrillic script", which would mean that
it would not support the visual rendering of Russian, Ukrainian, and other
languages that employ Cyrillic when written.
- Rendering content in an unsupported language (e.g., as "garbage"
characters) may confuse all users. However, this checkpoint is designed
primarily to benefit users who access content serially as it allows them to
skip portions of content that would be unusable as rendered.
- There may be cases when a conforming user agent supports a natural language
but a speech synthesizer does not, or vice versa.
Who benefits:
- Users who access content serially, including users with blindness and some
users with a physical disability.
Example techniques:
- For instance, a user agent that doesn't support Korean (e.g., doesn't have
the appropriate fonts or voice set) should allow configuration to announce the
language change with the message "Unsupported language – unable to
render" (e.g., when the language itself is not recognized) or "Korean not
supported – unable to render" (e.g., when the language is recognized but
the user agent doesn't have the resources to render it). The user should also be
able to choose no alert of language changes. Rendering could involve speaking
in the designated natural language in the case of a voice browser or screen
reader. If the natural language is not supported, the language change alert
could be spoken in the default language by a screen reader or voice
browser.
- A user agent may not be able to render all characters in a document
meaningfully, for instance, because the user agent lacks a suitable font, a
character has a value that may not be expressed in the user agent's internal
character encoding, etc. In this case,
section 5.4 of HTML 4
[HTML4] recommends the following for undisplayable characters:
- Adopt a clearly visible (or audible), but unobtrusive mechanism to alert
the user of missing resources.
- If missing characters are presented using their numeric representation, use
the hexadecimal (not decimal) form since this is the form used in character set
standards.
- When HTTP is used, HTTP headers provide information about content encoding
("Content-Encoding") and content language ("Content-Language"). Refer to the
HTTP/1.1 specification [RFC2616], sections 14.11 and
14.12, respectively.
- CSS2's attribute selector may be used with the HTML "lang" or XML
"xml:lang" attributes to control rendering based on
recognized natural language information. Refer also to the ':lang'
pseudo-class ([CSS2], section 5.11.4).
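The alert messages described in the first example technique above might be chosen as follows (a sketch only; language codes follow [ISO639], and the function and parameter names are invented):

```javascript
// Hedged sketch: pick an alert message for content in a natural
// language that the user agent cannot render. Returns null when the
// language is supported and no alert is needed.
function languageAlert(langCode, supportedLangs, knownLangNames) {
  if (supportedLangs.indexOf(langCode) !== -1) return null;
  var name = knownLangNames[langCode];
  return name
    ? name + " not supported - unable to render"
    : "Unsupported language - unable to render";
}
```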
Related techniques:
- See techniques for generated
content, which may be used to insert
text to indicate a language change.
- See content repair techniques
and accessibility and internationalization
techniques.
- See techniques for synthesized
speech.
References:
- For information on language codes, refer to "Codes for the representation
of names of languages" [ISO639].
- Refer to "Character Model for the World Wide Web"
[CHARMOD]. It contains basic definitions and models, specifications
to be used by other specifications or directly by implementations, and
explanatory material. In particular, this document addresses early uniform
normalization, string identity matching, string indexing, and conventions for
URIs.
In addition to the techniques below, refer also to the section on user control of style.
Checkpoints
3.1 Toggle background images.
(P1)
- Allow
configuration not to render background image
content.
- In this configuration, the user agent is not required to retrieve
background images from the Web.
- This checkpoint only requires control of background images for "two-layered
renderings", i.e., one rendered background image with all other content
rendered "above it".
Checkpoint 3.1
Note: See checkpoint 2.3 for information about how to provide access
to unrendered background images. When background images are not rendered, user
agents should render a solid background color instead (see checkpoint 4.3).
Notes and rationale:
- This checkpoint does not address issues of multi-layered renderings and
does not require the user agent to change background rendering for multi-layer
renderings (refer, for example, to the 'z-index' property in Cascading Style
Sheets, level 2 ([CSS2], section 9.9.1).
Who benefits:
- Some users with a cognitive disability or color deficiencies who may find
it difficult or impossible to read superimposed text or understand other
superimposed content.
Example techniques:
- If background images are turned off, make available to the user any
associated conditional content.
- In CSS, background images may be turned on/off with the
'background' and 'background-image' properties ([CSS2], section 14.2.1).
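For example, a user agent that supports user style sheets could implement this configuration with rules along these lines (the substitute color is illustrative; see checkpoint 4.3):

```css
/* Hypothetical user style sheet rules: suppress all background images
   and render a solid background color instead. */
* {
  background-image: none !important;
  background-color: #FFFFFF !important;
}
```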
Doing more:
- Allow control of image depth in multi-layer presentations.
3.2
Toggle audio, video, animated images. (P1)
- Allow
configuration not to render audio, video, or animated image
content, except on explicit
user request. This configuration is required for content rendered
without any user interaction (including content rendered on load or as the
result of a script), as well as content rendered as the result of user
interaction (e.g., when the user activates a link).
- The user agent may satisfy this checkpoint by making video and animated
images
invisible and audio
silent, but this technique is not recommended.
- When configured not to render content except on explicit user request, the
user agent is not required to retrieve the audio, video, or animated image from
the Web until requested by the user.
Checkpoint 3.2
Note: See checkpoint 2.3 for information about how to provide access
to unrendered audio, video, and animated images. See also checkpoint 4.5, checkpoint 4.9, and checkpoint 4.10.
Who benefits:
- Some users with a cognitive disability, for whom an excess of visual
information (and in particular animated information) might make it impossible to
understand parts of content. Also, audio rendered automatically on load may
interfere with speech synthesizers.
Example techniques:
- For user agents that hand off content to different rendering engines, the
configuration should cause the content not to be handed off, and instead a
placeholder rendered.
- The "silent" or "invisible" solution for satisfying this checkpoint (e.g.,
by implementing the
'visibility' property defined in section 11.2 of CSS 2
[CSS2]) is not recommended. This solution means that the content is
processed, though not rendered, and processing may cause undesirable side
effects such as firing events. Or, processing may interfere with the processing
of other content (e.g., silent audio may interfere with other sources of sound
such as the output of a speech synthesizer). This technique should be deployed
with caution.
- As a placeholder for an animated image, render a motionless image built
from the first frame of the animated image.
3.3
Toggle animated/blinking text. (P1)
- Allow
configuration to render
animated or blinking text
content as motionless, unblinking text. Blinking text is text whose
visual rendering alternates between visible and invisible, at any rate of
change.
- In this configuration, the user must still have access to the same text
content, but the user agent may render it in a separate viewport (e.g., for
large amounts of streaming text).
- The user agent also satisfies this checkpoint by always rendering animated
or blinking text as motionless, unblinking text.
Checkpoint 3.3
Note: Animation (a rendering effect) differs from streaming
(a delivery mechanism). Streaming content might be rendered as an animation
(e.g., an animated stock ticker or vertically scrolling text) or as static text
(e.g., movie subtitles, which are rendered for a limited time, but do not give
the impression of movement). See also checkpoint 3.5. This checkpoint does not apply
for blinking and animation
effects that are caused by mechanisms that the user agent cannot
recognize.
Notes and rationale:
- The definition of blinking text is based on the CSS2 definition of the
'blink' value; refer to [CSS2], section 16.3.1.
Who benefits:
- Flashing content may trigger seizures in people with photosensitive
epilepsy, or may make a Web page too distracting to be usable by someone with a
cognitive disability. Blinking text can affect screen reader users, since
screen readers (in conjunction with speech synthesizers or braille displays)
may re-render the text every time it blinks.
- Configuration is preferred as some users may benefit from blinking effects
(e.g., users who are deaf or hard of hearing). However, the priority of this
checkpoint was assigned on the basis of requirements unrelated to this
benefit.
Example techniques:
- The user agent may render the motionless text in a number of ways. Inline
is preferred, but for extremely long text, it may be better to render the text
in another viewport, easily reachable from the user's browsing context.
- Allow the user to turn off animated or blinking text through the user
agent user interface (e.g., by pressing the Escape key to
stop animations).
- Some sources of blinking and moving text are:
- The BLINK element in HTML. Note: The BLINK element is not
defined by a W3C specification.
- The MARQUEE element in HTML. Note: The MARQUEE element is
not defined by a W3C specification.
- The 'blink' value of the
'text-decoration' property in CSS ([CSS2], section 16.3.1).
- In JavaScript, to control the start and speed of scrolling for a
MARQUEE
element:
document.all.myBanner.start();
document.all.myBanner.scrollDelay = 100;
3.4 Toggle scripts. (P1)
- Allow
configuration not to execute any executable
content (e.g.,
scripts and
applets).
- In this configuration, provide an option to alert the user when executable
content is available (but has not been executed).
- The user agent is only required to alert the user to the presence of more
than zero scripts or applets (i.e., per-element alerts are not required).
Checkpoint 3.4
Note: This checkpoint does not refer to
plug-ins and other programs that are not
part of content.
Scripts and applets may provide very useful functionality, not all of which
causes accessibility problems. Developers should not consider that the user's
ability to turn off scripts is an effective way to improve content
accessibility; turning off scripts means losing the benefits they offer.
Instead, developers should provide users with finer control over user agent or
content behavior known to raise accessibility barriers. The user should only
have to turn off scripts as a last resort.
Notes and rationale:
- Executable content includes scripts,
applets, ActiveX controls, etc. This checkpoint does not apply to
plug-ins; they are not part of
content.
- Executable content includes content that runs "on load" (e.g., when a
document loads into a viewport) and content that runs when other events occur
(e.g., user interface events).
- The alert that scripts are available but not executed is important, for
instance, for helping users understand why some poorly authored pages without
script alternatives produce no content when scripts are turned off.
- Where possible, authors should encode knowledge in declarative formats
rather than in scripts. Knowledge and behaviors embedded in scripts are
difficult to extract, which means that user agents are less likely to be able
to offer the user control over a script's effects.
Who benefits:
- Control of executable content is particularly important as it can cause the
screen to flicker, since people with photosensitive epilepsy can have seizures
triggered by flickering or flashing, particularly in the 4 to 59 flashes per
second (Hertz) range. Peak sensitivity to flickering or flashing occurs at 20
Hertz.
Example techniques:
- Do not make the switch that turns off scripts available only in the
"Security" part of the user interface, as people may not think to look there.
For instance, include a "Scripts" entry in the index that allows people to find
the switch more easily.
Related techniques:
- See the section on script
techniques.
Doing more:
- While this checkpoint only requires an on/off configuration switch, user
agents should allow finer control over executable content. For instance, in
addition to the switch, allow users to turn off just input device event
handlers, or to turn on and off scripts in a given scripting language
only.
3.5 Toggle content refresh. (P1)
- Allow
configuration so that the user agent only refreshes
content on explicit
user request.
- In this configuration, alert the user of the refresh rate specified in
content, and allow the user to request fresh content manually (e.g., by
following a link or confirming a prompt).
- When the user chooses not to refresh content, the user agent may ignore
that content; buffering is not required.
- This checkpoint only applies when the user agent (not the server)
automatically initiates the request for fresh content.
Checkpoint 3.5
Note: For example, allow configuration to prompt the user
to confirm content refresh, at the rate specified by the author.
Notes and rationale:
- Some HTML authors create a refresh effect by using a
META element with http-equiv="refresh" and the refresh rate specified in
seconds by the "content" attribute.
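For example (illustrative markup only), such a refresh effect might be authored as:

```html
<!-- Hypothetical example: asks the user agent to refresh the page
     every 60 seconds. A user agent satisfying this checkpoint would
     allow configuration to confirm or suppress this refresh. -->
<META http-equiv="refresh" content="60">
```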
Who benefits:
- Automatically changing content can disorient some users with a cognitive
disability, users with blindness or low vision, and most users.
Example techniques:
- Alert the user that suppressing refresh may lead to loss of information
(i.e., packet loss).
Doing more:
- Allow users to specify their own refresh rate.
- Allow configuration for at least one very slow refresh rate (e.g., every 10
minutes).
- Retrieve new content without displaying it automatically. Allow the user to
view the differences (e.g., by highlighting or filtering) between the currently
rendered content and the new content (including no differences).
3.6 Toggle redirects. (P2)
- Allow
configuration so that a "client-side redirect" (i.e., one initiated
by the user agent, not the server) only changes
content on explicit
user request.
- Allow the user to access the new content on demand (e.g., by following a
link or confirming a prompt).
- The user agent is not required to provide these functionalities for
client-side redirects specified to occur instantaneously (i.e., after no
delay).
Checkpoint 3.6
Note: Some HTML user agents support
client-side redirects authored using a META
element with
http-equiv="refresh"
. Authors (and Web masters) should use the redirect
mechanisms of HTTP instead.
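For comparison, a server-side redirect is expressed entirely in the HTTP exchange (the URI below is illustrative), so the user agent can follow it without any author-imposed delay or content change:

```
HTTP/1.1 301 Moved Permanently
Location: http://example.org/new-location
```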
Notes and rationale:
- This checkpoint is a Priority 2 checkpoint in part because the author's
redirect implies that users aren't expected to use the content prior to the
redirect.
Who benefits:
- Automatically changing content can disorient some users with a cognitive
disability, users with blindness or low vision, and most users.
Example techniques:
- Provide a configuration so that when the user navigates "back" through the
user agent history to a page with a client-side redirect, the user agent does
not re-execute the client-side redirect.
Doing more:
- Allow configuration to allow access on demand to new content even when the
client-side redirect has been specified by the author to be instantaneous.
References:
- For Web content authors: refer to the HTTP/1.1 specification
[RFC2616] for information about using server-side redirect
mechanisms (instead of client-side redirects).
3.7 Toggle images. (P2)
- Allow
configuration not to render image
content.
- The user agent may satisfy this checkpoint by making images
invisible, but this technique is not recommended.
Checkpoint 3.7
Note: See checkpoint 2.3 for information about how to provide access
to unrendered images.
Notes and rationale:
- The priority of
checkpoint 3.2 is higher than the priority of this checkpoint because an
excess of moving visual information is likely to be more distracting to some
users than an excess of still visual information.
Who benefits:
- Some users with a cognitive disability, for whom an excess of visual
information might make it difficult to understand parts of content.
Related techniques:
- See techniques for checkpoint 3.1.
In addition to the techniques below, refer also to the section on user control of style.
Checkpoints for visually rendered text
4.1
Configure text size. (P1)
- Allow global
configuration of the reference size of visually rendered
text, with an option to
override reference sizes specified by the author or user agent
defaults.
- Offer a range of text sizes to the user that includes at least:
- the range offered by the conventional utility available in the
operating environment that allows users to choose the text size
(e.g., the font size),
- or, if no such utility is available, the range of text sizes supported by
the conventional APIs of the operating environment for drawing
text.
Checkpoint 4.1
Note: The reference size of rendered text corresponds to
the default value of the CSS2 'font-size' property, which is 'medium' (refer to
CSS2
[CSS2], section 15.2.4). For example, in HTML, this might be
paragraph text. The default reference size of rendered text may vary among user
agents. User agents may offer different mechanisms to allow control of the size
of rendered text (e.g., font size control, zoom, magnification, etc.). Refer,
for example to the Scalable Vector Graphics specification [SVG]
for information about scalable rendering.
Notes and rationale:
- For example, allow the user to configure the user agent to apply the same
font family across Web resources, so that all
text is displayed by default using that font family. Or, allow the
user to control the text size dynamically for a given element, e.g., by
navigating to the element and zooming in on it.
- The choice of optimal techniques depends in part on which markup language
is being used. For instance, HTML user agents may allow the user to change the
font size of a particular piece of
text (e.g., by using CSS user style sheets) independent of other
content (e.g., images). Since the user agent can reflow the text after resizing
the font, the rendered text will become more legible without, for example,
distorting bitmap images. On the other hand, some languages, such as SVG, do
not allow text reflow, which means that changes to font size may cause rendered
text to overlap with other content, reducing accessibility. SVG is designed to
scale, making a zoom functionality the more natural technique for SVG user
agents satisfying this checkpoint.
- The primary intention of this checkpoint is to allow users with low vision
to increase the size of text. Full configurability includes the choice of
(very) small text sizes that may be available, though this is not considered by
the User Agent Accessibility Guidelines Working Group to be part of the
priority 1 requirement. This checkpoint does not include a "lower bound" (above
which text sizes would be required) because of how users' needs may vary across
writing systems and hardware.
Who benefits:
- Users with low vision benefit from the ability to increase the text size.
Note that some users may also benefit from the ability to choose small font
sizes (e.g., users of screen readers who wish to have more content per screen
so they have to scroll less frequently).
Example techniques:
- Inherit text size information from user preferences specified for the
operating environment.
- Use
operating environment magnification features.
- When scaling text, maintain size relationships among text of different
sizes.
- Implement the
'font-size' property in CSS ([CSS2], section 15.2.4).
- For example, in Windows, the
ChooseFont
function in the
Comdlg32 library will create the conventional utility of that operating system
that allows users to choose text (font) size. The DrawText
API is
the lower-level API for drawing text.
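For instance, a user agent that supports user style sheets might implement the global override with a rule like the following (the size shown is illustrative; an absolute size avoids compounding on nested elements):

```css
/* Hypothetical user style sheet rule: override all author-specified
   text sizes with one absolute reference size. */
* { font-size: 18pt !important; }
```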
Doing more:
- Allow the user to configure the text size on an element level (i.e., more
precisely than globally). User style sheets allow such detailed
configurations.
- Allow the user to configure the text size differently for different
scripts (i.e., writing systems).
4.2 Configure font family. (P1)
- Allow global
configuration of the font family of all visually rendered
text, with an option to
override font families specified by the author or by user agent
defaults.
- Offer a range of font families to the user that includes at least:
- the range offered by the conventional utility available in the
operating environment that allows users to choose the font
family,
- or, if no such utility is available, the range of font families supported
by the conventional APIs of the operating environment for drawing
text.
- For text that
cannot be rendered properly using the user's preferred font family, the user
agent may substitute an alternative font family.
Checkpoint 4.2
Note: For example, allow the user to specify that all text is to be rendered in a particular
sans-serif font family.
Who benefits:
- Users with low vision or some users with a cognitive disability or reading
disorder require the ability to change the font family of text in order to read
it.
Example techniques:
- Inherit font family information from user preferences specified for the
operating environment.
- Implement the
'font-family' property in CSS ([CSS2], section 15.2.2).
- Allow the user to override author-specified font families with differing
levels of detail. For instance, use font A in place of any sans-serif font and
font B in place of any serif font.
- For example, in Windows, the ChooseFont function in the Comdlg32 library creates the operating system's conventional utility for choosing font families. The DrawText function is the lower-level API for drawing text.
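Override "with differing levels of detail", as described above, can be sketched as a two-stage lookup. This is a hypothetical illustration; the mapping of specific families to generic classes is an assumption for the sketch, not data from any real font system:

```javascript
// Assumed mapping of specific families to their generic classes.
const GENERIC_OF = { Verdana: 'sans-serif', Georgia: 'serif', Courier: 'monospace' };

// Resolve an author-specified family against user overrides, which may
// target a specific family ("Verdana") or a whole class ("sans-serif").
function resolveFontFamily(authorFamily, userOverrides) {
  if (userOverrides[authorFamily]) return userOverrides[authorFamily]; // exact match
  const generic = GENERIC_OF[authorFamily] || authorFamily;
  if (userOverrides[generic]) return userOverrides[generic];           // class-level match
  return authorFamily;                                                 // no override
}
```

With the user setting "use font A in place of any sans-serif font", a request for Verdana resolves to font A, while a family the user has not overridden passes through unchanged.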
Doing more:
- Allow the user to configure font families on an element level (i.e., more
precisely than globally). User style sheets allow such detailed
configurations.
4.3
Configure text colors. (P1)
- Allow global
configuration of the foreground and background color of all visually
rendered
text, with an option to
override foreground and background colors specified by the author or
user agent defaults.
- Offer a range of colors to the user that includes at least:
- the range offered by the conventional utility available in the
operating environment that allows users to choose colors,
- or, if no such utility is available, the range of colors supported by the
conventional APIs of the operating environment for
specifying colors.
Checkpoint 4.3
Note: User configuration of foreground and background
colors may inadvertently lead to the inability to distinguish ordinary text
from selected text, focused text, etc. See checkpoint 10.3 for more information about highlight
styles.
Who benefits:
- Users with color deficiencies and some users with a cognitive
disability.
Example techniques:
- Inherit foreground and background color information from user preferences
specified for the operating environment.
- Implement the
'color' and
'border-color' properties in CSS 2 ([CSS2], sections 14.1 and 8.5.2,
respectively).
- Implement the
'background-color' property (and other background properties) in CSS 2
([CSS2], section 14.2.1).
- SMIL does not have a global property for "background color", but allows specification of background color by region (refer, for example, to the definition of the 'background-color' attribute defined in section 3.3.1 of SMIL 1.0 [SMIL]). In the case of SMIL, the user agent would satisfy this checkpoint by applying the user's preferred background color to all regions (and to all root-layout elements as well). SMIL 1.0 does not have a way to specify the foreground color of text, so that portion of the checkpoint would not apply.
- In SVG 1.0 [SVG], the 'fill' and 'stroke'
properties are used to paint foreground colors.
- For example, in Windows, the ChooseColor function in the Comdlg32 library creates the operating system's conventional utility for choosing colors. The DrawText function is the lower-level API for drawing text.
Doing more:
- Allow the user to specify minimal contrast between foreground and
background colors, adjusting colors dynamically to meet those
requirements.
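The dynamic contrast adjustment suggested above can be sketched as follows. This is a hypothetical illustration using the relative-luminance contrast formula later standardized in WCAG 2.0; the function names and the fallback strategy (substituting the user's own high-contrast pair when the author's colors fail the user's minimum ratio) are assumptions of the sketch:

```javascript
// Relative luminance of an sRGB color, per the WCAG 2.0 formula.
function luminance([r, g, b]) {
  const lin = (v) => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two colors, ranging from 1 to 21.
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// If the author's pair falls below the user's minimum, substitute
// the user's own preferred pair.
function enforceContrast(fg, bg, minRatio, userPair) {
  return contrastRatio(fg, bg) >= minRatio ? { fg, bg } : userPair;
}
```

Black on white yields the maximum ratio of 21, while two nearby grays fall below a typical minimum and would be replaced.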
Checkpoints for multimedia presentations and other
presentations that change continuously over time
4.4 Slow multimedia. (P1)
- Allow the user to slow the presentation rate of rendered audio and
animations (including video and animated images).
- For a visual
track, provide at least one setting between 40% and 60% of the
original speed.
- For a prerecorded audio
track including audio-only presentations, provide at least one
setting between 75% and 80% of the original speed.
- When the user agent allows the user to slow the visual track of a
synchronized multimedia presentation to between 100% and 80% of its original
speed, synchronize the visual and audio tracks. Below 80%, the user agent is
not required to render the
audio track.
- The user agent is not required to satisfy this checkpoint for audio and
animations whose
recognized role is to create a purely stylistic effect.
Checkpoint 4.4
Note: Purely stylistic effects include background sounds,
decorative animated images, and effects caused by style sheets. The style
exception of this checkpoint is based on the assumption that authors have
satisfied the requirements of the "Web Content Accessibility Guidelines 1.0"
[WCAG10] not to convey information through style alone (e.g.,
through color alone or style sheets alone). See checkpoint 2.6 and checkpoint 4.7.
Notes and rationale:
- Slowing one track (e.g., video) may make it harder for a user to understand
another synchronized track (e.g., audio), but if the user can understand
content after two passes, this is better than not being able to understand it
at all.
- Some formats (e.g., streaming formats), might not enable the user agent to
slow down playback and would thus be subject to applicability.
Who benefits:
- Some users with a learning or cognitive disability, or some users with
newly acquired sensory limitations (such as a person who is newly blind and
learning to use a screen reader). Users who have beginning familiarity with a
natural
language may also benefit.
Example techniques:
- When changing the rate of audio, avoid pitch distortion.
- In HTML 4 [HTML4], background animations may be specified with the deprecated "background" attribute.
- The SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the 'speed' attribute, which can be used to change the playback rate (as well as forward or reverse direction) of any animation.
- Authors sometimes specify background sounds with the "bgsound" attribute.
Note: This attribute is not part of
HTML 4 [HTML4].
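The numeric requirements of checkpoint 4.4 (at least one setting between 40% and 60% for visual tracks, between 75% and 80% for prerecorded audio, and audio synchronization only required from 80% up to full speed) can be sketched as follows. The midpoint values chosen here are an implementation decision of the sketch, not values mandated by the checkpoint:

```javascript
// One conforming slow-playback setting per media kind; this sketch
// picks the midpoint of each required range.
function slowSetting(kind) {
  return kind === 'visual' ? 0.5 : 0.775;
}

// Synchronization with the audio track is required from 80% up to
// full speed; below 80%, the audio track may be dropped.
function mustSyncAudio(visualRate) {
  return visualRate >= 0.8 && visualRate <= 1.0;
}
```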
Doing more:
- Allowing the user to speed up audio is also useful. For example, some users
who access content serially benefit from the ability to speed up audio.
References:
- Refer to variable playback speed techniques used for Digital Talking Books
[TALKINGBOOKS].
4.5 Start, stop, pause, advance,
reverse multimedia. (P1)
- Allow the user to stop, pause, resume, fast advance, and fast reverse
rendered audio and
animations (including video and animated images) that last three or
more seconds at their default playback rate.
- The user agent is not required to satisfy this checkpoint for audio and
animations whose
recognized role is to create a purely stylistic effect.
- The user agent is not required to play synchronized audio during fast
advance or reverse of animations (though doing so may help orient the
user).
- The user agent is not required to play animations during fast advance and
fast reverse.
- When the user pauses a real-time audio or animation, the user agent may
discard packets that continue to arrive during the pause.
Checkpoint 4.5
Note: See
checkpoint 4.4 for more information about the exception for purely
stylistic effects. This checkpoint applies to content that is either rendered
automatically or on request from the user. Respect synchronization cues per checkpoint 2.6.
Notes and rationale:
- Some formats (e.g., streaming formats), might not enable the user agent to
fast advance or fast reverse content and would thus be subject to
applicability.
- For some streaming media formats, the user agent might not be able to offer
some functionalities (e.g,. fast advance) when the content is being delivered
over the Web in real time. However, the user agent is expected to offer these
functionalities for content (in the same format) that is fully available, for
example on the user's computer.
Who benefits:
- Some users with a cognitive disability. Some users with a physical
disability who may not have fine control over advance and rewind
functionalities will find useful the ability to advance or rewind the
presentation in (configurable) increments.
Example techniques:
- If buttons are used to control advance and rewind, make the advance/rewind
distances proportional to the time the user activates the button. After a
certain delay, accelerate the advance/rewind.
- The SMIL 2.0 Time Manipulations Module ([SMIL20], chapter 11) defines the 'speed' attribute, which can be used to change the playback direction (forward or reverse) of any animation. See also the 'accelerate' and 'decelerate' attributes.
- Some content lends itself to different forward and reverse functionalities.
For instance, compact disk players often let listeners fast forward and
reverse, but also skip to the next or previous song.
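The first technique above (advance/rewind distance proportional to hold time, accelerating after a delay) can be sketched as follows. All constants here (a 2-second delay, 2x base rate, 4x acceleration) are assumed example values, not requirements:

```javascript
// Hypothetical sketch: how far to seek, given how long (in ms) the
// advance or rewind button has been held down.
function seekOffsetSeconds(heldMs, { baseRate = 2, accelAfterMs = 2000, accelFactor = 4 } = {}) {
  const slowMs = Math.min(heldMs, accelAfterMs);        // before acceleration
  const fastMs = Math.max(0, heldMs - accelAfterMs);    // after acceleration
  // Seek at the base rate first, then at the accelerated rate.
  return (slowMs * baseRate + fastMs * baseRate * accelFactor) / 1000;
}
```

Holding for one second seeks 2 seconds of content; holding for three seconds seeks 12, because the final second is covered at the accelerated rate.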
Doing more:
- The user agent should display time codes or represent otherwise position in
content to orient the user.
- Apply techniques for changing audio speed without introducing
distortion.
- Alert the user whenever pausing the user agent may lead to packet
loss.
References:
- Refer to fast advance and fast reverse techniques used for Digital Talking
Books [TALKINGBOOKS].
- Home Page Reader [HPR] lets users insert bookmarks in
presentations.
4.6 Position captions. (P1)
- For graphical
viewports, allow the user to position rendered captions
with respect to synchronized
visual tracks as follows:
- if the user agent satisfies this checkpoint by using a markup language or
style sheet language to provide configuration or control, then the user agent
must allow the user to choose from among at least the range of positions
enabled by the format
- otherwise the user agent must allow both non-overlapping and overlapping
positions (e.g., by rendering captions in a separate viewport
that may be positioned on top of the visual track).
- In either case, the user agent must allow the user to override
the author's specified position.
- The user agent is not required to change the layout of other content (i.e.,
reflow) after the user has changed the position of captions.
- The user agent is not required to make the captions background transparent
when those captions are rendered above a related video track.
Checkpoint 4.6
Notes and rationale:
- One good reason to render captions in an independent viewport is to allow users with screen access programs to focus on them.
- Traditionally, captions have a background, and research shows that some users prefer white lettering on a black background.
Who benefits:
- Some users (e.g., with a cognitive disability) may need to be able to
position captions, etc. so that they do not obscure other content or are not
obscured by other content. Other users (e.g., users with a screen magnifier)
may require pieces of content to be in a particular relation to one another,
even if this means that some content will obscure other content.
Example techniques:
- User agents should implement the positioning features of the employed
markup or style sheet language. Even when a markup language does not specify a
positioning mechanism, when a user agent can recognize distinct text
transcripts, collated text transcripts, or captions,
the user agent should allow the user to reposition them. User agents are not
required to allow repositioning when the captions, etc. cannot be separated
from other media (e.g., the captions are part of the video track).
- For the purpose of applying this clause, SMIL 1.0 [SMIL] user agents should recognize as captions any media object whose reference from SMIL is guarded by the 'system-captions' test attribute.
- Implement the CSS 2
'position' property ([CSS2], section 9.3.1).
- Allow the user to choose whether captions appear at the bottom or top of
the video area or in other positions. Currently authors may place captions
overlying the video or in a separate box. Captions prevent users from being
able to view other information in the video or on other parts of the screen,
making it necessary to move the captions in order to view all content at once.
In addition, some users will find captions easier to read if they can place
them in a location best suited to their reading style.
- Allow users to configure a general preference for caption position and to
be able to fine tune specific cases. For example, the user may want the
captions to be in front of and below the rest of the presentation.
- Allow the user to drag and drop the captions to a place on the screen. To
ensure device-independence, allow the user to enter the screen coordinates of
one corner of the caption.
- Do not require users to edit the source code of the presentation to achieve
the desired effect.
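The non-overlapping versus overlapping placement required by this checkpoint can be sketched as simple rectangle arithmetic. This is a hypothetical illustration; the coordinate convention and mode names are assumptions of the sketch:

```javascript
// Hypothetical sketch: compute a caption rectangle either below the
// video (non-overlapping) or laid over its bottom edge (overlapping),
// honoring the user's choice over the author's position.
function placeCaptions(video, captionHeight, mode) {
  // video = { x, y, width, height } in screen coordinates, y downward
  if (mode === 'overlap') {
    return { x: video.x, y: video.y + video.height - captionHeight,
             width: video.width, height: captionHeight };
  }
  // Default: a separate, non-overlapping band under the video.
  return { x: video.x, y: video.y + video.height,
           width: video.width, height: captionHeight };
}
```

With a 320x240 video at the origin and 40-pixel captions, the non-overlapping band starts at y=240 (just under the video) and the overlapping band at y=200 (over its bottom edge).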
Doing more:
- The user agent may allow configuration for transparent backgrounds. Refer
to checkpoint 4.3 for
requirements related to the control of text background colors.
- Allow the user to position all parts of a presentation rather than trying
to identify captions specifically (i.e., solving the problem generally may be
easier than for captions alone).
- Allow the user to resize (graphically) the captions, etc.
4.7 Slow other multimedia. (P2)
- Allow the user to slow the presentation rate of rendered audio and
animations (including video and animated images) not covered by checkpoint 4.4.
- The same speed percentage requirements of checkpoint 4.4 apply.
Checkpoint 4.7
Note: User agents automatically satisfy this checkpoint if
they satisfy checkpoint 4.4
for all audio and animations.
4.8 Control other multimedia.
(P2)
- Allow the user to stop, pause, resume, fast advance, and fast reverse
rendered audio and
animations (including video and animated images) not covered by checkpoint 4.5.
Checkpoint 4.8
Note: User agents automatically satisfy this checkpoint if
they satisfy checkpoint
4.5 for all audio and animations.
Checkpoints for audio volume control
4.9 Global volume control. (P1)
- Allow global
configuration of the volume of all rendered audio, with an option to
override
audio volumes specified by the author or user agent defaults.
- Allow the user to choose zero volume (i.e.,
silent).
Checkpoint 4.9
Note: User agents should allow configuration of volume
through available operating environment controls.
Example techniques:
- Use audio control mechanisms provided by the
operating environment. Control of volume mix is particularly
important, and the user agent should provide easy access to those mechanisms
provided by the operating environment.
- Implement the CSS 2
'volume' property ([CSS2], section 19.2).
- Implement the
'display',
'play-during', and
'speak' properties in CSS 2 ([CSS2], sections 9.2.5, 19.6, and
19.5, respectively).
- Authors sometimes specify background sounds with the "bgsound" attribute.
Note: This attribute is not part of
HTML 4 [HTML4].
Who benefits:
- Users who are hard of hearing or who rely on audio and synthesized speech
rendering. Users in a noisy environment will also benefit.
References:
- Refer to guidelines for audio characteristics used for Digital Talking
Books [TALKINGBOOKS].
4.10 Independent volume control.
(P1)
- Allow independent control of
the volumes of rendered audio sources
synchronized to play simultaneously.
- The user agent is not required to satisfy this checkpoint for audio whose
recognized role is to create a purely stylistic effect.
- The user control required by this checkpoint includes the ability to override
author-specified volumes for the relevant sources of audio.
Checkpoint 4.10
Note: See
checkpoint 4.4 for more information about the exception for purely
stylistic effects. The user agent should satisfy this checkpoint by allowing
the user to control independently the volumes of all audio sources (e.g., by implementing a general
audio mixer type of functionality). See also checkpoint 4.13.
Notes and rationale:
- Sounds that play at different times are distinguishable and therefore
independent control of their volumes is not required by this checkpoint (since
volume control required by checkpoint 4.9 suffices).
- There are at least three good reasons for strongly recommending that all
sounds be independently configurable, not just those synchronized to play
simultaneously.
- sounds that are not synchronized may end up playing simultaneously;
- if the user cannot anticipate when a sound will play, the user cannot
adjust the global volume control at appropriate times to affect this
sound;
- it is extremely inconvenient to have to adjust the global volume
frequently.
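The "general audio mixer type of functionality" suggested in the note above can be sketched as a per-source gain stage. This is a hypothetical illustration; the field names are assumptions of the sketch:

```javascript
// Hypothetical sketch: every audio source gets its own user gain,
// which overrides the author-specified volume, multiplied by the
// global volume (checkpoint 4.9).
function mixerGain(source, globalVolume) {
  // source = { authorVolume, userVolume }, all values in [0, 1];
  // a user-set volume wins over the author's value.
  const v = source.userVolume !== undefined ? source.userVolume : source.authorVolume;
  return v * globalVolume;
}
```

Setting a source's user volume to 0 silences that source independently of the others, as the checkpoint requires.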
Who benefits:
- Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering.
Related techniques:
- For each source of audio, allow
the user to control the volume using the same user interface used to satisfy
the requirements of
checkpoint 4.5.
4.11
Control other volume. (P2)
- Allow independent control of
the volumes of rendered audio sources
synchronized to play simultaneously that are not covered by checkpoint
4.10.
Checkpoint 4.11
Note: User agents automatically satisfy this checkpoint if
they satisfy
checkpoint 4.10 for all audio.
Checkpoints for synthesized speech rendering
See also techniques for synthesized
speech rendering.
4.12 Configure synthesized speech rate.
(P1)
- Allow
configuration of the synthesized speech rate, according to the full
range offered by the speech synthesizer.
Checkpoint 4.12
Note: The range of synthesized speech rates offered by the
speech synthesizer may depend on natural language.
Example techniques:
- For example, many speech synthesizers offer a range for English speech of
120 - 500 words per minute or more. The user should be able to increase or
decrease the rendering rate in convenient increments (e.g., in large steps,
then in small steps for finer control).
- User agents may allow different synthesized speech rate configurations for
different natural languages. For example, this may be implemented with CSS2
style sheets using the :lang
pseudo-class ([CSS2], section 5.11.4).
- Use synthesized speech mechanisms provided by the
operating environment.
- Implement the CSS 2
'speech-rate' property ([CSS2], section 19.8).
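Rate adjustment "in convenient increments" with clamping to the synthesizer's range can be sketched as follows. The 120-500 words-per-minute bounds are taken from the English example above; they are illustrative defaults, not fixed limits:

```javascript
// Hypothetical sketch: step the speech rate up or down, clamped to the
// range the synthesizer reports.
function stepRate(currentWpm, step, { min = 120, max = 500 } = {}) {
  // step might be +/-50 wpm (coarse control) or +/-5 wpm (fine control)
  return Math.min(max, Math.max(min, currentWpm + step));
}
```

A large step from 490 wpm stops at the 500 wpm ceiling rather than overshooting the synthesizer's range.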
Who benefits:
- Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering.
Doing more:
- Content may include commands that are interpreted by a speech synthesizer
to change the rate (or control other synthesized speech parameters). This
checkpoint does not require the user agent to allow the user to override
author-specified rate changes (e.g., by transforming or otherwise stripping out
these commands before passing on the content to the speech synthesizer). Speech
synthesizers themselves may allow user override of author-specified rate
changes. For such synthesizers, the user agent should ensure access to
this feature as part of satisfying this checkpoint.
4.13 Configure synthesized speech volume.
(P1)
- Allow control of
the synthesized speech volume, independent of other sources of audio.
- The user control required by this checkpoint includes the ability to override
author-specified synthesized speech volume.
Checkpoint 4.13
Note: See also checkpoint 4.10.
Example techniques:
- The user agent should allow the user to make synthesized speech louder and
softer than other audio sources.
- Use synthesized speech mechanisms provided by the
operating environment.
- Implement the CSS 2
'volume' property ([CSS2], section 19.2).
Who benefits:
- Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering.
4.14 Configure
synthesized speech characteristics. (P1)
- Allow
configuration of synthesized speech characteristics according to the
full range of values offered by the speech synthesizer.
Checkpoint 4.14
Note: Some speech synthesizers allow users to choose values
for synthesized speech characteristics at a higher abstraction layer, i.e., by
choosing from preset options that group several characteristics. Some typical
options one might encounter include: "adult male voice", "female child voice",
"robot voice", "pitch", "stress", etc. Ranges for values may vary among speech
synthesizers.
Example techniques:
- Use synthesized speech mechanisms provided by the
operating environment.
- One example of a synthesized speech
API is Microsoft's Speech Application
Programming Interface [SAPI].
Who benefits:
- Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering.
References:
- For information about these synthesized speech characteristics, please
refer to descriptions in section 19.8 of Cascading Style Sheets Level 2
[CSS2].
4.15 Specific synthesized speech characteristics. (P2)
- Allow
configuration of the following synthesized speech characteristics:
pitch, pitch range, stress, richness.
- Pitch refers to the average frequency of the speaking voice.
- Pitch range specifies a variation in average frequency.
- Stress refers to the height of "local peaks" in the intonation contour of
the voice.
- Richness refers to the richness or brightness of the voice.
Checkpoint 4.15
Note: This checkpoint is more specific than checkpoint
4.14: it requires support for the voice characteristics listed. Definitions
for these characteristics are based on descriptions in section 19 of the
Cascading Style Sheets Level 2 Recommendation
[CSS2]; please refer to that specification for additional
informative descriptions. Some speech synthesizers allow users to
choose values for synthesized speech characteristics at a higher abstraction
layer, i.e., by choosing from preset options distinguished by "gender", "age",
"accent", etc. Ranges of values may vary among speech synthesizers.
4.16 Configure synthesized speech features.
(P2)
- Provide support for
user-defined extensions to the synthesized speech dictionary, as well as the
following functionalities:
- spell-out: spell text one character at a time or according to
language-dependent pronunciation rules;
- speak-numeral: speak a numeral as individual digits or as a full number;
and
- speak-punctuation: speak punctuation literally or render as natural
pauses.
Checkpoint 4.16
Note: Definitions for the functionalities listed are based
on descriptions in section 19 of the Cascading Style Sheets Level 2
Recommendation [CSS2]; please refer to that
specification for additional
informative descriptions.
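The three functionalities can be sketched as simple text transformations applied before handing content to the speech synthesizer. This is a hypothetical illustration (real synthesizers expose these through their own APIs, and only the period is handled in the punctuation example):

```javascript
// spell-out: spell text one character at a time.
function spellOut(text) {
  return text.split('').join(' ');
}

// speak-numeral: individual digits ("1 2 3") or a full number ("123").
function speakNumeral(numeral, asDigits) {
  return asDigits ? numeral.split('').join(' ') : numeral;
}

// speak-punctuation: name the marks literally, or drop them so the
// synthesizer renders them as natural pauses.
function speakPunctuation(text, literally) {
  return literally ? text.replace(/\./g, ' period')
                   : text.replace(/\./g, '');
}
```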
Who benefits:
- Users (e.g., with blindness or low vision) who rely on audio and
synthesized speech rendering.
References:
- For information about these functionalities, please refer to descriptions
in section 19.8 of Cascading Style Sheets Level 2
[CSS2].
Checkpoints related to style sheets
4.17
Choose style sheets. (P1)
- For user agents that support
style sheets:
- Allow the user to choose from and apply available
author style sheets (in
content).
- Allow the user to choose from and apply available user style
sheets.
- Allow the user to ignore author and user style sheets.
Checkpoint 4.17
Note: By definition, the user
agent's default style sheet is always present, but may be overridden
by author or user styles. Developers should not consider that the user's
ability to turn off author and user style sheets is an effective way to improve
content accessibility; turning off style sheet support means losing the many
benefits they offer. Instead, developers should provide users with finer
control over user agent or content behavior known to raise accessibility
barriers. The user should only have to turn off author and user style sheets as
a last resort.
Example techniques:
- For HTML [HTML4], make available "class" and
"id" information so that users can override styles.
- Implement user style
sheets.
- Implement the
"!important" semantics of CSS 2 ([CSS2], section 6.4.2).
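The "!important" semantics above can be sketched as a priority chain for a single property. In CSS 2, ascending order of precedence is: user agent default, user normal, author normal, author !important, user !important; the sketch also includes a switch for the "ignore author and user style sheets" requirement. The object shape is an assumption of the sketch:

```javascript
// Sketch of the CSS 2 cascade for one property. User !important
// declarations win over author !important ones (CSS 2, section 6.4.2).
function resolveStyle(decls, { ignoreAuthorAndUser = false } = {}) {
  // decls: { uaDefault, user, author, authorImportant, userImportant }
  if (ignoreAuthorAndUser) return decls.uaDefault;
  return decls.userImportant ?? decls.authorImportant ??
         decls.author ?? decls.user ?? decls.uaDefault;
}
```

A user style marked !important thus overrides the author's choice, while turning author and user sheets off falls back to the user agent's default style sheet.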
Who benefits:
- Any user with a disability who needs to override the author's style sheets
or user agent default style sheets in order to have control over style and
presentation, or who needs to tailor the style of rendered content to meet
their own needs.
References:
- For information about how alternative style sheets are specified in HTML 4
[HTML4], please refer to
section 14.3.1.
- For information about how alternative style sheets are specified in XML 1.0
[XML], please refer to "Associating Style Sheets with XML documents
Version 1.0" [XMLSTYLE].
Checkpoints
5.1 No automatic content focus change.
(P2)
- Allow
configuration so that if a
viewport opens without explicit user request, its content
focus does not automatically become the current
focus.
- Configuration is preferred, but is not required if the content focus can
only ever be moved on explicit
user request.
Checkpoint 5.1
Who benefits:
- Moving the focus automatically (and unexpectedly) to a new viewport may
disorient some users with a cognitive disability, blindness, or low vision.
These users may find it difficult to restore the previous point of regard.
Example techniques:
- Allow the user to configure how the current
focus changes when a new viewport opens. For instance, the user
might choose between these two options:
- Do not change the focus when a viewport opens, but alert the user (e.g.,
with a beep, flash, and text message on the status bar). Allow the user to
navigate directly to the new window upon demand.
- Change the focus when a window opens and use a subtle alert (e.g., a beep,
flash, and text message
on the status bar) to indicate that the focus has changed.
- If a new viewport
or prompt appears but focus does not move to it, alert assistive technologies
(per checkpoint 6.5) so that they
may discreetly inform the user.
- When a viewport is duplicated, the focus in the new viewport should
initially be the same as the focus in the original viewport. Duplicate
viewports allow users to navigate content (e.g., in search of some information)
in one viewport while allowing the user to return with little effort to the
point of regard in the duplicate viewport. There are other techniques for
accomplishing this (e.g., "registers" in Emacs).
- In JavaScript, the focus may be changed with
myWindow.focus();
- For user agents that implement CSS 2
[CSS2], the following rule will generate a message to the user at
the beginning of link text for links that are meant to open new windows when
followed:
A[target=_blank]:before{content:"Open new window"}
Doing more:
- The user agent may also allow configuration about whether the pointing
device moves automatically to windows that open without an explicit user
request.
5.2 Keep viewport on top. (P2)
- For graphical user interfaces, allow
configuration so that the viewport with the current
focus remains "on top" of all other viewports with which it
overlaps.
Checkpoint 5.2
Notes and rationale:
- The alert is important to ensure that the user realizes a new viewport has
opened; the new viewport may be hidden by the viewport configured to remain on
top.
- In most operating environments, the viewport with focus is generally the
viewport "on top". In some environments, it's possible to allow a viewport that
is not on top to have focus.
Who benefits:
- Some users with a cognitive disability may find it disorienting if the
viewport being viewed unexpectedly changes.
Doing more:
- The user agent may also allow configuration about whether the viewport
designated by the pointing device always remains on top.
5.3 Manual viewport open only. (P2)
- Allow
configuration so that viewports only open on explicit
user request.
- In this configuration, instead of opening a viewport automatically, alert
the user and allow the user to open it on demand (e.g., by following a link or
confirming a prompt).
- Allow the user to close viewports.
- If a viewport (e.g., a frame set) contains other viewports, these
requirements only apply to the outermost container viewport.
- Configuration is preferred, but is not required if viewports can only ever
open on explicit
user request.
- User creation of a new viewport (e.g., empty or with a new resource loaded)
through the user agent's user interface constitutes an explicit user
request.
Checkpoint 5.3
Note: Generally, viewports open automatically as the result
of instructions in content.
See also checkpoint
5.1 (for control over changes of focus when a viewport opens) and checkpoint 6.5 (for programmatic
alert of changes to the user interface).
Who benefits:
- Navigation of multiple open viewports may be difficult for some users who
navigate viewports serially (e.g., users with visual or physical disabilities)
and for some users with a cognitive disability (as it may disorient them).
Example techniques:
- For HTML [HTML4], allow the user to control the process of opening a document in a new "target" frame or a viewport created by a script. For example, for target="_blank", open the window according to the user's preference.
- For SMIL [SMIL], allow the user to control viewports created with the "new" value of the "show" attribute.
- In JavaScript, windows may be opened with:
myWindow.open("example.com", "My New Window");
myWindow.showHelp(URI);
5.4 Selection and focus in viewport.
(P2)
- Ensure that when a viewport's
selection or
content focus changes, it is at least partially in the viewport
after the change.
Checkpoint 5.4
Note: For example, if users navigating links move to a
portion of the document outside a graphical viewport, the viewport should
scroll to include the new location of the focus. Or, for users of audio
viewports, allow configuration to render the selection or focus immediately
after the change.
Who benefits:
- Users who may be disoriented by a change in focus or selection that is not
reflected in the viewport. This includes some users with blindness or low
vision, and some users with a cognitive disability.
Example techniques:
- There are times when the content focus changes (e.g., link navigation) and
the viewport should move to track it. There are other times when the viewport
changes position (e.g., scrolling) and the content focus is moved to follow it.
In both cases, the focus (or selection) is in the viewport after the
change.
- If a search causes the selection or focus to change, ensure that the found
content is not hidden by the search prompt.
- When the content focus changes, register the newly focused element in the
navigation sequence; sequential navigation should start from there.
- Unless viewports have been coordinated, changes to selection or focus in
one viewport should not affect the selection or focus in another viewport.
- The persistence of the selection or focus in the viewport will vary
according to the type of viewport. For any viewport with persistent rendering
(e.g., a two-dimensional graphical or tactile viewport), the focus or selection
should remain in the viewport after the change until the user changes the
viewport. For any viewport without persistent rendering (e.g., an audio
viewport), once the focus or selection has been rendered, it will no longer be
"in" the viewport. In a pure audio environment, the whole persistent context is
in the mind of the user. In a graphical viewport, there is a large shared
buffer of dialog information in the display. In audio, there is no such
sensible patch of interaction that is maintained by the computer and accessed,
ad lib, by the user. The audio rendering of content requires the elapse of
time, which is a scarce resource. Consequently, the flow of content through the
viewport has to be managed more carefully, notably when the content was
designed primarily for graphical rendering.
- If the rendered selection or focus does not fit entirely within the limits
of a graphical viewport:
- if the region actually displayed prior to the change was within the
selection or focus, do not move the viewport.
- otherwise, if the region actually displayed prior to the change was not
within the newly selected or focused content, move to display at least the
initial fragment of such content.
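The two-part rule above can be sketched as a small decision function. This is a minimal sketch using one-dimensional character offsets for simplicity; the region shape and the function names are illustrative, not from the guidelines.

```javascript
// Regions are {start, end} offset pairs into the rendered content.
function contains(outer, inner) {
  return inner.start >= outer.start && inner.end <= outer.end;
}

function viewportAction(displayed, focus) {
  // If the region displayed prior to the change already lies within the
  // newly focused or selected content, leave the viewport alone;
  // otherwise move it so at least the initial fragment of that content
  // is visible.
  return contains(focus, displayed) ? "keep" : "show-start";
}

console.log(viewportAction({ start: 10, end: 20 }, { start: 0, end: 100 }));   // "keep"
console.log(viewportAction({ start: 10, end: 20 }, { start: 200, end: 250 })); // "show-start"
```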
5.5 Confirm form submission. (P2)
- Allow
configuration to prompt the
user to confirm (or cancel) any form submission.
- Configuration is preferred, but is not required if forms can only ever be
submitted on explicit
user request.
Checkpoint 5.5
Note: For example, do not submit a form automatically when
a menu option is selected, when all fields of a form have been filled out, or
when a "mouseover" or "change" event occurs.
Example techniques:
- In HTML 4 [HTML4], form submit controls are the
INPUT element (section 17.4) with type="submit" and type="image",
and the BUTTON element (section 17.5) with type="submit".
- Allow the user to configure script-based submission (e.g., form submission
accomplished through an "onChange" event). For instance, allow these settings:
- Do not allow script-based submission.
- Allow script-based submission after confirmation from the user.
- Allow script-based submission without prompting the user (but not by
default).
- Authors may write scripts that submit a form when particular events occur (e.g., "onchange" events). Be
aware of this type of practice:
<SELECT NAME="condition" onchange="switchpage(this)">
As soon as the user attempts to navigate the menu, the "switchpage" function
opens a document in a new viewport. Try to avoid orientation problems that may
be caused by scripts bound to form elements.
- Be aware that users may inadvertently press the Return or
Enter key and accidentally submit a form.
- In JavaScript, a form may be submitted with:
document.forms[0].submit();
document.all.mySubmitButton.click();
- Generate a form submit button when the author has not provided one.
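The three configuration settings listed above can be sketched as a single decision function. The policy names ("deny", "confirm", "allow") are illustrative; a real user agent would surface them through its preferences interface.

```javascript
function submissionAction(policy, initiatedByScript) {
  if (!initiatedByScript) return "submit"; // explicit user request
  switch (policy) {
    case "deny":  return "block";   // do not allow script-based submission
    case "allow": return "submit";  // submit without prompting (not by default)
    case "confirm":
    default:      return "prompt";  // ask the user before submitting
  }
}

console.log(submissionAction("confirm", true)); // "prompt"
console.log(submissionAction("deny", true));    // "block"
console.log(submissionAction("deny", false));   // "submit"
```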
Who benefits:
- Any user who might be disoriented by an automatic form submission (e.g.,
users with blindness who are navigating serially through select box
options, or some users with a cognitive disability) or who might inadvertently
submit a form (e.g., some users with a physical disability).
Doing more:
- Some users may not want to have to confirm all form submissions, so allow
multiple configurations, such as: confirm all form submissions; confirm
script-activated form submissions; confirm all form submissions except those
done through the graphical user interface (e.g., when the user moves content
focus to a submit button and activates it); etc.
- Users who navigate a document serially may think that the submit button in
a form is the "last" control
they need to complete before submitting the form. Therefore, for forms in which
additional controls follow a submit button, if those controls have not been
completed, inform the user and ask for confirmation (or completion) before
submission.
- For forms, allow users to search for
controls that need to be changed by the user before submitting the
form.
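The search described above can be sketched as follows. The control representation is illustrative; in a real user agent this check would walk the document object model rather than a plain array.

```javascript
// Return the names of controls the user still needs to complete
// before the form should be submitted.
function incompleteControls(controls) {
  return controls
    .filter(c => c.required && c.value.trim() === "")
    .map(c => c.name);
}

console.log(incompleteControls([
  { name: "email",    required: true,  value: "" },
  { name: "nickname", required: false, value: "" },
  { name: "name",     required: true,  value: "Ada" },
])); // ["email"]
```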
5.6 Confirm fee links. (P2)
- Allow
configuration to prompt the
user to confirm (or cancel) any payment that results from activation
of a fee
link.
- Configuration is preferred, but is not required if fee links can only ever
be activated on explicit
user request.
Checkpoint 5.6
Who benefits:
- Any user who might inadvertently activate a fee link (e.g., some users with
a physical or cognitive disability).
Example techniques:
- Allow the user to configure the user agent to prompt for payments above a
certain amount (including any payment).
- Warn the user that even in this configuration, the user agent may not be
able to recognize some payment mechanisms.
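The configurable payment prompt described above can be sketched as follows; the policy object and its fields are illustrative.

```javascript
function feeLinkAction(amount, policy) {
  if (!policy.promptEnabled) return "activate";
  // A threshold of 0 means every payment is confirmed.
  return amount >= policy.threshold ? "prompt" : "activate";
}

console.log(feeLinkAction(3, { promptEnabled: true, threshold: 0 }));  // "prompt"
console.log(feeLinkAction(3, { promptEnabled: true, threshold: 10 })); // "activate"
```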
5.7
Manual viewport close only. (P3)
- Allow
configuration to prompt the
user to confirm (or cancel) closing any viewport that starts to close without
explicit
user request.
Checkpoint 5.7
Who benefits:
- Some users with a cognitive disability may find it disorienting if a
viewport closes automatically. On the other hand, some users with a physical
disability may wish these same viewports to close automatically (rather than
being required to close them manually).
Example techniques:
- In JavaScript, windows may be closed with
myWindow.close();
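A user agent can wrap the close operation so that a close not initiated by the user is confirmed (or cancelled) first. This is a sketch; the confirm callback stands in for a real prompt such as window.confirm.

```javascript
function guardedClose(closeFn, confirmFn, userRequested) {
  // Explicit user requests always proceed; automatic closes are
  // subject to confirmation.
  if (userRequested || confirmFn()) closeFn();
}

let closed = false;
guardedClose(() => { closed = true; }, () => false, false);
console.log(closed); // false: the automatic close was cancelled
guardedClose(() => { closed = true; }, () => false, true);
console.log(closed); // true: an explicit user request closes the viewport
```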
Checkpoints
6.1 DOM read access. (P1)
- Provide programmatic read access to HTML and
XML content by
conforming to the following modules of the W3C Document Object Model DOM Level 2 Core Specification [DOM2CORE] and exporting the
interfaces they define:
- the Core module for HTML;
- the Core and XML modules for XML.
Checkpoint 6.1
Note: Please refer to the "Document Object Model (DOM)
Level 2 Core Specification"
[DOM2CORE] for information about HTML and
XML versions covered.
Notes and rationale:
- The primary reason for requiring user agents to implement the DOM is that
this gives assistive technologies access to the original structure of the
document. For example, this means that assistive technologies that render
content as synthesized speech are not required to construct the speech view by
"reverse engineering" a graphical view. Direct access to the structure allows
the assistive technologies to render content in a manner best suited to a
particular output device. This does not mean that assistive technologies should
be prevented from having access to the rendering of the conforming user agent;
simply that they should not be required to depend entirely on it. In fact, user agents
that render content as synthesized speech may wish to synchronize a graphical view with
a speech view.
- Note that the W3C DOM is designed to be used on a server as well as a
client and does not address some user interface-specific information.
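As an illustration of the rationale above, an assistive technology with DOM read access can build a speech rendering directly from document structure. This is a sketch: the node objects mimic the DOM Core shape (nodeName, childNodes) but are plain data here, not a real DOM implementation.

```javascript
// Depth-first traversal collecting text nodes, in document order.
function collectText(node, out = []) {
  if (node.nodeName === "#text") out.push(node.data);
  for (const child of node.childNodes || []) collectText(child, out);
  return out;
}

const para = {
  nodeName: "P",
  childNodes: [
    { nodeName: "#text", data: "Hello, " },
    { nodeName: "EM", childNodes: [{ nodeName: "#text", data: "world" }] },
  ],
};
console.log(collectText(para).join("")); // "Hello, world"
```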
Who benefits:
- Users with a disability who rely on assistive technologies for input and
output.
Example techniques:
- Refer to a listing of DOM implementations at the Open
Directory Project [ODP-DOM].
Related techniques:
- See the appendix on loading assistive
technologies for DOM access.
References:
- For information about rapid access to Internet Explorer's
[IE-WIN] DOM through COM, refer to
[BHO].
- Refer to the DirectDOM Java implementation of the DOM
[DIRECTDOM].
6.2 DOM write access. (P1)
- If the user can modify HTML and XML
content through the user
interface, provide the same functionality programmatically by
conforming to the following modules of the W3C Document Object Model DOM Level 2 Core Specification [DOM2CORE] and exporting the
interfaces they define:
- the Core module for HTML;
- the Core and XML modules for XML.
Checkpoint 6.2
Note: For example, if the user interface allows users to
complete HTML forms, this must also be possible through the
required DOM APIs. Please refer to the "Document Object
Model (DOM) Level 2 Core Specification"
[DOM2CORE] for information about HTML and
XML versions covered.
Notes and rationale:
- Allowing assistive technologies write access through the DOM allows them
to:
- modify the attribute list of a document and thus add information into the
document object that will not be rendered by the user agent.
- add entire nodes to the document that are specific to the assistive
technologies and that may not be rendered by a user agent unaware of their
function.
- The ability to write to the DOM can improve performance for the assistive
technology. For example, if an assistive technology has already traversed a
portion of the document object and knows that a section (e.g., a style element)
could not be rendered, it can mark this section "to be skipped".
- Another benefit is to add information necessary for audio rendering but
that would not be stored directly in the DOM during parsing (e.g., numbers in
an ordered list). An assistive technology component can add numeric information
to the document object. The assistive technology can also mark a subtree as
having been traversed and updated, to eliminate recalculating the information
the next time the user visits the subtree.
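The ordered-list example above can be sketched as follows: an assistive technology using DOM write access to attach numbering that parsing alone would not store. The attribute name "at-number" and the node shape are illustrative, not a real API.

```javascript
function annotateListItems(list) {
  let n = 0;
  for (const item of list.childNodes) {
    if (item.nodeName === "LI") {
      n += 1;
      // Record the item's number so an audio rendering need not
      // recalculate it on the next visit.
      item.attributes = { ...(item.attributes || {}), "at-number": String(n) };
    }
  }
  return list;
}

const ol = { nodeName: "OL", childNodes: [{ nodeName: "LI" }, { nodeName: "LI" }] };
annotateListItems(ol);
console.log(ol.childNodes[1].attributes["at-number"]); // "2"
```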
Who benefits:
- Users with a disability who rely on assistive technologies for input and
output.
Related techniques:
- See also techniques for
checkpoint 6.1.
6.3 Programmatic access to
non-HTML/XML content. (P1)
- For markup languages other than HTML and
XML, provide programmatic read access to
content.
- Provide programmatic write access for those parts of
content that the user can modify through
the user interface. To satisfy these requirements, implement at least one API that is either
- defined by a W3C Recommendation, or
- a publicly documented API designed to enable interoperability with
assistive technologies.
- If no such API is available, or if available APIs do not enable the user
agent to satisfy the requirements, implement at least one publicly documented
API to satisfy the requirements, and follow operating environment
conventions for the use of input and output
APIs.
- An API is considered available if the specification of the API is published
(e.g., as a W3C Recommendation) in time for integration into a user agent's
development cycle.
Checkpoint 6.3
Note: This checkpoint addresses content not covered by
checkpoints 6.1 and 6.2.
Notes and rationale:
- Some examples of markup languages covered by this checkpoint include
SGML applications other than HTML, as well as
RTF and TeX.
- Some software (e.g., Word and Excel for Windows) offers APIs specific to
their formats.
Who benefits:
- Users with a disability who rely on assistive technologies for input and
output.
Related techniques:
- See techniques for checkpoint
6.4.
References:
- Some public APIs that enable access include:
- Microsoft Active Accessibility ([MSAA]) in Windows 95/98/NT
versions.
- Sun Microsystems Java Accessibility API ([JAVAAPI]) in Java JDK. If the
user agent supports Java applets and provides a Java Virtual Machine to run
them, the user agent should support the proper loading and operation of a Java
native assistive technology. This assistive technology can provide access to
the applet as defined by Java accessibility standards.
6.4 Programmatic operation. (P1)
- Provide programmatic read access to user agent user interface controls.
- Provide programmatic write access for those controls that the user can
modify through the user interface. For security reasons, user agents are not
required to allow instructions in
content to modify user agent user interface controls.
- To satisfy these requirements, implement at least one API that is either
- defined by a W3C Recommendation, or
- a publicly documented API designed to enable interoperability with
assistive technologies.
- If no such API is available, or if available APIs do not enable the user
agent to satisfy the requirements, implement at least one publicly documented
API that allows programmatic operation of all of the functionalities that are
available through the user agent user interface, and follow operating
environment
conventions for the use of input and output
APIs.
- An API is considered available if the specification of the API is published
(e.g., as a W3C Recommendation) in time for integration into a user agent's
development cycle.
For user agent features.
Checkpoint 6.4
Note: APIs used to satisfy the requirements of this
checkpoint may be platform-independent APIs such as the W3C
DOM, conventional APIs for a particular operating environment, conventional
APIs for programming languages,
plug-ins, virtual machine environments, etc. User agent developers
are encouraged to implement APIs that allow assistive technologies to
interoperate with multiple types of software in a given operating environment
(user agents, word processors, spreadsheet programs, etc.), as this reuse will
benefit users and assistive technology developers. User agents should always
follow operating environment conventions for the use of input and output
APIs.
Notes and rationale:
- It is important to use APIs that ensure that text content is available to assistive
technologies as text and not, for example, as a series of strokes drawn on the
screen.
Who benefits:
- Users with a disability who rely on assistive technologies for input and
output.
Example techniques:
- User agents that implement conventional APIs are
generally more compatible with assistive technologies and provide accessibility
at no extra cost.
- Use conventional user
interface controls. Third-party assistive technology developers are
more likely able to access conventional controls than custom
controls. If you use custom controls,
review them for accessibility and compatibility with third-party assistive
technology. Ensure that they provide accessibility information through an API
as is done for the conventional controls.
- Make use of operating environment-level features. See the
appendix of accessibility features
for some common operating systems.
- Operating system and application frameworks have conventions for
communication with input devices. In the case of Windows, OS/2, the X Window
System, and Mac OS, the window manager provides Graphical User Interface
(GUI) applications with this information through the
messaging queue. In the case of non-GUI applications, the compiler run-time
libraries provide conventional mechanisms for receiving keyboard input in the
case of desktop operating systems. If you use an application framework such as
the Microsoft Foundation Classes, the framework used should support the same
conventional input mechanisms.
- Do not communicate directly with an input device; this may circumvent
operating environment messaging. For instance, in Windows, do not
open the keyboard device driver directly. It is often the case that the
windowing system needs to change the form and method for processing
conventional input mechanisms for proper application coexistence within the
user interface framework.
- Do not implement your own input device event queue mechanism; this may
circumvent operating environment messaging. Some assistive technologies use
conventional system facilities for simulating keyboard and mouse events. From
the application's perspective, these events are no different than those
generated by the user's actions. The "Journal Playback Hooks" (in both OS/2 and
Windows) are one example of a facility that feeds the standard event
queues. For an example of a standard event queue mechanism, refer to the
"Carbon Event Manager Preliminary API Reference"
[APPLE-HI].
- Operating environments have conventions for communicating with
output devices. In the case of common desktop operating systems such as
Windows, OS/2, and Mac OS, conventional
APIs are provided for writing to the
display and the multimedia subsystems.
- Avoid rendering text in the
form of a bitmap before transferring to the screen, since some screen readers
rely on the user agent's offscreen model. An offscreen model is rendered
content created by an assistive technology that is based on the rendered
content of another user agent. Assistive technologies that rely on
an offscreen model generally construct it by intercepting conventional
operating environment drawing calls. For example, in the case of
display drivers, some screen readers are designed to monitor what is drawn on
the screen by hooking drawing calls at different points in the drawing process.
While knowing about the user agent's formatting may provide some useful
information to assistive technologies, this document encourages assistive
technologies to access content directly through published APIs (such as the
DOM) rather than via a particular rendering.
- Common operating environment two-dimensional graphics engines and drawing
libraries provide functions for drawing
text to the screen. Examples of this are the Graphics Device
Interface (GDI) for Windows, Graphics Programming Interface (GPI) for OS/2, and
the X library (XLIB) for the X Window System or Motif.
- Do not communicate directly with an output device.
- Do not draw directly to the video frame buffer.
- Do not provide your own mechanism for generating pre-defined
operating environment sounds.
- When writing textual information in a GUI
operating environment, use conventional operating environment APIs for drawing text.
- Use
operating environment resources for rendering audio information.
When doing so, do not take exclusive control of system audio resources. This
could prevent an assistive technology such as a screen reader from speaking if
they use software text-to-synthesized speech conversion. Also, in operating
environments like Windows, a set of audio sound resources is provided to
support conventional sounds such as
alerts. These preset sounds are used to trigger SoundSentry graphical cues when a problem
occurs; this benefits users with hearing disabilities. These cues may be
manifested by flashing the desktop, active caption bar, or current viewport. It
is important to use the conventional mechanisms to generate audio feedback so
that operating environments or special assistive technologies can add
additional functionality for users with hearing disabilities.
- API designers should promote backwards compatibility so that assistive
technologies do not suddenly break when a new version of an API is published
and implemented by user agents.
References:
- Some public accessibility APIs include:
- Microsoft Active Accessibility ([MSAA]). This is the conventional
accessibility API for the Windows 95/98/NT operating systems.
- Sun Microsystems Java Accessibility API ([JAVAAPI]) in the Java JDK. This
is the conventional accessibility API for the Java environment. If the user
agent supports Java applets and provides a Java Virtual Machine to run them,
the user agent should support the proper loading and operation of a Java native
assistive technology. This assistive technology can provide access to the
applet as defined by Java accessibility standards.
- For information about rapid access to Internet Explorer's
[IE-WIN] DOM through COM, refer to Browser Helper Objects
[BHO].
6.5 Programmatic alert of changes. (P1)
- Provide programmatic alert of changes to
content, user
interface controls,
selection, content
focus, and user
interface focus.
- To satisfy these requirements, implement at least one API that is either
- defined by a W3C Recommendation, or
- a publicly documented API designed to enable interoperability with
assistive technologies.
- If no such API is available, or if available APIs do not enable the user
agent to satisfy the requirements, implement at least one publicly documented
API to satisfy the requirements, and follow operating environment
conventions for the use of input and output
APIs.
- An API is considered available if the specification of the API is published
(e.g., as a W3C Recommendation) in time for integration into a user agent's
development cycle.
For both content and user agent.
Checkpoint 6.5
Note: For instance, when user interaction in one frame
causes automatic changes to content in another, provide a programmatic alert.
This checkpoint does not require the user agent to alert the user of
rendering changes caused by content (e.g., an animation effect or an
effect caused by a style sheet), just changes to the
content itself.
Who benefits:
- Users with a disability who rely on assistive technologies for output.
Example techniques:
- Write output to and take input from conventional
operating environment APIs rather than directly
from hardware controls. This will enable the input/output to be redirected from
or to assistive technology devices – for example, screen readers and
braille displays often redirect output (or copy it) to a serial port, while
many devices provide character input, or mimic mouse functionality. The use of
generic APIs makes this feasible in a way that allows for interoperability of
the assistive technology with a range of applications.
- Alert the user when an action in one frame causes the content of another
frame to change. Allow the user to navigate with little effort to the frame(s)
that changed.
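A minimal sketch of the programmatic change alerts this checkpoint asks for follows. Registration style and event names are illustrative; real user agents expose this through DOM mutation events or a platform accessibility API.

```javascript
function createChangeNotifier() {
  const listeners = [];
  return {
    subscribe(fn) { listeners.push(fn); },
    // Called by the user agent whenever content, selection, focus, or a
    // user interface control changes.
    notify(change) { for (const fn of listeners) fn(change); },
  };
}

const notifier = createChangeNotifier();
const seen = [];
notifier.subscribe(change => seen.push(change.type));
notifier.notify({ type: "content", target: "frame2" });
notifier.notify({ type: "focus", target: "link5" });
console.log(seen); // ["content", "focus"]
```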
Related techniques:
- See techniques for checkpoint
6.4.
References:
- Refer to "mutation events" in "Document Object Model (DOM) Level 2 Events
Specification" [DOM2EVENTS]. This DOM Level 2
specification allows assistive technologies to be informed of changes to the
document tree.
- Refer also to information about monitoring HTML events
through the document
object model in Internet Explorer
[IE-WIN].
6.6 Conventional keyboard APIs. (P1)
- Follow
operating environment conventions when implementing
APIs for the keyboard.
- If such APIs for the keyboard do not exist, implement
publicly documented APIs for the keyboard.
Checkpoint 6.6
Note: An operating environment may define more than one
conventional API for the keyboard. For instance, for Japanese and Chinese,
input may be processed in two stages, with an API for each.
Who benefits:
- Users with a disability who rely on assistive technologies for input.
Example techniques:
- Account for author-specified keyboard bindings, such as those specified by
the "accesskey" attribute in HTML 4 ([HTML4], section 17.11.2).
- Test that all user
interface components can be operated by software or devices that
emulate a keyboard. Use SerialKeys and/or
voice recognition software to test keyboard event emulation.
Related techniques:
- Apply the techniques for checkpoint 1.1 to the keyboard.
Doing more:
- Enhance the functionality of conventional operating environment controls to
improve accessibility where none is provided by responding to conventional
keyboard input mechanisms. For example, provide keyboard navigation to menus and
dialog box controls in the Apple Macintosh operating system. Another example is
the Java Foundation Classes, where internal frames do not provide a keyboard
mechanism to give them focus. In this case, you will need to add keyboard
activation through the conventional keyboard activation facility for Abstract
Window Toolkit components.
6.7 API character encodings. (P1)
- For an API implemented to satisfy requirements of this document, support
the character
encodings required for that API.
For both content and user agent.
Checkpoint 6.7
Note: Support for character encodings is important so that
text is not "broken" when communicated to assistive technologies. For example,
the DOM Level 2 Core Specification [DOM2CORE], section 1.1.5
requires that the DOMString
type be encoded using UTF-16. This
checkpoint is an important special case of the other API
requirements of this document.
Who benefits:
- Users with disabilities who rely on assistive technologies for input and
output.
Example techniques:
- The list of character encodings that any conforming implementation of Java
version 1.3 [JAVA13] must support is: US-ASCII,
ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, and UTF-16.
- MSAA [MSAA] relies on the
COM interface, which in turn relies on Unicode
[UNICODE], which means that for MSAA a user agent must support
UTF-16. From Chapter 3 of the COM documentation, on interfaces, entitled "Interface Binary Standard":
"Finally, and quite significantly, all strings passed through all COM
interfaces (and, at least on Microsoft platforms, all COM APIs) are Unicode
strings. There simply is no other reasonable way to get interoperable objects
in the face of (i) location transparency, and (ii) a high-efficiency object
architecture that doesn't in all cases intervene system-provided code between
client and server. Further, this burden is in practice not large."
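The UTF-16 point above can be made concrete: a character outside the Basic Multimedia... rather, outside the Basic Multilingual Plane occupies two UTF-16 code units (a surrogate pair), so naive code-unit counts differ from user-perceived character counts, and an API that mishandles this "breaks" text. A short sketch:

```javascript
const s = "A\u{1D11E}"; // "A" followed by MUSICAL SYMBOL G CLEF (U+1D11E)
console.log(s.length);                     // 3 UTF-16 code units
console.log([...s].length);                // 2 code points
console.log(s.charCodeAt(1).toString(16)); // "d834", the high surrogate
```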
6.8 DOM CSS access. (P2)
- For user agents that implement
Cascading Style Sheets (CSS), provide programmatic access to
those style sheets in content by
conforming to the CSS module of the W3C Document Object Model (DOM) Level 2 Style Specification [DOM2STYLE] and exporting the
interfaces it defines.
- For the purposes of satisfying this checkpoint, Cascading Style Sheets
(CSS) are defined by either CSS Level 1 [CSS1] or CSS Level 2 [CSS2].
Checkpoint 6.8
Note: Please refer to the "Document Object Model (DOM)
Level 2 Style Specification"
[DOM2STYLE] for information about CSS versions
covered.
Who benefits:
- Users with a disability who rely on assistive technologies for input and
output.
Related techniques:
- See techniques for checkpoint 6.1.
6.9 Timely access. (P2)
- Ensure that programmatic exchanges proceed in a timely manner.
For both content and user agent.
Checkpoint 6.9
Note: For example, the programmatic exchange of information
required by other checkpoints in this document should be efficient enough to
prevent information loss, a risk when changes to content or user interface
occur more quickly than the communication of those changes. Timely exchange is
also important for the proper synchronization of alternative renderings. The
techniques for this checkpoint explain how developers can reduce communication
delays. This will help ensure that assistive technologies have timely access to
the document
object model and other information that is important for providing
access.
Notes and rationale:
- This document requires that a conforming user agent provide access to
content and user interface information through APIs because assistive
technologies must be able to respond incrementally to changes in the user's
session. Simply providing a "text dump" of content to an assistive technology,
for example, would make it extremely difficult for assistive technologies to
provide timely access (as the assistive technology would have to recalculate
much more information rather than having information about incremental
changes).
Who benefits:
- Users with a disability who rely on assistive technologies for input and
output.
Related techniques:
- Please see the appendix that explains how to load assistive technologies for DOM
access.
Doing more:
- Alert the user when information may be lost due to communication
delays.
Checkpoints
7.1 Focus and selection conventions.
(P1)
- Follow
operating environment conventions that benefit accessibility when
implementing the
selection, content
focus, and user
interface focus.
Checkpoint 7.1
Note: This checkpoint is an important special case of checkpoint 7.3. See also checkpoint 9.1 and checkpoint 9.2.
Who benefits:
- Many users with many types of disabilities.
Related techniques:
- See techniques for checkpoint
7.3.
References:
- Refer to
Selection and Partial Selection of DOM Level 2 ([DOM2RANGE], section
2.2.2).
- For information about focus in the Motif environment (under the X Window System),
refer to the OSF/Motif Style Guide [MOTIF].
7.2 Respect input configuration conventions. (P1)
- Ensure that default input configurations of the user agent do not
interfere with
operating environment accessibility conventions (e.g., for keyboard
accessibility).
For user agent features.
Checkpoint 7.2
Note:
See also checkpoint 11.5.
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- The default configuration should not include
"Alt-F4",
"Control-Alt-Delete", or other combinations
that have reserved meanings in a given operating environment.
- Clearly document any default configurations that depart from operating
environment conventions.
Related techniques:
- Some reserved keyboard bindings are listed in the appendix on accessibility features of some operating
systems.
7.3 Operating environment conventions.
(P2)
- Follow
operating environment conventions that benefit accessibility. In
particular, follow conventions that benefit accessibility for user
interface design, keyboard configuration, product installation, and
documentation.
- For the purposes of this checkpoint, an operating environment convention
that benefits accessibility is either
- one identified as such in operating environment design or accessibility
guidelines, or
- one that allows the author to satisfy any requirement of the "Web Content
Accessibility Guidelines 1.0"
[WCAG10] or of the current document.
For user agent features.
Checkpoint 7.3
Notes and rationale:
- Much of the rationale behind the content requirements of User Agent
Accessibility Guidelines 1.0 also makes sense for the user
agent user interface (e.g., allow the user to turn off any blinking
or moving user interface components).
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- Follow operating environment conventions for loading assistive
technologies. See the appendix on loading
assistive technologies for DOM access for information about how an
assistive technology developer can load its software into a Java Virtual
Machine.
- Inherit
operating environment settings related to accessibility (e.g., for
fonts, colors, natural
language preferences, input configurations, etc.).
- Ensure that any online services (e.g., automated update facilities,
download-and-install functionalities, sniff-and-fill forms, etc.) observe
relevant operating environment conventions concerning device independence and
accessibility (as well as the Web Content Accessibility Guidelines 1.0
[WCAG10]).
- Evaluate the conventional interface controls on the target platform against
any built-in operating environment accessibility functions (see the appendix on accessibility features of some
operating systems). Ensure that the user agent operates properly with all
these functions. Here is a sample of features to consider:
- Microsoft Windows offers an accessibility function called "High Contrast".
Standard window classes and controls automatically support this setting.
However, applications created with custom classes or controls must use the
"GetSysColor" API to ensure compatibility with High Contrast.
- Apple Macintosh offers an accessibility function called "Sticky Keys".
Sticky Keys operate with keys the operating environment recognizes as modifier
keys, and therefore a custom control should not attempt to define a new
modifier key.
- Maintain consistency in the user interface between versions of the
software. Consistency is less important than improved general accessibility and
usability when implementing new features. However, developers should make
changes conservatively to the layout of user interface
controls, the behavior of existing
functionalities, and the default keyboard configuration.
Related techniques:
- See techniques for checkpoint
6.4 and checkpoint
7.2.
References:
- Follow accessibility guidelines for specific platforms:
- "Macintosh Human Interface Guidelines"
[APPLE-HI]
- "IBM Guidelines for Writing Accessible Applications Using 100% Pure Java"
[JAVA-ACCESS].
- "An Inter-client Exchange (ICE) Rendezvous Mechanism for
X Window System Clients" [ICE-RAP].
- "Information for Developers About Microsoft Active Accessibility"
[MSAA].
- "The Inter-Client communication conventions manual"
[ICCCM].
- "Lotus Notes accessibility guidelines"
[NOTES-ACCESS].
- "Java accessibility guidelines and checklist" [JAVA-CHECKLIST].
- "The Java Tutorial. Trail: Creating a GUI with JFC/Swing"
[JAVA-TUT].
- "The Microsoft Windows Guidelines for Accessible Software Design"
[MS-SOFTWARE].
- Follow general guidelines for producing accessible software:
- "Accessibility for applications designers"
[MS-ENABLE].
- "Application Software Design Guidelines"
[TRACE-REF]. Refer also to "EZ ACCESS(tm) for electronic devices V
2.0 implementation guide" [TRACE-EZ] from the Trace
Research and Development Center.
- Articles and papers from Sun Microsystems about accessibility
[SUN-DESIGN].
- "EITAAC Desktop Software standards"
[EITAAC].
- "Requirements for Accessible Software Design"
[ED-DEPT].
- "Software Accessibility" [IBM-ACCESS].
- "Towards Accessible Human-Computer Interaction"
[SUN-HCI].
- "What is Accessible Software" [WHAT-IS].
- Accessibility guidelines for Unix and X Window applications
[XGUIDELINES].
7.4 Input configuration indications.
(P2)
- Follow
operating environment conventions to indicate the input
configuration.
For user agent features.
Checkpoint 7.4
Note: For example, in some operating environments, when a
functionality may be triggered through a menu and through the keyboard, the
developer may design the menu entry so that the character of the activating key
is also shown. This checkpoint is an important special case of checkpoint 7.3. See also checkpoint
11.5.
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- Use
operating environment conventions to indicate the current
configuration (e.g., in menus, indicate which keystrokes will activate the
functionality; underline single keys that work in conjunction with a modifier
key such as Alt, etc.). These conventions are used by the Sun Java
Foundation Classes [JAVA-TUT] and the Microsoft
Foundation Classes for Windows.
- Ensure that information about changes to the input configuration is
available in a device-independent manner (e.g., through visual and audio cues,
and through text).
- If the current configuration changes locally (e.g., a search prompt opens,
changing the keyboard mapping for the duration of the prompt), alert the
user.
- Named configurations are easier to remember. This is especially important
for people with certain types of cognitive disabilities. For example, if the
invocation of a search prompt changes the input configuration, the user may
remember more easily which key strokes are meaningful in search mode if alerted
that there is a "Search Mode". Context-sensitive help (if available) should
reflect the change in mode, and a list of keybindings for the current mode
should be readily available to the user.
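The alerting and mode-listing techniques above can be sketched as follows. This is a sketch only: the status-area element id and the sound file name are hypothetical, and a deployed user agent would use its own UI rather than page script. The text form of the announcement doubles as the device-independent cue (status bar, tooltip, or speech output).

```javascript
// Sketch: device-independent announcement of a local input-configuration
// change (e.g., entering "Search Mode"). Names below are illustrative.
function formatModeAnnouncement(modeName, bindings) {
  // Text form of the cue, usable visually, audibly (via speech), or both.
  return modeName + ". Keys: " +
    bindings.map(b => b.key + " = " + b.action).join(", ");
}

function announceModeChange(modeName, bindings) {
  const text = formatModeAnnouncement(modeName, bindings);
  if (typeof document !== "undefined") {
    // Visual cue: write the text to a status area (hypothetical element id).
    document.getElementById("status-bar").textContent = text;
    // Audio cue: a short non-speech sound alongside the text (hypothetical file).
    new Audio("mode-change.wav").play();
  }
  return text;
}
```

For example, `announceModeChange("Search Mode", [{ key: "Enter", action: "find next" }, { key: "Esc", action: "leave search mode" }])` both names the mode and lists the keybindings that are meaningful while it is active.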
Related techniques:
- See input configuration
techniques.
Checkpoints
8.1 Implement accessibility features.
(P1)
- Implement the accessibility features of specifications (markup languages,
style sheet languages, metadata languages, graphics formats, etc.). For the
purposes of this checkpoint, an accessibility feature is either
- one identified as such, or
- one that allows the author to satisfy any requirement of the "Web Content
Accessibility Guidelines 1.0"
[WCAG10].
For all content.
Checkpoint 8.1
Note: This checkpoint applies to both W3C-developed and
non-W3C specifications.
See the section on conformance and implementing specifications for more information.
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- Make obvious to users features that are known to benefit accessibility.
Make them easy to find in the user interface and in documentation.
- Some specifications include optional features (not required for conformance
to the specification). If an optional feature is likely to cause accessibility
problems, developers should either ensure that the user can turn off the
feature or not implement the feature at all.
- Refer to the list of accessibility features of HTML 4
[HTML4], in addition to those described in the techniques for checkpoint 2.1.
References:
- Refer to the "Accessibility Features of CSS"
[CSS-ACCESS]. Note that CSS 2 includes properties for configuring
synthesized speech styles.
- Refer to the "Accessibility Features of SMIL"
[SMIL-ACCESS].
- Refer to the "Accessibility Features of SVG"
[SVG-ACCESS].
- For information about the Sun Microsystems Java Accessibility API in Java
JDK, refer to [JAVAAPI].
- For information about captioning for the Synchronized Accessible Multimedia
Interchange (SAMI), refer to
[SAMI].
8.2 Conform to specifications. (P2)
- Use and conform to
either
- W3C Recommendations when they are available and appropriate for a task,
or
- non-W3C specifications that enable the creation of content that conforms at
level A or better to the Web Content Accessibility Guidelines 1.0
[WCAG10].
- When a requirement of another specification contradicts a requirement of
the current document, the user agent may disregard the requirement of the other
specification and still satisfy this checkpoint.
- A specification is considered available if it is published (e.g., as a W3C
Recommendation) in time for integration into a user agent's development
cycle.
For all content.
Checkpoint 8.2
Note: For instance, for markup, the user agent may
conform to HTML 4
[HTML4], XHTML 1.0 [XHTML10], or
XML 1.0 [XML]. For style sheets, the user
agent may conform to CSS ([CSS1],
[CSS2]). For mathematics, the user agent may conform to MathML 2.0
[MATHML20]. For synchronized
multimedia, the user agent may conform to SMIL 1.0
[SMIL]. The user agent is not required to satisfy this checkpoint
for all implemented specifications; see the section on
conformance and implementing specifications for more information.
Notes and rationale:
- The right to disregard only applies when the requirement of another
specification contradicts the requirements of the current document; no
exemption is granted if the other specification is consistent with or silent
about a requirement made by the current document.
- Conformance to W3C Recommendations is not a Priority 1 requirement because
user agents can (and should!) provide access for non-W3C specifications as
well.
- The requirement of this checkpoint is to conform to at least one
W3C Recommendation that is available and appropriate for a particular task, or
at least one non-W3C specification that allows the creation of content that
conforms to WCAG 1.0 [WCAG10]. For example, user agents
would satisfy this checkpoint by conforming to the Portable Network Graphics
1.0 specification [PNG] for raster images. In addition,
user agents may implement other image formats such as JPEG, GIF, etc. Each
specification defines what conformance means for that specification.
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- If more than one version or level of a specification is appropriate for a
particular task, user agents are encouraged to conform to the latest version.
However, developers should consider implementing the version that best supports
accessibility, even if this is not the latest version.
- For reasons of backward compatibility, user agents should continue to
implement deprecated features of specifications. Information about deprecated
language features is generally part of the language's specification.
References:
- The list of current W3C Recommendations and
other technical documents is available at
http://www.w3.org/TR/
.
- W3C makes validation services available to promote the proper usage and
implementation of specifications.
- Information about PDF and accessibility is made available by Adobe
[ADOBE].
Checkpoints
9.1 Provide content focus. (P1)
- Provide at least one
content focus for each
viewport (including frames) where enabled
elements are part of the rendered
content.
- Allow the user to make the content focus of each viewport the current
focus.
Checkpoint 9.1
Note: For example, when two frames of a frameset contain
enabled elements, allow the user to make the content
focus of either frame the current focus. Note that viewports "owned"
by plug-ins
that are part of a conformance claim are also covered by this checkpoint.
Who benefits:
- Users who rely on the
content focus for interaction (e.g., for interaction with enabled
elements through the keyboard, or for assistive technologies that consider the
current focus a point of
regard). This includes some users with blindness, low vision, or a
physical disability.
Example techniques:
- None.
9.2 Provide user interface focus. (P1)
- Provide a user
interface focus.
Checkpoint 9.2
Who benefits:
- Users who rely on the user
interface focus for interaction (e.g., for interaction with user
interface controls through the keyboard, or for assistive technologies that
consider the current focus a point of regard). This includes some users with
blindness, low vision, or a physical disability.
Example techniques:
- Some
operating environments provide a means to move the user
interface focus among all open windows using multiple input devices
(e.g., keyboard and mouse). This technique would suffice for switching among
user agent viewports that are separate windows.
9.3 Move content focus. (P1)
- Allow the user to move the
content focus to any enabled element in the
viewport.
- Allow
configuration so that the content focus of a viewport only changes
on explicit
user request. Configuration is not required if the content focus
only ever changes on explicit user request. See also checkpoint
5.1.
- If the author has not specified a navigation order, allow at least forward
sequential navigation to each element, in document order.
- The user agent may also include disabled
elements in the navigation order.
Checkpoint 9.3
Note: In addition to forward sequential navigation, the
user agent should also allow reverse sequential navigation. This checkpoint is
an important special case of
checkpoint 9.9.
Who benefits:
- Users who rely on the focus for interaction (e.g., for interaction with
enabled elements through the keyboard, or for assistive technologies that
consider the focus a point of regard). This includes some users with blindness,
low vision, or a physical disability.
- Allow the user to move the content focus to each enabled element by
repeatedly pressing a single key. Many user agents today allow users to
navigate sequentially by repeating a key combination – for example, using
the Tab key for forward navigation and Shift-Tab for
reverse navigation. Because the Tab key is typically on one side of
the keyboard while arrow keys are located on the other, users should be allowed
to configure the user agent so that sequential navigation is possible with keys
that are physically closer to the arrow keys. See also checkpoint 11.3.
- Maintain a logical element navigation order. For instance, users may use
the keyboard to navigate among elements or element groups using the arrow keys
within a group of elements. One example of a group of elements is a set of
radio buttons. Users should be able to navigate to the group of buttons, then
be able to select each button in the group. Similarly, allow users to navigate
from table to table, but also among the cells within a given table (up, down,
left, right, etc.).
- Respect author-specified information about navigation order (e.g., the
"tabindex" attribute in HTML 4
[HTML4], section 17.11.1). Allow users to override the
author-specified navigation order (e.g., by offering an alphabetized view of
links or other orderings).
- The default sequential navigation order should respect the conventions of
the natural
language of the document. Thus, for most left-to-right languages,
the usual navigation order is top-to-bottom and left-to-right. For
right-to-left languages, the order would be top-to-bottom and
right-to-left.
- Implement the
':hover', ':active', and ':focus' pseudo-classes of CSS 2 ([CSS2], section 5.11.3). This allows
users to modify content focus presentation with user style sheets. Use them in
conjunction with the
CSS 2 ':before' pseudo-element ([CSS2], section 5.12.3) to clearly
indicate that something is a link (e.g., 'A:before { content: "LINK:"
}').
- In Java, a component is part of the sequential navigation order when added
to a panel and its
isFocusTraversable
method returns true. A
component can be removed from the navigation order by extending the component,
overriding this method, and returning false.
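The configurable sequential-navigation bindings described above can be sketched as follows. This is a sketch under assumptions: the default bindings (Tab/Shift-Tab plus "n"/"p" near the arrow keys), the element selector, and the wrap-around behavior are all illustrative choices, not requirements of the checkpoint.

```javascript
// Sketch: user-configurable keys for forward/backward sequential navigation
// of the content focus. The default bindings below are illustrative.
const navConfig = {
  forward:  [{ key: "Tab", shift: false }, { key: "n", shift: false }],
  backward: [{ key: "Tab", shift: true  }, { key: "p", shift: false }]
};

// Map a key event (key name + shift state) to a direction: +1, -1, or null.
function navDirection(config, key, shift) {
  const match = b => b.key === key && b.shift === shift;
  if (config.forward.some(match))  return +1;
  if (config.backward.some(match)) return -1;
  return null;
}

// Browser usage (not run here): move focus among enabled elements, wrapping.
function handleNavKey(event) {
  const dir = navDirection(navConfig, event.key, event.shiftKey);
  if (dir === null) return;
  const enabled = Array.from(document.querySelectorAll("a[href], button, input"))
    .filter(el => !el.disabled);
  const i = enabled.indexOf(document.activeElement);
  const next = enabled[(i + dir + enabled.length) % enabled.length];
  if (next) next.focus();
  event.preventDefault();
}
```

Keeping the binding table as data (rather than hard-coded keys) is what makes the configuration requirement of checkpoint 11.3 straightforward to satisfy.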
Doing more:
- Provide other sequential navigation mechanisms for particular element types
or semantic units, e.g., "Find the next table" or "Find the previous form." For
more information about sequential navigation of form
controls and form submission, see
techniques for checkpoint
5.5.
- For graphical user agents (or any user agent offering a two-dimensional
display), navigation based not on document order but on layout may also benefit
the user. For example, allow the user to navigate up, down, left, and right to
the nearest rendered enabled link. This type of navigation may be particularly
useful when it is clear from the layout where the next navigation step will
take the user (e.g., grid layouts where it is clear what the next link to the
left or below will be).
- Excessive use of sequential navigation can reduce the usability of software
for both disabled and non-disabled users. Some useful types of direct
navigation include: navigation based on position (e.g., all links are numbered
by the user agent), navigation based on element content (e.g., the first letter
of
text content), direct navigation to a
table cell by its row/column position, and searching (e.g., based on form
element text, associated labels, or form element names).
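The layout-based navigation idea above can be sketched as a purely geometric helper. In a browser, the rectangles would come from each enabled element's getBoundingClientRect(); the center-to-center distance metric used here is one possible choice, assumed for illustration.

```javascript
// Sketch: given the bounding rectangles of rendered enabled links, find the
// nearest one in a given direction from the rectangle that has focus.
function nearestInDirection(fromRect, rects, direction) {
  // Center point of a rectangle.
  const cx = r => r.x + r.width / 2;
  const cy = r => r.y + r.height / 2;
  // Keep only rectangles lying in the requested direction.
  const candidates = rects.filter(r => {
    switch (direction) {
      case "left":  return cx(r) < cx(fromRect);
      case "right": return cx(r) > cx(fromRect);
      case "up":    return cy(r) < cy(fromRect);
      case "down":  return cy(r) > cy(fromRect);
    }
  });
  // Pick the candidate whose center is closest to the current center.
  let best = null, bestDist = Infinity;
  for (const r of candidates) {
    const d = Math.hypot(cx(r) - cx(fromRect), cy(r) - cy(fromRect));
    if (d < bestDist) { best = r; bestDist = d; }
  }
  return best;
}
```

In the grid-layout case mentioned above, this metric picks exactly the link the user expects to reach when moving left, right, up, or down.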
9.4 Restore history. (P1)
- For user agents that implement a viewport history mechanism, for each state
in a viewport's browsing history, maintain information about the point of
regard, content
focus, and
selection.
- When the user returns to any state in the viewport history, restore the
saved values for these three state variables.
Checkpoint 9.4
Note: For example, when the user activates the "back" button,
restore the point of regard, content focus, and selection for the previous
state in the viewport's history.
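One way a user agent might keep the three state variables with each history entry can be sketched as follows. The entry shape (a URL plus a state object holding, say, scroll position, focused element, and selection offsets) is an assumption for illustration; a real implementation would store whatever its rendering engine needs to restore the point of regard.

```javascript
// Sketch: a per-viewport history whose entries carry the saved state
// variables, so "back" and "forward" can restore them.
class ViewportHistory {
  constructor() { this.entries = []; this.index = -1; }

  // Record a new state, discarding any "forward" entries (as browsers do).
  push(url, state) {
    this.entries.length = this.index + 1;
    this.entries.push({ url, state });
    this.index++;
  }

  back()    { if (this.index > 0) this.index--; return this.current(); }
  forward() { if (this.index < this.entries.length - 1) this.index++; return this.current(); }
  current() { return this.entries[this.index] || null; }
}
```

On returning to an entry, the browser-side code would then apply the saved state, e.g. window.scrollTo(state.scrollX, state.scrollY) for the point of regard and element.focus() for the content focus.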
Notes and rationale:
- This checkpoint only refers to a per-viewport history mechanism, not a
history mechanism that is common to all viewports (e.g., of visited Web
resources).
Who benefits:
- Users who may have difficulty re-orienting themselves during a browsing
session. This includes some users with a memory or cognitive disability, some
users with a physical disability, and some users who access content serially
and for whom repositioning will be time consuming (e.g., users with blindness
or low vision).
Example techniques:
- If the user agent allows the user to browse multimedia or
audio-only presentations, when the user leaves one presentation for
another, pause the presentation. When the user returns to a previous
presentation, allow the user to resume the presentation where it was paused
(i.e., return the point of
regard to the same place in space and time). Note:
This may be done for a presentation that is available "completely" but not for
a "live" stream or any part of a presentation that continues to run in the
background.
- Allow the user to configure whether leaving a viewport pauses a multimedia
presentation.
- If the user activates a broken link, leave the viewport where it is and
alert the user (e.g., in the status bar and with a graphical
or audio alert). Moving the viewport suggests that a link is not broken, which
may disorient the user.
- In JavaScript, the following may be used to change the Web resource in the
viewport, and navigate the history:
myWindow.home();                           // proprietary (Netscape): load the home page
myWindow.forward();                        // shorthand for myWindow.history.forward()
myWindow.back();                           // shorthand for myWindow.history.back()
myWindow.navigate("http://example.com/");  // proprietary (Internet Explorer)
myWindow.history.back();                   // one entry back in the session history
myWindow.history.forward();                // one entry forward
myWindow.history.go(-2);                   // two entries back
location.href = "http://example.com/";     // load a new resource (adds a history entry)
location.reload();                         // reload the current resource
location.replace("http://example.com/");   // load a new resource, replacing the current history entry
Doing more:
- Restore the three state variables after the user refreshes the same
content.
References:
- Refer to the HTTP/1.1 specification for information about history
mechanisms ([RFC2616], section 13.13).
9.5 No events on focus change.
(P2)
- Allow
configuration so that moving the content
focus to or from an enabled element does not automatically activate
any explicitly associated
event handlers.
Checkpoint 9.5
Note: For instance, in this configuration for an HTML
document, do not activate any handlers for the 'onfocus', 'onblur', or
'onchange' attributes. In this configuration, user agents should still apply
any stylistic changes (e.g., highlighting) that may occur when there is a
change in content focus.
Notes and rationale:
- First-time users of a page may want access to link text before deciding
whether to follow (activate) the link. More experienced users of a page might
prefer to follow the link directly, without the intervening content focus
step.
Who benefits:
- Users with blindness or some users with a physical disability, and anyone
without a pointing device.
Example techniques:
- Allow the following configurations:
- On invocation of the input binding, move focus to the associated enabled
element, but do not activate it.
- On invocation of the input binding, move focus to the associated enabled
element and prompt the user with information that will allow the user to decide
whether to activate the element (e.g., link title or text). Allow the user to
suppress future prompts for this particular input binding.
- On invocation of the input binding, move focus to the associated enabled
element and activate it.
9.6 Show event handlers. (P2)
- For the element with
content focus, make available the list of input device event
handlers explicitly associated with the element.
Checkpoint 9.6
Note: For example, allow the user to query the element with
content focus for the list of input device event handlers, or add them directly
to the serial navigation order described in checkpoint 9.3. See checkpoint 1.2 for
information about activation of event handlers associated with the element with
focus.
Who benefits:
- Users with blindness or some users with a physical disability, and anyone
without a pointing device.
Example techniques:
- For HTML content, the left mouse button is generally the only mouse button
that is used to activate event handlers associated with mouse clicks.
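A user agent could expose the explicitly associated (inline) handlers of the focused element by inspecting its attributes, sketched below. The attribute list is illustrative; handlers registered through the DOM with addEventListener are not visible as attributes, so a real user agent would consult its own internal listener registry for those.

```javascript
// Sketch: report which input device event handler attributes are present on
// an element. The list of handler attribute names is illustrative.
const INPUT_HANDLER_ATTRS = [
  "onclick", "ondblclick", "onmousedown", "onmouseup",
  "onmouseover", "onmousemove", "onmouseout",
  "onkeypress", "onkeydown", "onkeyup"
];

// Given the attribute names present on an element, keep the input handlers.
function handlerAttributes(attrNames) {
  return attrNames.filter(n => INPUT_HANDLER_ATTRS.includes(n.toLowerCase()));
}

// Browser usage (not run here): query the element with content focus.
function listFocusHandlers() {
  const el = document.activeElement;
  return handlerAttributes(Array.from(el.attributes, a => a.name));
}
```

The resulting list is what the checkpoint asks to "make available", whether shown on query or merged into the sequential navigation order of checkpoint 9.3.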
References:
- See checkpoint
1.2 for information about input device event handlers in HTML 4
[HTML4] and the Document Object Model (DOM) Level 2 Events
Specification [DOM2EVENTS].
9.7 Move content focus optimally. (P2)
- Allow the user to move the
content focus to any enabled element in the
viewport.
- If the author has not specified a navigation order, allow at least forward
and reverse sequential navigation to each element, in document order.
- The user agent must not include disabled
elements in the navigation order.
Checkpoint 9.7
Note: This checkpoint is a special case of checkpoint 9.3.
Who benefits:
- Users who rely on the focus for interaction (e.g., for interaction with
enabled elements through the keyboard, or for assistive technologies that
consider the focus a point of regard). This includes some users with blindness,
low vision, or a physical disability.
Related techniques:
- Apply the techniques of
checkpoint 9.3 to enabled elements only.
Doing more:
- Allow configuration so that disabled elements are included in the
navigation order. These elements cannot be activated (as they are disabled),
but their presence may lend continuity to navigation.
9.8 Text search.
(P2)
- Allow the user to search within rendered
text for a sequence of characters from the
document character set.
- Allow the user to start a forward search (in document order) from any
selected or focused location in content.
- When there is a match do both of the following:
- move the viewport so that the matched text content is within it, and
- allow the user to search for the next instance of the text from the
location of the match.
- Alert the user when there is no match, when the search reaches the end of
content, and prior to any wrapping. A wrapping search is one that restarts
automatically at the beginning of content once the end of content has been
reached.
- Provide a case-insensitive search option for text in
scripts (i.e., writing systems) where
case is significant.
For all rendered content.
Checkpoint 9.8
Note: If the user has not indicated a start position for
the search, the search should start from the beginning of content. Per
checkpoint 7.3, use operating environment conventions for indicating the result
of a search (e.g., selection or content focus).
Who benefits:
- Some users who access content serially (e.g., users with blindness or low
vision), some users with a cognitive disability (who may have difficulty
locating information among other information), and some users with a physical
disability (for whom navigation may be a significant effort).
Example techniques:
- Use the selection or focus to indicate found text. This will provide
assistive technologies with access to the text.
- Allow users to search all views (e.g., including views of the text
source).
- For extremely small viewports or extremely long matches, the entire matched
text content may not fit within the viewport. In this case, developers may move
the viewport to encompass the initial part of the matched content.
- The search string input method should follow
operating environment conventions (e.g., for international character
input).
- When the point of regard depends on time (e.g., for audio viewports), the
user needs to be able to search through content that will be available through
that viewport. This is analogous to content rendered graphically that is
reachable by scrolling.
- For frames, allow users to search for content in all frames, without having
to be in a particular frame.
- For multimedia presentations, allow users to search and examine
time-dependent media elements and links in a time-independent manner. For
example, present a static list of time-dependent links.
- Allow users to search the element content of form elements (where
applicable) and any label text.
- When searching a document, the user agent should not search text whose
properties prevent it from being visible (such as text that has
visibility="hidden"), or equivalent text for elements with such properties
(such as "alt" text for an image that has visibility="hidden").
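The search behavior required above (forward search, optional case sensitivity, and an alert before wrapping) can be sketched over a flattened text string. In a browser, that string would be collected from rendered text nodes (e.g., with a TreeWalker), skipping text hidden by properties such as visibility; the return shape here is an assumption for illustration.

```javascript
// Sketch: forward text search with a case-insensitivity option. Returns the
// match position and whether reaching it would require wrapping, or null
// when there is no match anywhere (so the user can be alerted).
function findNext(text, query, startIndex, caseSensitive) {
  const haystack = caseSensitive ? text : text.toLowerCase();
  const needle = caseSensitive ? query : query.toLowerCase();
  let at = haystack.indexOf(needle, startIndex);
  if (at !== -1) return { index: at, wrapped: false };
  // Nothing after the start position: report that a wrap would be needed,
  // rather than wrapping silently; the checkpoint requires an alert first.
  at = haystack.indexOf(needle, 0);
  if (at !== -1 && at < startIndex) return { index: at, wrapped: true };
  return null; // no match: alert the user
}
```

On a successful match, the user agent would then move the viewport to the match and set the selection or content focus there, per the techniques above.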
Doing more:
- If the number of matches is known, provide this information to orient the
user.
- It may be confusing to allow users to search for text content that is
not rendered (and thus that they have not viewed). If this type of search
is possible, alert the user of this particular search mode.
- Allow the following additional search functionalities:
- Allow the user to start a search from the beginning of the document rather
than from the current selection or focus.
- Provide distinct alerts for the situation where the user has searched
through all content or where the user has simply reached the end of the
document and needs to wrap to the beginning.
- Allow reverse search so the user does not have to restart the search from
the beginning of the document if the search goes too far.
- Allow the user to easily start a search from the beginning of the content
currently rendered in the viewport.
- Provide the option of searching through conditional content that is
associated with rendered content, and render the found conditional content
(e.g., by showing its relation to the rendered content).
References:
- For information about when case is significant in a
script, please refer to Section 4.1 of
Unicode [UNICODE].
9.9 Structured navigation. (P2)
- Allow the user to navigate efficiently to and among important structural
elements in rendered
content.
- Allow forward and backward sequential navigation to these important
structural elements.
Checkpoint 9.9
Note: This specification intentionally does not identify
which "important elements" must be navigable as this will vary according to
markup language. What constitutes "efficient navigation" may depend on a number
of factors as well, including the "shape" of content (e.g., serial navigation
of long lists is not efficient) and desired granularity (e.g., among tables,
then among the cells of a given table).
Who benefits:
- Users who access content serially, including users with blindness and some
users with a physical disability.
Notes and rationale:
- User agents should construct the navigation view with the goal of breaking
content into sensible pieces according to the author's design. In most cases,
user agents should not break down content into individual elements for
navigation; element-by-element navigation of the document object does not meet
the goal of facilitating navigation to important pieces of content. (The
navigation view may also be an expanding/contracting outline view; see checkpoint 10.5.) Instead,
user agents are expected to construct the navigation view based on markup.
Example techniques:
- In HTML 4 [HTML4], important elements include: A, ADDRESS, APPLET, BUTTON,
FIELDSET, DD, DIV, DL, DT, FORM, FRAME, H1-H6, IFRAME, IMG, INPUT, LI, LINK (if
rendered), MAP, OBJECT, OL, OPTGROUP, OPTION, P, TABLE, TEXTAREA, and UL. HTML
also allows authors to specify keyboard configurations ("accesskey",
"tabindex"), which can serve as hints about what the author considers
important.
- Allow navigation based on commonly understood document models, even if they
do not adhere strictly to a Document Type Definition (DTD) or schema. For instance, in
HTML, although headings (H1-H6) are not containers, they may be treated as such
for the purpose of navigation. Note that they should be properly nested.
- Use the DOM ([DOM2CORE]) as the basis of
structured navigation (e.g., a postorder traversal). However, for well-known
markup languages such as HTML, structured navigation should take advantage of
the structure of the source tree and what is rendered.
- Follow
operating environment conventions for indicating navigation progress
(e.g., selection
or content
focus).
- Allow the user to limit navigation to the cells of a table (notably left
and right within a row and up and down within a column). Navigation techniques
include keyboard navigation from cell to cell (e.g., using the arrow keys) and
page up/down scrolling. See the section on table navigation.
- Alert the user when navigation has led to the beginning or end of a
structure (e.g., end of a list, end of a form, table row or column end, etc.).
See also checkpoint 1.3.
- For those languages with known (e.g., by specification, schema, metadata,
etc.) conventions for identifying important components, user agents should
construct the navigation tree from those components, allowing users to navigate
up and down the document tree, and forward and backward among siblings. At the
same time, allow users to shrink and expand portions of the document tree. For
instance, if a subtree consists of a long series of links, this will pose
problems for users with serial access to content. At any level in the document
tree (for forward and backward navigation of siblings), limit the number of
siblings to between five and ten. Break longer lists down into structured
pieces so that users can access content efficiently, decide whether they want
to explore it in detail, or skip it and move on.
- Tables and forms illustrate the utility of a recursive navigation
mechanism. The user should be able to navigate to tables, then change "scope"
and navigate within the cells of that table. Nested tables (a table within the
cell of another table) fit nicely within this scheme. However, the headers of a
nested table may provide important context for the cells of the same row(s) or
column(s) containing the nested table. The same ideas apply to forms: users
should be able to navigate to a form, then among the controls within that
form.
- Navigation and orientation go together. The user agent should allow the
user to navigate to a location in content, explore the context, navigate again,
etc. In particular, user agents should allow users to:
- Navigate to a piece of content that the author has identified as important
according to the markup language specification and conventional usage. In HTML,
for example, this includes headings, forms, tables, navigation mechanisms, and
lists.
- Navigate past that piece of content (i.e., avoid the details of that
component).
- Navigate into that piece of content (i.e., chose to view the details of
that component).
- Change the navigation view as they go, expanding and contracting portions
of content that they wish to examine or ignore. This will speed up navigation
and facilitate orientation at the same time.
- Provide context-sensitive navigation. For instance, when the user navigates
to a list or table, provide locally useful navigation mechanisms (e.g., within
a table, cell-by-cell navigation) using similar input commands.
- Allow users to skip author-specified navigation mechanisms such as
navigation bars. For instance, navigation bars at the top of each page at a Web
site may force users with screen readers or some physical disabilities to wade
through many links before reaching the important information on the page. User
agents may facilitate browsing for these users by allowing them to skip
recognized navigation bars (e.g., through a configuration option).
Some techniques for this include:
- Providing a functionality to jump to the first non-link content.
- If the number of elements of a particular type is known, provide this
information to orient the user.
- In HTML, the MAP element may be used to mark up a navigation bar (even when
there is no associated image). Thus, users might ask that MAP elements not be
rendered in order to hide links inside the MAP element. User agents might allow
users to hide MAP elements selectively. For example, hide any MAP element with
a "title" attribute specified. Note: Starting in HTML 4, the MAP element allows
block content, not just AREA elements.
- Allow depth-first as well as breadth-first navigation.
- Allow users to navigate synchronized multimedia presentations. See also checkpoint
4.5.
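Forward and backward sequential navigation among important elements, as discussed above, can be sketched as follows. The set of tag names is illustrative only (checkpoint 9.10 makes it user-configurable), and in a browser the document-order list could be obtained with document.querySelectorAll.

```javascript
// Sketch: sequential navigation among "important" structural elements.
// The element set below is an illustrative subset for HTML.
const IMPORTANT = ["H1", "H2", "H3", "H4", "H5", "H6", "P", "TABLE", "FORM", "UL", "OL"];

// Given the tag names of elements in document order and a current position,
// return the index of the next (dir = +1) or previous (dir = -1) important
// element, or -1 when the start or end of content is reached.
function nextImportant(tagNames, from, dir) {
  for (let i = from + dir; i >= 0 && i < tagNames.length; i += dir) {
    if (IMPORTANT.includes(tagNames[i])) return i;
  }
  return -1; // beginning or end of structure: alert the user (checkpoint 1.3)
}
```

Returning a sentinel at the boundary, rather than wrapping, supports the technique above of alerting the user when navigation reaches the beginning or end of a structure.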
Doing more:
- Allow the user to navigate characters, words, sentences, paragraphs,
screenfuls, etc. according to conventions of the natural
language. This benefits users of synthesized speech-based user
agents and has been implemented by several screen readers, including Winvision
[WINVISION], Window-Eyes
[WINDOWEYES], and JAWS for Windows
[JFW].
References:
- The following is a summary of ideas provided by the National Information
Standards Organization with respect to Digital Talking Books
[TALKINGBOOKS]:
A talking book's "Navigation Control Center" (NCC) resembles a traditional
table of contents, but it is more. It contains links to all headings at all
levels in the book, links to all pages, and links to any items that the reader
has chosen not to have read. For example, the reader may have turned off the
automatic reading of footnotes. To allow the user to retrieve that information
efficiently, the reference to the footnote is placed in the NCC and the reader
can go to the reference, understand the context for the footnote, and then read
the footnote.
Once the reader is at a desired location and wishes to begin reading, the
navigation process changes. Of course, the reader may elect to read
sequentially, but often some navigation is required (e.g., frequently people
navigate forward or backward one word or character at a time). Moving one
sentence or paragraph at a time is also needed. This type of local navigation
is different from the global navigation used to reach the location of what you
want to read. It is frequently desirable to move from one block element to the
next. For example, moving from a paragraph to the next block element, which may
be a list, blockquote, or sidebar, is the normally expected mechanism for local
navigation.
9.10
Configure important elements. (P3)
- Allow
configuration of the set of important elements required by checkpoint 9.9 and checkpoint 10.5.
- Allow the user to include and exclude element types in the set of
elements.
Checkpoint 9.10
Note: For example, allow the user to navigate only
paragraphs, or only headings and paragraphs, or to suppress and restore
navigation bars, to navigate within and among tables and table cells, etc.
Who benefits:
- Users who access content serially, including users with blindness and some
users with a physical disability.
Example techniques:
- Allow the user to navigate HTML elements that share the
same "class" attribute.
- The CSS
'display' and
'visibility' properties ([CSS2], sections 9.2.5 and 11.2,
respectively) allow the user to override the default settings in user style
sheets.
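A minimal sketch of a user-configurable set of important element types, assuming tag names as the unit of configuration (the API shape is hypothetical):

```javascript
// Sketch: let the user include and exclude element types from the set of
// important elements used by checkpoints 9.9 and 10.5.
function makeElementSet(initial) {
  const set = new Set(initial); // initial tag names, uppercase by convention
  return {
    include: tag => set.add(tag.toUpperCase()),
    exclude: tag => set.delete(tag.toUpperCase()),
    has:     tag => set.has(tag.toUpperCase()),
    // A CSS selector for the current set, e.g. to collect or highlight the
    // matching elements in document order.
    selector: () => Array.from(set).join(", ").toLowerCase()
  };
}
```

For example, a user who wants to navigate only headings and paragraphs would exclude everything else; the selector form ties the configuration directly to element collection in the navigation view.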
Doing more:
- Allow the user to navigate according to similar styles (which may be an
approximation for similar element types).
Checkpoints
10.1 Table orientation. (P1)
- Make available to the user the purpose of each rendered table (e.g., as
expressed in a summary or table caption) and the relationships among the table
cells and headers.
Checkpoint 10.1
Note: This checkpoint refers only to table purpose and
cell/header relationship information that the user agent can
recognize. Depending on the table, some techniques may be more efficient
than others for conveying data relationships. For many tables, user agents
rendering in two dimensions may satisfy this checkpoint by rendering a table as
a grid and by ensuring that users can find headers associated with cells.
However, for large tables or small viewports, allowing the user to query cells
for information about related headers may improve access. This checkpoint is an
important special case of
checkpoint 2.1.
Notes and rationale:
- The more complex the table, the more clues to table structure are needed.
Make available information summarizing table structure, including any table
head and foot rows, and possible row grouping into multiple table bodies,
column groups, header cells and how they relate to data cells, the grouping and
spanning of rows and columns that apply to qualify any cell value, cell
position information, table dimensions, etc.
Who benefits:
- Users for whom summaries are important (e.g., some users with a cognitive
or memory disability), and for whom two-dimensional relationships may be
difficult to process (e.g., users with blindness who have serial access to the
content, or some users with a cognitive disability). Renderings that provide
easy access to cell header information will also help some users with low
vision or a physical disability, for whom it may be time-consuming to scroll in
order to locate relevant headers.
Example techniques:
- Refer to the
THEAD, TBODY, and TFOOT elements of HTML 4 ([HTML4], section 11.2.3). These
elements may be "fixed" to the screen (or repeated on paper) with the 'fixed'
value of the
CSS2 'position' property ([CSS2], section 9.3.1). When these
elements are used by authors, users can scroll through data while retaining
headers and footers "in view".
- In HTML, beyond the TR, TH, and TD elements, the table attributes
"summary", "abbr", "headers", "scope", and "axis" provide information about
relationships among cells and headers. For more information, see the section on
table techniques.
- When rendering a table serially, allow the user to specify how cell header
information should be rendered before cell data information. Some possibilities
are illustrated by the
CSS2 'speak-header' property ([CSS2], section 17.7.1).
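The "headers" attribute technique above can be illustrated with a small sketch. The cell records below are a simplified stand-in for a parsed table, not the real DOM table API:

```javascript
// Sketch of resolving header cells for a data cell via the HTML 4
// "headers" attribute (a space-separated list of header cell ids).
// The table model here is illustrative, not the real DOM API.
function headersForCell(cells, cell) {
  const ids = (cell.headers || "").split(/\s+/).filter(Boolean);
  return ids
    .map(id => cells.find(c => c.id === id))
    .filter(Boolean)
    .map(c => c.text);
}

const cells = [
  { id: "year", text: "Year" },
  { id: "sales", text: "Sales" },
  { id: "", text: "1999", headers: "year" },
  { id: "", text: "4200", headers: "year sales" },
];

headersForCell(cells, cells[3]); // ["Year", "Sales"]
```

A user agent rendering serially could announce these header texts before the cell value, or present them on demand when the user queries the focused cell.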
10.2 Highlight selection and content focus.
(P1)
- Provide a mechanism for
highlighting the
selection and
content focus of each viewport.
- The highlight mechanism must not rely on color alone.
- Allow global
configuration of selection and focus highlight styles.
- For graphical viewports, if the highlight mechanism involves colors or text
decorations, offer a range of colors or text decorations to the user
that includes at least:
- the range offered by the conventional utility available in the
operating environment that allows users to choose colors or text
decorations,
- or, if no such utility is available, the range of colors or text
decorations supported by the conventional APIs of the operating environment for
specifying colors or drawing text.
Checkpoint 10.2
Note: Examples of highlight mechanisms include foreground
and background color variations, underlining, distinctive synthesized speech
prosody, rectangular boxes, etc. Because the selection and focus change
frequently, user agents should not highlight them using mechanisms (e.g., font
size variations) that cause content to reflow as this may disorient the user.
See also checkpoint
7.1.
Who benefits:
- Users with color deficiencies or blindness, for whom color will not be
useful. Also, some devices may not render colors (e.g., speech synthesizers,
black and white screens).
Example techniques:
- Inherit selection
and focus
information from user's settings for the
operating environment.
- A highlighted selection or focus may span text with different background
colors, text foreground colors, font families, etc.
- For selection and focus, implement the ':hover', ':active', and ':focus'
pseudo-classes and the dynamic outlines of CSS 2 ([CSS2], sections 5.11.3 and
18.4.1, respectively).
Example.
The following rule will cause links with focus to appear with a blue
background and yellow text.
A:focus { background: blue; color: yellow }
The following rule will cause TEXTAREA elements with focus to appear with a
particular focus outline:
TEXTAREA:focus { outline: thick black solid }
Doing more:
- Test the user agent to ensure that individuals who have low vision and use
screen magnification software are able to follow highlighted item(s).
Related techniques:
- For Windows, see information about
ChooseFont
and
ChooseColor
in techniques for checkpoint 4.1, checkpoint 4.2, and checkpoint 4.3. ChooseFont
is also used to
choose some text decorations in Windows.
10.3 Distinct default highlight
styles. (P1)
- Ensure that all of the default
highlight styles for the
selection and
content focus, as well as for enabled
elements, recently visited links, and fee links
in rendered
content:
- do not rely on color alone, and
- differ from each other, and not by color alone.
- This checkpoint does not apply to those highlight styles inherited from the
operating environment as default values, as long as the user can change the
styles in the operating environment.
Checkpoint 10.3
Note: For instance, by default a graphical user agent may
present the selection using color and a dotted outline, the focus using a solid
outline, enabled elements as underlined in blue, recently visited links as
dotted underlined in purple, and fee links using a special icon or flag to draw
the user's attention.
Who benefits:
- For this checkpoint, and for others in this document, "color" includes
black, white, and greys.
- Users with color deficiencies or blindness, for whom color will not be
useful. Also, some devices may not render colors (e.g., speech synthesizers,
black and white screens).
Example techniques:
- If the user overrides the default styling for any one of these mechanisms,
the new styling may interfere with the others. Therefore, the user agent should
allow the user to configure them all at once or should alert the user to
potential conflicts when changes are made. For instance, if the user configures
both the selection and focus highlighting to use colors, there may be a
conflict (especially if the colors are the same or similar).
- If default highlight styles are inherited from the operating environment,
document how to change them, or explain where to find this information in the
documentation for the operating environment.
10.4 Highlight special elements. (P2)
- Provide a mechanism for
highlighting all enabled elements, recently visited links, and
fee links
in rendered
content.
- Allow the user to configure the highlight styles. The highlight mechanism
must not rely on color alone.
- For graphical viewports, if the highlight mechanism involves text size,
font family, colors, or text decorations, offer the corresponding range
of values required by
checkpoint 4.1,
checkpoint 4.2,
checkpoint 4.3, or checkpoint 10.2.
- For graphically rendered enabled elements, highlight
the most specific rendered element that:
- encompasses the enabled element, and
- is rendered as a coherent unit according to specification.
For example, an HTML user agent rendering a PNG image as part of an image map
is only required to highlight the image as a whole, not each enabled region. On
the other hand, an SVG user agent rendering an SVG image with embedded
graphical links is required to highlight each graphical link that may be
rendered independently according to the SVG specification.
Checkpoint 10.4
Note: Examples of highlight mechanisms include foreground
and background color variations, font variations, underlining, distinctive
synthesized speech prosody, rectangular boxes, etc.
Notes and rationale:
- For example, most graphical user agents highlight all the links on a page
so that users know at a glance where to interact.
Who benefits:
- Users with color deficiencies or blindness, for whom color will not be
useful. Also, some devices may not render colors (e.g., speech synthesizers,
black and white screens). If different text styles are used, some users with
low vision may need to configure them.
Example techniques:
- Do not rely solely on fonts or colors to alert the user whether or not the
link has previously been followed. Allow the user to configure how information
will be presented (colors, sounds, status bar messages, some combination,
etc.).
- Use CSS2 [CSS2] to add style to these
different classes of elements. In particular, consider the
'text-decoration' property ([CSS2], section 16.3.1), aural
cascading style sheets, font properties, and color properties.
- For enabled elements, implement CSS2
attribute selectors to match elements with associated scripts ([CSS2], section 5.8).
- For fee links:
- The W3C specification "Common Markup for micropayment per-fee-links"
[MICROPAYMENT] describes how
authors may mark up micropayment information in an interoperable manner.
- Use conventional, accessible interface controls to present information
about fees and to prompt the user to confirm payment.
- For a link that has
content focus, allow the user to query the link for fee information
(e.g., by activating a menu or key stroke).
Related techniques:
- For links, see the section on link
techniques, the visited links example in the section on generated content techniques, and
techniques for checkpoint
9.3.
Doing more:
- Test the user agent to ensure that individuals who have low vision and use
screen magnification software are able to follow highlighted item(s).
10.5
Outline view. (P2)
- Make available to the user an "outline" view of
content, composed of labels for important
structural elements (e.g., heading text, table titles, form titles, etc.).
- What constitutes a label is defined by each markup language specification.
A label is not required to be text
only.
Checkpoint 10.5
Note: This checkpoint is meant to provide the user with a
simplified view of content (e.g., a table of contents). For example, in HTML, a
heading (H1-H6) is a label for the section that follows it, a CAPTION is a
label for a table, the "title" attribute is a label for its element, etc. For
important
elements that do not have associated labels, user agents may generate labels
for the outline view. For information about what constitutes the set of
important structural elements, please see the Note following checkpoint 9.9. By making the
outline view navigable, it is possible to satisfy this checkpoint and checkpoint 9.9 together: allow
users to navigate among the important elements of the outline view, and to
navigate from a position in the outline view to the corresponding position in a
full view of content. See
also checkpoint 9.10.
Who benefits:
- The outline view is a type of summary view, and will benefit some users
with a memory or cognitive disability, as well as users for whom serial access
is time consuming (e.g., some users with blindness or a physical disability, or
some users with low vision). A navigable outline view will add further benefits
for these users.
Example techniques:
- For instance, in HTML, labels include the following:
- The CAPTION element is a label for a TABLE.
- The "title" attribute is a label for many elements.
- The H1-H6 elements are labels for the sections that follow them.
- The LABEL element is a label for a form control.
- The LEGEND element is a label for a set of form controls.
- The TH element is a label for a row/column of table cells.
- The TITLE element is a label for the document.
- Allow the user to expand or shrink portions of the outline view (configure
detail level) for faster access to important parts of content.
- Hide portions of content by using the CSS
'display' and
'visibility' properties ([CSS2], sections 9.2.5 and 11.2,
respectively).
- Provide a structured view of form controls (e.g., those grouped by LEGEND or
OPTGROUP in HTML) along with their labels.
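An outline view built from recognized labels can be sketched as follows. The flat list of labels with nesting levels (H1 = 1, H2 = 2, etc.) is an illustrative input format, assuming the user agent has already extracted the labels from content:

```javascript
// Sketch of building an "outline" view from heading labels
// (checkpoint 10.5). The input format is illustrative.
function buildOutline(headings) {
  const lines = [];
  for (const h of headings) {
    // Indent each label according to its structural depth.
    lines.push("  ".repeat(h.level - 1) + h.text);
  }
  return lines.join("\n");
}

buildOutline([
  { level: 1, text: "User Agent Guidelines" },
  { level: 2, text: "Checkpoints" },
  { level: 3, text: "Table orientation" },
]);
// "User Agent Guidelines\n  Checkpoints\n    Table orientation"
```

Making each outline line a link back to the corresponding position in the full view would satisfy this checkpoint and checkpoint 9.9 together, as described in the Note above.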
Related techniques:
- See structured navigation techniques for checkpoint 9.9.
Doing more:
- For documents that do not use structure properly, user agents may attempt
to create an outline based on the rendering of elements and heuristics about
what elements may indicate about document structure.
10.6 Provide link information. (P3)
- To help the user decide whether to traverse a link, make available the
following information about it:
- link element content,
- link title,
- whether the link is internal to the resource (e.g., the link is to a target
in the same Web page),
- whether the user has traversed the link recently,
- whether traversing it may involve a fee, and
- information about the type, size, and natural language of linked Web
resources.
- The user agent is not required to compute or make available information
that requires retrieval of linked
Web resources.
Checkpoint 10.6
Who benefits:
- Users for whom following a link may lead to loss of context upon return,
including some users with blindness and low vision, a memory or cognitive
disability, or a physical disability.
Example techniques:
- Some markup languages allow authors to provide hints about the nature of
linked content (e.g., in HTML 4 [HTML4], the "hreflang" and "type"
attributes on the A element). Specifications should indicate when this type of
information is a hint from the author and when these hints may be overridden by
another mechanism (e.g., by HTTP headers in the case of HTML). User agent
developers should make the author's hints available to the user (prior to
retrieving a resource), but should provide definitive information once
available.
- Links may be simple (e.g., HTML links) or more complex, such as those
defined by the XML Linking Language (XLink)
[XLINK].
- The scope of "recently followed link" depends on the user agent. The user
agent may allow the user to configure this parameter, and should allow the user
to reset all links as "not followed recently".
- User agents should cache information determined as the result of retrieving
a Web resource and should make it available to the user. Refer to HTTP/1.1
caching mechanisms described in RFC 2616
[RFC2616], section 13.
- For a link that has
content focus, allow the user to query the link for information
(e.g., by activating a menu or key stroke).
- Do not mark all local links (to anchors in the same page) as visited when
the page has been visited.
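The link information listed in this checkpoint can be gathered into a single record before retrieval. The attribute names below follow HTML 4 ("title", "hreflang", "type"); the record shape and helper function are illustrative assumptions:

```javascript
// Sketch of assembling link information prior to retrieval
// (checkpoint 10.6). The record shape is illustrative.
function describeLink(link, pageUri, visited) {
  return {
    content: link.content,
    title: link.title || "",
    // A link is internal when it targets a fragment of the same page.
    internal: link.href.startsWith("#") || link.href.startsWith(pageUri + "#"),
    visitedRecently: visited.has(link.href),
    language: link.hreflang || "unknown", // author hint only
    type: link.type || "unknown",         // may be overridden by HTTP headers
  };
}

const info = describeLink(
  { content: "Results", href: "#results", hreflang: "en" },
  "http://example.org/page",
  new Set(["#results"])
);
// info.internal === true, info.visitedRecently === true
```

Fields marked as author hints should be replaced with definitive values (e.g., from HTTP headers) once the resource has actually been retrieved.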
Related techniques:
- See the section on link
techniques.
Doing more:
- User agents may provide information about any input bindings associated
with a link. See
checkpoint 11.2.
References:
- User agents may use HTTP HEAD rather than GET for information about size,
language, etc. Refer to RFC 2616 [RFC2616], section 9.3.
- For information about content size in HTTP/1.1, refer to RFC 2616
[RFC2616], section 14.13. User agents are not expected to compute
content size recursively (i.e., by adding the sizes of resources referenced by
URIs within another resource).
- For information about content language in HTTP/1.1, refer to RFC 2616
[RFC2616], section 14.12.
- For information about content type in HTTP/1.1, refer to RFC 2616
[RFC2616], section 14.17.
Checkpoints for the user interface
10.7 Highlight current viewport.
(P1)
- Provide a mechanism for
highlighting the viewport with the current
focus (including any frame that takes current focus).
- For graphical viewports, the default highlight mechanism must not rely on
color alone.
- This default color requirement does not apply if the highlight mechanism is
inherited from the operating environment as the default and the user can change
it in the operating environment.
Checkpoint 10.7
Note: This checkpoint is an important special case of checkpoint 1.1. See also
checkpoint 7.1.
Who benefits:
- Users with color deficiencies or blindness, for whom color will not be
useful. Also, some devices may not render colors (e.g., speech synthesizers,
black and white screens).
Example techniques:
- Provide a setting that causes a window that is the viewport with the
current focus to be maximized automatically. For example, maximize the parent
window of the browser when launched, and maximize each child window
automatically when it receives focus.
Maximizing does not necessarily mean occupying the whole screen or parent
window; it means expanding the viewport so that users have to scroll
horizontally or vertically as little as possible.
- If the viewport with the current focus is a frame or the user does not want
windows to pop to the foreground, use colors, reverse videos, or other
graphical clues to indicate the viewport with the current focus.
- If the default highlight mechanism is inherited from the operating
environment, document how to change it, or explain where to find this
information in the documentation for the operating environment.
- For synthesized speech or braille output, use the frame or window title to
identify the viewport with the current focus.
- Use operating environment conventions for specifying selection and
content focus (e.g., schemes in Windows).
- Implement the
':hover', ':active', and ':focus' pseudo-classes of CSS 2 ([CSS2], section 5.11.3). This allows
users to modify content focus rendering with user style
sheets.
Related techniques:
- See the section on frame
techniques.
10.8 Indicate rendering progress. (P3)
- Indicate the
viewport's position relative to rendered
content (e.g., the proportion of an audio or video clip that has
been played, the proportion of a Web page that has been viewed, etc.).
- The user agent may calculate the relative position according to content
focus position, selection position, or viewport position, depending on how the
user has been browsing.
- For two-dimensional renderings, relative position includes both vertical
and horizontal positions.
- The user agent may indicate the proportion of content viewed in a number of
ways, including as a percentage, as a relative size in bytes, etc.
Checkpoint 10.8
Notes and rationale:
- This checkpoint does not specify how to calculate the proportion in all
cases, and implementations may vary. For instance, suppose a user agent is to
render fifty audio clips one after the other. It may be costly to calculate the
proportion based on the total time required by all fifty clips (as this may
require the user agent to fetch all fifty in advance). Instead, the user agent
may represent the proportion as something like "2:43 remaining in the tenth
audio clip (of fifty)."
Who benefits:
- This type of context information benefits everyone, but is particularly
valuable to some users with serial access to content (e.g., users with
blindness) and to some users with a cognitive disability.
Example techniques:
- The proportion should be indicated using a relative value (where
applicable), otherwise as an absolute offset from some recognized
landmark.
- Provide a scrollbar for the viewport. Some specifications address scrolling
requirements or suggestions, such as for the THEAD and TBODY elements of HTML 4
([HTML4], section 11.2.3) and the 'overflow' property of CSS 2 ([CSS2], section
11.1.1).
- Indicate the size of the document, so that users may decide whether to
download for offline viewing. For example, the playing time of an audio file
could be stated in terms of hours, minutes, and seconds. The size of a
primarily text-based Web page might be stated in both kilobytes and screens,
where a screen of information is calculated based on the current dimensions of
the viewport.
- Indicate the number of screens of information, based on the current
dimensions of the viewport (e.g., "screen 4 of 10").
- Use a variable pitch audio signal to indicate the viewport's different
positions.
- Provide markers for specific percentages through the document.
- Provide markers for positions relative to some position – a user-selected
point, the bottom, the H1, etc.
- Put a marker on the scrollbar, or a highlight at the bottom of the page
while scrolling (so you can see what was the bottom before you started
scrolling).
- For images that render gradually (coarsely to finely), it is not necessary
to show percentages for each rendering pass.
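The "screen 4 of 10" technique above can be sketched as a simple calculation over the viewport's position. The units here are generic "lines" and the function name is illustrative:

```javascript
// Sketch of reporting viewport position as a proportion of rendered
// content (checkpoint 10.8). Units and names are illustrative.
function viewportPosition(firstVisibleLine, viewportLines, totalLines) {
  const lastVisible = Math.min(firstVisibleLine + viewportLines, totalLines);
  // Percentage of content at or above the bottom of the viewport.
  const percent = Math.round((lastVisible / totalLines) * 100);
  // "Screens" of information, based on current viewport dimensions.
  const screens = Math.ceil(totalLines / viewportLines);
  const screen = Math.ceil(lastVisible / viewportLines);
  return { percent, label: "screen " + screen + " of " + screens };
}

viewportPosition(75, 25, 250); // { percent: 40, label: "screen 4 of 10" }
```

Because the number of screens depends on the current viewport dimensions, the figure should be recomputed when the viewport is resized or the text size changes.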
Doing more:
- Allow users to configure what status information they want rendered. Useful
status information includes:
- Document proportions (numbers of lines, pages, width, etc.);
- Number of elements of a particular type (e.g., tables, forms, and
headings);
- Whether the viewport is at the beginning or end of the document;
- Size of document in bytes;
- The number of controls in a form and controls in a form element group
(e.g., FIELDSET in HTML).
Checkpoints
11.1 Current user bindings. (P1)
- Provide information to the user about current user preferences for input
configurations.
- To satisfy this checkpoint, the user agent may make available binding
information in a centralized fashion (e.g., a list of bindings) or a
distributed fashion (e.g., by listing keyboard shortcuts in user interface
menus).
For user agent features.
Checkpoint 11.1
Who benefits:
- Many users benefit from direct access to important user agent
functionalities (e.g., via a single key stroke or short voice command): users
with blindness (for whom the pointing device is not useful), users with poor
physical control (who might mistakenly repeat a key stroke), users who fatigue
easily (for whom the composition of key sequences is a significant effort),
users who cannot remember key combinations, and any user who wants to operate
the user agent efficiently.
Related techniques:
- See the input configuration
techniques.
11.2 Current author bindings.
(P2)
- Provide a centralized view of the current author-specified input
configuration bindings.
- The user agent may satisfy this checkpoint by providing different views for
different input modalities (keyboard, pointing device, voice, etc.).
For all content.
Checkpoint 11.2
Note: For example, for HTML documents, provide a view of keyboard bindings
specified by the author through the "accesskey" attribute. The intent of this
checkpoint is to centralize information about author-specified bindings so that
the user does not have to read the entire content first to find out what
bindings are available.
Who benefits:
- Refer to
checkpoint 11.2: some users with blindness, a physical disability, or a
memory or cognitive disability.
Example techniques:
- If the user agent offers a special view that lists author-specified
bindings, allow the user to navigate easily back and forth between the viewport
with the current focus and the list of bindings.
Related techniques:
- See input configuration
techniques.
Doing more:
- In addition to providing a centralized view of bindings, allow users to
find out about bindings in content. For example, highlight enabled elements
that have associated event handlers (e.g., by indicating bindings near the
element).
11.3 Override bindings. (P2)
- Allow the user to override
any binding that is part of the user agent default input
configuration.
- The user agent is not required to allow the user to override conventional
bindings for the operating environment (e.g., for access to
help).
- The override requirement only applies to bindings for the same input
modality (e.g., the user must be able to override a keyboard binding with
another keyboard binding).
For user agent features.
Checkpoint 11.3
Note: See also checkpoint 11.5, checkpoint 11.7, and checkpoint 12.3.
Who benefits:
- Refer to
checkpoint 11.2: some users with blindness, a physical disability, or a
memory or cognitive disability.
Example techniques:
- Allow the user to override bindings at the level of the operating
environment.
Related techniques:
- See input configuration
techniques.
Doing more:
- Allow users to choose from among pre-packaged configurations, to override
some of the chosen configuration, and to save it as a
profile. Not only will the user save time
configuring the user agent, but this will reduce questions to technical support
personnel.
- Allow users to restore easily the default input configuration.
- Allow users to create macros and bind them to key strokes or other input
methods.
- Test the default keyboard configuration for usability. Ask users with
different disabilities and combinations of disabilities to test
configurations.
11.4 Single key access. (P2)
- Allow the user to override
any binding in the user agent default keyboard configuration with a binding to
either a key plus modifier keys or to a single key. In this checkpoint, "key"
refers to a physical key of the keyboard (rather than, say, a character of the
document character set).
- For each functionality in the set required by checkpoint 11.5, allow the
user to configure
a single-key binding (i.e., one key press performs the task, with zero modifier
keys).
- If the number of physical keys on the keyboard is less than the number of
functionalities required by checkpoint 11.5, allow single-key bindings for as many of
those functionalities as possible.
- The single-key binding requirements may be satisfied with a "single-key
mode" (i.e., a mode where the current bindings are replaced by a set of
single-key bindings).
- The user agent is not required to allow the user to override
conventional bindings for the operating environment (e.g., for access to
help).
- This checkpoint does not require single physical key bindings for character
input, only for the activation of user agent functionalities.
For user agent features.
Checkpoint 11.4
Note: Because single-key access is so important to some
users with physical disabilities, user agents should ensure that (1) most keys
of the physical keyboard may be configured for single-key bindings, and (2)
most functionalities of the user agent may be configured for single-key
bindings. For information about access to user agent functionality through a
keyboard API, see checkpoint
6.6.
Notes and rationale:
- When using a physical keyboard, some users require single-key access,
others require that keys activated in combination be physically close together,
while others require that they be spaced physically far apart.
- In some modes of interaction (e.g., when the user is entering text), the
number of available single keys will be significantly reduced.
- A "single-key mode" allows user agents to "save" keys for other bindings by
default and still satisfy this checkpoint. However, even when a single-key mode
is offered, user agents should include as many required single-key bindings as
possible in the default keyboard configuration. The user should be able to
enter into a single-key mode by pressing a single key.
Who benefits:
- Single-key access is particularly important to some users with a physical
disability, or a memory or cognitive disability (for simplicity's sake).
Example techniques:
- Offer a single-key mode where, once the user has entered into that mode
(e.g., by pressing a single key), most of the keys of the keyboard are
configurable for single-key operation of the user agent. Allow the user to exit
that mode by pressing a single key as well. For example, Opera
[OPERA] includes a mode in which users can access important user
agent functionalities with single strokes from the numeric keypad.
- Consider distance between keys and key alignment (e.g., "9/I/K", which
align almost vertically on many keyboards) in the default configuration. For
instance, if Enter is used to activate links, put other link
navigation commands near it (e.g., page up/down, arrow keys, etc. on many
keyboards). In configurations for users with reduced mobility, pair related
functionalities on the keyboard (e.g., left and right arrows for forward and
back navigation).
- Mouse Keys (available in some
operating environments) allow users to simulate the mouse through
the keyboard. They provide a usable command structure without interfering with
the user interface for users who do not require keyboard-only and single-key
access.
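Binding override (checkpoint 11.3) and a single-key mode (checkpoint 11.4) can be sketched together as a configurable keyboard map. The binding and action names below are illustrative assumptions, not any particular user agent's configuration:

```javascript
// Sketch of a configurable keyboard map with a "single-key mode"
// (checkpoints 11.3 and 11.4). Names are illustrative.
const defaults = { "Ctrl+F": "searchText", "Ctrl+L": "enterUri" };
const singleKeyMode = {
  f: "searchText",
  l: "enterUri",
  q: "exitSingleKeyMode",
};

function makeConfig() {
  const bindings = { ...defaults };
  let single = false;
  return {
    // Let the user override any default binding (checkpoint 11.3).
    override(key, action) { bindings[key] = action; },
    // Entering the mode replaces the current bindings with
    // single-key bindings (checkpoint 11.4).
    toggleSingleKeyMode() { single = !single; },
    resolve(key) { return (single ? singleKeyMode : bindings)[key]; },
  };
}

const cfg = makeConfig();
cfg.override("Ctrl+S", "searchText");
cfg.resolve("Ctrl+S");     // "searchText"
cfg.toggleSingleKeyMode();
cfg.resolve("f");          // "searchText"
```

Note that the single-key mode itself is entered and exited with a single key, as recommended in the Notes and rationale above.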
Doing more:
- Allow users to accomplish tasks through repeated key strokes (e.g.,
sequential navigation) since this means less physical repositioning for all
users. However, repeated key strokes may not be efficient for some tasks. For
instance, do not require the user to position the pointing device by pressing
the "down arrow" key repeatedly.
- So that users do not mistakenly activate certain functionalities, make
certain combinations "more difficult" to invoke (e.g., users are not likely to
press Control-Alt-Delete accidentally).
11.5
Default binding requirements. (P2)
- Ensure that the user agent default input
configuration includes bindings for the following functionalities
required by other checkpoints in this document:
- move focus to next enabled element, and move focus to previous
enabled element;
- activate focused link;
- search for text;
- search again for same text;
- increase size of rendered
text, and decrease size of rendered text;
- increase global volume, and decrease global volume;
- stop, pause, resume, fast advance, and fast reverse selected audio and
animations (including video and animated images).
- If the user agent supports
the following functionalities, the default input configuration must also
include bindings for them:
- next history state (forward), and previous history state (back);
- enter URI for new resource;
- add to favorites (i.e., bookmarked resources);
- view favorites;
- stop loading resource;
- reload resource;
- refresh rendering;
- forward one viewport, and back one viewport;
- next line, and previous line.
For user agent features.
Checkpoint 11.5
Note: This checkpoint does not make any requirements about
the ease of use of default input configurations, though clearly the default
configuration should include single-key bindings and allow easy operation. Ease
of use is ensured by the configuration requirements of checkpoint 11.3.
Who benefits:
- Refer to
checkpoint 11.2: some users with blindness, a physical disability, or a
memory or cognitive disability.
Example techniques:
- Input configurations should allow quick and direct navigation that does not
rely on graphical
output. Do not require the user to navigate through a graphical user interface
as the only way to activate a functionality.
Related techniques:
- See the techniques for checkpoint 7.4.
Doing more:
- Provide different input configuration
profiles (e.g., one keyboard profile with key combinations close
together and another with key combinations far apart).
- Offer a mode that makes the input configuration compatible with other
versions of the software (or with other software).
- Provide convenient bindings for controlling the user interface, such as
showing, hiding, moving, and resizing graphical
viewports.
- Allow the user to configure how much the viewport should move when
scrolling the viewport backward or forward through content (e.g., for a
graphical viewport, "page down" causes the viewport to move half the height of
the viewport, or the full height, or twice the height, etc.).
11.6 User profiles. (P2)
- For the configuration requirements of this document, allow the user to save
user preferences in at least one user
profile.
- Allow the user to choose from among available default profiles, profiles
created by the same user, and no profile (i.e., the user agent default
settings).
For user agent features.
Checkpoint 11.6
Notes and rationale:
- The user agent is only expected to allow the user to choose from profiles
created by the same user, not profiles created by other users.
Who benefits:
- Refer to
checkpoint 11.2: some users with blindness, a physical disability, or a
memory or cognitive disability.
Example techniques:
- Follow applicable operating environment conventions for input
configuration
profiles.
- Allow users to choose a different profile, to switch rapidly between
profiles, and to return to the default input configuration.
- If the user can edit the profile by hand, the user agent documentation
should explain the profile format.
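Saving and restoring preference profiles can be sketched as serialization to a named store. The preference names and JSON format here are illustrative assumptions; a real user agent would follow its operating environment's conventions:

```javascript
// Sketch of saving and restoring user preference profiles
// (checkpoint 11.6). Preference names are illustrative.
function saveProfile(name, prefs, store) {
  store[name] = JSON.stringify(prefs);
}

function loadProfile(name, store, defaults) {
  // Fall back to the user agent defaults when no profile is chosen.
  return name in store ? JSON.parse(store[name]) : { ...defaults };
}

const store = {};
const defaults = { fontSize: 12, speechRate: 180 };

saveProfile("large-print", { fontSize: 24, speechRate: 180 }, store);
loadProfile("large-print", store, defaults).fontSize; // 24
loadProfile("none", store, defaults).fontSize;        // 12
```

A text-based format like this also supports hand-editing; as noted above, the documentation should then explain the format.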
11.7 Configure tool bars. (P3)
- For graphical user interfaces, allow the user to configure
the position of controls on tool bars of the user agent user interface, to add or remove
controls for the user interface from a predefined set, and to restore the
default user interface.
For user agent features.
Checkpoint 11.7
Note: This checkpoint is a special case of checkpoint 11.3.
Who benefits:
- Users for whom serial navigation may be difficult (e.g., some users with
blindness or a physical disability). Some users with a memory or cognitive
disability (who may have difficulty remembering where and how to access user
agent functionalities).
Example techniques:
- Use conventional operating environment controls for allowing
configuration of font sizes, synthesized speech rates, and other style
parameters.
- Allow the user to show and hide controls. This benefits users with
cognitive disabilities and users who navigate user interface controls
sequentially.
- Allow the user to choose icons and/or text.
- Allow the user to change the grouping of icons and the order of menu
entries (e.g., for faster access to frequently used controls).
- Allow multiple icon sizes (e.g., large, small, custom). Ensure that these
values are applied consistently across the user interface.
- Allow the user to change the position of control bars, icons, etc. Do not
rely solely on drag-and-drop for reordering tool bar controls. Allow the user to
configure the user
agent user interface in a device-independent manner (e.g., through a
text-based
profile).
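For example, a device-independent, text-based description of a tool bar configuration might look like the following. The format and control names are illustrative only:

```
# Hypothetical tool bar profile: control order, visibility, and icon size.
[toolbar.navigation]
controls  = back, forward, reload, home
icon-size = large
labels    = icons-and-text

[toolbar.search]
visible = no
```

A user who cannot use drag-and-drop could reorder or hide controls by editing such a file, then reload the profile to apply the changes.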
Checkpoints
12.1 Accessible documentation. (P1)
- Ensure that at least one version of the user agent
documentation conforms to at least Level Double-A of the Web Content
Accessibility Guidelines 1.0
[WCAG10].
For user agent features.
Checkpoint 12.1
Notes and rationale:
- User agents may provide documentation in many formats, but at least one
must conform to at least Level Double-A of the Web Content Accessibility
Guidelines 1.0 [WCAG10].
- Remember to keep documentation accessible as the user agent evolves (e.g.,
when bug fixes or updates are published).
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- Distribute accessible documentation over the Web, on CD-ROM, or by
telephone. Alternative hardcopy formats may also benefit some users.
- For example, for conformance to the Web Content Accessibility Guidelines
1.0 [WCAG10]:
- Provide text
equivalents of all non-text content (e.g., graphics,
audio-only presentations, etc.);
- Provide extended descriptions of screen-shots, flow charts, etc.;
- Provide a text
equivalent for audio user agent tutorials. Tutorials that use
synthesized speech to guide a user through the operation of the user agent
should also be presented graphically at the same time.
- Use clear and consistent navigation and search mechanisms;
- Use the NOFRAMES element when the support/documentation is
presented in a FRAMESET;
- See also checkpoint
12.3.
- Describe the user interface with device-independent terms. For example, use
"select" instead of "click on".
- Provide documentation in small chunks (for rapid downloads) and also as a
single source (for easy download and/or printing). A single source might be a
single HTML file or a compressed archive of several
HTML documents and included images.
- Ensure that run-time help and any Web-based help or support information are
accessible and may be operated with a single, well-documented input command
(e.g., key stroke). Use operating environment conventions for input
configurations related to run-time help.
- Ensure that user agent identification codes are accessible to users so they
may install their software. Codes printed on software packaging may not be
accessible to people with visual disabilities.
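The NOFRAMES technique listed above can be illustrated with a minimal HTML 4.01 frameset for a help system. The file names and link text here are illustrative:

```html
<!-- Frameset for the help system, with a NOFRAMES fallback
     so the documentation remains usable without frame support. -->
<FRAMESET cols="30%, 70%">
  <FRAME src="toc.html" title="Table of contents">
  <FRAME src="topics.html" title="Help topics">
  <NOFRAMES>
    <P>This help system uses frames. For a version without frames,
       start from the <A href="toc.html">table of contents</A>.</P>
  </NOFRAMES>
</FRAMESET>
```

The titled FRAME elements also help users of screen readers orient themselves when frames are supported.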
Doing more:
- Provide accessible documentation for all audiences: end users, developers,
etc. For instance, developers with disabilities may wish to add accessibility
features to the user agent, and so require information on available APIs and other implementation details.
- Provide documentation in alternative formats such as braille (refer to
"Braille Formats: Principles of Print to Braille Transcription 1997" [BRAILLEFORMATS]), large
print, or audio tape. Agencies such as Recording for the Blind and Dyslexic
[RFBD] and the National Braille Press [NBP]
can create alternative formats.
12.2 Document accessibility features.
(P1)
- Document all user agent features that benefit accessibility.
- For the purposes of this checkpoint, a user agent feature that benefits
accessibility is one implemented to satisfy the requirements of this document
(including the requirements of checkpoints 8.1 and 7.3).
- The user agent may satisfy this checkpoint either by
- providing a centralized view of the accessibility features, or
- integrating accessibility features into the rest of the documentation.
For user agent features.
Checkpoint 12.2
Note: The help system should include discussion of user
agent features that benefit accessibility. The user agent should satisfy this
checkpoint by providing both centralized and integrated views of accessibility
features in the documentation.
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- Document any features that affect accessibility and that depart from system
conventions.
- Provide a sensible index to accessibility features. For instance, users
should be able to find "How to turn off blinking text" in the documentation
(and the user interface). The user agent may support this feature by turning
off scripts, but users should not have to guess (or know) that turning off
scripts will turn off blinking text.
- Document configurable features in addition to defaults for those
features.
- Document the features implemented to conform with these guidelines.
- Include references to accessibility features in both the table of contents
and index of the documentation.
- If configuration files are used to satisfy the requirements of this
document, the documentation should explain the configuration file formats.
- In developer documentation, document the APIs that are required by this
document. Please see the requirements of guideline 6.
12.3 Document default bindings.
(P1)
- Document the default user agent input
configuration (e.g., the default keyboard bindings).
For user agent features.
Checkpoint 12.3
Note: If the default input configuration is inconsistent
with conventions of the operating environment, the documentation should alert
the user.
Notes and rationale:
- Documentation of keyboard accessibility is particularly important to users
with visual disabilities and some types of physical disabilities. Without this
documentation, a user with a disability (or multiple disabilities) may not
think that a particular task can be performed. Or the user may try to use a
much less efficient technique to perform a task, such as using a mouse, or
using an assistive technology's mouse emulation through key strokes.
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- If the user agent inherits default values (e.g., for the input
configuration, for highlight styles, etc.) from the operating environment,
document how to modify them in the operating environment, or explain where to
find this information in the documentation for the operating environment.
References:
- As an example of online documentation of keyboard support, refer to the Mozilla
Keyboard Planning FAQ and Cross Reference for the Mozilla browser
[MOZILLA].
12.4 Document changes. (P2)
- Document changes from the previous version of the user agent to
accessibility features, including accessibility features of the user
interface.
- Accessibility features are those defined in checkpoint 12.2.
For user agent features.
Checkpoint 12.4
Notes and rationale:
- In particular, document changes to the user interface.
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- Either describe the changes that affect accessibility in the section of the
documentation dedicated to accessibility features (see checkpoint 12.5) or link
to the changes from the dedicated section.
- Provide a text description of changes (e.g., in a README file).
12.5 Dedicated section on accessibility.
(P2)
- Provide a centralized view of all features of the user agent that benefit
accessibility in a dedicated section of the
documentation.
- The features that benefit accessibility are those defined in checkpoint 12.2.
For user agent features.
Checkpoint 12.5
Note: The user agent satisfies this checkpoint
automatically by providing a centralized view of accessibility features to
satisfy checkpoint
12.2. However, developers are encouraged to integrate descriptions of
accessibility features into the documentation alongside other features, in
addition to providing a centralized view.
Who benefits:
- Many users with many types of disabilities.
Example techniques:
- Integrate information about accessibility features throughout the
documentation. The dedicated section on accessibility should provide access to
the documentation as a whole rather than standing alone as an independent
section. For instance, in a hypertext-based help system, the section on
accessibility may link to pertinent topics elsewhere in the documentation.
- Ensure that the section on accessibility features is easy to find.