[contents]
Copyright © 2014 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document lists and describes accessibility evaluation features that can be provided by web authoring, quality assurance, and accessibility evaluation tools to help ensure conformance of websites to the Web Content Accessibility Guidelines (WCAG) 2.0. The main purpose of this document is to raise awareness of such accessibility evaluation features and to provide guidance for tool developers on the kinds of features they could provide in future implementations of their tools. The document can also be used to help compare the features provided by different types of tools, for example during the procurement of such tools.
The features in scope of this document include capabilities to help manage, carry out, and report the results from accessibility evaluation. For example, some of the described features relate to crawling of websites, interacting with tool users to carry out semi-automated evaluation, and providing evaluation results in machine-readable format. This document does not describe the evaluation of web content features, which is addressed by WCAG 2.0 Success Criteria.
This document encourages the incorporation of accessibility evaluation features in all web authoring and quality assurance tools, and the continued development and creation of different types of web accessibility evaluation tools. The document does not prioritize or require any particular accessibility evaluation features or specific types of evaluation tools.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This 3 February 2014 [Editor Draft of Guidance on the development of web accessibility evaluation tools] proposes a new framework for the document. Previously the document was called Techniques For Accessibility Evaluation And Repair Tools (previous Working Draft of 26 April 2000). Most of the work related to conformance assessment was merged into the Techniques for WCAG 2.0. Remaining aspects of the previous document are included in this document.
This is an early draft to get feedback on the overall approach and to identify areas of the document that need further expansion. The Evaluation and Repair Tools Working Group (ERT WG) invites discussion and feedback on this document by tool and web developers, evaluators, researchers, and others with interest in web accessibility evaluation. The group is particularly looking for feedback on the overall approach of the document as well as on missing accessibility evaluation features.
Please send comments on this [Editor Draft of Guidance on the development of web accessibility evaluation tools] by [date] to public-wai-ert@w3.org (publicly visible mailing list archive).
Publication as [Editor Draft] does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document has been produced by the Evaluation and Repair Tools Working Group (ERT WG), as part of the Web Accessibility Initiative (WAI) Technical Activity.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of this group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
Designing, developing, and managing a website typically involves a variety of tasks and people who use different types of tools. For example, a web developer might use an integrated development environment (IDE) to create templates for a content management system (CMS) of a website while a less technical content author will typically use the editing facility provided by the CMS to create the web pages. Ideally accessibility evaluation is carried out throughout the process and by everyone involved. For example, web developers should ensure that any headings provided in templates are coded appropriately while content authors should ensure that any images added to web pages have appropriate text alternatives.
Evaluation tools can assist accessibility evaluation in many different ways. For example, tools can assist:
This document lists and describes these types of accessibility evaluation features that can be provided by evaluation tools. It does not describe the evaluation of specific web content features, which is addressed by WCAG 2.0 Success Criteria.
In the context of this document, an evaluation tool is any web-based or non-web-based application with functionality to check web content against specific quality criteria. This includes but is not limited to the following (non-mutually-exclusive) types of tools:
The accessibility evaluation features listed and described in this document can be incorporated by evaluation tools to provide support for accessibility evaluation. Section 3 provides example profiles of evaluation tools with accessibility evaluation features.
W3C Web Accessibility Initiative (WAI) provides a list of web accessibility evaluation tools that can be searched according to different criteria such as the features listed in this document.
[Review Note: Feedback on this section is particularly welcome, specifically with suggestions for accessibility evaluation features that are not listed below and with comments to refine listed accessibility evaluation features.]
The accessibility evaluation features listed and described below are not exhaustive. It may not be possible or desirable for a single tool to implement all of the listed features. For example, tools that are specifically designed to assist designers in creating web page layouts would likely not incorporate features for evaluating the code of web applications. Developers can use this list to identify features relevant to their tools and to plan their implementation. Others interested in acquiring and using evaluation tools can use this list to learn about relevant features to look for.
This section applies to evaluation tools that actively traverse web content, either directly or through web browser functionality. It does not typically apply to evaluation tools that receive the web content as input, such as typical extensions for IDEs, CMSs, and browsers.
Evaluation tools can support fetching and processing different content types (indicated by the web server through the corresponding HTTP header). Some evaluation tools only fetch the content while others can actually process different content types. For example, evaluation tools can fetch and process HTML content to identify supplementary resources such as images, scripts, and stylesheets. Similarly, the SMIL format can contain compound resources such as video, audio, text (caption), and sign-language tracks. [Other examples?]
Note: Other sections relate to crawling and testing functionality for different content formats and content languages.
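A tool that fetches content directly typically inspects the Content-Type header before deciding how to process a resource. The following is a minimal sketch, assuming a hypothetical set of processable media types; it is not a normative list:

```python
def parse_content_type(header: str):
    """Split a Content-Type header into media type and parameters."""
    parts = [p.strip() for p in header.split(";") if p.strip()]
    media_type = parts[0].lower()
    params = {}
    for p in parts[1:]:
        if "=" in p:
            key, _, value = p.partition("=")
            params[key.strip().lower()] = value.strip().strip('"')
    return media_type, params

# Hypothetical set of media types this sketch tool can process, e.g. to
# identify supplementary resources such as images, scripts, and stylesheets.
PROCESSABLE = {"text/html", "application/xhtml+xml", "application/smil+xml"}

media_type, params = parse_content_type("text/html; charset=UTF-8")
print(media_type, params)          # text/html {'charset': 'UTF-8'}
print(media_type in PROCESSABLE)   # True
```

A real tool would apply this decision per fetched resource, falling back to fetching without processing for unsupported types.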
Text-based content types can be served using different character encodings (indicated by the web server through the corresponding HTTP header). Evaluation tools can support a variety of character encodings, such as UTF-8, UTF-16, and UTF-32. HTML and XML include attributes declaring the character encoding that can be processed by evaluation tools as well. [Other formats?]
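The precedence between the HTTP-level charset and in-document declarations can be sketched as follows. This is a simplified illustration (real encoding sniffing, e.g. as defined by HTML, has more steps):

```python
import re

def sniff_encoding(http_charset, body_bytes):
    """Determine the character encoding of a text resource.

    Simplified order of precedence: the charset parameter from the HTTP
    Content-Type header, then an in-document declaration (XML declaration
    or HTML meta charset), then a UTF-8 default.
    """
    if http_charset:
        return http_charset.lower()
    # Only the first bytes are needed to find a declaration.
    head = body_bytes[:1024].decode("ascii", errors="ignore")
    m = re.search(r'<\?xml[^>]*encoding=["\']([\w-]+)["\']', head)
    if not m:
        m = re.search(r'<meta[^>]*charset=["\']?([\w-]+)', head, re.IGNORECASE)
    return m.group(1).lower() if m else "utf-8"

print(sniff_encoding(None, b'<meta charset="UTF-8"><title>a</title>'))  # utf-8
```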
Evaluation tools can control the HTTP header exchange to request the desired types of content from the web server when a website supports such content negotiation. For example, evaluation tools can imitate web browsers on different devices to fetch particular variants of the content, such as a mobile website or a language version of a website.
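Imitating a particular client boils down to sending the right request headers. A sketch, with entirely hypothetical profile names and user-agent strings:

```python
# Hypothetical header profiles; a real tool would maintain a catalogue of
# user-agent strings and Accept values for the clients it imitates.
PROFILES = {
    "desktop": {
        "User-Agent": "ExampleEvaluator/1.0 (desktop)",
        "Accept": "text/html,application/xhtml+xml",
    },
    "mobile": {
        "User-Agent": "ExampleEvaluator/1.0 (mobile)",
        "Accept": "text/html,application/xhtml+xml",
    },
}

def negotiation_headers(profile: str, language: str) -> dict:
    """Build request headers to fetch a particular content variant,
    e.g. the mobile or German-language version of a page."""
    headers = dict(PROFILES[profile])
    headers["Accept-Language"] = language
    return headers

print(negotiation_headers("mobile", "de"))
```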
Similarly to content negotiation, evaluation tools can control HTTP headers to exchange cookies with the web server and imitate particular situations. Evaluation tools can allow tool users to configure this behavior, in particular in combination with test automation functionality. For example, tool users can set the evaluation tool to accept or reject cookies on particular web pages, or to send cookies with specific parameters on other pages, to prompt web applications to generate particular content.
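Per-page cookie rules as described above could be modelled as a small lookup, for example (the rule table and cookie values are hypothetical):

```python
from http.cookies import SimpleCookie
from urllib.parse import urlparse

# Hypothetical per-page rules set by the tool user: reject cookies, or send
# a cookie with specific parameters to evoke particular generated content.
RULES = {
    "/shop/cart": "reject",
    "/shop/checkout": "session=abc123; variant=no-js",
}

def cookie_header(url):
    """Return the Cookie header to send for a URL, or None to omit it."""
    rule = RULES.get(urlparse(url).path, "reject")
    if rule == "reject":
        return None
    cookie = SimpleCookie()
    cookie.load(rule)
    return "; ".join(f"{k}={m.value}" for k, m in cookie.items())

print(cookie_header("https://example.org/shop/checkout"))
```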
Evaluation tools can manage session tracking, in particular to imitate a real web user traversing through a website and to generate particular content. In combination with test automation, this feature can also be used to check complete processes (as defined by WCAG 2.0) provided on a website.
HTTP authentication is handled directly by the web server using corresponding HTTP header exchanges with the client (usually a web browser). Evaluation tools can provide authentication credentials (username and password) to access restricted content. Other forms of authentication require input into forms and web applications, and are discussed in the next section on triggering events.
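For HTTP Basic authentication (RFC 7617), providing credentials amounts to constructing one request header from the username and password; a minimal sketch:

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Construct the Authorization header for HTTP Basic authentication,
    as a tool would send it to access restricted content."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8"))
    return {"Authorization": "Basic " + token.decode("ascii")}

# Hypothetical credentials for illustration only.
print(basic_auth_header("evaluator", "secret"))
```

Other HTTP authentication schemes (e.g., Digest) involve a challenge-response exchange rather than a single precomputed header.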
Websites, in particular interactive web applications, often generate different content depending on actions carried out by web users. These include actions such as clicking with a mouse, swiping an area on a touch-screen, using voice commands, or typing text into form controls. Evaluation tools can trigger events to imitate user actions on a website, for example to fill in and submit forms such as a log-in form. This can be used to generate particular content, especially in combination with test automation functionality.
Evaluation tools can fetch individual web pages or crawl (spider) through entire areas of websites. In some cases evaluation tools are specifically designed to crawl through multiple websites, for example for large-scale evaluation studies. Crawling usually requires other features described earlier in this section; for example, a crawler may need to complete authentication before it can access restricted areas of a website.
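The core of a crawler is link extraction plus a visited-set traversal. The sketch below replaces HTTP fetching with a tiny in-memory site so the logic stands alone; a real crawler would plug in the fetching, authentication, and negotiation features described above:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor elements."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Stand-in for HTTP fetching: a hypothetical in-memory site.
SITE = {
    "http://example.org/": '<a href="/a">A</a> <a href="/b">B</a>',
    "http://example.org/a": '<a href="/">home</a>',
    "http://example.org/b": '<a href="/a">A</a>',
}

def crawl(start: str, limit: int = 100) -> list:
    """Breadth-first crawl, returning pages in the order they were visited."""
    visited, queue = [], deque([start])
    while queue and len(visited) < limit:
        url = queue.popleft()
        if url in visited or url not in SITE:
            continue
        visited.append(url)
        parser = LinkExtractor()
        parser.feed(SITE[url])
        # Resolve relative links against the current page.
        queue.extend(urljoin(url, href) for href in parser.links)
    return visited

print(crawl("http://example.org/"))
```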
[Editor note: make reference back to content-types.]
[Editor note: make reference back to character encoding.]
[Editor note: a.k.a. DOM document fragments.]
[Editor note: synchronize with definition from EARL (reused in ATAG).]
[Editor note: synchronize with definition from EARL (reused in ATAG).]
[Editor note: synchronize with definition from EARL (reused in ATAG).]
[Editor note: web driver and APIs to automate test execution.]
[Editor note: make reference back to content traversing.]
[Editor note: adjusting testing parameters.]
[Editor note: creating custom tests.]
[Editor note: suspicious pages for manual evaluation; pages that changed and need re-testing; "representative samples" for expert evaluation (WCAG-EM).]
This category includes characteristics that help to identify and evaluate different types of content.
Although the vast majority of web documents are HTML documents, there are many other types of resources that need to be considered when analyzing web accessibility. For example, resources such as CSS stylesheets or JavaScript scripts can modify markup documents in the user agent when they are loaded or via user interaction. Many accessibility tests depend on the interpretation of such resources, which are therefore important for an accessibility evaluation.
In general, the following types of content formats can be distinguished:
Several accessibility evaluation tools concentrate on markup evaluation, but the most advanced ones are able to process many of the content types described above.
The web is a multilingual and multicultural space in which information can be presented in different languages. Furthermore, this content can be transmitted using different character encodings and sets. Some accessibility evaluation tools can process such variations and present their results adequately. More information about this topic can be found in the W3C Internationalization Activity [W3Ci18n].
Many websites are generated dynamically by combining code templates with HTML snippets that are created by website authors. Some evaluation tools can be integrated into Content Management Systems (CMS) and Integrated Development Environments (IDE) to check these snippets as website authors create them. Usually this is done by creating DOM [DOM] document fragments from these snippets. Some tools are able to filter the accessibility tests according to their relevance to the document fragment.
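A check that is meaningful on an isolated snippet, such as verifying text alternatives on images, can run directly on the fragment. A minimal sketch using Python's standard HTML parser (the check shown is a simplified illustration of WCAG 2.0 Success Criterion 1.1.1, not a complete test):

```python
from html.parser import HTMLParser

class AltTextCheck(HTMLParser):
    """Flag img elements in an HTML snippet that lack an alt attribute.
    Note that an empty alt="" is accepted, as it is valid markup for
    decorative images; judging adequacy needs human review."""
    def __init__(self):
        super().__init__()
        self.problems = []
    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.problems.append(attr_map.get("src", "<unknown>"))

def check_fragment(snippet: str) -> list:
    """Run the check on a document fragment, e.g. a CMS snippet."""
    checker = AltTextCheck()
    checker.feed(snippet)
    return checker.problems

print(check_fragment('<p><img src="logo.png"><img src="x.png" alt="X"></p>'))
# ['logo.png']
```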
Web and cloud applications are becoming increasingly common on the web. These applications present interaction patterns similar to those of desktop applications and contain dynamic content and interface updates. Tools that evaluate such applications should emulate and record different user actions (e.g., activating interface components or filling in and submitting forms) that modify the state of the current page or load new resources. The tool user needs to define these intermediate steps, which can later be interpreted by the tool (see section on web testing APIs).
A cookie is a name-value pair that is stored in the browser of the user [HTTPCOOKIES]. Cookies contain information relevant to the website being rendered and often include authentication and session information. This information is relevant to other use cases, like the crawling functionality described later.
Many sites require some kind of authentication (e.g., HTTP authentication, OpenID, etc.). Some accessibility testing tools support common authentication scenarios. It is important to do so because many sites present customized content to authenticated users.
For security reasons, some sites include the session ID in the URL or in a cookie, for example. With support for session information, websites may implement security mechanisms such as logging out a user after a long period of inactivity, or may track the interaction paths of users.
Identifying a resource on the web by its Uniform Resource Identifier (URI) alone may not be sufficient, as other factors such as HTTP content negotiation can come into play. To support content negotiation, the testing tool customizes and sends different HTTP headers according to different criteria, combined with some of the features presented earlier, and interprets the response of the server.
This issue is significant for accessibility, as some sites to be tested may present different content in different languages, encodings, etc., as described in previous sections.
There are tools that incorporate a web crawler [WEBCRAWLER] able to extract hyperlinks from web resources. It must be kept in mind that many types of resources on the web contain hyperlinks; the misconception that only HTML documents contain links may lead to wrong assumptions in the evaluation process.
A web crawler is configured with a starting point and a set of options. The most common features (configuration capabilities) of a web crawler are:
Some of these characteristics were presented earlier or are described later in the document.
This category includes features targeted to the selection of the tests to be performed.
Depending on the workflow that the customer uses for development, it is sometimes desirable to perform only a reduced set of tests. Some tools offer different possibilities to customize the tests performed and to adjust the reporting output and, where applicable, the user interface of the tool accordingly. A typical example could be performing tests for the different conformance levels (A, AA, or AAA) of the Web Content Accessibility Guidelines 2.0, or selecting individual tests for a single technique or common failure.
According to the Evaluation and Report Language (EARL) specification [EARL10], there are three modes in which accessibility tests can be performed: automatic, semiautomatic, and manual.
Most tools concentrate on testing accessibility requirements that can be checked automatically, although some also support accessibility experts in performing the other two types of tests. This support is normally provided by highlighting areas in the source code or in the rendered document that could be causing accessibility problems or where human intervention is needed (for instance, to judge the adequacy of a given text alternative for an image).
Some tools do not declare that they only perform automatic testing. Since automatic tests cover only a small subset of accessibility issues, full accessibility conformance can only be ensured by supporting developers and accessibility experts in testing in manual and semiautomatic modes as well.
Developers and quality assurance engineers sometimes need to implement their own tests. For that purpose, some tools define an API so that developers can create their own tests, responding to internal demands within their organisation.
When evaluating the accessibility of web sites and applications it is sometimes desirable to create scripts that emulate user interaction. With the growing complexity of web applications, there has been an effort to standardize such interfaces, for instance the WebDriver API [WebDriver]. With such interfaces, it is possible to write tests that automate the behaviour of the application and its users.
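The idea of a scripted interaction can be illustrated without a real browser: a sequence of user actions is declared once and replayed by the tool. The sketch below replays the steps against a simulated page state; with the WebDriver API the same steps would become element-location and send-keys/click calls against a live browser (all names and credentials here are hypothetical):

```python
# A scripted log-in sequence, expressed as (action, target, value) steps.
LOGIN_SCRIPT = [
    ("type", "username", "evaluator"),
    ("type", "password", "secret"),
    ("click", "submit", None),
]

def replay(script, page):
    """Apply scripted user actions to a simulated page state (a dict).
    The simulated 'server' accepts one hard-coded credential pair."""
    for action, target, value in script:
        if action == "type":
            page[target] = value
        elif action == "click" and target == "submit":
            page["logged_in"] = (
                page.get("username") == "evaluator"
                and page.get("password") == "secret"
            )
    return page

print(replay(LOGIN_SCRIPT, {}))
```

Declaring the steps as data, rather than hard-coding them, lets the tool user define the intermediate steps that the tool later interprets, as described above.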
This category includes characteristics related to the ability of the tool to present testing results in different ways, including filtering, manipulating, and graphically displaying these results.
Support for standard reporting languages like EARL [EARL10] is a requirement for many customers. There are cases where tool users want to exchange results, compare evaluation results across tools, import results from other tools (for instance, when tool A does not test a given problem but tool B does), filter results, etc. Due to its semantic nature, EARL is an adequate framework for exchanging and comparing results.
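A single evaluation result serialized as an EARL assertion could look like the sketch below, which emits Turtle using the EARL 1.0 vocabulary (the subject and test URIs are hypothetical):

```python
def earl_assertion(subject: str, test: str, outcome: str, mode: str) -> str:
    """Serialize one evaluation result as an EARL assertion in Turtle.
    `outcome` is the local name of an earl:OutcomeValue (passed, failed,
    cantTell, inapplicable, untested); `mode` is e.g. automatic, semiAuto,
    or manual."""
    return "\n".join([
        "@prefix earl: <http://www.w3.org/ns/earl#> .",
        "",
        "[] a earl:Assertion ;",
        f"  earl:subject <{subject}> ;",
        f"  earl:test <{test}> ;",
        f"  earl:mode earl:{mode} ;",
        "  earl:result [",
        "    a earl:TestResult ;",
        f"    earl:outcome earl:{outcome}",
        "  ] .",
    ])

print(earl_assertion(
    "http://example.org/page.html",       # hypothetical test subject
    "http://example.org/tests#img-alt",   # hypothetical test URI
    "failed", "automatic"))
```

Because EARL is RDF, reports produced this way can be merged and queried across tools with standard RDF tooling.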
The results of the evaluation can be used in different circumstances. With that aim, results could be filtered depending on (see previous sections):
Evaluation results can be presented in different ways. This presentation is also influenced by the underlying hierarchy of the accessibility techniques, with their guidelines and success criteria. Aggregation is also related to the structure of the page; for instance, accessibility errors can be listed for a whole web resource or presented for concrete components such as images, videos, tables, and forms.
Many customers demand conformance statements to quickly assess the status of their website. When issuing such conformance statements it is therefore necessary to consider the different types of techniques (i.e., common failures, sufficient techniques, etc.) and their implications.
This section includes characteristics targeted at the customization of different aspects of the tool depending on its audience, such as the reporting and user interface language, user interface functionality, etc.
Localization and internationalization are important to address worldwide markets. There may be cases where tool users do not speak English, and it is necessary to adapt the user interface (e.g., icons, text directionality, UI layout, units, etc.) and the reports to other languages and cultures. As pointed out earlier, more information about this topic can be found in the W3C Internationalization Activity [W3Ci18n] and in [I18N].
From the accessibility standpoint, it is recommended to use the authorized translations of the Web Content Accessibility Guidelines. It must also be considered that some accessibility tests need to be customized for other languages, for instance those related to readability.
Typically, evaluation tools are targeted to web accessibility experts with a deep knowledge of the topic. However, there are also tools that allow the customization of the evaluation results or even the user interface functionality to other audiences like, for instance:
The availability of such characteristics must be declared explicitly and presented in an adequate way to these target user groups.
Although there is an international effort to harmonise legislation regarding web accessibility, there are still minor differences between accessibility policies in different countries. A tool should specify in its documentation which policy environments it supports. Most tools focus on the implementation of the Web Content Accessibility Guidelines 2.0 [WCAG20], because it is the most common reference for such policies worldwide.
Accessibility evaluation teams and web developers may include people with disabilities. It is therefore important that the tool itself can be used with different assistive technologies and is integrated with the accessibility APIs of the underlying operating system.
The following sections describe aspects related to the integration of the tool into the standard development workflow of the customer.
The majority of web developers have little or no knowledge of web accessibility. Some tools provide, together with their reporting capabilities, additional information to help developers and accessibility experts correct the accessibility problems detected. Such information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. Automatic repair of accessibility problems is discouraged, as it may cause undesirable side effects.
Such support may include a step-by-step wizard that guides the evaluator in correcting the problems found.
Accessibility evaluation tools present different interfaces. What is important is how these tools integrate into the workflow of the web developer. The most typical integration points are the following:
Managers and quality assurance engineers of large websites and portals need to be able to monitor the level of compliance and the progress made on improving different sections of a portal. This requires the persistence of results and the ability to compare them over time. Some tools offer a dashboard functionality, configurable depending on the needs of their users.
As mentioned earlier, there is a wide landscape of accessibility evaluation tools available on the web. The following sections describe some examples of such tools. These examples do not represent any existing tool; they are provided here as an illustration of how to present a profile and its features.
Tool A is a simple browser plug-in that the user can download to perform a quick automatic accessibility evaluation of a rendered HTML page. The tool tests only those Web Content Accessibility Guidelines 2.0 techniques that can be analysed automatically. Its configuration options are limited to selecting one of the three conformance levels of WCAG.
After the test is run, the tool presents an alert at the side of the components where an error is found. When selecting the alert, the author is informed about the problem and hints are given on ways to solve the error. Since the tool works directly on the browser, it is not integrated in the workflow of some authors who use IDEs in their development.
Table 1 presents an overview of the matching features as described in section 2.
Tool B is a large-scale accessibility evaluation tool. It allows its users to crawl and analyze complete websites, and to customise which parts of a website are analysed by including or excluding different areas of the site in the crawl. Results are persisted in a relational database, and a dashboard allows results from different dates to be compared.
The tool supports authentication, sessions, cookies, and content negotiation by customising the HTTP headers used in the crawling process. The tool autonomously performs the automatic WCAG tests.
The tool offers a customized view where experts can select a subset of the crawled pages, complete the semiautomatic and manual tests by inspecting the selected pages, and store the results in the database.
The reports of the tool can be exported as an EARL report (serialized as RDF/XML), as a spreadsheet, or as a PDF document.
The tool incorporates the corresponding interfaces to the accessibility APIs of its operating system.
Table 1 presents an overview of the matching features as described in section 2.
Tool C is an accessibility evaluation tool for web-based mobile applications. The tool does not support native applications, but it provides a simulation environment that gives the application access to the Device API.
The tool can emulate different user agents running on different mobile operating systems. It also simulates typical display sizes corresponding to mainstream smartphones and tablets. It supports HTML, CSS, and JavaScript, and provides testers with an implementation of the WebDriver API, supporting automatic and manual evaluation.
This section presents a tabular overview of the characteristics of the tools described previously.
Category | Feature | Tool A | Tool B | Tool C |
---|---|---|---|---
Test subjects and their environment | Content-types | HTML (CSS and JavaScript interpretation is provided because the plug-in has access to the rendered DOM within the browser) | HTML and CSS only | HTML, CSS, and JavaScript |
 | Content encoding and language | yes | yes | yes |
 | Document fragments | no | no | no |
 | Dynamic content | yes | no | yes |
 | Cookies | yes | yes | yes |
 | Authentication | yes | yes | yes |
 | Session tracking | no | yes | yes |
 | Content negotiation | no | yes | yes |
 | Crawling | no | yes | no |
Test customization | Customization of the performed tests | no | yes | no |
 | Semiautomatic and manual testing | no | yes | yes |
 | Development of own tests and test extensions | no | no | no |
 | Web testing APIs | no | no | yes |
Reporting | Standard reporting languages | no | yes | no |
 | Report customization and filtering according to different criteria | yes | yes | no |
 | Conformance and results aggregation | no | yes | yes |
Tool audience | Localization and internationalization | no | no | yes |
 | Functionality customization to different audiences | no | yes | no |
 | Policy environments | no | no | no |
 | Tool accessibility | no | yes | no |
Monitoring and workflow integration | Error repair | yes | no | yes |
 | Integration in the web development workflow | no | yes | no |
 | Persistence of results and monitoring over time | no | yes | yes |
The following are references cited in the document.
The editors would like to thank the contributions from the Evaluation and Repair Tools Working Group (ERT WG), and especially from Shadi Abou-Zahra, Yod Samuel Martín, Christophe Strobbe, Emmanuelle Gutiérrez y Restrepo and Konstantinos Votis.