FAQs

Gregg Vanderheiden 3/28/2003

Question 1:

 In standards work it is generally not permitted to create a standard whose conformance depends upon something that is not in the standard, unless the cited object is itself an established and/or frozen standard. How would this scheme not defer the conformance determination to the reference transcoding server, which is in fact a moving target?

Answer: The standard specifies that a page is in conformance if it meets the requirements of the RTS as it exists at the time the standard is established, or of subsequent versions of the RTS, as long as those versions conform to the criteria set forth herein for the RTS.

In some ways, the RTS would be like the technology-specific checklists. A checklist cannot be created that does not conform to the guidelines; however, it can, over time, incorporate new techniques that are known to conform to the standards. Checklists also help to define more explicitly what conformance would mean for specific technologies.

The reason this new model is being proposed is that technologies are advancing and being introduced so quickly that it would be unfortunate, unfair and unwise to create a set of standards that would block new technologies from being "conformant" until the standards go through revision. For example, if a set of guidelines insisted that content be accessible without the use of any stylesheets, then a technology such as XML could never be used. (In fact, XML can be used with stylesheets so long as the stylesheet can be applied at the server or does not prevent access to the human-presentable portion of the Web content.)
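As a rough illustration only (not part of the original proposal), the following sketch shows what "applying the stylesheet at the server" could look like in practice, using Python's lxml library and hypothetical file names; the client then receives ordinary HTML regardless of its own stylesheet support:

    # A minimal sketch of server-side stylesheet application, assuming an
    # XSLT stylesheet; the file names are hypothetical.
    from lxml import etree

    def render_server_side(xml_path, xslt_path):
        """Apply the author's stylesheet on the server so the client gets plain HTML."""
        source = etree.parse(xml_path)                   # the raw XML content
        transform = etree.XSLT(etree.parse(xslt_path))   # the presentation rules
        return etree.tostring(transform(source), pretty_print=True)

    # The resulting HTML can be delivered to any user agent, including a
    # baseline accessible one, without client-side stylesheet processing.
    # render_server_side("content.xml", "presentation.xsl")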

Question 2:

This sounds like an interesting model, but it seems to depend upon a lot of things that don't exist. For example, where is this free baseline user agent, where are the public transcoding servers, and is it possible to create this reference transcoding server?

Answer: There are already user agents that are designed to provide accessibility. For example, pwWebspeak was one that was free (without support), and HPR is an example today (though it is not free). IBM and others are working on transcoding servers. Adobe has an early version of one for well-behaved PDF documents. The reference transcoding server would simply gather, in a single location, the subset of the capabilities of the transcoding servers and agents judged to be widely enough available that pages that pass through the RTS successfully would pass through all of the other public servers at least as successfully.
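One way to picture that relationship, purely as an illustrative sketch (the capability names and server list below are hypothetical), is as a capability check: the RTS only exercises transformations that every public server is judged to support, so a page that succeeds on the RTS should succeed at least as well everywhere else.

    # A minimal sketch of the "RTS as conservative subset" idea.
    # Capability names and the server list are hypothetical.
    PUBLIC_SERVER_CAPABILITIES = {
        "server_a": {"html_repair", "alt_text_prompting", "table_linearization"},
        "server_b": {"html_repair", "alt_text_prompting", "table_linearization",
                     "pdf_to_html"},
    }

    # The reference transcoding server only claims capabilities that every
    # public server provides, so success on the RTS predicts success elsewhere.
    RTS_CAPABILITIES = set.intersection(*PUBLIC_SERVER_CAPABILITIES.values())

    def page_conforms(required_transformations):
        """A page conforms if the RTS can perform every transformation it needs."""
        return set(required_transformations) <= RTS_CAPABILITIES

    # Example: a page needing only widely supported transformations conforms.
    # page_conforms({"html_repair", "table_linearization"})  -> True
    # page_conforms({"pdf_to_html"})                         -> False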

The actual existence of all of these components would depend upon an investment of time, effort and funds by government and/or the private sector. Given the importance of access to the Web, the additional flexibility this would provide to industry, and the increased accessibility it would give users under constrained conditions, it should be possible to garner the support to create these components. The cost to make them at least as effective as what we have today (or would have using our current model) is not great, and the potential to go beyond that is substantial. It would also be much easier for other countries, languages and cultures to address the issue in this fashion than to try to get additional user agents and/or content sources to each accommodate the different languages and cultures.

Movement in this direction would require a commitment by governments and/or industry to at least achieve the level of performance that the alternative approach would yield.

Note that the cost of operating the servers would not be that great if their use were limited to just those who have trouble accessing sites due to disabilities. A danger exists that this functionality would be so powerful and useful to individuals using mobile communications, driving vehicles, etc. that use by people with disabilities could be dwarfed by use by others. This generates two problems:

  1. First is the cost of maintaining a server system that could handle this additional load.
  2. Second is that designers often make pages less efficient or easy to use because it serves their purposes with regard to advertising, promotion, image, etc. If large numbers of users began to use servers that changed the look and feel of their pages, they might not be happy. It has already been found that some designers are unhappy even when individuals with disabilities do not view their pages "as they were intended to be viewed," even though it is impossible for those users to do so given their disability (e.g. they are blind).

Question 3:

How does this relate to companies who are creating new technologies?

Answer: This is the only approach that really allows those creating new technologies to easily move them into the marketplace in an "accessible" and "approved" form. Those with an otherwise inaccessible format could create a tool or module, installable on public transcoding servers, that handles their new format. This could be anything from a module that transforms the format into an accessible form (e.g. a PDF-to-HTML converter) to a simple script that allows a JPEG image interface to be interpreted and transformed using the already existing capabilities of the transcoding servers.
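As a sketch of how such a module might plug into a public transcoding server (the registration interface below is a hypothetical illustration, not an existing product's API):

    # A minimal sketch of a transcoder module registry, assuming a hypothetical
    # plug-in interface; nothing here refers to an existing server product.
    from typing import Callable, Dict

    TRANSCODERS: Dict[str, Callable[[bytes], str]] = {}

    def register_transcoder(media_type: str, converter: Callable[[bytes], str]) -> None:
        """A vendor installs a converter for its own format on the public server."""
        TRANSCODERS[media_type] = converter

    def transcode(media_type: str, payload: bytes) -> str:
        """Return an accessible (HTML) rendering if a module exists for the format."""
        if media_type in TRANSCODERS:
            return TRANSCODERS[media_type](payload)
        raise ValueError(f"No accessible transformation registered for {media_type}")

    # Example: a vendor ships a (hypothetical) converter for its new format.
    # register_transcoder("application/x-newformat", my_converter)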

First, this is a much more effective mechanism than we have today. Second, it is self-validating: a module would be put on the RTS (Reference Transcoding Server) once it is known to be effective. Third, because a module would not be placed on the RTS until it was already on the public servers, it could be rapidly deployed, and by the time it was declared "approved" it would already be in place.

Question 4:

How do companies handle confidential documents?

Answer: Companies could either implement a PTS within their own firewall, or special certified confidential transcoding servers could be set up.

Question 5:

How would documents be handled that are encoded or otherwise have copyright protection measures applied to them? Companies may want their content to be accessible but are concerned that making it accessible would make it trivially piratable.

Answer: A companion proposal is to create a service based on the "book share" technologies of Benetech, which would combine with the technologies described here. Individuals who have disabilities would register with the service. Documents that were copy-protected would be fed into special servers, where they would be made accessible without violating the applicable laws. They would then be fingerprinted and sent back to the individual requiring access, along with a built-in statement that the document was not to be used by anyone other than the intended qualified recipient. This would operate only in countries whose laws allow copyrighted materials to be put into accessible form (such as the United States).
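The flow described above might look roughly like the following sketch, in which the registry, the fingerprint scheme and the notice text are purely illustrative assumptions:

    # A minimal sketch of the copy-protected document flow described above.
    # The registry, fingerprint scheme and notice text are illustrative assumptions.
    import hashlib
    from datetime import datetime, timezone

    REGISTERED_USERS = {"user-1234"}  # individuals registered with the service

    def make_accessible_copy(user_id: str, document_id: str, accessible_text: str) -> str:
        """Return an accessible copy, fingerprinted to the qualified recipient."""
        if user_id not in REGISTERED_USERS:
            raise PermissionError("Recipient is not registered with the service")
        fingerprint = hashlib.sha256(
            f"{user_id}:{document_id}:{datetime.now(timezone.utc).isoformat()}".encode()
        ).hexdigest()
        notice = ("This copy was produced for a qualified recipient and is not "
                  "to be used by anyone else.")
        return f"{notice}\n[fingerprint: {fingerprint}]\n\n{accessible_text}"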

Question 6:

How does this relate to or handle Web applications or Web content that looks more like a video game than an HTML page? That is, content that is dynamic in nature and perhaps even real-time. Such content does not lend itself to transcoding and re-presentation in "HTML" or something like that.

Answer: If content of that nature were sent to the reference server, one of three things would have to be true: the company would have had to create a tool capable of producing an accessible version of it; the content would have to be stereotypic enough that someone had created scripts allowing existing tools to handle it; or the reference server would simply pass it back in its original form, in which case the content would have to have met all of the guidelines itself in advance. Content of the latter type (where there is nothing the server can do to process it) would have to have its accessibility and compatibility built directly in. Even with our current guidelines, these types of materials are deferred to user agent or software accessibility guidelines.
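Those three outcomes could be summarized as a simple decision procedure on the reference server; the types, function names and arguments below are hypothetical placeholders for the capabilities discussed, not part of the proposal itself:

    # A minimal sketch of the reference server's three-way decision; all names
    # are hypothetical placeholders for the capabilities discussed above.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class Content:
        media_type: str
        payload: bytes

    def handle_dynamic_content(
        content: Content,
        vendor_tools: Dict[str, Callable[[Content], Content]],
        known_patterns: List[Tuple[Callable[[Content], bool], Callable[[Content], Content]]],
    ) -> Content:
        # 1. The vendor supplied a tool that can produce an accessible version.
        if content.media_type in vendor_tools:
            return vendor_tools[content.media_type](content)
        # 2. The content is stereotypic enough that an existing script can adapt it.
        for matches, script in known_patterns:
            if matches(content):
                return script(content)
        # 3. Nothing the server can do: pass it back unchanged, so the content
        #    must already have met all of the guidelines on its own.
        return content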