Minutes, WCAG Face to Face meeting, 25 - 27 October 2006
DISCLAIMER: Only those things noted as "consensus" were decisions of the Face to Face
group. All other comments on the page were just quickly typed notes to try
to capture the discussion. They are incomplete and are sometimes inaccurate
due to speed of our volunteers vs speed of discussion. If there are any
questions on what is there - please address them to the group for
clarification or correction since they may not be accurate.
Also please note that none of the results of the Face to Face are official
decisions of the working group until we discuss and adopt them at our
Thursday meeting this week.
Present: Alex_Li, Andi_Snow-Weaver, Gregg_Vanderheiden, Katie_Haritos-Shea, Sorcha_Moore, Makoto_Ueki, David_MacDonald, Tim_Boland, Ben_Caldwell, Michael_Cooper, Sofia_Celic, Bengt_Farre, Loretta_Guarino_Reid, Becky_Gibson, Cynthia_Shelly
Agenda Review
scribe: Tim
by end of week, we should be able to determine whether we'll do a second last call
font scaling
points of discussion (not consensus):
- level 3 (reflow) versus level 2 (resize); "reflow" is a better term than "wrap"
- most text can be resized; difficult to put a maximum value on resizing
- resizing applies to text only (what is the definition of "text")?
- Possible SCs: Level 2 - text can be resized; Level 3 - when text is resized, it can be reflowed?
- Is text different from other content in terms of zooming?
- Is this a UA problem or an author problem?
- Level 2 SC - author should not interfere with UA ability to resize/reflow?
- Need an SC at some level to deal with fonts? Level 3 SC dealing with scaling and flowing?
- Require some consideration of reflow at Level 2?
Consensus
- if we can find proper wording, there should be at least one SC dealing with font scaling
- If we can find proper wording, a SC at least at Level 3 that deals with scaling and reflow (font and other?)
- To draft a SC that talks about not interfering with the font-related accessibility features of UAs
ACTION: A group (David, Sorcha, Ben, Katie) will investigate drafting appropriate wording pertaining to the three consensus items for font scaling
Set of Principles
Do we agree on the four principles (constraints) mentioned in agenda? Are we missing any? May need to refer to these in responses.
testability
Points of Discussion: nobody wants to step back from this - there was some concern expressed because of cognitive issues. Level 3 - testable but not as testable? Then Level 3 would become recommendations. Any levels we have conformance for must be testable. If we decide that Level 3 is optional, then we could consider whether testability of Level 3 is a constraint we want to impose. Nobody wants to speak against making Level 3 strictly testable at this time. Will discuss further.
QAWG definition of testability:
A proposition is testable if there is a procedure that assesses the truth-value of that proposition with a high confidence level.
Consensus: If it's a SC to which you must conform, it must be testable.
general applicability of Level 1 and Level 2
Consensus: Everything at L1 or L2 can be applied to all web content
implementability of most webmasters with training
Points of Discussion: real-time captioning takes training - also translating into sign language is difficult without training - doing these things requires that people have skills. Thus these issues would be addressed at level 3? Need definition of "with training"? Need to address why everything isn't at level 1? Gregg and Michael did a level calculation on our SCs against actual levels, and some SCs didn't agree. Replace "webmaster" with "content provider" or "content creator"? Time of training an issue? Just talking about one SC here. Other possible examples might be in cognitive areas (everything marked up with its meaning). Are we just talking about IT training or other types of training? Are we going to require that nobody can put content up unless experts are available to them? Emphasize that this is a consensus process, but need to give substantive answers to challenges, and explain our decisions to get buy-in. Maybe list evaluation criteria we use in responding? Maybe give more clarity to the meaning of different levels? Does it help the most people rather than apply to the most web pages? Are some of these SCs more important than others?
Tied in to definition of levels (Gregg showed writeup of definition of levels against criteria). Criteria mentioned are: important access issue (Y L1, Y L2, Y L3), testable (Y L1, Y L2, Y L3), can be met on all web sites where it would apply (Y L1, Y L2), can be implemented by any qualified webmaster (skill?) (Y L1, Y L2), and if you don't do it, even AT can't make it accessible (Y L1). Group looked at differentiation between levels. Should the differentiations be changed? Should groups of individuals be targeted? What about the "coverage issues" with the different levels? Should any other differentiating factors/criteria be included? Each level requires more skills to implement.
Don't include strictly usability issues. When is it a usability vs. accessibility issue? If it takes people without disabilities x times as long to use the page (if you don't do something) and it takes people WITH disabilities the same x times as long to use the page, then it is a usability issue and is not treated as an accessibility problem. If it takes people without disabilities x times as long to use the page (if you don't do something) and it takes people WITH disabilities substantially more than x times as long to use the page (if you don't do that same thing), then it is an accessibility issue and is treated as an accessibility problem.
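The rule above amounts to comparing slowdown factors between the two groups. A minimal sketch of that decision rule follows; note that the numeric threshold for "substantially more" is an assumption of ours for illustration only, not something the group specified.

```python
def issue_type(slowdown_without_disability, slowdown_with_disability, margin=1.5):
    """Classify a problem as usability vs. accessibility per the rule above.

    Each argument is the factor by which omitting some practice slows a
    group down (e.g. 2.0 == twice as long). `margin` encodes "substantially
    more" and is a hypothetical choice, not part of the group's rule.
    """
    if slowdown_with_disability > margin * slowdown_without_disability:
        # People with disabilities are hit disproportionately harder.
        return "accessibility"
    # Everyone is slowed down by roughly the same factor.
    return "usability"
```

For example, if omitting something makes everyone twice as slow, `issue_type(2, 2)` returns "usability"; if it makes users with disabilities ten times as slow, `issue_type(2, 10)` returns "accessibility".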
What about cost (live captioning example)? What about inclusion of untested ideas?
Bias against if it affects default presentation (isn't invisible)..
Number of people? Number of different disabilities? Skill level of - effort to author? Michael did a check against WCAG 2.0 requirements and found they don't map to SC principles (they should). Applicable across technologies? Backward compatible with WCAG 1.0 (unless no longer needed or not effective, or no technique to meet)? Forwards applicable?
Gregg will send SC review to people at meeting.
Cognitive Discussion
Scribe: Katie (1:30 pm to 7:30pm)
Reviewed: John Slatin’s comments email from 10-24-2006.
Topics:
14.1 of WCAG 1.0
Complete a letter Addressing CLL (sent to those protesting, working in this field and others – open to distribute)
- Seeking comments and a meeting/dialog
- Invite constituents to contribute
- List SC - How they relate to Cognitive, Language & Learning Disabilities (distinctions important in other cultures)
- List problems (i.e., people who have trouble reading math) and how they benefit from each SC
- Requiring AT and not requiring AT
- Other techniques than AT (i.e., captions, text transcripts)
- Develop Application Note: How to optimize for people with disabilities
- Research, grant funding and future technologies (i.e., content transformation, alternative renderings)
Discussion:
- Some CLL qualitative in nature
- Lacking clarity in WCAG 2.0 draft
- Support for alternate views – is there enough support there?
- Author or User Agent responsibility?
- Letter: Admit that WCAG 2.0 doesn’t address everything.
- Add some testable SC and techniques (i.e. avoiding fully justified text) Comments 569, 1263.
- Application Notes planned
- Accessible Forms
- Tutorial Addressing CLL
- Getting Started (like Quick Start) Primer
- Discuss the “You were Right About…”…without too many “but(s)”. “Changes we have made in response to your input….”
- Possibly add non-testable SC and Advisory techniques Notes
- Talk to Key People about the letter, to vet the approach FIRST.
- Companion Document
- If it is not required will people do it?
- The W3C stamp gives credibility as to why it is a good idea, even if not part of the GLs. Motivating and educating for those so inclined.
- Quick reference organized by disability.
- Beef up Benefits Section of WCAG 2.0
- Advisory techniques can be un-testable. Can probably write up techniques that are qualitative. “Did you make it as simple as you can?”
- Perhaps a catch-all requirement like Functional Performance in 508 (Subpart C) that would include a CLL component. To test that you might have to do user testing.
- Can say “Usability is important” in an Application Note.
- The benefits around CLL should be stated in terms of the specific problems experienced, NOT the category of disability (e.g. reading text, printed text)
- User testing, to be really useful, needs very large numbers of test participants to provide relevant results. Need to cover all of the variance of disabilities and abilities.
Error Recovery
- Team C discussed Error Prevention in the SC level and taking it out of the Guidelines
- Do we have ideas for Error Prevention?
- We do have many Advisories
- We can write Sufficient Techniques
- How do you test for Prevention? How do you do this for PWD and nobody else?
- Problematic Issue: One label and more than one input box. Screen readers associate the label with the first text box only? Comment 1294 – Sofia's boss provided some examples.
- Should we have prevention in the title of the GL?
- Take out Help User Avoid Mistakes unless we provide additional techniques – suggest:
- Instructions for completion be at the top of the form.
- Format information be part of the language of the instructions. (i.e. date format)
- Begin dropdown list with the most distinct info first – Advisory Technique unrelated to
- 2.5 Question: Does that mean navigation errors? Provide clarity with a new success criterion.
ACTION: Team C will draft a success criterion for the Advisory Techniques about error prevention – Michael, Sofia, David and Tim. Can we figure out how to do this?
- Stabilization Draft upcoming
ACTION: Michael, Sofia, David, Tim, Makoto to make proposals based on discussion of Errors topic.
Timing
- Relied on Issue Summary Andi authored Sept 13.
- Comment: Where did 20 and 10 come from? If you don’t have a standard to point to - drop it.
- Comment: Level Change request. Andi thought this was a UA issue – related to page refreshes. Couldn’t find reference in minutes.
- Add a note on the cross-reference if Citable. Most that we want aren’t yet.
- Will be reviewed for level change as part of overall level discussion.
- Progress bars have no timeout. Concern is covered at level 1.
Text Alt
- Summary from Andi of
- Many comments on 1.1.1: the way it is worded, it is difficult to parse. Changed it to say "at least one of the following…". The SC contains situational things. Suggest rewording to simplify.
- Can’t have an exception in the How to Meet Doc. If separated will come out to too many conditional statements. This is why it is tied into one SC.
- SC looks illogical and contradictory. Alex uses an if/and flow. Change language to “If……and…..”. We really mean in certain cases we expect certain things to happen.
- The technical techniques are the same for all of these and there will be too much duplication if separated.
- Appropriateness is no longer in the SC, now in informative info.
- Team: Andi, Alex and Gregg
- Non-text content that responds to user input. Content that the user interacts with can only be labeled. Should be taken care of by 1.2.2. Loretta added a whole new issue on this in last group telecon. A link is NOT alt text. Text can be informational and not functional.
ACTION: Andi, Alex and Gregg to make proposals based on discussion of text alternatives topic.
Contrast
- Metrics are too sensitive. Pages that looked like they had sufficient contrast failed. It was determined to be because of the width of the pixels. Thin letters get washed out. W3C page failed.
- Is this stroke width? OSs (Mac) will be independent – settings can be made to be 20 x something – so one cannot tell how many pixels something is.
- Level 2 gives us more leeway. Should we back off to 4 to 1, instead of 5 to 1? It all comes down to line stroke.
- This is based on feedback from people who have been using it on their web sites.
- Old Action Item: How does Luminosity affect Contrast?
- Ambient light is important to contrast.
- .05 is factored into the metrics for glare.
- The user can solve the problem by trying to get the glare off their own screen. Should not be an author issue.
- Create more samples of contrast things. Ben: The slide set had all of them.
- Magnifiers. Canadian government wants WCAG 2.0 to talk about web-friendly colors. Dithering is a UA issue. Beige would consist of several colors mixed together. Zoom applications were set up for web-friendly colors. Canada suggests a note.
- Aliasing – is it measuring between black and white, or the next grey of the 12 greys in between?
- In techniques we should say something about in the eyedropper we should
- Some screen readers translate the html hexadecimal code to 16 named colors.
- The web safe color palette – contrast analysis can be done on that.
- Why does WCAG 2.0 not require web safe colors?
- Advisory Technique – if web-safe colors helps users with older AT in the contrast SC. Otherwise can get it in the Guideline. ACTION: David will try to draft an advisory technique about web-safe colors for users with older AT.
- Dark doesn’t work as well in a room that has light?
Seizure Disorders
- Michael wants to try to use viewport instead of pixel.
- Can we talk about “Angle of view”?
- How does the author know that?
- But we can specify a ‘Standard viewing distance’.
- It must be true at the ‘Standard viewing distance’, in the How to Meet document the following must be sufficient and make a reasonable assumption about distance and justify it.
- Can we come up with this idea?
- It gets safer as the resolution gets smaller; it is no worse. On higher resolution screens most web pages get smaller.
- Move what is sufficient outside the GL, which allows for new standards. As long as what is left behind is still testable.
- Given distance the size needs to be known – and the author cannot know what that is.
- In the sufficient techniques we would say what reasonable assumptions are.
- Will try to go in that direction. Will say something like 10 degrees at that range for normal viewing distance. Do we think that will be testable? For web pages meant to be viewed on a regular screen, do this...
CONSENSUS: Recast the pixel measurements about region on a screen that can trigger photosensitive epilepsy seizures into "angle of view". How to Meet describes how to calculate, and sufficient techniques provide pixel measurements for common use cases (e.g., standard monitor at standard viewing distance). Author must make assumptions about user environment, we need to provide guidance about reasonable assumptions.
ACTION: Gregg to rewrite photosensitive epilepsy SC, How to Meet, and techniques using angle of view approach.
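The angle-of-view recast comes down to basic trigonometry: a region of a given physical size subtends an angle of 2·atan(size / (2·distance)) at a given viewing distance. A sketch follows; the specific viewing distance and screen density used in the example are placeholder assumptions of the kind an author would have to state and justify, as discussed above.

```python
import math

def visual_angle_degrees(size_cm, distance_cm):
    """Visual angle subtended by a region `size_cm` across,
    viewed from `distance_cm` away: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def pixels_spanning_angle(angle_deg, distance_cm, pixels_per_cm):
    """Invert the formula: how many pixels cover a given visual angle,
    under an assumed viewing distance and screen density."""
    size_cm = 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)
    return size_cm * pixels_per_cm
```

Under an assumed 60 cm viewing distance and 40 px/cm density, a 10-degree region comes out to roughly 420 px across, which is the kind of per-use-case pixel value the sufficient techniques could list.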
Color Variations – Colors Programmatically Determined
- Issues were raised 1.3.2 and 1.3.4
- Information that is conveyed by colors is also evident without color (? Check exact text)
- Information that is conveyed in variations in text can be programmatically determined (? Check exact text).
- Is italics
- Comments: Why is color treated differently than other forms of visual presentation? Some want to be able to determine the meaning ‘implied by’
- Gets into UI domain, which we are trying to steer clear of.
- All the changes are in italics – how do I find out what that is?
- Look at 1.3.2 first; that is fairly clear: visually apparent without color. Clearly a color blindness issue. It doesn't say that the info is conveyed in text. It doesn't require that you interpret it.
- 1.3.4 makes it clear both what is blue and what is MEANT by it being blue.
- Vision Australia has the problem with the variations, because that is not information.
- Vision Australia this whole family of things having to do with being able to separate content and presentation. 1.3.1 Talks about information and presentation. Vision Australia loves that.
- Sofia: 1.3.1 (Level 1) covers the situation that the information can be determined. Adding 1.3.4 weakens access to text and WCAG would be better off by deleting it.
- Comment: Use roles.
- Emphasis is not information? You are only conveying the italics-ness and not the information.
- Technique: Use a Label positioned off screen by CSS (for a required field).
- At the moment it has to be something that is visible. EM is not supported adequately by AT; if someone has turned off presentational kinds of things, then it will not be accessible. This concern is for the case of screen enlargement, where users can turn off EM.
- Suggests running 1.3.4 back into 1.3.1. ASW: like JavaScript repurposing content, which will be solved by the xhtml role attribute.
- 1.3.2 Color. It must be visually evident.
- It is the non-color textual variations that we have a problem with. 1.3.4 suggest another thing that you can do.
- Need to cover variations of presentation. Note this includes color, font style, font size – if zapping 1.3.4. If we do want it, we could use a role of required, assuming ARIA support.
- Asterisks are OK. It is other presentational things.
- suggests: Information and relations are NOT conveyed only through presentation.
- We are saying that links are marked with A, etc. Paragraphs are marked as Ps. Where is the requirement to use the correct thing. Maybe we need another requirement to use the correct markup.
- LGR: Make using the markup the first solution, then use text (as semantic relationships) as a fallback.
- there is no known technique for doing some relationships programmatically as required in 1.3.1 as it is currently worded.
- Information and relations can be programmatically determined – is future proofing.
- The asterisk could be a strategy if it is programmatically determined. ASW: by using 2 labels on a required field it would work.
- asterisk is OK for 1.3.2. Also text (e.g. the word "required")
- for 1.3.1 the asterisk or word would only count as programmatically determined if it were programmatically associated (e.g. it was inside a label).
- by collapsing the two (1.3.4 and 1.3.1), there are relationships that cannot be expressed by current semantics (e.g. markup) and therefore must be expressed in text. (example: "The changes made by Mary are in italics")
- if you pull presentation then the info is still there.
Consensus: COMBINE 1.3.4 AND 1.3.1
ACTION: Katie, David, Sorcha, Bengt, Ben, Loretta to make proposals based on discussion of text variations topic
Baseline
Scribe: Sorcha Moore
Issues: Loretta has completed summary of Baseline issues
Discussion:
Issues: Short Version description of baseline & summary of issues
- We should be able to explain baseline in 5 minutes
- Some uncertainties about Baseline – what does it mean, can you lie about it
- Is the baseline something that each author declares individually, or something that someone who understands it defines? Let's look at the different definitions.
- How do you go about picking it?
- How do you go about using some technology that is not accessible?
- If it is not accessible can you put it in the baseline?
- It is accessible and the way you go about coding using that is accessible.
- It is not accessible or you use it poorly, therefore you rely on equivalent facilitation.
- Whether or not you have the technology – this is not an aspect of accessibility – whether or not you can get the technology in a way that is not discriminatory. (If everyone has to buy/download it, people with disabilities can buy/download it, e.g the link to Flash is accessible…)
- Whether or not the technology is accessible/AT compatible (you should only put something in a baseline if it is AT compatible) Authors need guidance for this.
- AT definitions: a company could establish one and people use it, OR an author could pick one out that has been pre-established
- When we say AT compatible what do we mean by AT? Working with mainstream AT or all AT? What constitutes AT supported?
- Is AT available today?
- Cost of AT?
- Pervasion of AT in the community?
- Existing or future AT? Install base
- Theoretical AT compatibility but no AT exists
- Supporting standard accessibility APIs
- Is compatibility with API good enough?
- Does it need to be tested?
- Testing in the language of the content
Baseline is a description of those technologies that are supported by AT for a particular environment for a particular time. Authors can choose technologies from within those technologies without having to know everything about Assistive Technologies.
Baseline is the set of technologies that are required for a particular web product/web site to meet the success criteria/claimed Level. It is the same set of technologies that is required for the site to work at all for anyone regardless of disability.
Set of technologies that must be supported and enabled in the user agent for the claim of WCAG conformance to yield the intended accessibility benefits for the claim to be valid. (These are the technologies my page relies on…)
- Baselines that are supported by AT and you can use.
- Baseline is the set of accessibility enabled technologies used on your site. Any functionality that required technologies outside the Baseline must be accommodated by equivalent facilitation. (“accessibility enabled” defined by no. 1 above, or “possible to create accessible content for it”)
- Technologies that are relied on by a site for conformance
Is Flash, that provides accessibility features, but is not necessarily accessible, AT enabled?
Another model, take baseline out of conformance– talk about output.
Baseline not part of conformance, sufficient technique but not a required technique. Put it in as a sufficient technique.
Users without that specific knowledge can use a pre-established baseline.
Guidance for people who do not know how to choose a technology; guidance on what works and what does not. Look at the code to say what you have to do, and look at output to say if you did it.
Concern that the fact that you can use a technology wrong should not preclude it from being in a baseline. This is true of all technologies.
The question remains about a technology where only some parts can be made accessible.
Concern is not about people trying to get around rules but those who are trying but don't have a good way of knowing what is possible or accessible. We want authors to know in advance what they can use that will result in accessible content, and those things that can't be done no matter how they try (if they use this technology).
Are accessibility and AT compatibility the same?
- Baselines are driven by UA capabilities – what is the set of UA from which to choose?
- Baseline is the same as what we have been calling “relies upon”.
Baseline:
Set of technologies supported in different environments, eg Internet vs intranet.
What it is that a web unit is relying on for a conformance level.
- Perception of Baseline as “get out of jail free card”.
- Baseline being set by regulatory authorities – difficulty with technological advancement and regulatory advancement.
Hybrid Baseline to address both above issues:
Baseline out of conformance, exchange with relied upon, add rules for relied upon (within these capture reasonable AT support in the way that it is used).
- Using sufficient techniques.
- Programmatically determined
- Known set of technologies that are “safe”.
AT as only a piece of accessibility – AT and accessibility support.
Direct accessibility support – accessibility support without AT.
Who created the list of accessibility support – is this normative?
Using the list itself will be a sufficient technique.
Credibility of Baseline and responsibility for this – self policing effect.
Different sets of audiences/people who are trying to implement these guidelines.
- trying to cheat,
- do not have the skills
- do have the skills
- redefining technology landscape
Cheaters – to target or not to target
Will find a way of cheating anyway
Write guidelines in such a way that it is possible for regulators to “go after” cheaters - “cheater responsibility”.
Baselines: not just a list of technologies:
Versions, people who use those technologies, user agents
Sufficiency as “required”.
Basis for defining/evaluating baseline – evidence to back it up.
Defining AT/Criteria
Who is going to do all the testing/gathering information for user support?
Internationalisation? (defining baseline by country)
Keeping up to date/current?
Timeline, validity for timelines e.g. this is the baseline for 2007 (point in time approach)
Using those set of rules rather than those technologies…
Version numbers
Environment intended for/location
Output approach (similar to 508 approach)
- Does not specify how to code
- Does not require specification of AT/UA compatibility
- Current WCAG approach as output approach
Taking baseline out of conformance:
- Using rules (AT support/direct accessibility support)
- Sufficient technique for people who do not have that knowledge: “set”/“pool” of technologies that can safely be relied upon.
- Technologies non-normative,
- Who to create them? Sets as useful as the reputation that the creator relies upon.
- Mechanism that will be self-regulation
What becomes normative is the set of rules for picking technologies that are accessible. Then informative stuff – renderings of those rules (time, place) – for authors that do not know how to use those rules.
Consensus: Progress with Hybrid Baseline Approach
Conformance
Scribe: Alex Li
No issue summary
Key topics
- 4.2
- AAA
- User Contributed Content
- Task
- Aggregated Content
- Sets of Pages
4.2
Consensus: Move 4.2.1 and 4.2.3 to conformance; there is a logic flaw in keeping 4.2.1 and 4.2.3 in as success criteria; content that is supposed to pass 4.2.1 & 4.2.3 would fail all or some of the other success criteria; keep 4.2.2 & 4.2.4 in the guideline.
Level AAA
- AAA – pulse check to see if it is easy.
- Do we need every success criterion in Level 3 to be met to claim AAA? Some object.
- Can we remove AAA conformance? Some can’t live with that.
- Another idea—3 get-out-of-jail-cards
- How bad/impossible is it to meet all level 3 success criteria?
- Does anybody really claim AAA? If nobody claims, why should we bother?
- What is the objection with 50%? It is inconsistent with WCAG.
- There may be an incentive to do AA+ or AA+1, AA+2
- How about we require full conformance, and look for "deal killer" Level 3 success criteria?
- Can we live with AA+ or AA+1 (specific level 3 success criteria), AA+2 etc. and also allow AAA?
- AA+ is not enough due to lack of incentive to go above and beyond
- Concern with applicability of policies to AA+
- What about A+ or A+1?
- Not just policy makers may have concern
- Propose:
- -require full AAA conformance
- -progress toward conformance report
- If there is a WCAG sanctioned way of claiming AA+ or progress towards AAA, need a way to specify which provisions you are claiming conformance to.
- Possibilities
- All of level 3 required for AAA
- Intermediate levels are allowed
- Intermediate levels are allowed if you itemize what additional ones you are claiming conformance to
- Could report conformance in a “VPAT-like” way where you say which provisions you meet and which you don’t meet
<SEE BELOW FOR RESOLUTION TO THIS>
Tasks
- Traditionally, conformance is per URI.
- But task-based conformance may make sense too.
- Some tasks are more important; a task that cannot be completed at the same accessibility level may not be useful for claiming a specific accessibility level.
- The same URI can be used for many different tasks.
- Is there a way to cheat by being unclear or fudging the task/borderline?
- Task may or may not be identifiable
- Is there a minimal claim of an entire site?
- Horizontal scoping is ruled as unacceptable
- Vertical scoping is acceptable
- Some see advantage of claiming higher level for particular tasks.
- Some way to determine what tasks are in the conformance or not and if any page or element is part of a specific task
- What if part of the page is in conformance
- Different from every other standard on web accessibility
- No other W3C standard deals with tasks
- Difficult for people who are not programmers to understand
- What if the task/process/step is programmatically determinable
- Finding accessible content in pages mixed with non-accessible content
- Testability / machine testable
- We are web content accessibility guideline, not web task accessibility guideline
- Which of these are true or not true for URI vs. task?
Consensus:
- AAA conformance means all applicable level 3 success criteria are met.
- Those making conformance claims can report progress towards higher level if they specify particular success criteria they have met.
- No A+ or AA+ claim languages, only A or AA etc.
User-contributed content
a) define and
b) constrain to plain text, or provide a means for users to submit conforming content and
c) exclude results
Consensus: Definition of User-contributed content: content from a person/entity who is not compensated for the content, that is included automatically in web content, and where the content is not edited except for censorship
Aggregates
Scribe: Sofia Celic
- Claims have to be made on web units with aggregations in place. Would be nice to make it easier for someone who is aggregating.
- Like to find a way to be able to make a claim for conformance on an authored component. Valuable in the marketplace. Add this to the list of things to consider around task based conformance b/c URIs are part of the problem with components.
- Fix our language so that we’re not implying that you can make a claim.
- If you can’t make a claim, doesn’t mean you can’t talk about how the aggregated content meets accessibility guidelines.
- Concern about web moving towards all aggregated content so will not be able to make an accessibility claim.
- Worried about how to manage and describe it. Maybe stick with ‘web unit’ and look into it further.
- Maybe some way to officially recognise aggregated content?
- Conformance not on the same basis?
- Don’t want to create a dis-incentive
- In terms of translation, it would be better to shorten each sentence and use simple words b/c it may be harder for non-English speakers to understand.
- Want to be able to say what does and doesn’t conform to WCAG. Types of aggregated content already e.g. image, needs to be embedded in HTML page to be able to conform.
- Related to task-based conformance?
- Use metadata? Pre-arranged with content providers. Would be machine readable.
- Use ‘aggregate sub-unit’? If it meets all of the guidelines then it could claim conformance. Allow an aggregator to inherit claims.
- Is the aggregator legally responsible? Possibly out of our scope.
- Confidence claim? E.g. 90%
- Non-normative sub-list that could apply to parts is available. Helps people pick the good ones and possibly help in court.
- The person putting all the content together needs to make the claim.
- Looking too far into aggregated content? Need guidelines for now. Is it a priority for now?
- Handle this separately?
- New classes of conformance claims
- Some included content is automatically included (such as a feed) so potentially no opportunity to evaluate and make an accurate claim.
- Even if all components are accessible, that does not mean the whole is accessible, so the aggregator still has a responsibility.
- Suggested wording: “We encourage providers of authored units to follow the sc that apply to their content in order to help Aggregators meet the guidelines and to be able to choose between providers of authored units. However, the conformance is the responsibility of the aggregator.”
- Some incentive needed for aggregators. Frameworks? Due diligence?
Need a defensible position.
- Claims are optional
- More suggested wording: “Sometimes, a Web unit is assembled ("aggregated") from multiple sources. Authored units are defined as "some set of material created as a single entity by an author". We encourage providers of authored units to follow the sc that apply to their content in order to help Aggregators choose content that will enable them to conform to the guidelines. However, the conformance is the responsibility of the aggregator and conformance level is based upon the entire web unit after it is assembled.”
Proposal for aggregated content:
We are defining what accessibility means. We want to do it in a way that is usable by legal entities so that they don't go make up their own – but we don't need to get into their territory any more than is a natural part of ours. So we go with:
- Conformance claims only by Primary Resource (Web Page/unit)
- Allow “subclaims” for subunits or components that are aggregated to make Web Page/Unit
- If a subunit meets all SC that it can (when packaged with support information, such as a JPG being accompanied by a text file or metadata with alternate text), then the subunit can claim subunit conformance for the corresponding level.
- We do not make any comment about inheriting conformance.
- The conformance is at the page level.
- The aggregator can use subunit conformance as a defense in legal proceedings, but legal responsibility is not the role of WCAG.
- Also checking on it occasionally could provide additional evidence of ‘due diligence’.
Consensus: The F2F group agreed to accept the proposal for aggregates, clean up our language, and remove the inheritance implication.
Levels analysis
Scribe: Cynthia Shelly
- 1.2.1
- Need to deal with multimedia between pre-recorded and live.
- Timing consideration.
- 1.4.3 question about importance vs. 1.4.1
- 2.2.3 Is timing essential in a broadcast? Streaming systems allow even broadcasts to be paused. X in column 3. All x’s. calc value = 1
- 2.3.2 xxnxx=3
- 2.4.1 all x’s calc level = 1
- 2.4.2 no in col 5 and ? in col 1
- 2.4.3 ? col 1 ? col 3
- 2.4.4 col 5 says “user and usability”
- 2.4.5 ? col 1, ? col 2 , ? col 3
- 2.4.5 shouldn’t have a ? in calc level column
- 2.4.7 no col 3, x col 5, calc level 3
- 2.4.8 col 5 says “user/usability”
- 2.5.2 col 3 ?. Real-time spell-checking makes the behavior less usable. Should we have a column for “it’s not always a good idea to do this, it may make the behavior worse, less accessible”?
- 3.1.1 col 1 x, col 5 ?
- 3.1.2 col 1 x, col 5 ?
- 3.1.3 col 5 x, calc level is 1 ? 4
- 3.1.4 ? col 3. Calc level is 1. Why do we have it at 3? Is it that the noise and mess make content less usable? Is it cost?
Scribe: Michael Cooper
Shorthand: the 5 columns in order (important, testable, all, training, must for AT) indicated by x=yes, n=no, ?=unsure, followed by =# to indicate calculated level
- 3.1.5 x???n=?
- 3.1.6 ?xxxx=1? debate about whether this is an accessibility issue
- 3.2.1 xxxxx=1
- 3.2.2 xxxxx=1
- 3.2.3 x??xx=1?
- 3.2.4 x??xx=1?
- 3.2.5 xxnxx=3
- 4.1.1 xxxxx=1
- 4.1.2 xxxxx=1
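As an illustrative aside (not part of the discussion), the shorthand defined above can be decoded mechanically. This is a minimal sketch; the function and variable names are hypothetical, and the column names come from the shorthand definition.

```python
# Illustrative sketch only: decoding the five-column shorthand, e.g. "xxnxx=3".
# Column names are taken from the shorthand definition in the minutes;
# parse_shorthand and its variable names are hypothetical.

COLUMNS = ["important", "testable", "all", "training", "must for AT"]
MARKS = {"x": "yes", "n": "no", "?": "unsure"}

def parse_shorthand(entry):
    """Split an entry like 'xxnxx=3' into column answers and the calculated level."""
    marks, _, level = entry.partition("=")
    answers = {col: MARKS[m] for col, m in zip(COLUMNS, marks)}
    return answers, level or None

# Example: the 3.2.5 entry, with a "no" in the third ("all") column.
answers, level = parse_shorthand("xxnxx=3")
```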
Web page / Web unit
Hallway discussion / proposal from Gregg
- Web Page is a primary resource and all other resources rendered with it simultaneously
- Dynamic page may change from time to time; each state needs to be accessible
- Could encounter multiple states or multiple pages at a single URI
- Difference between states and pages is somewhat subjective or can be gray at the border, but doesn’t matter because all states of all pages need to be accessible
- Conformance issue: if conformance is defined by URI and everything is at one URI, how do you deal with it if one of the pieces is to be excluded?
- Task-based conformance doesn’t solve this because you can’t say what’s in and what’s out
Does this seem to hit what we’re trying to do?
- Author needs to know when state changes; covered by 4.1.2
- Issue with “simultaneously”: can’t have states, because in AJAX resources are loaded in the background, not simultaneously
- Discussion around whether a state is actually a Web page (a different collection of resources even if at the same URI), so what’s a Web page?
Consensus: Use “web page” unless we can’t make it work
Level Change Requests
Cur = Current Level; Calc = Calculated Level; Req = Requested Level
Only SC for which a level change was requested or calculated are included here
- 1.2.1 Cur L1, Calc L1, Req L2: No change
- 1.2.2 Cur L1, Calc L1, Req L2:
- 1.2.4 Cur L2, Calc L3, no req: Andi requests moving to L3; Ben says he didn’t expect anybody with the resources to do live broadcast to target L3; no change right now
- 1.2.5 Cur L3, Calc L3, Req L2: no change
- 1.2.7 Cur L3, Calc L1, Req L1: similar issues with 1.2.1, no change for now
- 1.3.5 Cur L2, Calc L1, Req L1:
- 1.4.1 Cur L2, Calc L1, Req L1: pending change to 4:1 contrast, no decision
- 1.4.2 Cur L2, Calc L2, Req L1: no level change
- 1.4.3 Cur L3, Calc L2, Req L2: same issues as 1.4.1
- 1.4.4 Cur L3, Calc L3, Req L2: no change
- 2.2.3 Cur L2, Calc L1, no req: need to explore later
- 2.2.5 Cur L3, Calc L3, Req L1 or L2: no change
- 2.2.6 Cur L3, Calc L3, Req L2: no change
- 2.3.2 Cur L3, Calc L3, Req L2:
- 2.4.2 Cur L2, Calc L1, no req: defer set of Web units
- 2.4.4 Cur L2, Calc L2, req L1: defer
- 2.4.5 Cur L3, Calc L3, req L1, L2: defer; discussion of what constitutes a failure
- 2.4.6 Cur L3, Calc L1, Req L1, L2: discussion of moving to L1, but Alex abstains pending further investigation
- 2.4.8 Cur L3, Calc L3, Req L1, L2: defer; some desire to move but need to explore
- 2.5.2 Cur L2, Calc L?, no req: no action for now
- 2.5.3 Cur L2, Calc L3, Req L3: defer
- 2.5.4 Cur L3, Calc L3, Req L1: no change
- 3.1.1 Cur L3, Calc L2, Req L3: defer
- 3.1.2 Cur L2, Calc L2 or L4, Req L1, L3: defer (along with 3.1.1)
- 3.1.3 Cur L3, Calc L1 or L4, Req L2: defer
- 3.1.4 Cur L3, Calc L2, Req L1: defer
- 3.1.5 Cur L3, Calc L3, Req L2: defer
- 3.1.6 Cur L3, Calc L1, no req: defer
- 3.2.5 Cur L3, Calc L3, Req L1: no change
Level Change Resolutions
- No change to 1.2.5
- Move 1.3.5 to L1
- No change to 1.4.2
- No change to 1.4.4
- No change to 2.2.5
- No change to 2.2.6
- No change to 2.5.4
- No change to 3.2.5
Level Change Action Items
- Don, David, Cynthia, Bruce to explore a category called “timely content” and possibility of treating it as real-time for 1.2.1, 1.2.2, and 1.2.7
- David to write rationale why 1.2.5 not changed
- Gregg to investigate level and contrast ratio for 1.4.1 and 1.4.3
- David to write rationale for not changing level of 1.4.2
- David to write rationale for not changing level of 1.4.4
- Loretta to write rationale for not changing 2.2.5
- Gregg to write rationale for not changing 2.2.6
- Alex to investigate with colleagues impact if we move 2.4.6 to L1
- Andi, Don to work on response to 2.5.4
- Bengt to work on 3.1.1 and 3.1.2
- Alex to document response to 3.2.5