Meeting minutes
<Fazio> Hi, Jeanne!
Publication of FPWD rejoicing
E-champagne all around ...
<jeanne> https://
<JF> Present+
Also a permanent start link for WCAG 3 work will be:
js: This latter link best for new readers of our docs
js: Has pointers to all key docs
<Rachael> maybe this: https://
<ToddLibby> Thank you all for your hard work and being so welcoming!
sh: Question about the process for collecting feedback and for our handling of those comments
sh: Do we have a plan for formal comment handling?
sh: Like that we're prompting people on their comments
<Fazio> Isn't that what github is for?
<Jemma> that is what I read from Jeanne's article.
<Jemma> github
<Bruce_Bailey> Bruce asks for some informal guidance on soliciting from DHS Trusted Tester and OCIO ACOP.
<Zakim> Bruce_Bailey, you wanted to ask about 30 day time limit on feedback
bb: Notes 30 days is really insufficient
<Fazio> That would only differ from github if it was a yes or no question survey
bb: I want to solicit feedback elsewhere in the U.S. Government, but that may well take longer
bb: Also wondering how hard to work on soliciting feedback
<CharlesHall> Feedback Prerequisite https://
<CharlesHall> Feedback Instructions https://
js: Suggest we could ask our specific questions in a WBS
<CharlesHall> neither prevent survey
<Fazio> I just did one for content usable
js: This is intriguing -- here's our specific questions and please comment
js: FYI: I'm working on a slidedeck about WCAG 3 and will be available to do presentations. Let me know!
<Lauriat> +1 to Sarah's idea if we can; before joining AG WG, I never knew how to comment or on what.
bb: Rephrasing: If we want significant feedback from the large collection of government folks, that will require more than 30 days
ca: Could also be tough in my company
<Bruce_Bailey> +1 to what Chuck is saying, that I think the 26th Feb. is kind of soft.
js: Every spec always has a date by which comments are expected; that doesn't mean comments received later will be dropped
<Bruce_Bailey> but having the date in print makes it harder for bureaucracy to respond.
js: We can't promise to do anything about any comment in a future draft before the published comment deadline
<Zakim> JF, you wanted to ask if someone comments in a 'survey', would they be owed a response?
jf: Notes long-standing process that comments are responded to. Would expect that even in a WBS, no?
rm: The date means that we will address received comments in the next draft somehow, but no next draft before the cutoff date
rm: Asks where this might best be clarified
<Zakim> Rachael, you wanted to say that feedback can continue after the 30 days
bb: 60 days needs to be the minimum for a large bureaucracy to respond
bb: The short turn around becomes a disincentive to comment
<Fazio> I can get it to the Director of Federal Sector Programs for the EEOC if that helps
rm: Suggest we could do 60 days on the next draft
js: Next draft should also be more concrete
approving the requested change to Requirements
js: one more round of e-champagne ...
js: Want to remind everyone how this change started ...
<jeanne> https://
js: We were asked to update in next draft, started multiple ways to measure
js: Notes we need to work this out with COGA. It's their objection.
js: Notes the original text was published yesterday, but it drew an objection
js: on 1 Dec we agreed to change and have had a few go-rounds since
<jeanne> https://
<Jemma> Sarah's version "Silver guidance includes tests and other [performance indicators, quality measures, evaluation methods]. Some guidance may use true/false verification but other guidance will use other ways of measuring and/or evaluating (for example: rubrics, sliding scale, task-completion, user research with people with disabilities, and more) where appropriate so that more needs of people with disabilities may be included. This [approach,
<Jemma> strategy, principle] includes particular attention to people whose needs may better be met with a broad testing approach, such as people with low vision, limited vision, or cognitive and learning disabilities."
<Zakim> Rachael, you wanted to say that I think the first term was intended to be test procedures.
rm: May be slightly changing the intent with the bracketed choices, but OK by me
ca: Yes, but may also be addressing another raised concern, so may be beneficial
js: Asks RM if any bracketed terms OK?
rm: Yes
<jennifer> Apologies if I missed this, but how do we indicate which bracketed item we "vote" for?
js: reads the term definitions ... several dictionaries
<KimD> Jennifer, we aren't quite to the point yet. You haven't missed it!
js: quality measures seems to turn up health care related definitions
js: Found Wikipedia for "evaluation methods"
bb: Recalls wcag2 principle that our glossary term should substitute inline successfully
js: wasn't thinking of it that way
js: they just seemed synonymous on first read
<Chuck> Evaluation methods. There are a variety of usability evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications.
<Jemma> performance indicators
<Jemma> (wikipedia) A performance indicator or key performance indicator (KPI) is a type of performance measurement. KPIs evaluate the success of an organization or of a particular activity (such as projects, programs, products and other initiatives) in which it engages.
<Jemma> (Collins dictionary) a quantitative or qualitative measurement, or any other criterion, by which the performance, efficiency, achievement, etc of a person or organization can be assessed, often by comparison with an agreed standard or target
<Jemma> quality measures - (the first two SERP results were all healthcare. Even a Wikipedia search turned up health care. I don’t think we should use this)
<Lauriat> Direct link to Definitions of terms proposed by Sarah: https://
sh: Agree that "quality measures" is health care related, and probably not best for us
sh: "Performance indicators" may also not be meaningful to our audience
sh: Recalls we were thinking along lines of organizational maturity
sh: Wilco objected to "procedures"
<Jemma> my vote will also be "evaluation method"
sh: Wonder whether "evaluation method" would work; tests and more?
<KimD> +1 to "evaluation methods"
sh: notes these are methods for evaluating a thing
js: asks if sh prefers evaluation methods?
<Zakim> Rachael, you wanted to express a preference
sh: yes
rm: me too
<Jemma> performance indicator leans a bit more toward the private sector.
rm: but flexible
rm: we can put it in two places, suggests using it twice ...
<Jemma> KPI - key performance indicator for companies. KPI is not used often in University settings.
ca: What RM said
<Rachael> Silver guidance includes tests and methods of evaluation. Some guidance may use true/false verification but other guidance will use other ways of measuring and / or evaluating (for example: rubrics, sliding scale, task-completion, usability evaluation methods with people with disabilities, and more) where appropriate so that more needs of people with disabilities may be included. This approach includes particular attention to people whose
<Rachael> needs may better be met with a broad testing approach, such as people with low vision, limited vision, or cognitive and learning disabilities.
ca: like "methods of evaluation"
ca: do want to stay away from "quality measures"
<Zakim> Chuck, you wanted to ask my favorite is a derivative, "methods of evaluation"
jennifer: pasting thoughts ...
<jennifer> Silver guidance includes tests and other evaluation methods. Some guidance may use true/false verification but other guidance will use other ways of measuring and/or evaluating (for example: rubrics, sliding scale, task-completion, user research with people with disabilities, and more) key indicators where appropriate so that more needs of people with disabilities may be included. This [approach, strategy, principle] includes particular atten[CUT]
sh: looking at two other details ... the "other" word is important
<jennifer> Reflecting what I just said: Silver guidance includes tests and other evaluation methods. Some guidance may use true/false verification but other guidance will use other ways of measuring and/or evaluating key indicators where appropriate so that more needs of people with disabilities may be included (for example: rubrics, sliding scale, task-completion, user research with people with disabilities, and more). This [approach, strategy, princip[CUT]
<Chuck> +1 to other
sh: i.e. not everything is a test
sh: Also wanted to be more precise than saying "this"
jemma: Also wanted to meet Wilco's concern
js: Suggest we not change too much from what was agreed with COGA; so hesitant for large changes
js: Like moving the examples to sentence end
<KimD> +1 to "other" also
<Jemma> I agree with Sukriti that key indicators can mean many different things depending on people's roles.
<Fazio> How does everyone feel about "include but not limited to"?
js: asks df where that phrase would go ... in the examples?
df: yes
<KimD> I agree with Sukriti about "key indicator" language - has different context/meaning in corporate America.
js: Please file as a feedback issue because it's new and different enough
sukriti: indicators are usually very specific
<KimD> +1 to Sukriti
sukriti: concerned too much opportunity for misinterpretation
<Jemma> +1 Kim and Sukriti
js: how about eliminate "key indicators"?
sukriti: yes
<jennifer> Ah, perhaps I misunderstood. I did think it was saying that there would be key indicators to know if the item met standards. :)
jennifer: wonders whether I misunderstood, and "key indicators" was important
<Jemma> I think we are trying to resolve this sentence. "Silver guidance includes tests and other [performance indicators, quality measures, evaluation methods]. "
<Fazio> will is too absolute
<Jemma> this is the effort to reflect Wilco's objection to the word "procedure"
jennifer: Now agrees with Sukriti
<Fazio> may would be better
<Fazio> may is permissive will is similar to shall and means mandatory
ca: Believe we're at this ...
js: no!
<jeanne> Silver guidance includes tests and other evaluation methods. Some guidance may use true/false verification but other guidance will use other ways of measuring and/or evaluating where appropriate so that more needs of people with disabilities may be included (for example: rubrics, sliding scale, task-completion, user research with people with disabilities, and more).
js: Notes the example list at sentence end was strong
<Rachael> +1 the first 2 sentences
+1
<sarahhorton> +1
<Chuck> +1 first 2 sentences
<jennifer> +1
<ToddLibby> +1
<Sukriti> +1
<Chuck> +1 approach
<jeanne> Silver guidance includes tests and other evaluation methods. Some guidance may use true/false verification but other guidance will use other ways of measuring and/or evaluating where appropriate so that more needs of people with disabilities may be included (for example: rubrics, sliding scale, task-completion, user research with people with disabilities, and more). This [approach, strategy,
<jeanne> principle] includes particular attention to people whose needs may better be met with a broad testing approach, such as people with low vision, limited vision, or cognitive and learning disabilities.
<Fazio> 0
<Jemma> no objection to move on.
<JF> 0, with a note about "scoring" (but not a hill...)
<KimD> +1 to approach
<Sukriti> +1 approach
js: Seems we have the wording order ... Now we need to choose among the bracketed options ...
<sarahhorton> +1 approach
<jennifer> +1 to strategy
<JF> +1 to approach
<Jemma> +1 approach - more neutral
js: approach, strategy, or principle
<Rachael> +1 approach
+1 to approach
<AngelaAccessForAll> +1 approach
<jennifer> We already say approach in the sentence, which is difficult to read for me.
<ToddLibby> +1 approach
<Chuck> js: or just drop second approach
<Lauriat> +1 to JF, but I don't have a good replacement in mind
<Jemma> + jf
jf: suggest approach in the first use, and swap the second as "broader testing strategy"
<jennifer> Alternate terms for the second "approach": technique, procedure, method, means
<Bruce_Bailey> +1 for broad versus broader
js: "broader" asks broader than what, and leads to criticism of wcag2 that we don't want to go there
<Rachael> +1 to chuck's wording
<Fazio> robust?
<Jemma> +1 to Chuck
<sarahhorton> +1
<Jemma> +1
+1
<jennifer> +1 to Chuck
<JF> +0.5
<KimD> +1 to final sentence wording
<Sukriti> +1
<SuzanneTaylor> +1
<Bruce_Bailey> +1
<AngelaAccessForAll> +1
<ToddLibby> +1
js: About to propose entire para ...
<jeanne> Silver guidance includes tests and other evaluation methods. Some guidance may use true/false verification but other guidance will use other ways of measuring and/or evaluating where appropriate so that more needs of people with disabilities may be included (for example: rubrics, sliding scale, task-completion, user research with people with disabilities, and more). This approach includes
<jeanne> particular attention to people whose needs may better be met with a broad testing strategy, such as people with low vision, limited vision, or cognitive and learning disabilities.
<Chuck> +1 to the whole enchilada!
<Sukriti> +1
+1
<KimD> +1
<sarahhorton> +1
<Lauriat> +1
<AngelaAccessForAll> +1
<SuzanneTaylor> +1
<Bruce_Bailey> +1
<ToddLibby> +1
<Jemma> +1
<JF> +1
<jennifer> +1
<CharlesHall> +1
<Rachael> +1
<Fazio> Silver or WCAG3?
<Rachael> I can send it to COGA for a final approval by email.
bb: wonders about guidance vs guidelines
js: at time originally written, didn't know they'd be guidelines
bb: expecting testing with AT beyond bronze is not going to happen
<jennifer> Not all people with disabilities use assistive technology.
bb: Suggests "Guidelines" with cap G would be bad -- so better guidance
js: asks any objections?
Resolution: To adopt the changes to the 4.1 Requirement in response to GitHub issue #188 as follows:
<jeanne> Silver guidance includes tests and other evaluation methods. Some guidance may use true/false verification but other guidance will use other ways of measuring and/or evaluating where appropriate so that more needs of people with disabilities may be included (for example: rubrics, sliding scale, task-completion, user research with people with disabilities, and more). This approach includes
<jeanne> particular attention to people whose needs may better be met with a broad testing strategy, such as people with low vision, limited vision, or cognitive and learning disabilities.
<Fazio> lol