Lauriat: First thing to talk
about is the Requirements. We have a chicken-and-egg problem. We
want to get the Requirements going in the right general
direction, but we still have open questions from people saying
they're not sure how it would work.
... so we want some more solid prototyping of the
conformance model done first
... the difficulty there is in defining how much needs to be done
before taking it back to the Working Group
... we need to take a look at the Requirements and make a
decision
<Lauriat> Link to the current open issues in github: https://github.com/w3c/silver/issues
Lauriat: this link goes to all open issues; not all about requirements, but several about requirements
<Cyborg> just reading now about lawsuits involving WCAG in there; does anyone know where to find out about those lawsuits?
Jeanne: We should pick another existing Success Criterion and write it from the ground up, from the test up, and include tests in a more innovative and flexible format.
Lauriat: I know the usability
evaluation test has the sticking point of many of us not having
the ability to put it together
... who should we reach out to for that
<Cyborg> which SC were you thinking of? one from COGA?
Lauriat: That would be fantastic. Ideally one from COGA, one from Mobile, one from Low Vision, etc.
Jeanne: We might not want to jump right into brand new content.
<Cyborg> is there a current SC where the current test is problematic?
Jeanne: maybe we should do one current one and one new one
<Cyborg> (like it can be passed and not really achieve its goal?)
<Cyborg> who has complained the most about the current tests? lol
Jeanne: Who could we reach out
to to create some evaluation tests?
... who has the expertise to write the evaluation and make it
valid?
<kirkwood> can we put language to what we want in a person to reach out to
Charles: I'm not clear on what kind of evaluation you're talking about creating and how that resolves the questions about the Requirements doc
Jeanne: We need to be able to
show people that we can go beyond a true/false statement of a
Success Criterion and have valid tests.
... This Requirement issue is about how can we test the content
that's in Silver.
Charles: So the assumption
is that alternate methods of testing that don't
produce binary true/false results but have some other result on
a scale are still valid
... and useful
... The tasks you come up with need to prove that the scale is
valid and useful, to validate that assumption.
... or the people that say we need true/false will say you
proved your assumption wrong.
Lauriat: That's why using an existing
success criterion would be helpful: people know and
acknowledge there is a perceived subjectivity to the quality of
a label or alternative text.
... If we can show it through that scale, it could go a long
way in demonstrating how this could work
Charles: First you want to show a non-binary test result is useful. Then you want to determine whether there are only certain Success Criteria that a non-binary test result is useful for.
<kirkwood> prove a non-binary test result is useful, then determine which success criteria non-binary test material is good for - excellent point!
Jeanne: Many of them have a quality test which could be scaled. There are a lot where we could write a scaled test.
<Cyborg> can you please provide an example of one that would do well with quality test?
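As a purely illustrative sketch of what a "quality test which could be scaled" might look like (the rubric, weights, and function names below are hypothetical, not anything proposed in the meeting), alt-text quality could be scored on a 0-to-1 scale instead of a binary pass/fail:

```python
# Hypothetical rubric for scoring alternative text on a 0.0-1.0 scale
# rather than returning a binary pass/fail. Purely illustrative; the
# penalties and thresholds are invented for this sketch.

def score_alt_text(alt: str) -> float:
    """Return a quality score between 0.0 and 1.0 for an alt attribute."""
    text = alt.strip().lower()
    if not text:
        return 0.0  # missing or empty alt text: lowest score
    score = 1.0
    # Penalize placeholder-style text that conveys no content.
    if text in {"image", "picture", "photo", "graphic"}:
        score -= 0.6
    # Penalize redundant openers like "image of ...".
    if text.startswith(("image of", "picture of", "photo of")):
        score -= 0.2
    # Penalize text too short to describe much, or so long it may
    # belong in a longer description instead.
    if len(text) < 5:
        score -= 0.3
    elif len(text) > 150:
        score -= 0.2
    return max(score, 0.0)

def rating(score: float) -> str:
    """Map the scale onto named levels, illustrating a possible cut-off."""
    if score >= 0.8:
        return "good"
    if score >= 0.4:
        return "adequate"
    return "poor"
```

The point of the sketch is that a conformance model could report `rating(score_alt_text(alt))` per image, giving evaluators a graded result instead of forcing every case into true/false.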
kirkwood: Charles' point
resonated with me. The aspect of the non-binary testing
paradigm seems like a critical decision point for us
... I think it's a very good point and something we should
think through
Jeanne: Do you think we need to prove it or that we have enough research of people wanting it for it to be good enough?
Charles: I'd think it'd need to be proven if it's a normative statement claimed in the Requirements aspect
Lauriat: The research we have just proves that people want it
Charles: Like Color Alone. If you use color plus an attribute, it might pass the intent...
Jeanne: I was thinking of doing
something with images: testing whether an image is usable,
using task-based assessment. But color alone and contrast could
also be good.
... we need to get people that are comfortable writing the
tests
Lauriat: Sounds like before we
send out the Requirements, we need to show that other types
of testing can work.
... including task-based assessment instead of element-based
assessment. And this is something we want in order to better
reflect the user's experience
... is that something we need to better prototype or is it not
as much of an issue as the testing and measurability
Jeanne: Not as much of an
issue
... we could do, but let's not lock it into the Requirements
yet
<kirkwood> use case based assessment is a very captivating solution though
Charles: For testing and
measurability, the same solution that exists in other testing
contexts applies: external testing can be valid to prove the
assumption
... other models in the world that work with both binary and
non-binary testing patterns can serve as validation of the
assumption
... like if we were using the LEED model; if there are examples
out there that already have two simultaneous, seemingly
opposed testing methods that are valid...
<kirkwood> other standards such as the LEED model deserve research to give examples, agreed
Jeanne: If we gave people an example, they would see the validity from their own experience
<kirkwood> give an example and how it applies
Charles: Is bringing a single example going to address the concerns? Or do we need to create a test in order to prove it?
kirkwood: My feeling is the latter on that.
Jeanne: the example would be a
test. It's something they could get their hands around as a
valid test that could work.
... do we have to prove that other types of testing are valid
in themselves?
... if that came up as an issue we could refer them to
research, but just having a thorough example would be
persuasive
Lauriat: Would be better to have examples that show different kinds of guidelines that we could have tests for like that
Jeanne: I'd be happy if we had one example at this point.
<kirkwood> I agree with that
<Lauriat> Testing efficiency question: https://github.com/w3c/silver/issues/39
Jeanne: We also have two other documents of issues from AGWG.
<jeanne> More Issues from AGWG Surveys https://docs.google.com/document/d/11eSnUw9iBf_07GZsna5ozj--zZx6DTATguaTv8uauEo/edit#heading=h.e4f7qe4qdp34
Jeanne: they have two surveys done about the Requirements document. We answered all of those.
<jeanne> Second AGWG survey: https://docs.google.com/document/d/1gQENTuHuOUErWHv-1YikFtJw_SNA8aERRKJyaaI5_1o/edit#heading=h.e4f7qe4qdp34
Jeanne: we addressed a number of
these, but not all of them
... there are some good ones in there
Lauriat: What do we still need to address?
Jeanne: We fixed typos, changes
to wording. There were things related to the conformance model
that had thoughtful questions.
... I think we need some real examples instead of
hypotheticals
Charles: I think there's still a
third category. If it's not pass/fail and it's a scale, where's
the cut-off point?
... it goes back to the human need and the intent.
... if the criterion is written similarly to WCAG, then it would
be easy to keep it on a pass/fail testing method.
... but if you go back to the intent behind the criterion and
you can reach that intent in another way, it's not necessarily
pass/fail or another method. It's validated by the test
... there's a third option: validate the intent of the
criterion
... even if it's not pass/fail, or not pass/fail based on the
criterion or a scale, the test could still validate that the
content achieved the intent of the criterion
Jeanne: Can you write something up?
Charles: I can write up an example considering color alone
Jeanne: If anyone could write a
general idea of how you could test it, we can get it to someone
to build off of
... go look at what we could possibly do and at least start
it
kirkwood: Not sure I fully understand the ask
Jeanne: Pick something in COGA
that needs more than a binary test
... write up an idea of how it could be tested
... it doesn't need to be super polished or have technical
statistical validity; we need to bring together examples of how
we could do this
kirkwood: usability testing or automated testing?
Jeanne: If you have an example for automated, go for it.
kirkwood: IBM has been
experimenting with a content simplifier. It would take the
content on a page and simplify it...there might be something in
there
... not sure what that would mean in the world of real
users
Jeanne: There is a test in a WCAG technique related to Readability. We might be able to build upon that.
KimD: Do we have access to the COGA work?
kirkwood: Yeah, it's all in Github
KimD: Could we get a link to those things as a starting point for review?
Jeanne: I have a link that's publicly available... I have several links to things that were deferred to Silver
<jeanne> David McDonald made this spreadsheet: https://docs.google.com/spreadsheets/d/1XShLFX8fxHYYLn8A6avDwu37w9JfnZCGWvAKBpK9Xo4/edit#gid=264773938
<jeanne> ... it lists all the SC that didn't make it into Silver
KimD: I heard there were things that didn't even get that far from the task force.
Jeanne: We have the original list from COGA...like 50-ish proposals
<Lauriat> Maybe this? https://github.com/w3c/wcag21/issues?q=is%3Aissue+is%3Aclosed+label%3ACOGA
Lauriat: They closed everything out, but I filtered for the label "COGA" and it has 52 items in there. There may be some subfiltering to find what didn't make it in
kirkwood: that's exactly what I was looking for
Jeanne: And there's a list that says "defer"
Lauriat: we're pretty much at
time; how do we make sure we can carry on from here?
... we can reach out to some folks to send out requests for
help
Jeanne: Who else is taking an
action item
... Write a test
<jeanne> Jeanne: Or reach out to someone you know who could write one.
Present: AngelaAccessForAll, Lauriat, LuisG, Makoto, kirkwood, jeanne, Cyborg, KimD
Regrets: Jennison
Scribe: LuisG
Date: 06 Nov 2018