<scribe> scribe: Jean-Francois_Hector
VY: There's a new pull request:
<spectranaut> new PR: https://github.com/w3c/aria-at/pull/17
<spectranaut> this the new test design: https://github.com/w3c/aria-at/blob/initial-test-harness/tests/checkbox/read-checkbox.html
This is the test for reading the checkbox
It has two tests: reading when it's checked, and reading when it's unchecked
When writing a test, we might also need to include code to get the example widget into the correct state. So some of the tests I've just shared include a script
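For illustration, a setup script for the "read when checked" case might look like this sketch; the function name, selector, and calling convention are assumptions, not the actual harness API:

```js
// Hypothetical setup script: put the example checkbox into the checked
// state and move focus onto it before the test begins.
function setupCheckedState(testPageDocument) {
  const checkbox = testPageDocument.querySelector('[role="checkbox"]');
  checkbox.setAttribute('aria-checked', 'true'); // the state under test
  checkbox.tabIndex = 0;
  checkbox.focus(); // the tester starts with the cursor on the widget
}
```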
<spectranaut> link to the demo: https://spectranaut.github.io/aria-at/tests/checkbox/read-checkbox.html
These files are served via GitHub Pages, so if you wanted to test them locally you'd need a basic web server to run them
The URL I've shared should automatically open a pop-up with the actual test file. The pop-up might get blocked by your browser
I can add a button that says 'open test page', to get around that
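A sketch of that workaround: window.open called from a click handler counts as a user gesture, so browsers won't block it the way they can block an automatic pop-up on page load. The wiring below is illustrative:

```js
// Hypothetical "open test page" button that sidesteps the pop-up blocker.
const openButton = document.createElement('button');
openButton.textContent = 'Open test page';
openButton.addEventListener('click', () => {
  // Opening from a user gesture is allowed by pop-up blockers.
  window.open('tests/checkbox/read-checkbox.html', '_blank');
});
document.body.append(openButton);
```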
Here, what we have are all the assertions, listed by the commands we're testing. You can mark them all passed or all failed in one click, or you can specify the result for each command
MCK: These should be radios rather than checkboxes
It feels hard to navigate here because you see all the details, even when they all passed or all failed
All the checkboxes currently have the same labels, so you don't know what you're checking
It'd be good to have an option to show or hide all the details, rather than showing them all the time
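For illustration, those three suggestions might combine like the sketch below: one mutually exclusive pass/fail radio group per assertion, each group distinctly named, with the detail collapsible. The markup and names are assumptions, not the harness's actual code:

```js
// Hypothetical rendering of one assertion's result controls.
function renderAssertion(assertionText, index) {
  const details = document.createElement('details'); // collapsible detail
  details.innerHTML = `
    <summary>${assertionText}</summary>
    <label><input type="radio" name="assertion-${index}" value="pass"> Pass</label>
    <label><input type="radio" name="assertion-${index}" value="fail"> Fail</label>`;
  return details;
}
```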
MCK: I don't see where to input the last utterances if the results have all passed or all failed
All the information for all the tests could be in one utterance
You wouldn't record the same utterance three times.
Even when using the "all passed" or "all failed" shortcut, there should be an opportunity to comment in the failure case
If it's the same utterance for all the single key commands, then we don't need to collect every utterance
MCK: I'd label it "speech output" rather than "last utterance", because it might not be the last utterance
And we might rerun the same test with Braille output, so I wouldn't label it "speech output"
But with Braille the key commands will possibly be different
VY: Should we have "other detail" be only one input per command, or one input per assertion?
MCK: We'd want to associate the comment with the specific assertion
You might not necessarily need a comment; for example, a plain fail might not need one. But if the screen reader output is incorrect, it's important to record that
VY: Do we want to record that in a structured way?
MCK: There's a big difference between 'No support' and 'Incorrect support'
No support is not necessarily a sin, but incorrect support is really bad. It's really important to flag incorrect support
VY: So should we have an option for flagging an assertion as incorrect support, without needing to read the notes?
MCK: My gut is yes
If everything passes, what I'd want to do is capture the speech output, and then for each command just check that it passed
If there was a problem with one of the commands, that's where we'd want to drill down, and know whether the problem was with name, role or state
MF: I think we should differentiate between no support and incorrect support
MCK: So, in that case, instead of just having options for 'Pass' and 'Fail', we'd have more options here, e.g. 'Good output', 'No output', 'Incorrect output'
Just a thought, I don't know whether that's the best way
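One way that distinction could be recorded in a structured way, so that "incorrect support" is visible without reading free-text notes; every field name and value below is a made-up illustration, not the harness's actual result format:

```js
// Hypothetical structured result for one command/assertion pair.
const commandResult = {
  command: 'Tab',
  assertion: 'Role "checkbox" is conveyed',
  support: 'incorrect',                 // one of: 'good', 'none', 'incorrect'
  output: 'Accept terms, not checked',  // the captured speech output
  notes: 'Announced the state before the role'
};
```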
YS: Sometimes it's hard to tell whether a screen reader's output is incorrect or not
MCK: It does take a lot of knowledge about the screen reader you're testing with. But screen reader vendors will have opportunities to push back
VY: We want to make instructions clear, and potentially screen reader specific
MCK: I just don't know that we can afford to go that far. My assumption from the beginning has been that we need to have testers with knowledge of the screen readers for the project to be a success, because it's already complex
... Let's use "Good", "Incomplete" and "Incorrect" (Note from JF: I'm not sure I heard those right)
VY: The way things currently work, when a user presses submit, they get to the second part of the test
MCK: Let's use an alert when a user presses that submit button before they've fully completed the form
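A sketch of that guard, assuming the results form groups each command's radio buttons in a fieldset (an assumption about structure, not the harness's actual markup):

```js
// Hypothetical submit guard: warn when any result group is unanswered.
const resultsForm = document.querySelector('form');
resultsForm.addEventListener('submit', (event) => {
  const unanswered = [...resultsForm.querySelectorAll('fieldset')].filter(
    (group) => !group.querySelector('input[type="radio"]:checked')
  );
  if (unanswered.length > 0) {
    event.preventDefault(); // stop the incomplete submission
    alert(`${unanswered.length} result(s) still need an answer.`);
  }
});
```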
... How do you know when it is ok to close the pop-up window?
VY: Currently, it closes after you've clicked 'submit' on the last part of the test
MCK: But if we have 5 checkbox related tests, is there a way to know that you should be using the same test file?
VY: We could decide that the opening of test files isn't managed automatically by the test harness, but done manually by users when they click a button
<spectranaut> https://github.com/w3c/aria-at/issues/14
<spectranaut> JF: I've gotten a lot of feedback, but I still have two questions we need to address before we move forward
<spectranaut> JF: Question 1: the format of the Excel document. Is the format used here something we can programmatically export?
<spectranaut> Question 2: how many specific commands do we want to ask users to test with? We don't want testers to test every command, but the most important set?
<spectranaut> VY: I think we can't solve Question 1 right now because there are so many other things we need to work on
<spectranaut> we will need to solve it eventually, but for now, we can manually review the Excel sheets and then manually make JSON objects
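For example, a hand-made JSON object for one of these tests might look roughly like the sketch below; all field names and values are assumptions for illustration, not the project's actual format:

```js
// Hypothetical hand-built test object derived from a reviewed Excel row.
const readCheckboxChecked = {
  title: 'Read checkbox in the checked state',
  mode: 'reading',
  setupScript: 'setupCheckedState',  // see the setup-script sketch above
  commands: ['Tab', 'Down Arrow'],   // the "most important set", per Question 2
  assertions: [
    'Role "checkbox" is conveyed',
    'Name of the checkbox is conveyed',
    'State "checked" is conveyed'
  ]
};
```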
<spectranaut> MCK: do we want the two different instructions "Navigate to the checkbox, using each of these methods:" and "With the cursor already positioned on the checkbox, read it using each of these methods:"... are these two different tests?
<spectranaut> JF: read checkbox grouping: I've grouped the instructions to "navigate from outside to inside", then "when the cursor is already on the checkbox, read the group using..."
<spectranaut> JF: can't join next meeting
<spectranaut> thanks Jean-Francois_Hector!
No problem! Thanks for the note taking
I've just sent the notes out