W3C

– DRAFT –
ARIA and Assistive Technology Community Group

25 February 2021

Attendees

Present
hadi, jongund, jscholes, Matt_King, michael_fairchild, s3ththompson, sina, weston
Regrets
-
Chair
James
Scribe
jongund

Meeting minutes

JS: First thing is issue 363

https://github.com/w3c/aria-at/issues/363

JS: We have a need for tests with a sequence of commands, commands followed by other commands

JS: Open-ended sequences: repeating a keystroke, but you may not know how many times

JS: When screen readers change behaviors the number of keystrokes may change

JS: For example between JAWS 2021 and 2022 the number of keystrokes may change

JS: Closed sequences, for example in the menubar example

JS: We need a flag to identify the type of sequence

JS: As the person writing the test you would need to open the keys file, and then use some quoted string
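
(For illustration only: a named closed sequence might be defined in keys.mjs alongside the existing single-key entries. The names and format below are a hypothetical sketch, not the actual file contents.)

    // keys.mjs -- hypothetical sketch of a named closed sequence
    export const T = 'T';
    export const DOWN = 'Down Arrow';
    // A closed sequence: a fixed, known series of keys under one name.
    export const T_THEN_DOWN = [T, DOWN];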

MK: I would like to separate the issues

MK: The closed sequences seem really legitimate to me, specifying this specific set of commands

MK: I am pushing back on the open ended, we really need to make sure we need it

MK: On the multiple command sequences, we have this list of steps

MK: We have things like the right mode and right starting point; the last step could be a sequence

MK: We want to capture output for each key pressed, for example "T" and "Down Arrow", so both are command sequences

MK: Then there are assertions on the combined output

SB: Assertions apply to the total concatenation of the key sequence

MK: You need to define something in the keys file to identify a key sequence

SB: I am over simplifying this

JS: There is only one column in the CSV, and we want to specify more than one key in the cell

MK: You could do a left brace and then...

SB: What is good about that is that it is easy to check

MK: Your assertions are for the whole sequence

Changes would be in commands.csv, example: https://github.com/w3c/aria-at/blob/master/tests/modal-dialog/data/commands.csv
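
(For illustration only, since the notation was not settled in this meeting: a multi-key cell in commands.csv might hold a quoted, space-separated list of key names, each resolving to an entry in keys.mjs.)

    // Hypothetical commands.csv cell value and its expansion
    // "T DOWN"  ->  ['T', 'Down Arrow'] after lookup in keys.mjs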

JS: They are linked through IDs

MK: The commands.csv will have an array, each item would be one of the keys

MK: It's a boon for automation?

SB: Because it is easy enough to do

SB: For a computer it is easy

MK: What you are saying is, if we are gathering from humans, we want to make it easy

SB: The computer can join it ...

MK: In the data model, if it comes from automated tests, the response will be an array of strings

JS: We want two types: strings from humans and arrays from automation
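
(A minimal sketch of the two response shapes just described, with hypothetical field names; the actual ARIA-AT data model may differ.)

    // Human testers submit one combined string per command sequence:
    const humanResponse = { source: 'human', output: 'Actions menu button, collapsed' };
    // Automation submits an array with one string per key press:
    const automatedResponse = { source: 'automation', output: ['Actions menu button', 'collapsed'] };
    // The computer can join the array into the combined human form:
    const combined = automatedResponse.output.join(', ');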

MK: Hopefully we can get an automated output toggle

SB: It is easy to add to the automated output

JS: If someone could write a JAWS script to do it

SB: Then you are done, it's just output

MK: That would be awesome, ideally the same from every screen reader

MK: We can just get handshake agreements, like an AppleScript and commander

MK: JC said it was possible

MK: Just make it a command

MK: The thing is it has to have a default configuration

SB: Once we have the JS file, it is an easier conversation

The issue is getting the script to be shipped with each screen reader

ST: This is great, and removes some indirection

MK: This is a super high priority

MK: You can write it up in 363

JS: Some of the commands can go in setup

MK: Only if they are part of setup

MK: Use the setup to get people into the position to execute the command

JS: We do not have a way to identify keys in the setup, just strings

JS: How necessary is it? You could say it could be scripted

JS: We really need a reason to have setup command sequence

MK: Adding support for a setup sequence that pulls from the keys.mjs

MK: We are looking at moving the setup script from onload to a button
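
(A minimal sketch of that change, assuming the test page exposes a setup function named setupTestPage; the function name is hypothetical.)

    // Before: setup runs automatically via <body onload="setupTestPage()">.
    // After: setup runs only when the tester activates a button.
    const runSetup = document.createElement('button');
    runSetup.textContent = 'Run Test Setup';
    runSetup.addEventListener('click', () => setupTestPage());
    document.body.prepend(runSetup);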

JG: Examples have to be designed to be scripted

MK: I would like you to raise issues with APG when there are setup script issues

MK: Is open ended really needed?

MK: For example with combobox label

MK: We are looking for deterministic output from screen readers

MK: I think the combobox case is a failure, there is unexpected output, using the closed sequence

MK: If you used a different labeling technique, we do not want the experience to be different

JS: Are we getting too opinionated about SR behavior

MK: It is being prescriptive about a bug

SB: You are saying the label twice

JS: If they press the down arrow twice and do not get to the combobox, then it is a failure

MK: It would be a separate test to press "F" to get to combobox

MK: They only fail for some of the tests

JS: If they press the down arrow one more time, then they pass

MK: That's where having multiple testers comes in; we want testers to follow directions precisely, so there will be conflicts if people do not follow the directions

JS: Do we need any additional instructions on following the example?

MK: I think we need to do a good job of onboarding people

SB: We can use experienced testers to train new testers

HR: We should observe behavior from the starting point and then navigate forward from there; NVDA handles that properly, but when you ask JAWS to switch, it starts reading the page

MK: That's because we are doing the test after page load

MK: The next issue is starting using a command, instead of on load

HR: I don't see a way in JAWS to stop reading on load

MK: We are using default configuration

JS: I am not changing the speech rate

JS: The tests are not short

SB: I don't want it to be painful for the testers

SB: If we do this with an add on, we can let them have their speech settings, but we can configure other features

MK: There should be a command to drive all of that

SB: That gets rolled into automation

JS: Issue 300

https://github.com/w3c/aria-at/issues/300

MK: This is not about test writing, so it doesn't affect you as a test writer

MK: There are some basic quality assurance tests, then we merge to the master branch

MK: Now it can be pulled into app as a draft test

MK: We have to know which tests are updated...; it's draft by default, and we want human testers to see it in their list of tests

MK: They can review it while running it and file issues

MK: Then the admin sees complete test runs with no conflicts, then it can be considered for the next step after draft

JS: Do we want people to add GitHub issues? It will be a lot of issues, and the issue has to be consistently referenced

JS: I am concerned about a huge number of open issues

JS: There is manual overhead with having one issue per test, and there is no guarantee an issue ever gets closed

JS: We need to be concerned about the signal-to-noise ratio on the issues list

MK: Can we move something to a DONE column?

MK: We can have a project where we can have an issue list

JS: That would work

MK: For a single issue we can have checkboxes, and only one person can change each checkbox
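
(For illustration, a single tracking issue per test plan could use GitHub task-list checkboxes, one per review item:)

    - [x] Review JAWS results
    - [ ] Review NVDA results
    - [ ] Review VoiceOver results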

JS: I like the idea of keeping everything in one place

MK: I use projects in the APG, I need to manually add issues

MK: Right now we are working on tree, so I am going down looking at tree features; it has been very useful

JS: If we go that route I want it to be automated

MK: GitHub is automated, so you can move it to the done column

MK: If something is no longer draft, there are three stages

MK: The first step is testing in the community group to produce results

MK: Second is results that are reviewed by screen reader developers, and they may look at where there are problems

MK: This second step is needed to move a test plan out of draft mode

MK: Three steps: "draft", "something" and "final"

MK: If screen reader developers have issues it could also be something about the plan

JS: Need modifications to the setup scripts

JS: Probably first thing for next week

Minutes manually created (not a transcript), formatted by scribe.perl version 127 (Wed Dec 30 17:39:58 2020 UTC).

Diagnostics

Maybe present: HR, JG, JS, MK, SB, ST