W3C

– DRAFT –
ARIA and Assistive Technologies Community Group

09 December 2021

Attendees

Present
howard_edwards, James_Schoels, JoeHumbert, Matt_King, mzgoddard
Regrets
-
Chair
Matt King
Scribe
Alaina

Meeting minutes

<RichardSteinberg> https://github.com/rusteinberg

<s3ththompson> (could you paste your github username one more time? i just joined the channel)

<RichardSteinberg> https://github.com/rusteinberg

Lessons learned from running tests for STG

Matt: I want to explain what I did because it is not obvious, and I want to make sure people are aware of what it took to get a report out. Right now we're trying to get some reports available to screen reader developers for the tests that we've already drafted. We have about 40 test plans drafted. We don't have all of them in the queue in the app right now; I have not added them all to the queue, so right now we only ha[CUT]

Matt: The role of the community group is to test our tests to see if this is something we can report when we go to the screen reader developers

Matt: Before we make a report available to the developer, we want at least two people to run through all the tests in a test plan and have exactly matching results. That's our way of making sure each tester has the same understanding of the test plan; if they don't, maybe the plan isn't clear enough. In our test queue, in the status column, there's a table for each screen reader and browser combination.

Matt: For NVDA with Firefox, the select-only combobox is listed as having 18 conflicts. There were results from Lewis and Jon, and I went through each conflict. As the administrator, I have the ability to run the test as someone else. I changed the results by re-running the tests myself, trying to figure out whether each conflict was an interpretation issue or a testing issue, then modified the results accordingly and published. I'll call that the brute force method.

Hadi: When resolving those issues, what was your impression of the conflicts?

Matt: The most common conflict was people who had the same result completing the form a little bit differently, not raising an issue with the tests when an issue should have been raised. In one case there was a command in the disclosure menu test where the test said to navigate backward from the first link in interaction mode; that's the generic term we use to say that JAWS should be in forms/application mode and NVDA should be in focus mode.

Matt: In this example, if you're in forms mode in JAWS and you press the arrow key, the way that example is coded does not support the up arrow in that case. So the test was asking the tester to do something they could not do. It was a mistake in the test: we included a command that should not have been included. The expectation was that the tester would raise an issue, saying "hey, there's a mistake in the test," then James would respond and fi[CUT]

Matt: This could have been me being unclear about what to do when there's a mistake in the test. We need to correct the test before we correct any results. I think there were about five or six of those in the test plan. Those five or six differences ended up causing about 11 of the 20-something problems I resolved, so that was the most common cause of the problems.

Matt: The action here is for anybody who's running tests: we always want our first line of questioning to be "is there a problem with the test?" If there is, or if you're not sure, we have a raise issue link on the test page. Raise an issue, fill out the form, and then it can be addressed by the test consulting team.

Hadi: James, I think you are receiving our feedback when we're unsure about the tests. James, do you have the disclosure test? Task 21, the navigation dropdown. I'm using JAWS and Chrome. I see something isn't okay, but I don't have the right solution. I think it was going to the second item in the list (a disclosure menu) when it was exposed. We were hearing something like "list of two items, list of three items," so you could get the information about [CUT]

Hadi: In the parent list and sublist

James: In the meantime, while we're working on this, thank you, Hadi and Alyssa, for the feedback. I would ask that you raise this as an issue on GitHub. It's more difficult to keep track when it's only sent to the mailing list. Overall, I think this echoes Matt's conclusion: I'd like people to get in the habit of using this functionality.

James: Even if you file this as an issue on GitHub and we decide it's not an issue, we can always close it on GitHub. For everyone on the call: if you have feedback, submit it on GitHub. If it's filed in the results section, then I don't see it. I suppose that's worth reiterating: even if you raise an issue and we can't do anything about it, we can always close it, but at least we have it there.

Hadi: What do we expect? What should be expected?

Matt: I'm in the test runner, I'm on that test, and it shows me the submitted results. But when I'm in this view of the page, I can't actually access the instructions.

James: From the results page?

Matt: Yes, so when you're in the runner... I guess this is more for Howard Z and Seth. Looking at test 21, if I want to see the instructions and access the test case, I have to go into edit mode. We should probably do something about this part of the design.

Seth: That's a good point. Is the text not there?

Matt: It's just a different view of the page. In edit mode you have a whole view of the form. If I go into edit mode, I don't see a way to cancel out of edit mode without closing the page and coming back in.

Seth: If you're in edit mode you have to submit to get out. Oh I see what you're saying, you don't necessarily want to submit.

James: The thing that's being tested is the commands, not the output you initially hear after pressing the button at the bottom of the page

Matt: In this particular case, if I'm looking at the right one, when I press the setup run test button there's only one command to test, and it's pressing Escape. JAWS says "collapsed."

Hadi: How much information do you want to convey?

Matt: If I refresh and press the setup button, the navigation region and both lists are announced.

James: That's not a side effect of doing the test

Hadi: I found it interesting how much information you have to go through to understand where you are. What's the boundary? How much do we want explained?

Matt: Most of our tests are saying this is the minimum you should expose. ARIA-AT right now isn't designed to provide opinions about extra information unless that extra information is clearly a bug, because we have that field where you can describe unexpected output (for example, unrelated or repeated information).

Matt: We had a case reported where there was a repetitive announcement that interrupted something else, so that was a time when we called this out

<RichardSteinberg> Need to drop

Minutes manually created (not a transcript), formatted by scribe.perl version 185 (Thu Dec 2 18:51:55 2021 UTC).

Diagnostics

Maybe present: Hadi, James, Matt, Seth