W3C

– DRAFT –
AT Automation Meeting

28 March 2022

Attendees

Present
David Emmanuel, Harris Schneiderman, James Scholes, mzgoddard, Rich_Noah, Steven Lambert, Weston Thayer, zcorpan_
Regrets
-
Chair
Matt King
Scribe
s3ththompson

Meeting minutes

History & Overview

This work started under the ARIA-AT CG, which does interoperability testing of assistive technologies.

How do W3C Standards work?

Need a venue: the work sits under the ARIA-AT CG umbrella now, but there are other relevant groups, like the Browser Testing and Tools (BTT) WG, and maybe AT Automation becomes its own group in the future?

Start with a draft explainer that encapsulates the scope of the project

Need implementation experience

The standard develops in parallel with implementation experience; each informs the other

Goals for AT Automation Standard

https://github.com/w3c/aria-at-automation#goals

zcorpan:

Automate testing of screen reader + web browser combinations; a sketch of one possible API shape follows the lists below.

- Ability to start and quit the screen reader.

- Ability to start and quit the web browser.

- Ability to change settings in the screen reader in a robust way.

- Ability to access the spoken output of the screen reader.

- Ability to access the internal state of the screen reader, e.g. virtual focus position, mode (interaction mode vs. reading mode).

Additional topics of concern from screen reader vendors:

- security
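As a purely hypothetical illustration of the goals above, a session-style API might look something like the TypeScript sketch below. Every name in it is invented for this discussion; no API shape has been proposed or agreed on, and driving the browser is assumed to be handled separately (e.g. via WebDriver).

  // Hypothetical sketch only: all names below are invented to illustrate
  // the goals above; nothing here comes from a spec or proposal.

  /** Internal state we would like to introspect (virtual focus, mode). */
  interface ScreenReaderState {
    mode: "interaction" | "reading"; // e.g. interaction mode vs. reading mode
    virtualFocus?: string;           // identifier of the virtually focused element
  }

  /** One possible surface for an automated screen reader session. */
  interface ScreenReaderSession {
    // "Ability to change settings in the screen reader in a robust way"
    setSetting(name: string, value: string | number | boolean): Promise<void>;
    // "Ability to access the spoken output of the screen reader"
    nextSpokenOutput(): Promise<string>;
    // "Ability to access the internal state of the screen reader"
    getState(): Promise<ScreenReaderState>;
    // "Ability to start and quit the screen reader" (the quit half)
    quit(): Promise<void>;
  }

  // The start half of "start and quit the screen reader"; the browser is
  // assumed to be started and quit separately, e.g. via WebDriver.
  declare function startScreenReader(product: string): Promise<ScreenReaderSession>;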

Matt_King: We also talked about keypress simulation and the degree to which keypresses would originate with the screen reader vs. at a lower level

s3ththompson: the consensus was that there might be multiple types of use cases: many developers might want an easy way to simulate keypresses directly via the API, while interoperability / correctness testing might prefer simulating keypresses from outside the system under test, for example, from outside a VM boundary

James: I also want to bring up the implications of supporting a screen reader in demo mode vs. pro mode, etc. How do we think about concepts like licensing?

James: And versioning: Playwright, for example, handles installing its own version of the browser

mzgoddard: I view that as potentially one part of internal state that the AT could introspect

s3ththompson: I think the versioning question might be related to the idea of "sessions" or a "headless" AT mode...

James: Certainly, but we need to think about isolation in general... I would think it would be difficult to run JAWS in isolation while running NVDA at the same time (since they might conflict with each other)

mzgoddard: Speaking of headless mode, I think that's something we may need to ask vendors about

Matt_King: I wonder if anyone knows whether we could run tests reliably in a headless mode... some behaviors depend on knowing what's visible on the screen. For example, if you're in JAWS browse mode and scrolling through a webpage, how it reacts to the next command might depend on what's visible on the screen

mzgoddard: Perhaps that's not a normative part of the spec, but an aspirational one

Weston Thayer: I wonder if the operating system is also an implicit stakeholder, since it provides the API layer here

David Emmanuel: We ended up using virtual machines because we wanted accuracy / correctness (and it didn't work with JAWS unless we used a virtual machine)

David Emmanuel: But there were other advantages too... we could scale to run tests in parallel, and we could take snapshots and store state

Matt_King: What about any difficulties?

David Emmanuel: It's true that it makes it harder to start things the first time, but we used Vagrant to automate that

s3ththompson: What about nested virtualization? Is that harder to do?

David Emmanuel: That's a good point. We have had difficulties on GitHub Actions... nested virtualization only works (for now) on a macOS image one version behind the latest... we will have to deal with that

James Scholes: We may have to think about whether a virtual sound card affects how the AT works. For me, it's a problem trying to work over a virtual desktop

Weston Thayer: I can confirm that JAWS, for example, doesn't boot correctly if there is no sound card

mzgoddard: I've used something that ran a virtual sound card over TCP

James Scholes: I think it would be nice to decide whether we expect screen reader developers to change their functionality to support the standard, or whether the standard includes tooling that provides that functionality

Steven Lambert: One of the things we think about has to do with verbosity settings and descriptions... normalizing output across screen readers so that we can make assertions that these things are the same.
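To make the idea of normalizing concrete, here is a toy sketch; the synonym table and rules are invented examples, not anything the group has agreed on:

  // Illustrative only: a toy normalizer for comparing spoken output across
  // screen readers. The synonym table and rules are invented examples.

  /** Map vendor-specific role phrases onto one canonical term. */
  const ROLE_SYNONYMS: Record<string, string> = {
    "check box": "checkbox",
    "tick box": "checkbox",
    "push button": "button",
  };

  /** Normalize an utterance so assertions can compare across screen readers. */
  function normalizeUtterance(raw: string): string {
    let text = raw.toLowerCase().trim();
    // Drop punctuation that varies with verbosity and collapse whitespace.
    text = text.replace(/[.,;:]/g, "").replace(/\s+/g, " ");
    // Replace vendor-specific role phrases with the canonical term.
    for (const [variant, canonical] of Object.entries(ROLE_SYNONYMS)) {
      text = text.split(variant).join(canonical);
    }
    return text;
  }

  // Both of these normalize to "subscribe checkbox not checked":
  // normalizeUtterance("Subscribe, check box, not checked")
  // normalizeUtterance("Subscribe tick box not checked")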

mzgoddard: We might want to think about how AOM and other W3C specs already standardize certain types of behavior

mzgoddard: Maybe we can use that to help pinpoint regressions when they happen

James Scholes: I agree with that; there is some adaptation... for example, NVDA treats everything as a list even though the API underneath has more distinctions... it would help to troubleshoot issues if we had more information, though

Minutes manually created (not a transcript), formatted by scribe.perl version 185 (Thu Dec 2 18:51:55 2021 UTC).
