W3C

– DRAFT –
(MEETING TITLE)

14 March 2022

Attendees

Present
Cameron Cundiff, Cynthia Shelly, Glen Gordon, Harris Schneiderman, James Craig, James Scholes, Lauriat, Matt_King, Steven Lambert, Travis_Leithead, zcorpan
Regrets
-
Chair
-
Scribe
s3ththompson

Meeting minutes

History & Overview

We started with the ARIA-AT CG's interoperability testing work.

How do W3C Standards work?

Need a venue; this is under the ARIA-AT CG umbrella now, but there are other relevant groups, like the Browser Testing and Tools (BTT) WG, and maybe AT Automation becomes its own group in the future?

Start with a draft explainer that encapsulates the scope of the project

Need implementation experience

Standard develops in parallel with implementation experience, as much as the other way around

Glen Gordon: Are we coming in at an early stage or has something already been written?

<Travis> Link to the explainer here?

https://github.com/w3c/aria-at-automation

<Travis> 👍

Matt_King: we are coming in at an early stage; we've just done research and R&D to make sure that we're able to have an informed conversation

James Craig: I have a few points I wanted to add to make sure they're covered: 1) there is a lot of precedent in the work Joanie from Igalia did in the past; we might want to involve them and make sure we review that precedent 2) there is also some precedent for changing settings programmatically, from the work that was done to enable restoring sessions

James Craig: 3) Want to make sure we cover security from an early stage, because there are lots of tricky parts to that

Goals for AT Automation Standard

recitation of goals from draft explainer: https://github.com/w3c/aria-at-automation#goals

Cameron Cundiff: are we trying to test screen readers by capturing their output, or just to test adherence to existing accessibility APIs?

zcorpan: this project is more about testing ATs themselves, but it could be complemented by testing other parts of the accessibility stack

Matt_King: there are other projects, like the accessibility object model (AOM) that would do more of what you're asking. This isn't that... this is about ensuring that the screen reader experience that you see today doesn't regress in some way

Matt_King: the primary goal here is to test screen reader behavior itself. we're trying to ensure that screen reader and browser behavior (and the interaction between the two) hasn't changed in a way that would break expectations
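
The regression-testing framing above can be sketched as a simple check of captured speech output against a recorded baseline. This is an illustrative assumption about how such a test might be written, not part of any published draft; the function name and utterance strings are hypothetical:

```python
# Hypothetical sketch: compare a newly captured screen reader utterance
# against a recorded baseline to detect behavioral regressions.
# The utterance strings below are illustrative assumptions.

def check_for_regression(captured: str, baseline: str) -> bool:
    """Return True if the captured speech output matches the baseline."""
    return captured.strip() == baseline.strip()

baseline = "Subscribe, checkbox, not checked"

# Matches the baseline: no regression detected.
assert check_for_regression("Subscribe, checkbox, not checked", baseline)

# Output changed: the screen reader experience has regressed.
assert not check_for_regression("Subscribe, checkbox, checked", baseline)
```

A real harness would capture the utterances through some automation interface rather than hard-coding them, but the assertion at the end is the essence of the goal Matt_King describes.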

Matt_King: James, should your concerns about security / privacy be framed as a goal?

James Craig: I think that would be helpful. VoiceOver and other AT products, for example, have greater access to the system than other long-running apps (e.g. access to login screen)

James Craig: there are some contexts, e.g. in Xcode developer tools, where app developers can run a variety of automation tools on their own app... but it's limited to that tool. In addition, the aspect of XSS (cross-site scripting) could be a risk... effectively, a screen reader which might have access to run in a browser with multiple tabs could be used to enable XSS

James Craig: those issues have limited the ability to ship developer-friendly tooling in the past, so I think it would be good to capture this as a goal to make sure others are aware of it and thinking about it

zcorpan: Is the XSS-esque security concern something that WebDriver already exposes?

James Craig: I think maybe we could set up some sort of sandboxing framework?

Seth Thompson: maybe this question is related to the question of a testing "session"?

James Craig: it might also be useful to frame this in terms of user expectations: explicitly invoking tools for a specific purpose in a safe and privacy-friendly manner

Cameron Cundiff: recently CircleCI and GitHub CI disabled SIP in their CI environments. I don't know that they understand the implications, but it is likely that that's a vector for security issues

(when System Integrity Protection, or SIP, is disabled, it's possible to programmatically turn on VoiceOver. Turning SIP off in production is not recommended)

https://github.com/actions/virtual-environments/issues/4770

Glen Gordon: from JAWS side, this would never be on-by-default... would need setting or certificate or something to protect the average user

another open question has to do with whether the API should include simulating key presses

James Craig: if you're talking about OS / HID-level, that's a big security risk

James Craig: also, are we limiting to screen readers, or would this apply to something like a switch control too?

James Scholes: I'm also interested in testing alternative gestures and input devices. So to me, the API should perhaps trigger a "simulated" input type, e.g. "please react *as if* this was the right-swipe gesture on a trackpad"
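
James Scholes' idea could be sketched as a command payload asking the AT to react as if a gesture occurred, with no OS/HID-level event involved. The method and parameter names below are purely hypothetical assumptions, not from any published draft:

```python
# Hypothetical sketch of a "simulated input" command: the screen reader
# reacts *as if* the gesture happened, rather than receiving a real
# OS/HID-level input event. All method and field names are illustrative
# assumptions, not part of any specification.

def make_simulated_input_command(input_type: str, action: str) -> dict:
    """Build a WebDriver-style JSON command asking the AT to simulate input."""
    return {
        "method": "interaction.simulateInput",
        "params": {"inputType": input_type, "action": action},
    }

# e.g. "please react as if this was the right-swipe gesture on a trackpad"
cmd = make_simulated_input_command("trackpadGesture", "swipeRight")
```

Framing the input as a simulation at the AT level, rather than injecting OS-level events, sidesteps the HID-injection security risk James Craig raises below while still exercising the AT's reaction to the gesture.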

s3ththompson: the question about OS-keypresses has more to do with asking: "does a simulated keypress make sense in the context of an API that is implemented by the screen reader"

Glen Gordon: for JAWS, there is a dance where we "eat" the keypress and then re-emit it

Glen Gordon: there's a danger in passing keypresses on to the screen reader, because the system may be in a state where it "eats" a key (because it's in some sort of virtual mode) when in reality that key should have gone to the browser

Glen Gordon: the above applies only to keys attached to JAWS scripts... so it doesn't apply to alphanumeric keys

Aaron Leventhal: I'd like to use this API to test the Chrome UI itself... so I'd like to ensure that the input method is as close as possible to the keyboard input a user would actually type

s3ththompson: I think we'll do monthly meetings going forward

but please try to continue conversations over GitHub issues as well

s3ththompson: in the meantime, please join the Community Group and mailing list at https://www.w3.org/community/aria-at/

Backup of the chat log

zoom chat log https://www.irccloud.com/pastebin/ziHYv45F/zoom_chat_log.txt

Minutes manually created (not a transcript), formatted by scribe.perl version 185 (Thu Dec 2 18:51:55 2021 UTC).

Diagnostics

Maybe present: s3ththompson