W3C

– DRAFT –
ARIA and Assistive Technologies Community Group Weekly Teleconference

13 June 2024

Attendees

Present
howard-e, IsaDC, James_Scholes, MattKing, mmoss
Regrets
-
Chair
-
Scribe
howard-e

Meeting minutes

Review agenda and next meeting dates

<jugglinmike> MattKing: No meeting Wednesday June 19 (US Holiday)

<jugglinmike> MattKing: Next community group meeting: Thursday June 27

<jugglinmike> jugglinmike: I will not be available for the automation subgroup meeting currently scheduled for July 8

<jugglinmike> MattKing: let's plan to meet on July 1, instead

<jugglinmike> MattKing: Requests for changes to agenda?

<jugglinmike> MattKing: hearing none, we'll move on

Current status

<jugglinmike> MattKing: Goal: 6 recommended plans by June 30

<jugglinmike> MattKing: We're going to miss that, but we're making good progress

<jugglinmike> MattKing: 5 plans in candidate review

<jugglinmike> MattKing: 1 plan in draft review: Modal Dialog Example Test Plan Versions

<jugglinmike> MattKing: Next up: color viewer slider, disclosure navigation menu, and action menu button

<jugglinmike> MattKing: Check in on dialog testing

<jugglinmike> IsaDC: We updated the setup scripts to move focus to the first and last elements (a heading, in this case, and a button)

<jugglinmike> IsaDC: This way, the screen readers don't have to move, per se

<jugglinmike> IsaDC: We are not asking the focus or the cursors to move

<jugglinmike> IsaDC: So now, it only has to stay within the dialog

<jugglinmike> MattKing: This is for the test where we're trying to determine whether a screen reader lets the focus travel outside of the dialog

<jugglinmike> MattKing: That's why there are four tests

<jugglinmike> MattKing: We removed the dependency on the movement command so that now, you don't have to move focus to the top or the bottom--the setup script does that for you
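
A minimal sketch of the kind of setup script described above, assuming a modal dialog whose first element is a heading and whose last is a close button; the element ids and the TypeScript framing are illustrative assumptions, not the actual test plan scripts:

    // Hypothetical setup scripts: move focus to the first or last element
    // inside the dialog so testers don't need to issue a movement command.
    // Ids below are assumptions for illustration only.
    function moveFocusToFirstElement(): void {
      const heading = document.getElementById('dialog-label') as HTMLElement;
      heading.tabIndex = -1; // headings are not focusable by default
      heading.focus();
    }

    function moveFocusToLastElement(): void {
      const closeButton = document.getElementById('dialog-close') as HTMLElement;
      closeButton.focus();
    }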

<jugglinmike> MattKing: I think Joe, IsaDC, and Hadi ran tests on that one

<jugglinmike> MattKing: neither Joe nor Hadi is present today, though

<jugglinmike> IsaDC: Hadi sent an e-mail reporting that he is still observing the conflict that he originally reported

<jugglinmike> MattKing: That's the issue we closed during the meeting last week

<jugglinmike> James_Scholes: I can't reproduce this problem

<jugglinmike> MattKing: I feel like this project should be pretty careful about failing a test based on a flaky result

<jugglinmike> James_Scholes: I feel as though I would need to hear exactly how JAWS was behaving, not because I think Hadi was doing anything incorrectly, but because it's harder to interpret this as second-hand information

<jugglinmike> MattKing: The fact that MichaelFairchild also seems to get it to happen gives me pause

<jugglinmike> MattKing: I have the option of testing on at least one other machine, myself, but I don't have access to it right now

<jugglinmike> MattKing: this is such a bizarre difference!

<jugglinmike> MattKing: Suppose we could get it to reproduce on two different machines, and assuming the browser and screen reader versions match (and that the operating systems are at least of the same generation)

<jugglinmike> MattKing: Even then, it still feels a little problematic to report that it failed unless it failed consistently or we could document the requirements to observe the failure

<jugglinmike> MattKing: My inclination is to go with the systems where it passes, and then try to share documentation of the problem with Vispero

<jugglinmike> IsaDC: I agree with that

<jugglinmike> James_Scholes: I do, too

<jugglinmike> jugglinmike: I personally feel as though known-flaky behavior is unacceptable

<jugglinmike> jugglinmike: And that it is more appropriate to report that as failing rather than passing

<jugglinmike> MattKing: I hear that. Since the best course of action is kind of ambiguous, we'll go with the majority opinion here and assign a passing result for now and continue to support Hadi in getting that reported to Vispero

Website changes

<jugglinmike> MattKing: VoiceOver bot with macOS 14 is available, now! Hooray to jugglinmike and howard-e and the rest of the team because this is awesome!

<jugglinmike> IsaDC: Yes! That news made my morning

<jugglinmike> MattKing: The addition of the VoiceOver Bot didn't show up in the change log. Perhaps the issue wasn't tagged

<jugglinmike> howard-e: The UI for making VoiceOver available (at any version) was in a previous release

<jugglinmike> howard-e: The work for updating the version of VoiceOver took place in another project (that is, not ARIA-AT App) so we didn't expect it to appear in the release notes for ARIA-AT App

<jugglinmike> MattKing: Understood

<jugglinmike> howard-e: As for the test queue changes, I expect to complete review either by the end of my day today (which is fast-approaching) or early in my day on Monday.

<jugglinmike> howard-e: We're looking at Tuesday or Wednesday of next week for deployment, but I'll have Carmen reach out to you if anything comes up in the meantime

APIs to support stakeholder use cases

jugglinmike: We want to assist screen reader developers in running the tests

jugglinmike: when the developers run the tests today, they are responsible for assigning the verdicts themselves, but the verdicts are also being provided by CG members

jugglinmike: https://docs.google.com/document/d/1utBv7LiYtF_9ztk-1LgcxQniYHI3g3lVXB_JTv82Qp4/edit describes a proposal for another step the developers could take: querying the app to ask whether the CG has ever seen these responses before and, if so, what the verdicts are

jugglinmike: So the results would include the verdicts for the assertions as already seen by the CG. That would mean less work for them to compare and verify against their local testing, and, more importantly, it would help them identify what doesn't match

jugglinmike: If they are able to see the verdicts shared for previous versions, then when making changes to their live development version, they'd be able to quickly see what the differences are

James_Scholes: Is this an API endpoint where, if provided a test, assertion, and command, it answers whether you've seen this verdict?

jugglinmike: That's right

James_Scholes: It sounds like there is a comparison happening behind the scenes on the speech output (acknowledging I haven't fully gone through the proposal yet) -- I'm wary of that; giving people an API to obtain the data themselves would be more flexible for them, because they would then know what rules we're using

James_Scholes: Would this also permit batch checks? There should be some concern about the toll it may take on the system with 50+ tests across test plans

jugglinmike: That's more technical detail than the proposal currently offers, but with the current runner there would be one request that bundles everything for a single test
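
A rough sketch of how the proposed query might look. The endpoint URL and every field name below are hypothetical; the linked proposal does not specify them, so this is illustrative only:

    // Hypothetical shapes for the verdict-lookup step.
    interface ResponseQuery {
      testId: string;
      command: string;     // the command issued to the screen reader
      atResponse: string;  // the captured speech output
    }

    interface VerdictAnswer {
      seenBefore: boolean; // has the CG already reviewed this exact response?
      verdicts?: { assertion: string; verdict: 'pass' | 'fail' }[];
    }

    // One request bundles every response for a single test, as discussed.
    async function lookUpVerdicts(
      queries: ResponseQuery[]
    ): Promise<VerdictAnswer[]> {
      const res = await fetch('https://aria-at.example/api/verdicts', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(queries),
      });
      return res.json();
    }

Batching at a coarser granularity (for example, a whole test plan in one request) would be an extension beyond what the proposal currently covers.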

MattKing: So it can be used with or without our response collector (through the API)

MattKing: I'm still unclear on the full picture of how this setup may look in their CI. It may be useful to collect feedback from the screen reader developers on how they set up their CI, to see whether this would be useful to them

MattKing: [proposes getting in touch with NVAccess about that]

<jugglinmike> https://github.com/bocoup/aria-at-gh-actions-helper/blob/f4f6a0d0a6220d6550eaff064dfd40cf34d173f5/.github/workflows/voiceover-test.yml

jugglinmike: [shares details on a file in the aria-at-gh-actions-helper repository about expectations for running the system in CI]

Minutes manually created (not a transcript), formatted by scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).
