19:07:06 RRSAgent has joined #aria-at
19:07:10 logging to https://www.w3.org/2024/06/13-aria-at-irc
19:07:15 rrsagent, make log public
19:07:25 Zakim, start the meeting
19:07:25 RRSAgent, make logs Public
19:07:26 please title this meeting ("meeting: ..."), jugglinmike
19:07:39 meeting: ARIA and Assistive Technologies Community Group Weekly Teleconference
19:08:45 present+
19:09:05 present+ MattKing
19:09:09 present+ howard-e
19:09:16 present+ IsaDC
19:09:22 present+ James_Scholes
19:09:30 topic: Review agenda and next meeting dates
19:10:02 MattKing: No meeting Wednesday June 19 (US holiday)
19:11:08 MattKing: Next community group meeting: Thursday June 27
19:12:27 jugglinmike: I will not be available for the automation subgroup meeting currently scheduled for July 8
19:12:37 MattKing: Let's plan to meet on July 1 instead
19:13:16 MattKing: Requests for changes to the agenda?
19:13:21 MattKing: Hearing none, we'll move on
19:13:30 topic: Current status
19:13:36 MattKing: Goal: 6 recommended plans by June 30
19:13:45 MattKing: We're going to miss that, but we're making good progress
19:17:28 MattKing: 5 plans in candidate review
19:17:48 MattKing: 1 plan in draft review: Modal Dialog Example Test Plan Versions
19:18:11 MattKing: Next up: color viewer slider, disclosure navigation menu, and action menu button
19:18:58 MattKing: Check in on dialog testing
19:19:36 IsaDC: We moved the setup script to the first and last element (a heading in this case, and a button)
19:19:44 IsaDC: This way, the screen readers don't have to move, per se
19:20:07 IsaDC: We are not asking the focus or cursor to move
19:20:27 IsaDC: So now, it only has to stay within the dialog
19:20:51 MattKing: This is for the test where we're trying to determine whether a screen reader lets the focus travel outside of the dialog
19:21:01 MattKing: That's why there are four tests
19:21:44 MattKing: We removed the dependency on the movement command, so now you don't have to move focus to the top or the bottom--the setup script does that for you
19:22:24 MattKing: I think Joe, IsaDC, and Hadi ran tests on that one
19:22:57 MattKing: Neither Joe nor Hadi is present today, though
19:23:28 IsaDC: Hadi sent an e-mail reporting that he is still observing the conflict that he originally reported
19:23:46 MattKing: That's the issue we closed during the meeting last week
19:28:18 James_Scholes: I can't reproduce this problem
19:28:38 MattKing: I feel like this project should be pretty careful about failing a test based on a flaky result
19:29:16 James_Scholes: I feel as though I would need to hear exactly how JAWS was behaving, not because I think Hadi was doing anything incorrectly, but because it's harder to interpret this as second-hand information
19:29:49 MattKing: The fact that MichaelFairchild also seems to get it to happen gives me pause
19:30:06 MattKing: I have the option of testing on at least one other machine myself, but I don't have access to it right now
19:30:15 MattKing: This is such a bizarre difference!
19:30:49 MattKing: Suppose we could get it to reproduce on two different machines, and assuming the browser and screen reader versions match (and that the operating systems are at least of the same generation)
19:31:28 MattKing: Even then, it still feels a little problematic to report that it failed unless it failed consistently or we could document the requirements to observe the failure
19:32:28 MattKing: My inclination is to go with the systems where it passes, and then try to share documentation of the problem with Vispero
19:32:57 IsaDC: I agree with that
19:33:07 James_Scholes: I do, too
19:40:32 jugglinmike: I personally feel as though known-flaky behavior is unacceptable
19:41:02 jugglinmike: And that it is more appropriate to report that as failing rather than passing
19:45:54 MattKing: I hear that. Since the best course of action is somewhat ambiguous, we'll go with the majority opinion here and assign a passing result for now, and continue to support Hadi in getting that reported to Vispero
19:46:01 Topic: Website changes
19:46:33 MattKing: The VoiceOver bot with macOS 14 is available now! Hooray to jugglinmike and howard-e and the rest of the team, because this is awesome!
19:46:42 IsaDC: Yes! That news made my morning
19:47:07 MattKing: The feature of adding the VoiceOver bot didn't show up in the change log. Perhaps the issue wasn't tagged
19:47:35 howard-e: The UI for making VoiceOver available (at any version) was in a previous release
19:48:09 howard-e: The work to update the version of VoiceOver took place in another project (that is, not ARIA-AT App), so we didn't expect it to appear in the release notes for ARIA-AT App
19:48:16 MattKing: Understood
19:48:44 howard-e: As for the test queue changes, I expect to complete review either by the end of my day today (which is fast approaching) or early in my day on Monday
19:49:05 howard-e: We're looking at Tuesday or Wednesday of next week for deployment, but I'll have Carmen reach out to you if anything comes up in the meantime
19:49:33 Topic: APIs to support stakeholder use cases
19:50:02 scribe+ howard-e
19:50:59 jugglinmike: We want to assist screen reader developers in running the tests
19:52:09 jugglinmike: When developers run the tests today, they are responsible for assigning the verdicts themselves, but verdicts are also being provided by CG members
19:52:11 jongund has joined #aria-at
19:52:52 jugglinmike: https://docs.google.com/document/d/1utBv7LiYtF_9ztk-1LgcxQniYHI3g3lVXB_JTv82Qp4/edit describes a proposal for another step developers could take: query the app to ask whether the CG has ever seen these responses before, and if so, what the verdicts were
19:53:43 jugglinmike: The results would include the verdicts for the assertions as already seen by the CG. That would mean less work for developers to compare and verify against their local testing, and more importantly, it would help them identify what doesn't match
19:54:25 jugglinmike: If they can see verdicts shared for previous versions, then when making changes to their live development version, they'd be able to quickly see what the differences are
19:54:54 jscholes: Is this an API endpoint where, if provided a test, assertion, and command, it asks whether you've seen this verdict?
19:54:58 jugglinmike: That's right
19:56:26 jscholes: It sounds like there is a comparison happening behind the scenes to compare the speech output (acknowledging I haven't fully gone through the proposal yet). I'm wary of that; giving people an API to obtain the data themselves would be more flexible for them, because they would then know what rules we're using
19:58:17 jscholes: Would this also permit batch checks? There should be concern about the toll it may take on the system with 50+ tests across test plans
19:59:13 jugglinmike: That's more technical detail than the proposal currently offers, but with the current runner there would be one request that bundles everything for a single test
20:00:34 Matt_King: So it can be used with our response collector or not (through the API)
20:01:37 Matt_King: I'm still unclear on the full picture of how this setup may look in their CI. It may be useful to collect feedback from screen reader developers on how they set up their CI, to see if this may be useful to them
20:01:55 Matt_King: [proposing getting in touch with NVAccess on that]
20:03:34 https://github.com/bocoup/aria-at-gh-actions-helper/blob/f4f6a0d0a6220d6550eaff064dfd40cf34d173f5/.github/workflows/voiceover-test.yml
20:03:37 jugglinmike: [shares details from a file in the aria-at-gh-actions-helper repository about expectations for running the system in CI]
20:04:04 Zakim, end the meeting
20:04:04 As of this point the attendees have been mmoss, MattKing, howard-e, IsaDC, James_Scholes
20:04:06 RRSAgent, please draft minutes
20:04:07 I have made the request to generate https://www.w3.org/2024/06/13-aria-at-minutes.html Zakim
20:04:14 I am happy to have been of service, jugglinmike; please remember to excuse RRSAgent. Goodbye
20:04:14 Zakim has left #aria-at
20:04:19 RRSAgent, leave
20:04:19 I see no action items
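
For illustration, the TypeScript sketch below shows the kind of verdict-lookup request discussed under "APIs to support stakeholder use cases": a screen reader developer's tooling collects responses for one test, sends them to the app, and gets back any verdicts the CG has already assigned to those responses. This is a minimal sketch only; the endpoint path (/api/verdict-lookup), the helper name lookUpVerdicts, and every field name are hypothetical assumptions, not taken from the ARIA-AT App or the linked proposal document.

// Hypothetical request shape: one bundled request per test, matching the
// "one request that bundles everything for a single test" idea above.
interface VerdictLookupRequest {
  testPlanVersionId: string;   // hypothetical: which test plan version the run targets
  testId: string;              // hypothetical: the individual test within that plan
  atVersion: string;           // e.g. a JAWS or VoiceOver version string
  browserVersion: string;
  responses: Array<{
    command: string;           // e.g. "Tab" or "Down Arrow"
    speechOutput: string;      // raw output collected by the response collector
  }>;
}

// Hypothetical response shape: for each command, whether the CG has seen
// this exact response before and, if so, the verdicts it assigned.
interface VerdictLookupResult {
  command: string;
  previouslySeen: boolean;
  verdicts: Array<{
    assertionId: string;
    verdict: 'passed' | 'failed' | 'unknown';
  }>;
}

// Hypothetical helper a developer's CI job could call after collecting
// responses, to compare CG verdicts against locally assigned ones.
async function lookUpVerdicts(
  baseUrl: string,
  request: VerdictLookupRequest
): Promise<VerdictLookupResult[]> {
  const response = await fetch(`${baseUrl}/api/verdict-lookup`, {  // hypothetical route
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(request),
  });
  if (!response.ok) {
    throw new Error(`Verdict lookup failed: ${response.status}`);
  }
  return (await response.json()) as VerdictLookupResult[];
}

In a CI setup like the one referenced in the aria-at-gh-actions-helper workflow, such a call could run after response collection and flag only the commands where local verdicts differ from the CG's, which is the "identify what doesn't match" goal described above. Whether batch checks across many tests would be supported is left open, per the discussion.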