18:54:38 MEETING: ARIA and Assistive Technologies Community Group Weekly Teleconference
18:54:47 CHAIR: Matt King
18:54:54 present+
18:56:33 TOPIC: Review Agenda and Next Meeting Date
18:57:25 Agenda is available at https://www.w3.org/events/meetings/402ca109-ea77-4ad0-b088-1ccd8a0f00c5/20230330T120000#agenda
18:57:47 Next meeting is scheduled for April 6.
19:03:04 Sam_Shaw has joined #aria-at
19:04:02 jugglinmike has joined #aria-at
19:07:59 present+ jugglinmike
19:08:03 scribe+ jugglinmike
19:08:11 present+ Matt_King
19:08:22 present+ Sam_Shaw
19:09:38 michael_fairchild has joined #aria-at
19:10:37 Matt_King: Our next meeting will be April 6, as per the usual schedule
19:10:45 present+ michael_fairchild
19:11:10 TOPIC: AT Support Table Launch Update
19:11:46 Matt_King: We have made a lot of progress since last week!
19:12:30 Matt_King: The support tables for Button and Toggle Button have been merged to the main branch. That does not mean they are live yet, but they will be
19:13:08 Matt_King: Alert, Link, and Radio are in progress and will be merged soon
19:13:32 Matt_King: James Scholes and I have tentative plans to meet with Vispero next week
19:13:39 Matt_King: We will also attempt to meet with Apple
19:13:59 Matt_King: These two stakeholders will see support tables for all five patterns we're targeting
19:14:15 Matt_King: They'll also see a draft of the announcement, and I'll share that draft in this meeting next week
19:14:58 Matt_King: The live reports on the ARIA-AT site for Alert, Button, and Toggle Button have all been updated
19:15:11 Matt_King: I still have to do that for Radio and Link
19:15:30 Matt_King: Bocoup fixed the issues with those that I'd previously reported
19:15:41 Matt_King: So things are all lining up for the launch!
19:16:05 TOPIC: Current testing check-in
19:16:22 present+ James_Scholes
19:17:18 Matt_King: We're going to get more data for the five plans. Originally, we only had data for JAWS and NVDA in Chrome, and VoiceOver in Safari. We're hoping to get even more combinations in time to be live for April 13
19:17:48 James_Scholes: All testing is complete for Command Button and Toggle Button, with two testers each. One conflict
19:18:26 James_Scholes: Command Button for NVDA and Firefox is complete for two testers. Toggle Button for NVDA and Firefox is completed on the PAC side, but it looks like Alyssa has not yet started
19:18:51 James_Scholes: VoiceOver with Chrome is complete from PAC and from John, but it looks like there are two conflicts
19:19:13 James_Scholes: Toggle Button in VoiceOver and Chrome is complete from two testers, but there are seven conflicts
19:20:13 James_Scholes: Good progress--only one test run still to be completed (from two plans, three combinations with two testers each)
19:20:50 Matt_King: Once you look into that conflict, let me know if we need to put any conflict resolution issues on the agenda for next week
19:21:08 TOPIC: Process (Working Mode) Questions (Issue 914)
19:21:26 https://github.com/w3c/aria-at/issues/914
19:21:45 Matt_King: Some background: we're working on some analysis of the current app functionality and comparing it to the working mode
19:22:23 Matt_King: I'm building a [GitHub] Project to map out exactly which requirements of the working mode, necessary for delivering "recommended" reports, are not supported (either correctly or at all)
19:22:39 Matt_King: I haven't referenced that [GitHub] Project here yet. We'll talk more about that later
19:22:55 Matt_King: As I'm doing that, I'm going through the working mode and looking at various scenarios for how we use it
19:23:12 Matt_King: The first scenario -- the "happy path" or "scenario 0"
19:24:16 Matt_King: A perfect draft goes into the working mode and goes straight to community feedback. Everyone runs it with no feedback and there's no conflict. It goes to the "candidate" phase, the implementers look at it and approve it without comment, and it reaches the "recommended" phase
19:24:34 jongund has joined #aria-at
19:24:38 Matt_King: When reviewing with "scenario 0" in mind, I came up with three questions
19:24:56 Matt_King: Those are listed in the GitHub issue we're discussing now
19:26:30 Matt_King: First question: "Should we scope test plans to a group of ATs with identical testing requirements?"
19:26:49 Matt_King: Right now, the scope of all of our test plans is JAWS, NVDA, and VoiceOver for macOS
19:26:58 Matt_King: There are two reasons why scope is super-important
19:27:16 Matt_King: One is that we seek consensus from major stakeholders, which include developers of the ATs
19:27:42 Matt_King: Two is that it determines which ATs we consider when we're trying to prove whether a test is good
19:28:35 Matt_King: At some point in the future, we will be testing VoiceOver for iOS and TalkBack for Android. We'll also be testing Narrator and maybe ChromeVox. Beyond that, we'll hopefully be testing voice recognition and eye gaze (way down the road)
19:29:06 Matt_King: What should we do when we add additional ATs? Should they be new test plans? Or should they get added to an existing test plan?
19:30:58 James_Scholes: Second question: do all future ATs have the same testing requirements?
19:32:04 James_Scholes: I ask because when you create a test plan, it's possible to have only a subset of tests that apply to a given AT. For instance, the "mode switching" tests apply to NVDA and JAWS, but they do not apply to VoiceOver
19:33:17 James_Scholes: If we were to update an existing test plan to add voice recognition commands (for example), we could extend all of the existing tests to support speech recognition commands, and if we decided that a particular test did not apply, we could simply omit it.
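
[Editor's note: a minimal sketch of the per-AT scoping James Scholes describes above, where a single test plan carries all the tests and each test declares which ATs it applies to. The shapes and names below are illustrative assumptions, not the actual ARIA-AT test format.]

```typescript
// Illustrative sketch only: field names and shapes are assumptions,
// not the actual ARIA-AT test plan format.
interface Test {
  id: string;
  title: string;
  appliesTo: string[]; // e.g. ["jaws", "nvda"] -- omit an AT to skip the test
}

interface TestPlan {
  pattern: string; // e.g. "toggle-button"
  tests: Test[];
}

// Select only the tests that apply to a given AT, so one plan can cover
// ATs with overlapping but non-identical testing requirements.
function testsFor(plan: TestPlan, at: string): Test[] {
  return plan.tests.filter((t) => t.appliesTo.includes(at));
}

// Example: a "mode switching" test applies to JAWS and NVDA but not VoiceOver.
const plan: TestPlan = {
  pattern: "toggle-button",
  tests: [
    { id: "nav-reading", title: "Navigate in reading mode", appliesTo: ["jaws", "nvda"] },
    { id: "activate", title: "Activate the button", appliesTo: ["jaws", "nvda", "voiceover_macos"] },
  ],
};
console.log(testsFor(plan, "voiceover_macos").map((t) => t.id)); // ["activate"]
```

Under this shape, adding a new AT means tagging existing tests (or adding new ones) within the plan rather than creating a parallel plan, which is the trade-off discussed next.
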
19:33:48 James_Scholes: So I'm inclined to do that rather than create a whole new test plan
19:34:06 michael_fairchild: My process is similar to what James_Scholes has outlined
19:34:39 Matt_King: Let's talk about different possible approaches before discussing the pros and cons of particular approaches
19:35:51 Matt_King: We could look at ATs that have essentially the same functionality--desktop screen readers as a category. They largely perform the same functions in very similar ways. But they're quite different from mobile screen readers in fundamental ways. And very different from eye gaze, voice control, and magnification
19:37:03 Matt_King: We could have a test plan scoped to just a specific type of AT where they essentially mirror one another, where we have the need to support similar tests. Maybe not identical tests, but where we only have occasional need for minor differences
19:37:25 Matt_King: Or we could group them in broad categories: "all screen readers" or "all eye gaze ATs"
19:38:27 michael_fairchild: What if we limited each test plan to a single AT?
19:38:47 Matt_King: If we did that, we'd have to determine which test plans require agreement with one another in order to establish interoperability
19:39:44 Matt_King: If I compare ARIA-AT to wpt.fyi... In wpt.fyi, we have a spec like the one for CSS flexbox. It contains normative requirements, and those requirements are translated into tests
19:40:26 Matt_King: I kind of look at the set of tests in a test plan as equivalent to the tests in wpt.fyi
19:41:20 Matt_King: For everyone who makes "widget X", the test plan is a way of saying, "here is the set of tests to verify that you have created an interoperable implementation of 'widget X'"
19:41:48 michael_fairchild: So a test plan is a way to verify that several ATs are interoperable. Is that the only way to verify interoperability?
19:42:01 Matt_King: For sure not--keep thinking outside the box!
19:43:02 James_Scholes: If we just limit ourselves to the screen reader and browser combinations that we have now, we are basically saying that it's acceptable to compare across all of those
19:43:42 James_Scholes: Is it reasonable to make the same assertion after adding additional screen readers? Do we expect to hold iOS VoiceOver to the same standards as the macOS version?
19:44:12 James_Scholes: Would it be reasonable to compare the set of results between a screen reader and a voice recognition tool (given that the tests could be significantly different)?
19:45:07 Matt_King: Right now, we list the test plans along the left-hand column. But actually, right now, those test plans are synonymous with a test case.
19:46:24 Matt_King: Let's say that we're adding support for Combobox with eye gaze tools... The tests are completely different, but we can still give a report about how well a particular eye gaze tool satisfies the expectations
19:46:58 James_Scholes: It doesn't make sense to compare the support of JAWS and Dragon NaturallySpeaking for a given pattern
19:47:29 James_Scholes: It makes sense mathematically, but users may be using both of those ATs
19:47:53 James_Scholes: It also makes me think that the table would grow much too large
19:48:32 Matt_King: The presentation doesn't concern me so much. We could aggregate the data in many ways
19:49:09 James_Scholes: I still think that they would be better served by separate categories
19:49:24 James_Scholes: e.g. one for screen readers and one for magnifiers
19:49:50 James_Scholes: as opposed to having them all mixed: "here are the results for five screen readers and four magnifiers" etc.
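
[Editor's note: to make the aggregation question concrete, a sketch of the category-separated reporting James Scholes argues for, where support data is only tabulated within an AT category rather than in one mixed table. Category names, data shapes, and numbers here are all hypothetical.]

```typescript
// Hypothetical sketch: grouping support results by AT category so that
// comparisons only happen within a category, never across categories.
type Category = "screen-reader" | "magnifier" | "voice-recognition";

interface SupportResult {
  at: string;      // e.g. "jaws"
  category: Category;
  passing: number; // assertions passing
  total: number;   // assertions tested
}

// Report per-category tables rather than one table mixing all ATs.
function byCategory(results: SupportResult[]): Map<Category, SupportResult[]> {
  const grouped = new Map<Category, SupportResult[]>();
  for (const r of results) {
    const bucket = grouped.get(r.category) ?? [];
    bucket.push(r);
    grouped.set(r.category, bucket);
  }
  return grouped;
}

// Example with made-up numbers: JAWS and Dragon end up in separate tables,
// so their percentages are never presented side by side.
const results: SupportResult[] = [
  { at: "jaws", category: "screen-reader", passing: 18, total: 20 },
  { at: "nvda", category: "screen-reader", passing: 19, total: 20 },
  { at: "dragon", category: "voice-recognition", passing: 7, total: 10 },
];
for (const [category, rows] of byCategory(results)) {
  console.log(category, rows.map((r) => `${r.at}: ${Math.round((100 * r.passing) / r.total)}%`));
}
```
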
19:50:49 Matt_King: I can imagine that for some patterns, the expectations for all desktop screen readers are the same
19:51:24 Matt_King: But when it comes to desktop screen readers versus mobile screen readers, we may end up with dedicated tests that are quite different
19:53:05 Matt_King: We have to consider when/why we are asking AT developers to revisit test plans. If we change an existing test plan by adding VoiceOver for iOS, does it make sense to ask Vispero to review the new version of the test plan?
19:54:25 Matt_King: Do we have to "re-do" the transition from Candidate whenever we add new ATs to a Recommended test plan?
19:55:04 Matt_King: We might say that two products are different enough that they need separate test plans for the same pattern
19:55:46 Matt_King: But if we add Narrator to the test plan that JAWS, NVDA, and VoiceOver already went through, I would expect that those three already agree.
19:57:15 jugglinmike: Doesn't that give undue preference to the ATs which happen to participate earlier?
19:57:19 Matt_King: Yes
20:02:19 James_Scholes: It seems undesirable to have to revisit consensus that we've already obtained whenever adding a new AT
20:04:12 James_Scholes: I'd like to explore a concrete scenario in which adding a new AT would require the tests in a recommended test plan to be changed
20:07:13 Matt_King: We're out of time. We will continue this discussion. We'll get answers to these questions and make whatever changes to the working mode they imply. Thanks, all!