19:03:48 RRSAgent has joined #aria-at
19:03:48 logging to https://www.w3.org/2021/08/30-aria-at-irc
19:04:00 rrsagent, make log public
19:04:03 present+
19:04:07 CHAIR: Michael Fairchild
19:04:11 MEETING: ARIA and Assistive Technologies Community Group
19:04:18 rrsagent, make minutes
19:04:18 I have made the request to generate https://www.w3.org/2021/08/30-aria-at-minutes.html s3ththompson
19:04:23 TOPIC: AT Automation Update
19:05:14 jugglinmike has joined #aria-at
19:06:45 present+
19:07:37 present+
19:09:37 ST: We've been working on potential approaches for making the "automation voice" accessible
19:09:42 scribe: s3ththompson
19:10:01 ST: wrote a description of the issue here: https://github.com/bocoup/at-automation-experiment/issues/1
19:11:26 MP: we're still thinking about automation as a system of tools that work together. At a high level they might be coordinated by a consistent API, but at a low level they might be implemented in different ways across different OS/AT combos
19:18:12 MP: 1. Screen reader + screen reader, 2. Screen reader + screen reader in VM, 3. Screen reader + plugin to retrieve speech data, 4. Automation voice + automated toggling, 5. Automation voice + ability to vocalize, 6. Automation voice + forward to built-in voice, 7. AssistivLabs
19:20:36 MF: For 7. AssistivLabs, would that use the automation voice or a plugin?
19:20:40 MP: could use either
19:23:42 MF: so just to step back and restate the question... this is a question of both the UX and usability...
19:24:11 MP: yes, but I want to leave space to recognize that this may be dangerous, not just a UX issue
19:24:57 MF: how do non-sighted AT voice developers build AT voices? I think that's the crucial question.
I don't feel comfortable making that determination
19:27:26 ST: perhaps we could reach out to some developers in our extended network to try to collect some feedback there
19:31:08 MP: if we did have to write a vocalizer, the question becomes: would we have to support languages other than English? How much does this explode the complexity of what we're working on?
19:32:11 MP: also, to speak to the plausibility of wrapping the automation voice in a VM... it seems quite a challenge to work out the platform/licensing issues around spinning up VMs on all OSes... it also seems like a lot of extra work
19:34:14 MF: I think the next step is to reach out to James and other non-sighted contributors. My take would be to go for 4. Automation voice + automated toggling, given the complexities around the other options, but I would defer to community consensus here
19:35:09 MP: By the way, speaking of feasibility on macOS, we investigated https://github.com/ckundo/auto-vo and unfortunately it may not be as robust as we originally hoped. The project uses polling to check the last utterance
19:35:40 MP: since it's not event-based, it can't tell us if the same thing was uttered twice, or if two things were uttered in rapid succession.
19:35:52 MF: I can definitely see that being problematic, especially for something like ARIA live regions
19:37:00 re: custom voices on macOS, https://www.cereproc.com/ claims to support it
19:37:16 > CereProc's SAPI voices are compatible with Microsoft SAPI 5 and are supported on Windows XP, Windows Vista, Windows 7, Windows 8, Windows 8.1 and Windows 10. They appear in the Windows Text-to-Speech Control Panel. We recommend using our SAPI voices on systems with at least a 1GHz processor and 256MB RAM. CereProc's Mac voices are supported on Lion, Mountain Lion, Mavericks, Yosemite, El Capitan, Sierra, High Sierra, Mojave and Catalina. The[CUT]
19:37:33 ...
They add to the system voices list, found under 'Accessibility > Speech' in 'System Preferences'.
19:47:38 ST: want to also bring up that we should still pursue, as a long-term project, getting first-party support for some new shared APIs
19:48:20 Here's the NVDA lib by Sebastian (using NVDA's log): https://github.com/eps1lon/screen-reader-testing-library
19:49:01 WT: just want to underscore that in the long run we really do need this kind of API. I spoke with someone from Material Design who worked on a similar black-box library (https://github.com/eps1lon/screen-reader-testing-library) but found that he was stymied by throttling
19:50:44 And here's an example test using that lib: https://github.com/eps1lon/mui-scripts-incubator/blob/main/lib/a11y-snapshot/screen-reader.test.js
19:52:29 MP: I want to caution against saying that a first-party API would obviate the need for the black-box testing approach... there's a question of trust... we might always want to validate that the black-box testing yields the correct assertions
19:52:50 ST: good point, I guess it's not so much short-term vs. long-term options as a two-pronged approach
19:54:16 MP: right, and I would also say that it's to our benefit to highlight other use cases beyond strict assertion testing when we go to vendors to ask for better API support
19:55:21 Another potential macOS third-party voice example: https://www.assistiveware.com/legacy-apps (Infovox iVox)
19:59:47 MF: Next steps: run the proposal by James; Seth to try to contact external AT voice contributors
20:00:22 rrsagent, make minutes
20:00:22 I have made the request to generate https://www.w3.org/2021/08/30-aria-at-minutes.html s3ththompson
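[Scribe's note: a minimal sketch of the polling limitation MP raised at 19:35:40. The names here (SpeechSource, last_utterance, speak) are hypothetical for illustration; this is not auto-vo's actual API, just the general pattern of sampling a "last utterance" value versus subscribing to speech events.]

```python
class SpeechSource:
    """Simulated screen-reader speech output."""

    def __init__(self):
        self.last_utterance = None  # the only state a poller can observe
        self.listeners = []         # callbacks an event-based API would invoke

    def speak(self, text):
        self.last_utterance = text
        for callback in self.listeners:
            callback(text)


src = SpeechSource()

# Event-based capture: receives one callback per utterance.
events = []
src.listeners.append(events.append)

# Polling-based capture: samples last_utterance at some interval.
polled = []

def poll():
    if src.last_utterance is not None:
        polled.append(src.last_utterance)

# A live region fires the same announcement twice in rapid succession,
# faster than the polling interval.
src.speak("alert: saved")
src.speak("alert: saved")
poll()  # the poll runs only after both utterances have occurred

# The event listener recorded both utterances; the poller saw a single
# sample and cannot tell that "alert: saved" was spoken twice.
assert events == ["alert: saved", "alert: saved"]
assert polled == ["alert: saved"]
```

The same sampling gap hides two *different* utterances spoken between polls: only the later one survives in last_utterance, which is why an event-based hook matters for ARIA live regions.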