16:05:00 RRSAgent has joined #aria-at
16:05:00 logging to https://www.w3.org/2019/09/11-aria-at-irc
16:05:08 rrsagent, make log public
16:05:28 MEETING: ARIA and Assistive Tech CG for Sep 11, 2019
16:05:35 CHAIR: Matt King
16:05:48 present+ Matt-King
16:05:53 rrsagent, make minutes
16:05:53 I have made the request to generate https://www.w3.org/2019/09/11-aria-at-minutes.html mck
16:05:56 present+
16:06:00 present+ Valerie-Young
16:09:31 Valerie, we can't hear you
16:14:31 TOPIC: Research on packages
16:16:34 V: Been researching 4 options. Made progress. Let's discuss Test Harness and Test vocabulary first.
16:16:44 TOPIC: Test harness and Test vocabulary
16:16:55 Discuss wiki page: https://github.com/w3c/aria-at/wiki/Test-Harness-and-Test-vocabulary
16:18:20 Valerie describes the document
16:19:34 V: Setup instructions are about preconditions (rather than instructions to perform the test)
16:21:25 The harness displays the setup code to be tested. This could be through a URL link.
16:23:25 Each test will have a set of abstract operating instructions, and hopefully we can have a programmatic mapping to specific operating instructions
16:23:41 The role of the test harness here is to present the specific operating instructions in a clear way
16:24:41 Hoping that this document will help us have a shared language.
16:25:46 MCK: I'm hoping that we can have some sort of translation system that'd take key words from abstract instructions and abstract expectations, and translate them into specific instructions and expectations for the particular screen reader
16:26:25 Which one people want to see in a report might depend on the user.
16:27:06 E.g. if you're a screen reader developer you might want to see the specific terms related to your screen reader. But if you're a web developer, you might prefer the more abstract language
16:27:47 V: Another part of this test harness system is importing tests. Hoping that we can have a format for writing tests that we can just import into the test harness.
16:29:17 MCK: Is anybody familiar with the accessibility conformance testing task force? Would they have any useful vocabulary or test structure?
16:30:13 https://www.w3.org/WAI/GL/task-forces/conformance-testing/
16:30:40 They've developed a test format. We should investigate whether this (or any other) test format would be useful to us.
16:31:51 MF: It's a good standard with good ideas. With a11ysupport.io I couldn't directly use it, but we might be able to use some of the ideas from the task force.
16:32:01 E.g. it has concepts around atomic rules and composite rules
16:34:36 V: Is 'put the AT into reading mode' a setup instruction? It might need to be more specific for different assistive technologies
16:35:36 MCK: In a 'test run' (or 'test session'), I might have one set of setup instructions and perform different tests
16:36:03 It's not clear to me where the boundaries are between setup and doing the tests. It depends how we define the test.
16:37:39 E.g. if testing an expectation like "the screen reader announced the beginning and the end of the menu bar", if we give very detailed setup instructions, then the test becomes just "and press the F key".
16:38:00 Or we don't include all these setup instructions for every test, because we assume that the user knows more about how to use the screen reader
16:38:12 So to some extent this depends on the level of knowledge / experience we assume
16:38:52 We need to assume some level of knowledge, otherwise the amount of detail that'll need to be generated would sink the project
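To make the "programmatic mapping" and "translation system" ideas discussed above concrete, here is a minimal TypeScript sketch, assuming a hand-written keyword table; the type names, keywords, and key commands below are illustrative assumptions, not identifiers or data from the aria-at repository.

```typescript
// Hypothetical translation table from abstract operating instructions to
// screen-reader-specific wording. None of these names come from aria-at.

type ScreenReader = "jaws" | "nvda" | "voiceover";

// Abstract keyword -> specific phrasing per screen reader (illustrative values).
const keywordMap: Record<string, Record<ScreenReader, string>> = {
  "reading mode": {
    jaws: "the Virtual PC Cursor",
    nvda: "browse mode",
    voiceover: "the default reading mode",
  },
  "navigate to the checkbox": {
    jaws: "press X to move to the next checkbox",
    nvda: "press X to move to the next checkbox",
    voiceover: "press VO+Command+J to move to the next form control",
  },
};

// Replace every abstract keyword found in an instruction with the specific
// wording for the chosen screen reader.
function translate(abstract: string, sr: ScreenReader): string {
  let specific = abstract;
  for (const [keyword, bySr] of Object.entries(keywordMap)) {
    specific = specific.split(keyword).join(bySr[sr]);
  }
  return specific;
}

// "navigate to the checkbox in reading mode" -> for NVDA:
// "press X to move to the next checkbox in browse mode"
console.log(translate("navigate to the checkbox in reading mode", "nvda"));
```

A report generator could then show either the abstract instruction or the translated one, depending on whether the reader is a web developer or a screen reader developer, as MCK suggests above.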
16:39:14 V: This concept of a session, or a group of tests using the same setup code and setup instructions, would be useful.
16:40:03 What MCK is talking about is similar to what I'm describing as 'abstract operating instructions'. It's also similar to 'user task' in GitHub issue 5
16:40:20 e.g. an abstract operating instruction could be "Operate checkbox in reading mode"
16:41:56 MCK: I was thinking that, at a higher level, there's a 'user task' (e.g. navigate to a checkbox), then one or more expectations (e.g. that the screen reader states the name, state, and role of the checkbox). And that expectation would correspond to a lot of assertions, e.g. one for each of name/role/state, and for each specific command
16:43:43 V: A test expectation could be "are the checkbox's name, role and state announced?". It'd correspond to several test assertions.
16:44:36 MF: It'll be important to record success as well as failure, and the speech output that constitutes success.
16:45:04 This would help us improve our credibility, and help future testers reference how it tested last time.
16:45:47 V: It'd be great if the test harness did that.
16:46:09 MCK: In JAWS and VO it's possible to capture the last utterance of the screen reader. NVDA might have a plug-in to do it too.
16:47:01 The tester could record one of the instances of the screen reader fulfilling the expectations. Not necessarily capture it for every single assertion (e.g. name, state and role).
16:47:50 If we structure our high-level expectations that way, it'll be easy to capture how a particular screen reader fulfils an expectation.
16:47:59 The same utterance could cover many assertions.
16:48:48 V: I might start another wiki or issue page that records the different features that we need a test harness to have
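A minimal sketch, in the same vein, of the vocabulary MCK and Valerie describe above: a user task carries one or more expectations, each expectation expands into several assertions, and one captured utterance can satisfy many assertions at once. All type and field names are hypothetical, not drawn from the aria-at wiki or repository.

```typescript
// Hypothetical data model for the test vocabulary discussed above.

interface Assertion {
  description: string;   // e.g. "the name 'Subscribe' is announced"
  passed?: boolean;      // recorded by the tester in the harness
}

interface Expectation {
  description: string;
  assertions: Assertion[];
  capturedUtterance?: string; // example speech output recorded on success
}

interface UserTask {
  description: string;        // e.g. "Navigate to the checkbox in reading mode"
  setupInstructions: string[];
  expectations: Expectation[];
}

const checkboxTask: UserTask = {
  description: "Navigate to the checkbox in reading mode",
  setupInstructions: [
    "Open the test page",
    "Put the screen reader in reading mode",
  ],
  expectations: [
    {
      description: "The checkbox's name, role, and state are announced",
      assertions: [
        { description: "name 'Subscribe' is announced" },
        { description: "role 'checkbox' is announced" },
        { description: "state 'not checked' is announced" },
      ],
      // One recorded utterance can cover all three assertions:
      capturedUtterance: "Subscribe, checkbox, not checked",
    },
  ],
};
```

A 'test session' as Valerie describes above would then just be a group of such tasks sharing the same setup code and setup instructions.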
16:49:46 TOPIC: Research on packages
16:51:13 Tuleap was very promising, but it turns out the open source version is too limited. Add-ons need to be bought, and the price is per user.
16:52:19 There are two others that I'm looking at together: Kiwi TCMS (Test Case Management System) and Nitrate. One is a fork of the other.
16:53:01 The last open source test management solution I'm looking at is TestLink
16:53:27 TOPIC: Screen reader terminology
16:53:50 JF: See page here: https://github.com/w3c/aria-at/wiki/Screen-Reader-Terminology-Translation
16:54:18 MCK: The definitions, if they include explanations, are getting too big for table cells
16:54:26 So now I'm thinking of providing links to a glossary
16:55:48 Maybe I could include some related key commands, but it gets complicated fast. And I'm not sure whether this should be in this table or somewhere else.
16:56:47 E.g. there are half a dozen things that would trigger you to get in and out of reading mode. So there's a main way to force JAWS into reading mode, but in practice, in most usage situations, you don't need to use it
16:57:31 But having the instructions might still be useful. E.g. at the moment with JAWS you can't browse grid elements without getting out of reading mode manually
16:58:05 V: Agree that this page should be focused on the language, but it might be useful to record this knowledge
16:59:07 MCK: I want this to cover all the knowledge we're going to use when writing expectations
16:59:20 Should this cover the different ways that screen readers speak ARIA constructs?
16:59:53 E.g. JAWS calls a menu button one thing and VO calls it something else
17:00:29 V: This should cover everything that would be needed to describe a test to a tester. This would cover instructions, but also expectations
17:01:03 MCK: I'm debating that one (i.e. whether or not to cover instructions), because 90% of the time test results should be obvious
17:01:45 I need to take off to attend another meeting. Don't want to interrupt, so leaving a message here
17:02:00 MF: In a11ysupport.io I record successful output, and also create an array of examples for each expectation, across different screen readers
17:02:52 MCK: Not yet sure about the choice of verbs. Right now we use disparate language, e.g. 'reading', 'perceiving'. It'd be good to use words that have specific meanings, so that when you write an expectation it has an unambiguous meaning
17:03:52 A user task is always going to include some kind of verb. E.g. "Navigate to a checkbox in reading mode" is a task.
17:03:55 E.g. "Perceive the group label of a group of checkboxes in interaction mode".
17:04:15 Not sure whether we should use the word "read" instead of "perceive".
17:05:00 Some things are only announced in a transient manner. E.g. one screen reader might only tell you when you're entering or leaving a group as you are navigating, but not when you're in that group
17:06:15 V: Language about instructions and language about expectations might be different. E.g. "perceive" is more about an expectation
17:06:41 MCK: There's an expectation that the screen reader informs you about where you are.
17:06:53 MF: Would be good to revisit this in the next call
17:07:17 rrsagent, make minutes
17:07:17 I have made the request to generate https://www.w3.org/2019/09/11-aria-at-minutes.html mck
17:07:57 MCK: No meeting next week. Next teleconference will be on Sept 25
17:08:06 rrsagent, make minutes
17:08:06 I have made the request to generate https://www.w3.org/2019/09/11-aria-at-minutes.html mck
18:33:58 Zakim has left #aria-at