W3C

- DRAFT -

ARIA and Assistive Tech CG for Sep 11, 2019

11 Sep 2019

Attendees

Present
Matt-King, michael_fairchild, Valerie-Young
Regrets
Chair
Matt King
Scribe
Jean-Francois_Hector

Contents



Research on packages

V: Been researching 4 options. Made progress. Let's discuss Test Harness and Test vocabulary first.

Test harness and Test vocabulary

<mck> Discuss wiki page: https://github.com/w3c/aria-at/wiki/Test-Harness-and-Test-vocabulary

Valerie describes the document

V: Set up instructions are about preconditions (rather than instructions to perform the test)

The harness displays the setup code to be tested. This could be via a URL.

Each test will have a set of abstract operating instructions, and hopefully we can have a programmatic mapping to specific operating instructions

The role of the test harness here is to present the specific operating instructions in a clear way

Hoping that this document will help us have a shared language.

MCK: I'm hoping that we can have some sort of translation system that'd take key words from abstract instructions and abstract expectations, and translate them into specific instructions and expectations for the particular screen reader

Which one people want to see in a report might depend on the user's role.

E.g. if you're a screen reader developer you might want to see the specific terms related to your screen reader. But if you're a web developer, you might prefer using the more abstract language
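A minimal sketch of the kind of keyword translation being described, using hypothetical term tables and function names (nothing here is an agreed design):

// Sketch only: hypothetical data shapes for translating abstract test
// language into screen-reader-specific language. The term mappings are
// illustrative, not agreed terminology.

type ScreenReader = "JAWS" | "NVDA" | "VoiceOver";

// Abstract terms used when authoring tests, mapped per screen reader.
const terminology: Record<string, Record<ScreenReader, string>> = {
  "reading mode": {
    JAWS: "virtual cursor active",
    NVDA: "browse mode",
    VoiceOver: "default VoiceOver navigation",
  },
};

// Replace any abstract term found in an instruction with the term for
// the chosen screen reader; unknown terms pass through unchanged.
function translate(instruction: string, sr: ScreenReader): string {
  let result = instruction;
  for (const [abstract, specific] of Object.entries(terminology)) {
    result = result.split(abstract).join(specific[sr]);
  }
  return result;
}

// Example: "Navigate to the checkbox in reading mode"
// -> "Navigate to the checkbox in browse mode" for NVDA.
console.log(translate("Navigate to the checkbox in reading mode", "NVDA"));

A web-developer-facing report could keep the abstract wording, while a screen-reader-developer-facing report could show the translated wording.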

V: Another part of this test harness system is importing the test. Hoping that we can have a format for writing tests, that we can just import into the test harness.

MCK: Is anybody familiar with the Accessibility Conformance Testing task force? Would they have any useful vocabulary or test structure?

https://www.w3.org/WAI/GL/task-forces/conformance-testing/

They've developed a test format. We should investigate whether this (or any other) test format would be useful to us.

MF: It's a good standard with good ideas. I couldn't use it directly with a11ysupport.io, but we might be able to use some of the ideas from the task force.

E.g. it has concepts around atomic rules and composite rules

V: Is 'put the AT into reading mode' a set-up instruction? It might need to be more specific for different assistive technologies

MCK: In a 'test run' (or 'test session'), I might have a set of set up instructions, and perform different tests

It's not clear to me where the boundaries are between setup and doing the tests. It depends how we define the test.

E.g. if testing an expectation like "the screen reader announced the beginning and the end of the menu bar", and we give very detailed setup instructions, then the test becomes just "press the F key".

Or we don't provide all these setup instructions for every test, because we assume that the tester knows more about how to use the screen reader

So to some extent this depends on the level of knowledge / experience we assume

We need to assume some level of knowledge, otherwise the amount of detail that'll need to be generated would sink the project

V: This concept of session, or a group of tests using the same setup code and setup instructions, would be useful.

What MCK is talking about is similar to what I'm describing as 'abstract operating instructions'. It's also similar to 'user task' in GitHub issue 5

e.g. an abstract operating instruction could be "Operate checkbox in reading mode"

MCK: I was thinking that, at a higher level, there's a 'user task' (e.g. navigate to checkbox), then one or more expectations (e.g. that the screen reader states the name, state and role of the checkbox). And that expectation would correspond to a lot of assertions. E.g. one for each of name/role/state, and for each specific command

V: A test expectation could be "are the checkbox's name, role and state announced". It'd correspond to several test assertions.
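A minimal sketch of one possible shape for that task / expectation / assertion structure, with hypothetical field names (not an agreed format):

// Sketch only: a user task, with expectations that expand into
// per-screen-reader, per-command assertions. Names are illustrative.

interface Assertion {
  screenReader: string;   // e.g. "JAWS", "NVDA", "VoiceOver"
  command: string;        // the specific command being tested
  mustConvey: string;     // e.g. "name", "role", "state"
}

interface Expectation {
  description: string;    // e.g. "name, role and state are announced"
  assertions: Assertion[];
}

interface UserTask {
  task: string;           // e.g. "Navigate to the checkbox in reading mode"
  setupInstructions: string[];
  expectations: Expectation[];
}

const example: UserTask = {
  task: "Navigate to the checkbox in reading mode",
  setupInstructions: [
    "Open the test page",
    "Ensure the screen reader is in reading mode",
  ],
  expectations: [
    {
      description: "The checkbox's name, role and state are announced",
      assertions: [
        { screenReader: "NVDA", command: "X (next checkbox)", mustConvey: "name" },
        { screenReader: "NVDA", command: "X (next checkbox)", mustConvey: "role" },
        { screenReader: "NVDA", command: "X (next checkbox)", mustConvey: "state" },
      ],
    },
  ],
};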

MF: It'll be important to record success as well as failure. And the speech output that constitutes success.

This would help us improve our credibility, and help future testers reference how it tested last time.

V: It'd be great if the test harness did that.

MCK: In JAWS and VO it's possible to capture the last utterance of the screen reader. NVDA might have a plug-in to do it too.

The tester could record one of the instances of the screen reader fulfilling the expectations. Not necessarily capture it for every single assertion (e.g. name, state and role).

If we structure our high level expectations that way, it'll be easy to capture how a particular screen reader fulfils an expectation.

The same utterance could cover many assertions.
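A minimal sketch of how one captured utterance could be checked against several assertions at once, with made-up output and hypothetical names:

// Sketch only: check which expected fragments (name, role, state) a
// single captured utterance covers.
function utteranceCovers(utterance: string, fragments: string[]): boolean[] {
  const lower = utterance.toLowerCase();
  return fragments.map((f) => lower.includes(f.toLowerCase()));
}

// One utterance recorded by the tester, checked against three assertions.
const captured = "Accept terms, checkbox, not checked";
console.log(utteranceCovers(captured, ["Accept terms", "checkbox", "not checked"]));
// -> [true, true, true]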

V: I might start doing another wiki or issue page that records the different features that we need a test harness to have

Research on packages

V: Tuleap was very promising, but it turns out the open source version is too limited. Add-ons need to be bought. Price is per user.

There are two others that I'm looking at together: Kiwi TCMS (Test Case Management System) and Nitrate. One is a fork of the other.

The last open source test management solution I'm looking at is TestLink

Screen reader terminology

JF: See page here: https://github.com/w3c/aria-at/wiki/Screen-Reader-Terminology-Translation

MCK: The definitions, if they include explanations, are getting too big for table cells

So now I'm thinking of providing links to a glossary

Maybe I could include some related key commands, but it gets complicated fast. And I'm not sure whether this should be in this table or somewhere else.

E.g. There are half a dozen things that would trigger you to get in and out of reading mode. So there's a main way to force JAWS into reading mode, but in practice, in most usage situations, you don't need to use it

But having the instructions might still be useful. E.g. at the moment with JAWS you can't browse grid elements without switching modes manually

V: Agree that this page should be focused on the language. But might be useful to record this knowledge

MCK: I want this to cover all the knowledge we're going to use when writing expectations

Should this cover the different ways that screen readers speak ARIA stuff?

E.g. JAWS calls a menubutton one thing and VO calls it something else

V: This should cover everything that would be needed to describe a test to a tester. This would cover instructions, but also expectations

MCK: I'm debating about that one (i.e. whether or not to cover instructions), because 90% of the time test results should be obvious

<Isaac> I need to take off to attend another meeting. Don't want to interrupt so leaving a message here

MF: In a11ysupport.io I record successful output, and also create an array of examples for each expectation, across different screen readers

MCK: Not yet sure about the choice of verbs. Right now we use disparate language, e.g. 'reading', 'perceiving'. It'd be good to use words that have specific meanings, so that when you write expectations they have an unambiguous meaning

A user task is always going to include some kind of verb. E.g. "Navigate to a checkbox in reading mode" is a task.

E.g. "Perceive the group label of a group of checkbox in interaction mode".

Not sure whether we should use the word "read" for perceive.

Some things are only announced in a transient manner. E.g. one screen reader might only tell you when you're entering or leaving a group as you are navigating, but not while you're in that group

V: Language about instructions and language about expectations might be different. E.g. "perceive" is more about an expectation

MCK: there's an expectation that the screen reader informs you about where you are.

MF: Would be good to revisit this in the next call

MCK: No meeting next week. Next teleconference will be on Sept 25

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/09/11 17:08:11 $
