W3C

- DRAFT -

ARIA and Assistive Technology Community Group Telecon

11 Dec 2019

Attendees

Present
Matt_King, Jean-Francois_Hector, shimizuyohta, michael_fairchild, Jemma_
Regrets
Chair
Matt King
Scribe
Jean-Francois_Hector

Contents

  * Topics
      1. Discussion on the prototype status
      2. Going through MF's feedback on the prototype (see Issue 25)
      3. Meeting times and frequency
  * Summary of Action Items
  * Summary of Resolutions


Discussion on the prototype status

<Matt_King> https://w3c.github.io/aria-at/

MCK: We now have a homepage for our project

(at the URL above)

JF: Yeah

MCK: There are some significant limitations to what we have here. And we have some limited documentation on the wiki

Let's talk about the runner page, and the result page

Valérie and I made some last-minute decisions about limitations on the runner, so that we can get the report into something that I feel could be shareable

If you go to the runner page, the limitation we made here is that you can only choose tests from one pattern for a given run

Then the file you generate will be named with the pattern name (eg Combobox), the screen reader name, browser [I think] and a timestamp

The way it'd work now is to choose a test, run it, download a JSON file, create a branch on the repository, and create a pull request for your result, which others can review before merging
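
As a rough illustration of the result file described above (every field name here is invented, not the prototype's actual format), one downloaded file might look like this:

    // Sketch of one downloaded result file, in TypeScript for illustration.
    // All field names below are hypothetical; the prototype's real JSON
    // format may differ.
    interface TestRunResult {
      pattern: string;       // e.g. "combobox"
      screenReader: string;  // e.g. "JAWS"
      browser: string;       // e.g. "Chrome"
      timestamp: string;     // used in the generated file name
      results: Array<{
        test: string;        // test title
        command: string;     // key command exercised
        output: string;      // verbatim screen reader output
        assertions: Array<{ assertion: string; pass: boolean }>;
      }>;
    }

    // The generated file name combines these fields, e.g.:
    // combobox_JAWS_Chrome_2019-12-11T17-00-00.json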

All those result files will show up in the result page. The results will be grouped by pattern name.

We have some mock results in the results page now; they're all garbage

MF: One bug: if you click on a file, you're taken to that file result page. Then if you click the browser's back button, then that list is duplicated (eg you have 4 items where you should have 2)

MCK: We can decide what we want to do in the near term. This is what we have at this point

What we have right now is a reasonable way of proofing what a test should look like (eg the test format, how to write it)

We're at a place where we can evaluate what we want a production site to do, in a concrete manner

MF: In the past week I've had a chance to go through the prototype, and start writing some feedback. (A lot of it has changed already.)

<Matt_King> https://github.com/w3c/aria-at/issues/25

Going through MF's feedback on the prototype (see Issue 25)

(See link shared just above)

MF: Some of these questions are just prompts for myself. Others are questions for the group

I have a bunch of notes re. software design

Mostly notes for me

MCK: Authoring tests is not too time-consuming if you know what you're doing. But it's not what I want to be doing with my time. So we might employ people

But once a test is input, we need a very good review process

It'd be impractical to have to run a test to do the review

It'd be useful to have some kind of idea of how long that takes. But it'd vary person by person

YS: It took me 2h to write the checkbox test, for JAWS (that was the first one)

[Correction: Yohta was talking about running tests, not writing tests]

MCK: We might be able to do optimisations to make it quicker. Like an NVDA plugin, maybe

MF: It'd be good to know how long these tasks take, so that we can set realistic goals

Let's look at questions related to authoring tests:

Looking at Question 1

What if output assertions are shared between tests? What if AT behavior is shared between tests?

https://github.com/w3c/aria-at/issues/25

JF: It's the first question under "Questions related to authoring tests:"

MCK: Reading mode tests and interaction mode tests are now in different styles. You could have assertions that apply to both, even with the same keys. So there'll be duplication across files
... Is your concern that duplicated elements might get out of sync?

MF: Yes

In a file, while some of the assertions apply to a specific APG pattern, other things (like the group role) would be the same across several APG examples

I wonder if there's opportunity to link to a pre-defined assertion, whenever an author writes a test

<Jemma_> it sounds like a good idea

So instead of writing an assertion "the role group is spoken", the author could just link to a pre-defined assertion. That'd make it easier to author the test

MCK: When you say "link", you mean look up and find a token that represents another assertion

But it'd help ensure that if a role needs to be conveyed in a particular way, that assertion is always written in that same way

But actually it's not just a token the author would have to select, because they'd also need to pass a parameter

Maybe there are only a dozen or so standard assertions that are repeated often. So maybe it won't be hard to keep track of them

But yes, I see that that could be a possibility

If we had a test authoring page where you choose from a dropdown what an assertion is about (eg role, state or property), it could provide you with pre-written assertions and build your assertion for you

That sounds like a lot of work to design and build, but it's the kind of thing that we should think about as we go into a production environment

It's worth thinking about, to have good consistency
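
A minimal sketch of the pre-defined, parameterised assertions being discussed, assuming invented names (this is not the project's actual design):

    // Sketch: a small library of reusable assertion templates. An author
    // picks a token and passes a parameter instead of retyping the
    // assertion wording in every test file.
    const assertionTemplates = {
      roleConveyed: (role: string) => `The role '${role}' is conveyed`,
      stateConveyed: (state: string) => `The state '${state}' is conveyed`,
      nameConveyed: (name: string) => `The name '${name}' is conveyed`,
    };

    // In a test, the author references the token plus a parameter:
    const assertion = assertionTemplates.roleConveyed("group");
    // => "The role 'group' is conveyed"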

MF: If we wait too long to implement something like that it becomes harder

MCK: It makes me wonder about the test writing user interface, what it'd look like

At some point I thought that authors could write and import CSV files. But that'd turn into a big mess. We'd have to be super coordinated with the language

Some tests would apply to all screen readers, and some tests would only apply to some.

So in the test we'd need to specify its scope. Eg automatic mode switching primarily applies only to JAWS and NVDA (at least mode switching when pressing the Tab key)

Although VoiceOver does mode switching when you interact

So that applies to three desktop screen readers
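
One way the scoping described above could be expressed in a test definition (a sketch only; the field names are assumptions, not the project's format):

    // Sketch: declaring which screen readers a test applies to, so tests
    // can scale to new ATs without being rewritten. Names are invented.
    interface TestDefinition {
      title: string;
      appliesTo: string[]; // the ATs this test is scoped to
    }

    const tabModeSwitchTest: TestDefinition = {
      title: "Mode switches automatically when pressing Tab",
      appliesTo: ["JAWS", "NVDA"], // VoiceOver switches on interaction instead
    };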

MF: This makes me curious because so far I haven't seen a test that's written that way

Eg navigate through checkbox group. I don't see how that applies to just a subset of screen readers

MCK: Maybe that one does apply to all screen readers. But I want to make sure we design things so that we can scale to more screen readers (like mobile) without needing to rewrite tests

MF: My next question:

(Question 2)

Orca for example conveys the role 'group' differently than other screen readers

So should the assertion say that the name 'group' is spoken?

MCK: We could use 'convey' rather than 'spoken'.

MF: I wasn't sure whether we'd go down the route of using one of these words ('convey' or 'spoken'), or whether we'd go down the route of customising the assertion based on how different screen readers render a particular role

MCK: The question is "is what it does with the role 'group' appropriate?"

And there's going to be some judgement involved in answering that question. That's why we need the screen reader output. So that the judgement can be reviewed and discussed

I don't think that we should have to tailor every assertion to every screen reader

MF: We should use general language in the assertions to allow for flexibility, and be consistent

I had other questions about performing tests

It's under the heading "Questions related to performing tests:"

Question 1:

It looks like there's still room to improve the user experience around the validation of the form

MCK: Yes, Valérie didn't have enough time to cover validation

MF: Sounds good, I know it's a prototype, I'm not worried about it

Question 2: should the JAWS version be a required field?

MCK: I think it should probably be required. But how specific should it get? Version number and build?

It never hurts to have more information

We might want to do reports for say, JAWS 2019

Another option could be to present users with a dropdown (rather than a text field)

But what about new versions? Maybe we don't want to control which screen reader versions are tested like this. Maybe we control it at the project level

There could be great value in controlling what version can be tested. So that we don't allow testing on an older version of an AT

We might want to be very picky about that
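
If the group does decide to restrict which AT versions may be tested, a minimal validation sketch could look like the following (the allow-list values are made up):

    // Sketch: reject runs recorded against AT versions the project has
    // not approved. Versions listed here are placeholders.
    const allowedVersions: Record<string, string[]> = {
      JAWS: ["2019", "2020"],
      NVDA: ["2019.2", "2019.3"],
    };

    function isVersionAllowed(at: string, version: string): boolean {
      return (allowedVersions[at] ?? []).includes(version);
    }

    isVersionAllowed("JAWS", "2018"); // => false: too old to test against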

MF: One more question:

Question: form fields appear to be grouped into fieldsets, but those fieldsets are missing legends (programmatic names). Is this expected?

MCK: It's probably just due to the fact that this is a prototype, because of limited time

MF: I want to touch on question 4

Question: a single output is provided for forward and backward navigation for the same command (tab/shift+tab or up arrow/down arrow). However, the output might be different between forward and backward navigation. For example, using the down arrow to enter a named group will yield different output than using up arrow to exit the same group. Should we track these unique output situations?

MCK: If we use up and down arrow in the command list, they're covered by the same output field. But if we put up arrow and down arrow as two separate items in the command list, we'll have two output fields, one for each

So we can author the tests in a flexible way, depending on the level of detail we want

Over time, we might get more granular
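
A sketch of the two levels of granularity MCK describes (illustrative only, not the prototype's data model):

    // Sketch: grouped commands share one output field; splitting them
    // into separate entries yields one output field per direction.
    const coarse = [
      { commands: ["Up Arrow", "Down Arrow"], output: "" }, // one shared field
    ];

    const granular = [
      { commands: ["Down Arrow"], output: "" }, // e.g. entering a named group
      { commands: ["Up Arrow"], output: "" },   // e.g. exiting the same group
    ];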

Meeting times and frequency

MCK: I think we should still consider weekly meetings starting in the new year

I propose we wouldn't start until the week of January 15

How can we figure out the best day and time for all?

In the APG group we used a survey, sent to people who want to be active

This opened the door to participation for more people

YS: I can circulate some sort of survey. Do you imagine circulating it to this mailing list?

MCK: One year ago Jemma ran the one we did for the APG task force. We offered 10 possible meeting times. We came up with them by just talking among some of the people we knew for sure needed to be active

YS: What if we start by identifying availabilities that MCK and MF have in common, and then distribute a survey based on those?

MCK: Yes we can do something like that
... We need to think about people in other time zones (eg Europe)
... So we'll gather some possible times, and Yohta can help circulate a Google Form survey to share with everybody

YS: I'll send a quick email to you after this meeting to discuss more

MCK: I assume we'll stick to 1h per week

MF: I'm ok with that

MCK: At some point in the project we'll need more

I want to see a production system and people doing testing in the next six months

By CSUN I hope our prototype is a lot richer, and we have testing data to get more buy-in from more players

For now we could tentatively plan a January 15th meeting

And we can move forward with a new meeting time beyond that

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/12/11 18:02:10 $
