<Matt_King> https://github.com/w3c/aria-at/labels/Agenda%2B
<westont> Hello! I'm @WestonThayer on GitHub, here mainly for https://github.com/w3c/aria-at/issues/321
<scribe> scribe: michael_fairchild
seth: we have been polishing the
UX, keyboard usability, and AT usability of our new
features.
... we also changed how we are using pagination and accessible
dropdowns
... we are planning on running a small usability study starting
December 2nd
<zcorpan> https://github.com/w3c/aria-at/issues/321
simon: I've been working on this
for a couple of weeks. Our goal is to automate with NVDA and
integrate with aria-at, so that tests can be run as part of
their CI
... and possibly in the future have a web driver like protocol
for automated screen reader testing
... in NVDA they already have an approach for automated tests
called system tests
... in this model, you say which key to press, and a spy records
the spoken output so that you can compare it against the expected
output
... the interesting impact for ARIA-AT is how we write and
represent tests, in a way that makes them useful for
automation
... we may want to discuss changing that, I haven't figured out
what that might look like yet
sina: I'll +1 that
matt: one of the things we want
to make sure we are doing is that we are using the exact same
assertion for each AT that assertion applies to. We also don't
want mixed assertion (multiple things in the same
assertion).
... my thinking was that you get the output, and we will have
known good expected output. A human has to figure out which
assertions that output maps to; I don't expect a machine to
figure this out.
sina: if you invert that, you get more flexibility. For each screen reader, you define the output for each assertion, so that the system can then join them. For example, for NVDA, the role is mapped to "checkbox"
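The inversion sina describes might look like the sketch below: for each screen reader, each abstract assertion is mapped to the token expected in its spoken output, and the system joins them against captured speech. All names, tokens, and sample output here are assumptions for illustration, not actual ARIA-AT data.

```python
# Illustrative sketch of per-screen-reader assertion-to-output
# mapping. The assertion names and expected tokens are invented.

EXPECTED_TOKENS = {
    "nvda": {
        "conveys role": "check box",
        "conveys state (checked)": "checked",
    },
    "jaws": {
        "conveys role": "checkbox",
        "conveys state (checked)": "checked",
    },
}

def assertions_satisfied(screen_reader, spoken_output, assertions):
    """Return the subset of assertions whose expected token appears
    in the captured spoken output."""
    tokens = EXPECTED_TOKENS[screen_reader]
    return [a for a in assertions
            if tokens[a].lower() in spoken_output.lower()]

result = assertions_satisfied(
    "nvda",
    "Accept terms, check box, checked",
    ["conveys role", "conveys state (checked)"])
# here both assertions match this sample output
```

Joining per-assertion expectations like this is what would let one abstract assertion drive both regression comparisons and, as sina argues, initial testing.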
matt: that matches what I'm saying, this would cover regression testing but not initial testing.
sina: I think it applies for initial testing too
matt: okay, I think we have similar visions. but there isn't a 1:1 between what a human says and what the machine says.
sina: (concern about verbosity and repeated output)
matt: I want to make sure that we can automate what a human is testing
sina: I agree, and I also want to make it easy to automate these
matt: in the near term, our goal is more concerned about integrating with aria-at than integrating with their CI
simon: the tests would still live in aria-at, and they are interested in running the aria-at tests during development. or we could run these in our own CI, it doesn't really matter.
sina: I think it matters from a credibility perspective
simon: I didn't really follow the discussion about how you envisioned that manual testing would feed into automated testing. In my model, we would author the tests so that you would never need a human to manually test it.
boaz: maybe we should have a breakout about this
sina: I agree that a deep dive would be good, but need to figure out how this relates to authoring tests
joe: there may be multiple valid ways to get to a control
sina: exactly, right now we are not specifying which keystrokes to use and how many keystrokes to use.
matt: our goal for the end of the
year is to learn our problems, but not necessarily solve all of
our problems
... sina, I'm sorry if that means we might need to re-work
some of your work next year
sina: I agree, but I think that humans are also affected by the keystroke question.
matt: sina, you just make the
decision then.
... what do we think timing wise for a deep dive on
automation?
sina: we should identify who should be on the call
matt: yes
simon: late meetings are okay on Mondays
<boazsender> summary of tension between the two approaches: 1) specify key strokes to get to an interaction so AT-Driver can automate versus 2) keep test conditions abstract and high level so that they can run across multiple screen readers.
simon: I'm also available right after the aria-at call
(thank you, boazsender)
<boazsender> proposed approach: a high level test assertion, with key stroke mappings for each AT
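boazsender's proposed shape (a high-level assertion with per-AT keystroke mappings) could be sketched as below. The structure, key names, and quick-nav bindings shown are assumptions for illustration only, not confirmed ARIA-AT test format or verified screen reader commands.

```python
# Sketch of one abstract test with per-AT keystroke mappings, so the
# same high-level assertion can be driven with whatever keys each
# screen reader uses. Structure and key bindings are assumptions.

TEST = {
    "task": "navigate to the checkbox in reading mode",
    "assertion": "role 'checkbox' is conveyed",
    "keystrokes": {
        "nvda": ["x"],
        "jaws": ["x"],
        "voiceover": ["Ctrl+Option+Cmd+J"],
    },
}

def keys_for(at_name):
    """Resolve the concrete key sequence for a given AT."""
    return TEST["keystrokes"][at_name]
```

This keeps the test condition abstract (approach 2 in the summary above) while still giving AT-Driver-style automation the concrete keystrokes it needs (approach 1).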
sina: the workflow seems to be going very well. we re-ordered the items on the backlog.
james: the PR for the tri-state
checkbox is filed
... we are working on research for the editable combobox, then
disclosure buttons and dialog.
... 5 or 6 by the end of the month
matt: we have preview functionality now
<boazsender> I think this is potentially the same issue as the deep dive that we just got into
valerie: we want to make sure that when we are writing these tests that the results are consumable to users
matt: I think we are actually talking about assertions, is that correct?
valerie: (describes idea 1 found in #336)
boaz: I think this fits into the deep dive
Present: Jemma, JoeHumbert, Matt_King, isaacdurazo, juliette_mcshane, michael_fairchild, rob-fentress, s3ththompson, spectranaut, westont, zcorpan, boazsender, jongund
Scribe: michael_fairchild