<scribe> scribe:shimizuyohta
Valerie: Going forward, I'm looking into the CSS test harness.
... I met with the maintainer of the CSS harness and am optimistic about expanding the use of this harness.
... I'm trying to get support from this maintainer.
... The first step is figuring out what needs updating, and also making sure I have the maintainer's support to lead the software work.
... Starting next week, I have two days of work for ARIA AT. By the next time we meet, I hope to have more clarity on how we can use the CSS harness.
... And for next week, I hope to have more detail on the test harness, and to have run it locally.
Matt: When you say the format of the test, would that essentially be the assertion we're making about the case?
Valerie: Yes, what example we're looking at, what we're testing, etc.
<Matt_King> https://github.com/w3c/aria-at/issues/8
Michael: If the CSS harness doesn't pan out, do we have a plan B?
Valerie: There's another open-source test case management system, Kiwi. If it's generally accessible out of the box, it might be worth exploring.
Matt: Plan C would be going with bespoke code, maybe Michael's or something else. Bespoke is always an option, unless we can sufficiently avoid the downsides of custom solutions by using someone else's work.
... The whole licensing thing can be a barrier, given our financial resources.
Valerie: There are two lists. The first one would be "How customizable can the tests be?"
... Can we branch? What is the requirement?
... When things fail, do we want to know exactly why they failed?
Matt: This could potentially be done without branching, by having many options.
... The ability to capture more than pass/fail/comments would be critical; a must-have, in other words.
... The first question would be pass/fail/partial. Depending on the answer to the first question, a prompt would lead to an optional input.
... There's a concrete list of customizations we need in order to make it user-friendly.
Michael: To sum up, adding a comment is critical if the result is a fail.
Valerie: The second question would be "Can we collect output from the SR?"
... I'm envisioning a feature where users can copy the SR output, as opposed to automated capturing.
Matt: This would be nice to have; I don't think we need to investigate it more than necessary.
Valerie: This would be a field shown in the case of partial/failed results.
... The next question would be "Can we correct the input command?"
Michael: I think that would be out of scope.
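[To make the shape of the result form being discussed concrete, here is a minimal sketch under the assumptions above; all names (TestResult, validateResult, etc.) are hypothetical and not taken from any existing harness.]

```typescript
// Hypothetical sketch of a result record, based on the discussion above.
type Verdict = "pass" | "fail" | "partial";

interface TestResult {
  testId: string;
  verdict: Verdict; // the "first question": pass/fail/partial
  // Per Michael: a comment is critical (required) when the result is a fail.
  comment?: string;
  // Per Valerie: an optional field where testers paste the SR output,
  // shown only for partial/failed results. Correcting the input command
  // itself is out of scope.
  screenReaderOutput?: string;
}

// Enforce the conditional requirement discussed above.
function validateResult(result: TestResult): string[] {
  const errors: string[] = [];
  if (result.verdict === "fail" && !result.comment) {
    errors.push("A comment explaining the failure is required.");
  }
  return errors;
}
```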
Discussion of interoperability and metadata collection for the tests.
Matt: There are multiple layers of test authoring:
- Accessibility assertions
- Assistive technology command generic data
- Assembling those two together to create actual instances of the test
Matt: We need to be careful about what metadata to include. If an AT changes its interface, we need to figure out new versions of the tests for the updated AT.
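[A rough illustration of those authoring layers, assuming a simple cross-join of the two data sets; all types and names are hypothetical.]

```typescript
// Hypothetical sketch of the layers Matt describes: assertions and generic
// AT command data are authored separately, then assembled into test instances.
interface Assertion {
  id: string;
  description: string; // e.g. "role 'checkbox' is conveyed"
}

interface AtCommand {
  at: string;        // e.g. "JAWS"
  atVersion: string; // metadata, so tests can be re-derived when an AT's interface changes
  command: string;   // e.g. "Tab"
}

interface TestInstance {
  assertion: Assertion;
  command: AtCommand;
}

// Assemble every assertion with every applicable AT command.
function assembleTests(
  assertions: Assertion[],
  commands: AtCommand[]
): TestInstance[] {
  return assertions.flatMap((assertion) =>
    commands.map((command) => ({ assertion, command }))
  );
}
```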
Valerie: Next would be "Expectations of ability for testers."
Michael: Testers don't need to be packaged into the same system.
Matt: Maybe there should be a login capability before testers submit data.
Michael: I agree, to avoid misguided input. Perhaps by having a review step before submitting?
Matt: Could it be open to anybody with a GitHub ID, or would they have to be a member of our community group?
... Given the nature of the data, we want to know who's testing.
Valerie: The requirement is that we want to know where users came from, and maybe only preapproved users can get access. Perhaps we can adopt an approach where results don't go directly into the database, and instead are stored in a reviewable test collection space (Michael's approach), as sketched below.
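[One way to picture that review flow; purely illustrative, with in-memory arrays standing in for whatever storage the harness actually uses.]

```typescript
// Hypothetical sketch: submissions land in a pending collection and only
// enter the database after a reviewer approves them.
interface Submission {
  submitterId: string; // e.g. a GitHub ID, so we know who is testing
  resultData: unknown;
}

const pendingReview: Submission[] = [];
const database: Submission[] = [];

function submit(submission: Submission): void {
  pendingReview.push(submission); // never written to the database directly
}

function review(index: number, approved: boolean): void {
  const [submission] = pendingReview.splice(index, 1);
  if (approved && submission) {
    database.push(submission);
  }
}
```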
Valerie: The next issue is "Ability to track change history over time."
Michael: My thinking is that this is a nice-to-have. Being able to summarize an overview of changes over time would be helpful; detailed history isn't necessary.
Matt: There are cases where not having a detailed history will lead to unusable data, such as versioning issues.
Valerie: Anything else people can think of for the harness?
Matt: Among use cases for test runners, there should be a way to easily see which tests have been completed. Is there any way to see which tests we want done but are not complete?
Valerie: I'm prioritizing that.
Matt: Test authors can mark tests as needed. Can we mark them based on technology, such as "needed for JAWS"?
... Our requirement here is that the system needs to be able to present a list of tests based on two factors: 1. screen reader, 2. browser.
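[A sketch of that two-factor query; the types and field names are hypothetical.]

```typescript
// Hypothetical sketch: list the tests marked as needed but not yet completed,
// filtered by the two factors Matt names: screen reader and browser.
interface TestStatus {
  testId: string;
  screenReader: string; // e.g. "JAWS"
  browser: string;      // e.g. "Firefox"
  needed: boolean;
  completed: boolean;
}

function neededTests(
  all: TestStatus[],
  screenReader: string,
  browser: string
): TestStatus[] {
  return all.filter(
    (t) =>
      t.needed &&
      !t.completed &&
      t.screenReader === screenReader &&
      t.browser === browser
  );
}
```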
Present: Matt_King, michael_fairchild, spectranaut, shimizuyohta, Jean-Francois_Hector
Scribe: shimizuyohta