W3C

– DRAFT –
ARIA and Assistive Technologies Community Group

02 March 2023

Attendees

Present
Alysa, howard-e, Isabel, James Scholes, jongund, Matt_King, Michael, Mike_Pennisi, Sam_Shaw
Regrets
-
Chair
Matt King
Scribe
Sam_Shaw

Meeting minutes

Meeting time survey

SS: 17 people have responded to our survey

JG: We could narrow down the options and resend the poll

JS: If we had a smaller form, it may encourage people to fill it out

MK: I think that's a good idea

The most popular options are Thursdays at 12pm PST, Mondays at 9am PST, and Wednesdays at 8am PST

MK: Next steps: ask Lola to fill out the form. Sam and I will work on the next version with the three options listed above

ARIA-AT App Admin Use Cases

MK: We have a challenge right now: we can't update the test reports, because when we do, we end up with two reports on the reports page and the candidate tests page. What we really want is for the newest version of the results to be the results that are exposed

MK: There is an update in the sandbox that is intended to solve this problem.

MK: The problem I had when testing was that I couldn't fully understand what was going on or how it was working. I'd like to get some of those answers and talk about what we really want it to do

MK: The changes we have in the sandbox may be a good stopgap for the next few weeks, but not beyond that.

MK: I will walk through what I experienced and ask questions

MK: I did this for Alert. I will not test with Pivot

MK: I'm going to the test queue.

MK: I did this for Alert; I will test now with Radio

MK: There are two different rows for Radio Group in the test queue for JAWS and Chrome

MK: How is that possible?

HE: One was published in October, one was published in December

MK: That means when we brought the December version into the test plan, what did we not do, so that we ended up with two?

HE: If it wasn't added manually, and the "Save these old results" checkbox was checked, we would keep those results and have two sets

MK: Does this mean we did not choose to keep any of the old results for Radio Group?

JS: I don't know, I'm not the one who manages the test queue

JS: I think this is part of the confusion. I understand why you can have multiple test plans in the queue; however, I'm not sure it is a use case we need, and it may be causing some confusion

MK: Okay, we will just leave that there for now. Under NVDA we have the December 8th plan in draft, with "Mark as in review". In VO we have the December plan in draft, with "Mark as in review"

MK: "Mark as in review": if I press that button for these reports, it doesn't seem like anything happens the first time I press it

MK: I'll press the VO one right now. I get a message, "Updating test plan status". Then the entire page reloads, which is disruptive, and we are back at the top of the page

MK: So focus is lost

MK: What appears to have happened: the only visible change is that the report status has changed to "in review"

MK: It hasn't changed any data anywhere else

MK: I'm not sure what the difference between the "in review" and "draft" statuses is

JS: This is something from the working mode. I don't know why it exists; we usually end up pressing the button twice

HE: The "in review" state is something that came from the database status. Right now it's more of a validation step. It was planned to be removed based on a new design we started last year. It should be removed

HE: This should be marked as a bug

MK: It's bizarre that it reloads the whole page

HE: I agree

MK: I wanted to ask if there is a reason why this button is labeled "mark as candidate"

HE: I don't have a reason. If it needs a new label, that's fine

JS: A big part of the confusion for us is that the test queue, test reports page, and candidate review pages seem isolated. We would benefit from a single page that lists all test plans known to the system, with their statuses and all relevant available actions

JS: At the moment, it is very difficult for us to get an overview of all of the test plans in the system

MK: I want to avoid designing UI right now, though I am documenting this. One of the things I ran into here is that the language isn't distinguishing test results from the test plan

MK: When the conflicts are resolved, we can say the test results are complete; then we want to mark them as complete and create a version

JS: That is confusing on the admin side; wouldn't it be easier to just version the report?

MK: Does the report represent the results? I think the report is a collection of reports, one for each SR/browser combination. For a single SR/browser combination, maybe, but that's not what should be on the reports page

MK: We need to be able to update the data for each combination, separate from the report

JS: That makes sense; the reports page is an amalgamation of the reports. We need to be able to edit single combinations and, for example, remove a single set of data for an SR/browser combo

MK: We do have this defined as test plan run results

MK: That needs to be versioned; the version marks when we finished it, so we know whether a given set is the latest one or not

JS: The results set version is not tied to the version of the AT tested

MK: Correct, but we can say this version is related to this version of the test plan

MK: A results set can have a status of "being created right now" when two people are testing. Once conflicts are resolved and that work is done, we date-mark it as done by pushing it into a report, which will make it show up in the candidate reports or recommended reports
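A minimal sketch of the lifecycle MK describes, in hypothetical TypeScript; the type and field names are illustrative assumptions, not the app's actual schema:

    // Hypothetical model of the lifecycle described above; names are illustrative.
    type ResultsSetStatus =
      | 'in-progress'        // two testers are still entering results
      | 'conflicts-resolved' // conflicts are worked out; results are complete
      | 'finalized';         // date-marked done and pushed into a report

    interface ResultsSet {
      atName: string;          // e.g. "JAWS"
      browserName: string;     // e.g. "Chrome"
      testPlanVersion: string; // which version of the test plan produced these results
      status: ResultsSetStatus;
      finalizedAt?: Date;      // set when the results are pushed into a report
    }

    // Only the newest finalized results set for a combination should surface
    // on the candidate or recommended reports pages.
    function latestFinalized(sets: ResultsSet[]): ResultsSet | undefined {
      return sets
        .filter((s) => s.status === 'finalized' && s.finalizedAt !== undefined)
        .sort((a, b) => b.finalizedAt!.getTime() - a.finalizedAt!.getTime())[0];
    }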

MK: So now when I press "Mark as candidate" for Alert, that seems to push the updated results into the candidate test queue, but there is no way for me to know that other than that the number of passed/failed results changed.

MK: The only way I've found to see what failed or passed is to open the test and compare the page to the reports page

HE: I am following along with a local version

HE: I can see the data is moving around

MK: The report ID is not visible

HE: You are right that there is not enough information presented to the admin

HE: To what James said earlier: an admin page that lists all relevant data is something that is included in the upcoming design we have been working on

HE: Even doing things like pulling out results isn't as easy as we'd expect. While we can do it, it will lead to a place where there is a lot of derived data, which could lead to errors. The design needs to address the database changes as well

HE: I think data exposure would be very helpful in the short term; that is how I get around some of the confusion while developing

MK: Alright, so as you were expecting: when I pressed the button for Safari, it did disappear from the test queue. When I load the candidate tests and go to the VO table, what I see in the data here, the percent done and the number of failed assertions, is the same as what I saw before. That may be because the changes we made didn't affect VO; is that correct, James?

JS: This is where it gets tricky. That plan was updated as a result of feedback from Freedom Scientific. We updated the version of the plan for JAWS in the retesting, but couldn't update the plan for VO and NVDA, because we'd have assertions that we didn't have results for

JS: I'm not sure if we added or removed assertions

Isa: We removed some assertions

Isa: We didn't get rid of additional tests, and we changed priorities to optional

MK: If we remove assertions or change them to optional, we shouldn't have to rerun tests

HE: Currently in production, there are 35 failed assertions

JS: It may be that the reports for the other SRs haven't been updated

MK: So the other thing that happened when I pressed the button: it changed the candidate start date to today, which is a real problem. When we put a plan through the candidate phase, it started last year, on April 9th. When we change the plan during this phase, or update the results, it is still the same candidate phase and should keep the original start date

MK: I think the target date button is in the wrong table

MK: I'm most concerned that the current build is changing the date; can we fix that before we deploy this?

HE: Yes. I think I mentioned in the initial email that there are some nuances around this date. The big issue is that the test plan date lives on the report. We want to create a test plan group

HE: So that the test plan date lives on the overarching group

HE: For example, the Radio Group test plan report for VO/Safari would be a member of the group, as would NVDA/Firefox. They are all members of the group. Does that make sense?

HE: A member of the test plan group may be derived
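A rough sketch of the grouping HE describes, again with hypothetical, illustrative names: the candidate start date lives on the group, and each AT/browser report is a member, so replacing one member's results would no longer reset the date.

    // Hypothetical sketch of the proposed test plan group; names are illustrative.
    interface TestPlanReport {
      atName: string;      // e.g. "VoiceOver"
      browserName: string; // e.g. "Safari"
      testPlanVersion: string;
    }

    interface TestPlanGroup {
      testPlanName: string;     // e.g. "Radio Group"
      candidateStartDate: Date; // lives on the group, not on any one report
      members: TestPlanReport[];
    }

    // Replacing one member's report leaves the group's dates untouched.
    function replaceMember(group: TestPlanGroup, updated: TestPlanReport): void {
      const i = group.members.findIndex(
        (m) => m.atName === updated.atName && m.browserName === updated.browserName
      );
      if (i >= 0) group.members[i] = updated;
      else group.members.push(updated);
      // group.candidateStartDate is intentionally never modified here.
    }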

MK: A test plan is what PAC produces: the tests for a particular test case, and those have a set of in-scope ATs. The commands for each AT for each test are what define them as in scope. Browsers are not a factor in a test plan

MK: The version of the AT doesn't matter
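One way to read MK's definition, as a hypothetical sketch (field names are illustrative assumptions): the plan lists its tests and, per test, the commands for each in-scope AT; browsers and AT versions appear only on the results and report side.

    // Hypothetical sketch of MK's definition of a test plan; names are illustrative.
    interface TestPlan {
      testCase: string; // e.g. "Radio Group"
      tests: Test[];
    }

    interface Test {
      title: string;
      // An AT is in scope for a test precisely because commands are defined
      // for it; browsers and AT versions are not part of the plan itself.
      commandsByAt: Record<string, string[]>; // e.g. { JAWS: ["Tab", "Down Arrow"] }
    }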

JS: I think we would benefit from a conversation where we clearly enumerate what's included in these examples, go through those, and go through all the things we need to do with each of them

JS: An ideation phase: what things do we want coupled, what do we want decoupled, and a discussion of what we want to do and how it would make sense to do it

MK: I started documenting the specific use cases and going through the working mode process

MK: I'm writing down what I think needs to be represented in the UI, so we can associate those with the specific entities, and align what the user sees with what the database sees

MK: Next steps: Howard, the date problem.

MK: I don't think we should push to prod, because if we change the dates in prod, that would mess up the data without a way to correct it

MK: I don't think we can put this into production until we create a solution to that problem

MK: I can try to write this user journey thing by Monday

HE: If that is a major blocking point, we could add logic to maintain these dates, but that feels risky. Or we could add a button to update the date manually, but that feels like more work.

MK: The original date shouldn't need to be editable

MK: I guess we do need a way to undo it

MK: I will think about those requirements and options for mitigation

Minutes manually created (not a transcript), formatted by scribe.perl version 210 (Wed Jan 11 19:21:32 2023 UTC).
