19:41:56 RRSAgent has joined #aria-at
19:42:00 logging to https://www.w3.org/2023/03/02-aria-at-irc
19:42:05 Zakim has joined #aria-at
19:42:28 MEETING: ARIA and Assistive Technologies Community Group
19:42:34 present+
19:42:41 CHAIR: Matt King
19:42:57 rrsagent, make log public
19:43:36 rrsagent, make minutes
19:43:38 I have made the request to generate https://www.w3.org/2023/03/02-aria-at-minutes.html Matt_King
20:01:09 Sam_Shaw has joined #aria-at
20:01:13 present+
20:04:10 present+ jongund
20:04:27 howard-e has joined #aria-at
20:04:31 present+
20:04:55 present+ Alysa
20:05:04 present+ Isabel
20:05:07 present+ James_Scholes
20:05:12 present+ Michael
20:08:44 TOPIC: Meeting time survey
20:11:38 SS: 17 people have responded to our survey
20:11:55 JG: We could narrow down the options and resend the poll
20:12:18 JS: If we had a smaller form, it may encourage people to fill it out
20:12:41 MK: I think that's a good idea
20:17:57 The most popular options are Thursdays at 12pm PST, Mondays at 9am PST, and Wednesdays at 8am PST
20:19:42 MK: Next steps: ask Lola to fill out the form. Sam and I will work on the next version with the three options listed above
20:19:58 TOPIC: ARIA-AT App Admin Use Cases
20:20:26 scribe+
20:21:35 MK: We have a challenge right now: we can't update the test reports, because when we do, we end up with two reports on the reports page and the candidate test page. What we really want is for the newest version of the results to be the results that are exposed
20:22:03 MK: There is an update in the sandbox that is intended to solve this problem
20:22:38 MK: The problem I had when testing was that I couldn't fully understand what was going on or how it was working. I'd like to get some of those answers and talk about what we really want it to do
20:23:02 MK: The changes we have in the sandbox may be a good stopgap for the next few weeks, but not beyond that.
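[Editor's note] The deduplication MK describes above amounts to a selection rule: for each (test plan, AT, browser) combination, only the most recently published report should be exposed. A minimal sketch of that rule, with entirely hypothetical names and shapes (this is not the actual ARIA-AT app schema or API):

```typescript
// Hypothetical shape of a published report; not the real ARIA-AT data model.
interface Report {
  testPlan: string;   // e.g. "Radio Group"
  at: string;         // e.g. "JAWS"
  browser: string;    // e.g. "Chrome"
  publishedAt: Date;  // e.g. the October vs. December publication dates
}

// Keep only the newest report per (testPlan, at, browser) combination,
// so the reports page never shows two rows for the same pairing.
function latestReports(reports: Report[]): Report[] {
  const newest = new Map<string, Report>();
  for (const r of reports) {
    const key = `${r.testPlan}|${r.at}|${r.browser}`;
    const prev = newest.get(key);
    if (!prev || r.publishedAt > prev.publishedAt) {
      newest.set(key, r);
    }
  }
  return [...newest.values()];
}
```

Under this rule, the two Radio Group rows for JAWS/Chrome discussed below (one published in October, one in December) would collapse to the December one.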
20:23:50 MK: I will walk through what I experienced and ask questions
20:24:34 MK: I did this for Alert; I will not test with Pivot
20:24:52 MK: I'm going to the test queue
20:26:03 MK: I did this for Alert; I will test now with Radio
20:26:48 MK: There are two different rows for Radio Group in the test queue for JAWS and Chrome
20:26:53 MK: How is that possible?
20:27:14 HE: One was published in October, one was published in December
20:27:49 MK: That means when we brought the December version into the test plan, what did we not do, such that we ended up with two?
20:28:23 HE: If it wasn't added manually, and the "Save these old results" checkbox was checked, we would keep those results and have two sets
20:28:41 MK: Does this mean we did not choose to keep any of the old results for Radio Group?
20:28:50 JS: I don't know, I'm not the one who manages the test queue
20:29:26 JS: I think this is part of the confusion. I understand why you can have multiple test plans in the queue; however, I'm not sure it is a use case we need, and it may be causing some confusion
20:30:05 MK: Okay, we will just leave that there for now. Under NVDA we have the December 8th plan in draft, with "Mark as in review". In VO we have the December plan in draft, with "Mark as in review"
20:30:35 MK: "Mark as in review": if I press that button for these reports, it doesn't seem like anything happens the first time I press it
20:31:13 MK: I'll press the VO one right now. I get a message "Updating test plan status". Then the entire page reloads, which is disruptive. Then we are back at the top of the page
20:31:18 MK: So focus is lost
20:31:48 MK: What appears to have happened: the only visible change is that the report status has changed to "In review"
20:31:58 MK: It hasn't changed any data anywhere else
20:32:15 MK: I'm not sure what the difference between the "In review" and "Draft" statuses is
20:32:34 JS: This is something from the working mode. I don't know why it exists; we usually end up pressing the button twice
20:33:22 HE: The "In review" state is something that came from the database status. Right now it's more of a validation step; it was planned to be removed based on a new design we started last year. It should be removed
20:33:30 HE: This should be marked as a bug
20:33:41 MK: It's bizarre that it reloads the whole page
20:33:48 HE: I agree
20:34:39 MK: I wanted to ask if there is a reason why this button is labeled "Mark as candidate"
20:34:56 HE: I don't have a reason. If it needs a new label, that's fine
20:35:44 JS: A big part of the confusion for us is that the test queue, test reports page, and candidate review pages seem isolated. We would benefit from a single page that lists all test plans that are known to the system, with their status and all relevant actions that are available
20:36:08 JS: At the moment, for us, getting an overview of all of the test plans in the system is very difficult
20:37:21 MK: I want to avoid designing UI right now, though I am documenting this. One of the things I ran into here is that the language isn't distinguishing test results from the plan
20:37:49 MK: When the conflicts are resolved, we can say the test results are complete; then we want to mark them as complete and create a version
20:38:04 JS: That is confusing for the admin side. Wouldn't it be easier to just version the report?
20:38:54 MK: Does the report represent the results? I think the report is a collection of reports for each SR/browser combination. For a single SR/browser combination then maybe, but that's not what should be on the reports page
20:39:19 MK: We need to be able to update the data for each combination, separate from the report
20:39:58 JS: That makes sense; the reports page is an amalgamation of the reports. We need to be able to edit the single combinations and, for example, remove a single set of data for an SR/browser combo
20:40:14 MK: We do have this defined as test plan run results
20:40:34 MK: That needs to be versioned; that version is when we finished it, so we know "Oh, is this the latest one or not?"
20:41:22 JS: The result set version is not tied to the version of the AT tested
20:41:36 MK: Correct, but we can say this version is related to this version of the test plan
20:43:32 MK: The result sets can have a status of "creating right now" when two people are testing. Once conflicts are resolved and that work is done, date-mark it as done by pushing it into a report, which will make it show up in the candidate reports or recommended reports
20:44:44 MK: So now when I press "Mark as candidate" for Alert, that seems to push the updated results into the candidate test queue, but there is no way for me to know that other than the number of passed/failed results changing
20:45:13 MK: The only way I've found to see what failed or passed is to open the test and compare the page to the reports page
20:45:31 HE: I am following along with a local version
20:45:38 HE: I can see the data is moving around
20:45:59 MK: The report ID is not visible
20:46:20 HE: You are right that there is not enough information presented to the admin
20:46:45 HE: To what James said earlier: an admin page that lists all relevant data is something that is included in the upcoming design we have been working on
20:47:38 HE: Even doing things like pulling out results isn't as easy as we'd expect. While we can do it, it will lead to a place where there is a lot of derived data, which could lead to errors. The design needs to address the database changes as well
20:48:23 HE: I think data exposure would be very helpful in the short term; that is how I get around some of the confusion while developing
20:50:34 MK: Alright, so this is what you are expecting: when I pressed the button for Safari, it did disappear from the test queue. When I load the candidate tests and go to the VO table, what I see in the data here, the % done and # of failed assertions, is the same as what I saw before. That may be because the changes we made didn't affect the results for VO. Is that correct, James?
20:51:22 JS: This is where it gets tricky. That plan was updated as a result of feedback from Freedom Scientific, and we updated the version of the plan for JAWS in the retesting, but couldn't update the plan for VO and NVDA, because we'd have assertions that we didn't have results for
20:51:29 JS: I'm not sure if we added or removed assertions
20:51:40 Isa: We removed some assertions
20:51:57 Isa: We didn't get rid of additional tests, and we changed priorities to optional
20:52:13 MK: If we remove assertions or change them to optional, we shouldn't have to rerun tests
20:52:45 HE: Currently in production, there are 35 failed assertions
20:52:57 JS: It may be that the reports for the other SRs haven't been updated
20:54:22 MK: So the other thing that happened when I pressed the button: it changed the candidate start date to today, which is a real problem, because when we put a plan through the candidate phase, it started last year, April 9th. When we change the plan during this phase or update the results, it is still the same candidate phase and has the original start date
20:54:34 MK: I think the target date button is in the wrong table
20:54:58 MK: I'm most concerned that the current build is changing the date. Can we fix that before we deploy this?
20:55:48 HE: Yes. I think I mentioned this in the initial email, that there are some nuances around this date. The big issue is that the test plan date is on the report. We want to create a test plan group
20:56:08 HE: So that the test plan date lives on the overarching group
20:57:54 HE: For example, this Radio Group example test plan report for VO/Safari would be a member of the group, as would NVDA/Firefox. They are all members of the group. Does that make sense?
20:58:07 HE: A member of the test plan group may be derived
20:59:08 MK: A test plan is what PAC produces: the tests for a particular test case, and those have a set of in-scope ATs. All the commands for each AT for each test are what define them as in scope. Browsers are not a factor in a test plan
20:59:29 MK: The version of the AT doesn't matter
21:00:30 JS: I think we would benefit from a conversation where we clearly enumerate what's included in these examples, go through those, and go through all the things we need to do with each of them
21:01:13 JS: An ideation phase: what things do we want to be coupled, what to be decoupled, and discuss what we want to do and how it would make sense to do those
21:01:28 MK: I started documenting the specific use cases and going through the working mode process
21:02:09 MK: I'm writing down what I think needs to be represented in the UI, so we can associate those with the specific entities and align what the user sees with what the database sees
21:02:26 MK: Next steps, Howard: the date problem
21:02:47 MK: I don't think we should push to prod, because if we change the dates in prod, that would mess up the data without a way to correct it
21:03:03 MK: I don't think we can put this into production until we create a solution to that problem
21:03:19 MK: I can try to write this user journey thing by Monday
21:03:59 HE: If that is a major blocking point, we could add logic to maintain these dates, but that feels risky. Or we can add a button to update the date manually, but that feels like more work.
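[Editor's note] HE's "test plan group" proposal above can be sketched as a data-model change: the candidate start date moves off the individual report onto an overarching group, so updating one member's results no longer resets the date MK is concerned about. A rough sketch with invented names (not the actual database schema):

```typescript
// Hypothetical sketch of the proposed grouping; all field names are invented.
interface TestPlanReport {
  at: string;       // e.g. "VoiceOver"
  browser: string;  // e.g. "Safari"
  resultsVersion: number;
}

interface TestPlanGroup {
  testPlan: string;          // e.g. "Radio Group"
  candidateStartDate: Date;  // lives on the group, set once
  members: TestPlanReport[]; // VO/Safari, NVDA/Firefox, ...
}

// Updating one member's results must not touch the group-level date.
function updateMemberResults(
  group: TestPlanGroup,
  at: string,
  browser: string,
  newVersion: number
): TestPlanGroup {
  return {
    ...group,
    members: group.members.map((m) =>
      m.at === at && m.browser === browser
        ? { ...m, resultsVersion: newVersion }
        : m
    ),
  };
}
```

The point of the shape is the invariant: re-publishing VO/Safari results changes only that member, while `candidateStartDate` stays at the original date (e.g. the April 9th start mentioned above).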
21:04:06 MK: The original date shouldn't need to be editable
21:04:16 MK: I guess we do need a way to undo it
21:04:28 MK: I will think about those requirements and options for mitigation
21:05:40 rrsagent, make minutes
21:05:41 I have made the request to generate https://www.w3.org/2023/03/02-aria-at-minutes.html Matt_King
21:07:26 present+ Mike_Pennisi
21:11:28 rrsagent, make minutes
21:11:30 I have made the request to generate https://www.w3.org/2023/03/02-aria-at-minutes.html Matt_King
22:24:35 jongund has joined #aria-at
23:04:51 jongund has joined #aria-at
23:48:40 jongund has joined #aria-at