IRC log of aria-at on 2023-09-27

Timestamps are in UTC.

17:00:42 [RRSAgent]
RRSAgent has joined #aria-at
17:00:46 [RRSAgent]
logging to https://www.w3.org/2023/09/27-aria-at-irc
17:00:52 [jugglinmike]
rrsagent, make log public
17:08:10 [Sam_shaw]
Sam_shaw has joined #aria-at
17:08:30 [Matt_King]
Matt_King has joined #aria-at
17:08:30 [jugglinmike]
Zakim, start the meeting
17:08:30 [Zakim]
RRSAgent, make logs Public
17:08:32 [Zakim]
please title this meeting ("meeting: ..."), jugglinmike
17:08:38 [Matt_King]
present+
17:09:03 [jugglinmike]
meeting: ARIA and Assistive Technologies Community Group Weekly Teleconference
17:09:07 [jugglinmike]
present+ jugglinmike
17:09:10 [jugglinmike]
scribe+ jugglinmike
17:10:25 [jugglinmike]
Matt_King: We have four topics on the agenda today, all related to the app
17:10:51 [jugglinmike]
Matt_King: the most meaty topic is the final one
17:10:58 [howard-e]
howard-e has joined #aria-at
17:11:05 [howard-e]
present+
17:11:11 [jugglinmike]
Matt_King: Are there any additional topics anyone would like to bring?
17:11:20 [jugglinmike]
Matt_King: Hearing none, we'll stick with these four
17:11:38 [jugglinmike]
Matt_King: the next meeting is Thursday, October 5
17:12:03 [jugglinmike]
Topic: Issue 795: Discuss changing title of reports page
17:12:17 [jugglinmike]
github: https://github.com/w3c/aria-at-app/issues/795
17:13:03 [jugglinmike]
Matt_King: We talked about changing the title of the "Candidate Review" page, but I think that the title of the "Test Reports" page could also use improvement
17:13:18 [jugglinmike]
Matt_King: If you saw that title in Google search results, it would tell you nothing
17:13:59 [jugglinmike]
Matt_King: Because so much of what we talk about is about AT interoperability
17:14:35 [jugglinmike]
Matt_King: So my proposal for that page is "Assistive Technology Interoperability Reports". In contexts where we need an abbreviated version, we could use "AT Interop Reports"
17:15:02 [jugglinmike]
Matt_King: Any thoughts?
17:16:04 [jugglinmike]
present+ Joe_Humbert
17:16:40 [jugglinmike]
Joe_Humbert: I like your title, and I agree it will make the page's meaning more clear when it appears in search results
17:16:48 [jugglinmike]
present+ IsaDC
17:16:52 [jugglinmike]
present+ James_Scholes
17:16:59 [jugglinmike]
James_Scholes: I support the new title
17:17:02 [jugglinmike]
IsaDC: Me too
17:17:17 [jugglinmike]
Matt_King: Great!
17:17:29 [jugglinmike]
Topic: Issue 738: Changes to unexpected behavior data collection and reporting
17:17:43 [jugglinmike]
github: https://github.com/w3c/aria-at-app/issues/738
17:18:16 [jugglinmike]
Matt_King: I'd like to prepare a better mock up so there's no ambiguity in our future discussion
17:19:07 [jugglinmike]
Matt_King: But one of the decisions we made last time is that when we record an unexpected behavior, we would assign one of two severities (rather than one of three)
17:19:41 [jugglinmike]
Matt_King: I was working on this because I didn't want to go from "high", "medium", and "low" to just "high" and "medium"; keeping "medium" and "low" also felt wrong
17:20:01 [jugglinmike]
Matt_King: For now, I've settled on "high impact" and "moderate impact"
17:20:49 [jugglinmike]
James_Scholes: As you say, "moderate" is almost meaningless because it could apply to any level of impact that is not "high"
17:21:23 [jugglinmike]
Matt_King: We want to set a fairly high bar for something that is "high impact."
17:22:03 [jugglinmike]
Matt_King: The assertion will read something like "The AT must not exhibit unexpected behaviors with high impact"
17:22:19 [jugglinmike]
Matt_King: And "The AT should not exhibit unexpected behaviors with moderate impact"
17:22:39 [jugglinmike]
Matt_King: I'm going to move forward with those terms for now and bring it back to this meeting next week
17:22:52 [jugglinmike]
Topic: Issue 689: Data integrity strategy
17:23:01 [jugglinmike]
github: https://github.com/w3c/aria-at-app/issues/689
17:23:35 [jugglinmike]
Matt_King: One of the things that we have to have a lot of confidence in is that all the tallies and counts and information we present in reports is accurate--and that we don't break it
17:24:12 [jugglinmike]
Matt_King: When you run a report, the system is going to count up the number of passes and fails, it's going to calculate percentages, and it's going to capture dates and times
17:24:24 [jugglinmike]
Matt_King: There are an awful lot of ways for things to go wrong in that process
17:24:53 [jugglinmike]
Matt_King: And as we transfer data to APG in the form of support tables, I wanted to ask: how are we going to approach making sure that the system doesn't produce errors?
17:25:08 [jugglinmike]
present+ Lola_Odelola
17:25:26 [jugglinmike]
Lola_Odelola: Through some of the work that we've already done, some of these questions have already been answered
17:25:51 [jugglinmike]
Lola_Odelola: An outstanding question is: do we have any ideas for the types of queries we'd like to perform to make sure there are no anomalies in the data?
17:26:10 [jugglinmike]
Lola_Odelola: What kind of anomalies would we want to check before and after a deployment?
17:26:56 [jugglinmike]
howard-e: For the most part, I'd want to be able to trust that the system catches problems with the tests that are being added
17:27:42 [jugglinmike]
Matt_King: I added quite a long list in the V2 format--a list of checks for the format
17:27:59 [jugglinmike]
Matt_King: While I was doing that, though, I wasn't thinking about how mistakes in the plan could introduce inconsistencies in the data
17:28:24 [jugglinmike]
Matt_King: There are some checks like, "every AT must have at least one command mapped to every assertion" or something like that
17:28:57 [jugglinmike]
Matt_King: And I have a separate issue related to being able to specify that an AT has no command
17:29:17 [jugglinmike]
Matt_King: But now, I'm thinking more about the data that's on the "reports" site
17:29:56 [jugglinmike]
Matt_King: For instance, the number of assertions which have verdicts--that number shouldn't change after a data migration
17:30:34 [jugglinmike]
Matt_King: I think it would also be important to check that for subcategories of the data (e.g. the total number of reports generated from recommended test plans, the total number of recommended test plans)
17:31:11 [jugglinmike]
James_Scholes: Are we talking about validating user input? What are we validating against?
17:31:44 [jugglinmike]
Matt_King: Against the data before the deployment. This is about ensuring that we maintain data integrity during deployment operations
17:32:04 [jugglinmike]
Matt_King: Maybe we need to enumerate the scenarios where we believe data integrity could be compromised
17:32:22 [jugglinmike]
Matt_King: I'm assuming that when you save and close in the test runner, some checks are performed
17:32:33 [jugglinmike]
James_Scholes: To what extent are checks like that already present?
17:33:10 [jugglinmike]
James_Scholes: for example, during a test run, when a Tester provides results for a single Test, I assume that when they save those results, checks are made to verify that the data is correctly saved
17:33:49 [jugglinmike]
Lola_Odelola: I think part of the issue here (as Matt mentioned earlier) is that this is an oldish issue, and in the time since it was created, there have been a lot of improvements to the code and the processes
17:34:58 [jugglinmike]
Lola_Odelola: Now, what we want to identify is: are there scenarios that could cause inconsistent data? We're asking because we have seen inconsistent data in situations we didn't expect
17:35:33 [jugglinmike]
Lola_Odelola: I'm happy for this to be put on the back burner until something happens in the future where we need to readjust
17:36:17 [jugglinmike]
Matt_King: Okay, though I'm still interested in building/documenting a fairly rigorous set of checks to ensure data integrity before and after deployment
17:36:33 [jugglinmike]
Matt_King: That is, any time we make changes to the data model
17:36:59 [jugglinmike]
Matt_King: For instance, when we do the refactor, we want to make sure that the total number of assertions doesn't change, et cetera
17:38:06 [jugglinmike]
James_Scholes: I'm not necessarily advocating for tabling this discussion, but I do believe that we need to have the existing checks documented before we can have a constructive conversation on the topic
17:38:18 [jugglinmike]
Lola_Odelola: That makes sense. We can put something together for the group
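[Editor's note: the before/after deployment checks Matt_King describes could be scripted as a set of invariant counts captured before a migration and compared afterward. Below is a minimal TypeScript sketch of that idea; the table names, column names, and the `pg` query helper usage are illustrative assumptions and may not match the actual aria-at-app schema.]

```typescript
// Hypothetical deployment invariant check: counts that must not change
// across a data migration. Table and column names are assumptions for
// illustration, not the real aria-at-app schema.
import { Client } from "pg";

const INVARIANT_QUERIES: Record<string, string> = {
  // Total number of assertion verdicts recorded by testers
  assertionVerdicts:
    `SELECT COUNT(*) FROM "AssertionResult" WHERE verdict IS NOT NULL`,
  // A subcategory check: reports generated from recommended test plans
  recommendedReports:
    `SELECT COUNT(*) FROM "TestPlanReport" WHERE phase = 'RECOMMENDED'`,
};

// Capture all invariant counts from a connected client.
async function snapshot(client: Client): Promise<Record<string, number>> {
  const counts: Record<string, number> = {};
  for (const [name, sql] of Object.entries(INVARIANT_QUERIES)) {
    const { rows } = await client.query(sql);
    counts[name] = Number(rows[0].count);
  }
  return counts;
}

// Run snapshot() once before the deployment and once after, then diff.
// Any entry in the returned list is a potential data-integrity break.
function diffSnapshots(
  before: Record<string, number>,
  after: Record<string, number>
): string[] {
  return Object.keys(before)
    .filter((name) => before[name] !== after[name])
    .map((name) => `${name}: ${before[name]} -> ${after[name]}`);
}
```

[One design point worth noting: expressing each check as a named query makes the list of invariants itself documentable, which lines up with James_Scholes's request to have the existing checks written down.]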
17:42:44 [jugglinmike]
Topic: Issue 791: Generating reports for specific AT/Browser versions
17:42:53 [jugglinmike]
github: https://github.com/w3c/aria-at-app/issues/791
17:43:11 [jugglinmike]
Matt_King: Right now, everything we've been dealing with is reviewing drafts and getting them to candidate
17:43:29 [jugglinmike]
Matt_King: We haven't cared too much about browser and AT versions; we've just recorded what Testers have been using
17:43:44 [jugglinmike]
Matt_King: That's fine until we reach the "Recommended" phase of a report
17:44:01 [jugglinmike]
Matt_King: At that phase, we're reporting to the world how this software works
17:44:28 [jugglinmike]
Matt_King: The expectation we've had from the beginning is that once we have a "Recommended" test plan, we're going to keep that data current indefinitely
17:45:38 [jugglinmike]
Matt_King: At that point, it becomes important for us to re-run the reports when each AT releases a new version and (perhaps to a slightly less critical extent) when each browser releases a new version
17:46:10 [jugglinmike]
Matt_King: So, this can be a little bit challenging, and there are a lot of decisions to make in this space
17:46:16 [jugglinmike]
Matt_King: Let's talk about screen readers, first
17:46:33 [jugglinmike]
Matt_King: Right now, when we add a plan to the queue, we don't specify a screen reader version on purpose
17:46:49 [jugglinmike]
Matt_King: (We had this ability three years ago, but we removed it because it was causing problems for us)
17:47:06 [jugglinmike]
Matt_King: We still want to say "any recent version is fine" for draft and candidate
17:47:20 [jugglinmike]
Matt_King: But when we reach "recommended", we need to enforce specific versions
17:47:51 [jugglinmike]
Matt_King: One problem is that we don't have a way to indicate in the test queue that a specific version is required (that seems pretty easy to solve)
17:48:18 [jugglinmike]
Matt_King: It also means requiring Testers to confirm that they're using the appropriate version
17:49:18 [jugglinmike]
Matt_King: Thirdly, the Test Admin needs to choose a specific version when making new Test Plans
17:49:28 [jugglinmike]
James_Scholes: that all seems right to me
17:49:34 [jugglinmike]
Matt_King: Okay, moving on to browsers.
17:50:41 [jugglinmike]
Matt_King: All of this gets so much harder because (1) there are a lot more browsers, (2) we would need to maintain a list of browser versions, and (3) a lot of people don't have control over the version of the browser which is installed on their system
17:51:39 [jugglinmike]
James_Scholes: for regular users who are not using corporate machines, it is quite difficult to install a specific version of a browser (first, because specific builds aren't always published, as in the case of Chrome; second, even if you manage to do that, it's probably going to update itself anyway)
17:52:19 [jugglinmike]
James_Scholes: I think that's separate from the "enterprise problem" where Testers are working in a corporate setting where there is strict external control over the applications installed on their machine
17:52:49 [jugglinmike]
James_Scholes: I think it's really difficult to ask people to use a specific browser build. Even requiring just a major version is tricky
17:53:02 [jugglinmike]
Matt_King: But we could ask machines to do this
17:53:34 [jugglinmike]
Matt_King: I'm thinking about a decision that we could make. For the time being--until we have full automation across the board--we only ever require specific AT versions...
17:53:53 [jugglinmike]
Matt_King: ...that we TRACK the browser versions (just like we do now), but we don't require it
17:54:30 [jugglinmike]
James_Scholes: If we just say, "we don't want someone testing with a very outdated version of Chrome", I think that may be a good middle ground
17:55:22 [jugglinmike]
James_Scholes: Because if we go super-specific on browser version requirements, we'd probably also have to impose requirements on operating system versions
17:57:41 [jugglinmike]
jugglinmike: We might want to track version release dates as well, because "four major versions old" means something very different for Chrome and for Safari
17:58:48 [jugglinmike]
jugglinmike: The kind of warning we're discussing (telling Testers that their browser is suspiciously old) will be hard to express as a single "version delta" across all browsers. Expressing it in terms of release date will be more meaningful
17:58:55 [jugglinmike]
Matt_King: I agree!
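[Editor's note: a release-date staleness check like the one jugglinmike proposes might look like the sketch below. The release-date table, the browser/version values, and the 180-day threshold are all hypothetical; a real implementation would need a maintained mapping of browser versions to release dates.]

```typescript
// Hypothetical staleness check based on release dates rather than a
// fixed "N major versions old" rule, since release cadence differs
// widely between browsers (e.g. Chrome vs. Safari).
const MAX_AGE_DAYS = 180; // assumed threshold, not from the discussion

// Illustrative data only; a real implementation would maintain this
// mapping of browser versions to their release dates.
const releaseDates: Record<string, Record<string, string>> = {
  chrome: { "116": "2023-08-15", "117": "2023-09-12" },
  safari: { "16.6": "2023-07-24", "17.0": "2023-09-18" },
};

function isSuspiciouslyOld(
  browser: string,
  version: string,
  now: Date = new Date()
): boolean {
  const released = releaseDates[browser]?.[version];
  if (!released) return false; // unknown version: no warning
  const ageDays =
    (now.getTime() - new Date(released).getTime()) / (1000 * 60 * 60 * 24);
  return ageDays > MAX_AGE_DAYS;
}

// Usage: warn a Tester still running Chrome 116 well after its release.
// isSuspiciouslyOld("chrome", "116", new Date("2024-06-01")) === true
```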
18:00:04 [jugglinmike]
Matt_King: So for the foreseeable future, we will not require updating the data for Recommended Reports in response to the release of new browser versions. We will only require updating in response to new versions of ATs...
18:00:18 [Joe_humbert]
Joe_humbert has joined #aria-at
18:00:30 [jugglinmike]
Matt_King: And in that case, we will accept "any recent" version of the browsers under test
18:00:47 [Joe_humbert]
Present+
18:00:50 [jugglinmike]
James_Scholes: sounds good
18:00:56 [jugglinmike]
IsaDC: sounds good
18:01:08 [jugglinmike]
Lola_Odelola: sounds good
18:01:58 [jugglinmike]
Matt_King: Okay, that's good. This helps a lot. I'll update my work to consistently use the terminology "any recent browser"
18:02:30 [jugglinmike]
Matt_King: Thanks, everyone!
18:02:36 [jugglinmike]
Zakim, end the meeting
18:02:36 [Zakim]
As of this point the attendees have been Matt_King, jugglinmike, howard-e, Joe_Humbert, IsaDC, James_Scholes, Lola_Odelola
18:02:38 [Zakim]
RRSAgent, please draft minutes
18:02:40 [RRSAgent]
I have made the request to generate https://www.w3.org/2023/09/27-aria-at-minutes.html Zakim
18:02:47 [Zakim]
I am happy to have been of service, jugglinmike; please remember to excuse RRSAgent. Goodbye
18:02:47 [Zakim]
Zakim has left #aria-at
18:02:50 [jugglinmike]
RRSAgent, leave
18:02:50 [RRSAgent]
I see no action items