Meeting minutes
App Development Update
MK: A new version of the app is available, mostly back-end changes: https://
MK: A new data model has been implemented to allow more flexibility in using the data
MK: The next thing for the app is the report page; it will not be different from the old page, but it is being connected to the new back end
ST: The settings page is ready, there are no reports yet
ST: We have been looking at a large refactoring of the back end; I am happy to have shorter release times going forward
MK: There will be more frequent updates
MK: The very next thing is the report pages, and there are other things that need to be prioritized
MK: We hope to be running tests soon
ST: Another change is that adding testers is much easier; we have total control
ST: Who else would like to be a tester?
JS: I added a member of our team as a tester, what does the deploy look like...
ST: Deployment is manual; there are some expectations on testers, and we will discuss a process for merging...
JS: Sounds great
MK: Should we spend some time on this now?
JS: That could be an offline discussion
ST: Before I make promises about deploy speed when a new person is added, who approves it? Can they just be added?
MK: We have not talked about the process for adding testers, but people on the call ...
MK: That is a really good discussion
JS: Maybe talk about it next week, but it is easier
MK: That is part of the onboarding process; people need to have an understanding of what is expected
JS: Are you volunteering to write it?
MK: Let's talk about that next week
MK: Do we have anything to say about automation, MF?
MF: Not at this time
MK: JS, are there any plans on what you want to share?
JS: Focusing on test plans for the new patterns, for the media seek slider
JS: We have 16 tests coming down the pipe in the next few weeks
New Test Queue
https://
MK: I am opening the test queue, but I am signed out
MK: I believe there are about 16 test plans, but only 3 have been added to the test queue
MK: Can people see the app?
AV: I cannot see the page
MK: I had that problem, and I had to log out and back in
ST: AV is not a user; I will add them now
MK: JH are you in?
ST: JH and AV are not users yet; can you put your username in chat?
JS: JH is a tester
MK: There is a table for each screen reader/browser combination
MK: It is using the identifier "latest"; we were originally going to use the version, but in discussions over the past few weeks we realized we will collect version information automatically
MK: The combinations will just use SR and browser name
<michael_fairchild> I have a question about app permissions in GitHub.
ST: There is a file in aria-at called support-at, so you could have an in-progress test that is not in the app; is this still useful?
MK: We need a key in the file
MK: JS and ST you need to coordinate on fixing this issue
MK: For now we have some useful names, we can fix them going forward
MK: We have a test plan column, and a column for tester for who is working on it
MK: A column for report status
MK: Actions is mostly administrative
MK: For screen reader users there are some labeling issues, so you have to look at the heading on the page for the name of the test
MK: I want to talk about the basic layout and getting into a test
MK: Are people able to get in?
MK: Who has seen the test plans?
AV: I have not
HR: I have
JG: We may want to look at the use of "Success Criteria", concerned with confusion with WCAG
MK: We need to have some time to review the labeling of the sections
MK: Each test has its own form for each part of the test
MK: Each section allows recording of results
AV: That makes sense
MK: Our goal for the next several weeks, is to give feedback on the plans themselves
MK: Are the instructions correct, are the commands correct, and are these the right behaviors?
MK: If we find problems there is a button on the page "Raise An Issue"
MK: Have people used markdown?
HR: I have used before
MK: You can just use the simplest features of markdown
MK: How are we grouping them so that JS can use them
JS: What we discussed is that each test is under an issue
JS: We can go to the issue and see which test it is related to
MK: I am worried about labels
JS: Maybe a test feedback label for now
JS: We can then discuss at the next meeting
ST: There is a uniform label being applied now
MK: You do have the name of the test in the title
MK: Do you want people to add to the title field, to make it more descriptive?
MK: If you get 3 people providing feedback on the same test, they would have the same title
JS: We can see what comes in we can see what happens and make some adjustments
MK: If you find problems with instructions, assertions or the commands
MK: We are not looking for feedback on UI issues, just on the content of the tests themselves
MK: Another problem that can be reported is a problem with a setup script
MK: There is a button "Open Test Page" that always opens a new window with the test code, and it may have an initial JavaScript script to set up the test
MK: Is there an example of a script
JS: The test page will tell you what should happen; if that does not happen, it is an issue
JS: They run when you open the test page
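As an illustration of such a setup script, the sketch below shows the general shape: a script that runs on page load and puts the page into the state the test instructions assume. The element ID "menu-button" and the function name are assumptions for this sketch, not taken from the actual test plans.

```javascript
// Hypothetical setup script, illustrating what a test page might run
// on load. The element ID "menu-button" is an assumption for this
// sketch, not from the actual aria-at test plans.
function setupTest(doc) {
  // Put the page into the state the test instructions assume,
  // e.g. move focus to the widget under test.
  const widget = doc.getElementById('menu-button');
  if (widget) {
    widget.focus();
  }
  return widget;
}

// In a browser, run the setup automatically when the test page opens.
if (typeof window !== 'undefined') {
  window.addEventListener('load', () => setupTest(document));
}
```

As noted in the discussion, these scripts run when the test page opens; if the state the test page describes does not appear, that is a reportable issue.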
MK: Do you know what the issue with having a button is?
ST: We are having an issue with including a button onload; we would need to wrap all the APG examples in a framework
ST: We are currently testing arbitrary pages
MK: If it is always a button with the same ID
JS: Can we put this on the agenda
MK: JS seems to have similar ideas as mine
MK: Could we have an async discussion on ...
MK: JG we don't have the button yet
JS: When you said there are 21 tests, those are the ones that are applicable; there are more that apply to other SR/browser combinations
MK: Some of the tests for VO are different than those for NVDA/JAWS
MK: That type of feedback makes sense
MK: We have 6 total combinations, we are missing other combinations
MK: When we are looking at the tests plans these combinations should be sufficient
MK: I think less is more in testing the current test cases
ST: How do we know when we are done with this stage
MK: If testers get different results, it may identify confusion over the test plan
MK: So that will be one way to identify issues in the test plan, we want people to get the same results
MK: I want to make sure we have a mix of ....
MK: of experience on each one
MK: We need to define as a group what consistency between testers looks like
ST: That allows us to get started
ST: Do we need a way to mark when a test is done?
MK: We need a way to move from draft to candidate plan, that can be sent to screen reader companies for their review and comment
MK: Then after their review we can move to a recommended plan
MK: Can we assign some people? People can assign themselves
MK: We will use calls to talk about the results
MK: You can start testing now, and you can raise issues