Meeting minutes
A11y4Children follow-up
SuzanneTaylor: janina and I discussed where we might have synergies with Adapt. We were asked to come up with some use cases showing how the Adapt work could benefit children with disabilities, as well as to address some general problems we had found.
… Bob Dolan had done a similar project on adapting learning materials for children, which we included, along with a general curb-cut use case for children in general.
… I added some code to demonstrate possible end-to-end use too.
SuzanneTaylor: Our group is very excited about this work.
… We're also working, in parallel, on something called "The ethics of interface", which is about the interfaces we provide; it may become more closely related to Adapt in future.
janina: Thanks and welcome! I've taken some of these questions to our Research Questions TF, which focuses on how we can turn these use cases into standards. We'd like to know a bit more about what you're trying to achieve in your use cases. This will help us determine what we can use (e.g. adapt-* attributes, browser extensions, ...) to build on this work.
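[ Editor's illustration - a minimal TypeScript sketch of how a browser extension content script might build on adapt-style markup as mentioned above; the data-distraction attribute, its use, and the user preference are assumptions for illustration, not confirmed spec text ]

```ts
// Illustrative only: the attribute and preference below are assumptions,
// not published Adapt vocabulary. This shows the general shape of a browser
// extension content script reacting to adapt-style markup on a page.

// Hypothetical user preference, e.g. read from extension storage.
const suppressDistractions = true;

function adaptPage(root: ParentNode): void {
  // Assume authors mark distracting regions with a data-distraction attribute.
  root.querySelectorAll<HTMLElement>("[data-distraction]").forEach((el) => {
    if (suppressDistractions) {
      el.hidden = true; // hide the distraction rather than removing it
    }
  });
}

adaptPage(document);
```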
… It may be a great time to talk with EPUB, as they've just finished a big release of their spec; we (APA in general) are talking to them about a range of things at TPAC [W3C conference].
bob: I have a lot of ideas for how UDL may be used, as well as its limitations.
<SuzanneTaylor> document being discussed is: https://
Use cases (general)
bob: This is an NSF-funded project called CLIPPS.
… for developing a technological approach to delivering content to learners that meets their pedagogical and accessibility needs, using UDL.
… We developed an ontology to support a tagging schema to allow content elements to be added to a library, then chosen adaptively on-the-fly to produce learning materials.
… Rules would determine which components would be chosen; the ordering; and the rendering (different media/formats).
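[ Editor's illustration - a minimal TypeScript sketch of the selection idea Bob describes; the type names, tag values, and rules are invented for illustration and are not the CLIPPS implementation ]

```ts
// A sketch of "tagged content elements in a library, chosen by rules per
// learner". All names and values here are hypothetical.

interface ContentElement {
  id: string;
  concept: string;              // ontology concept the element teaches
  modality: "text" | "audio" | "video" | "interactive";
  readingLevel: number;         // e.g. grade level
}

interface LearnerProfile {
  preferredModalities: ContentElement["modality"][];
  maxReadingLevel: number;
  context: "skimming" | "studying" | "noisy-environment" | "default";
}

// A "rule" filters and ranks candidate elements for one concept.
function selectFor(
  concept: string,
  library: ContentElement[],
  learner: LearnerProfile
): ContentElement[] {
  const rank = (m: ContentElement["modality"]): number => {
    const i = learner.preferredModalities.indexOf(m);
    return i === -1 ? Number.MAX_SAFE_INTEGER : i;
  };
  return library
    .filter((el) => el.concept === concept)
    .filter((el) => el.readingLevel <= learner.maxReadingLevel)
    // In a noisy environment, prefer non-audio renderings.
    .filter((el) => learner.context !== "noisy-environment" || el.modality !== "audio")
    .sort((a, b) => rank(a.modality) - rank(b.modality));
}
```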
[ Scribe note - this is the presentation Bob gave some weeks back ]
bob: We wanted to avoid jumping to conclusions about a given impairment resulting in a specific choice of content/AT—there is a lot of context, personal preferences, etc. to take into account.
[ Scribe note: info Bob provided before: https://
bob: E.g. "right now I'm just skimming/studying for a test/in a noisy environment"
1.2.6 Trouble remembering and using details
[ Scribe note - ref the Google document linked above ]
bob: We relied heavily on a concept of user agents or digital companions—think Clippy but more sophisticated—which would act like an expert tutor.
bob: This could be as simple as mousing over the content and having something pop up, or it could be an animated avatar, that asks how help may be provided, or monitors the student's efforts, and comes up with recommendations.
bob: Kurt Vanlehn did some pioneering work in this area. One thing expert tutors do is to allow the student to engage in productive struggle, e.g. allow them to explore far, and get stuck.
Lionel_Wolberger: We're excited by this work, as we often feel we could develop things to support it. We could listen for hours! Wondering if we can focus on the things that we might be able to develop to support your work.
bob: +1; excited to work on this.
bob: could you say more about what you meant by "chunking"?
Lionel_Wolberger: We're keen to work on all of these exciting ideas, but we need to prioritise on the ones that we can develop first. So we'd like to know more so we can decide which to work on first.
SuzanneTaylor: We in the CG have paraphrased Bob's research into these use cases, based on our understanding of it. +1 to this approach.
Lionel_Wolberger: Suggest we go to a slightly higher level to figure out what should be our best first step. We have a good team for keeping the standardisation process moving. I'm keen to listen more about items we _could_ work on, and then it should become clear which make sense for us to work on _next_.
… We'd then work on them for a while, and check our understanding with you.
… Some things may go into our current draft; others into a later one. We can keep tabs on these. The conversation is very important; more of it feels apt for the CG. In the WG we should probably focus on what we want to get into the spec, and work on that here.
bob: +1. It's good for us to understand where you've been focusing. We're not necessarily wed to these approaches, but think these will help us understand the work you've been doing.
Lionel_Wolberger: Let's look at each item in this doc, and put it in a "current working draft" or "later working draft" basket?
… We need your help to determine which is which.
… 1.2.6 looks like a cognitive/neurodiversity issue. We're very interested; you've already come up with some good markup. This is more of a "next WD" topic for us, as we have not undertaken work in earnest here.
1.2.7 Needs concrete examples
From the doc: "User has mild cognitive disabilities and reads significantly below grade level. Although science is the user’s least favorite subject, they are able to understand abstract concepts when the concepts are connected to real-life experiences."
bob: ... i.e. not through reading.
Lionel_Wolberger: This would require more conversation; sounds like for us it's in the "next" basket. We have a lot of exciting ideas wrt new attributes, to work on in the future.
1.2.8 Does best with pure math and science concepts
Example from doc: "The user is highly talented in math and physics, but when he moved to a new school district he found that his new math and physics materials were largely based on narratives and discussion. The user cannot pay attention to the reading and discussion materials long enough to reach the math and physics concepts that he loves."
SuzanneTaylor: When the math curriculum switched to a more verbose, story-based approach, which worked for the majority of students, it became a barrier for students with autism.
SuzanneTaylor: Now all the concepts that the student loves are hidden under extra layers of text.
Lionel_Wolberger: This is excellent work; another one for later, for us.
1.2.9 Children in Learning-specific contexts
from the doc: "Content or user interfaces can be too complex for the current skills of children with and without disabilities. A child may need a variety of simplification levels as they become more familiar with the content and/or the user interface. Agency to move between levels and to do this separately for the UI and for the Content allows children to not only consume information at their own pace, but
to also develop the skills needed to consume increasingly complex information."
SuzanneTaylor: We positioned this one as a curb cut, per janina's suggestion.
… We thought this one, wrt Adapt, needs expansion. There's UI and there's content, and either one could be too complex or too simple for the child.
SuzanneTaylor: So we took your idea of simplification, and asked if there might be two?
Lionel_Wolberger: I think this is in the first bucket (current work).
Lionel_Wolberger: We have a spec relating to this under development.
… This was one of our key use cases to start with.
… Whilst this is going into our next working draft, we could take it up sooner.
… We've spent a lot of time on simplification.
SuzanneTaylor: You had a demo online with a store. We presented that to our CG. Initial feedback was concern around simplification resulting in taking away the opportunity to learn, but after we showed the demo, people really liked the level of adaptation, based on their needs at that particular time.
… A student could go to the simpler site, understand it, then make it more complex once they understood it. This is empowering.
matatk: Likes that the CG has thought about simplification for children--though it may get more complex!
matatk: Shares a COGA concern that what is simpler for one person isn't simpler for another; so what do we simplify, where, and how?
matatk: Notes this is critical for people with dementia
matatk: Recalls that the hosting content provider may actually know a lot about the user and be able to simplify based on data stored in the backend
matatk: we're not to a solution yet, but don't want to block the more direct approach either
matatk: Agree we want to work on this soon
bob: This is an example of where UDL can be a powerful framework to leverage, looking for ways things could be simplified.
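[ Editor's illustration - a minimal TypeScript sketch of the "two independent simplification dimensions" (UI vs. content) idea discussed above; the data-* attribute names and level values are assumptions for illustration, not Adapt spec text ]

```ts
// Sketch: the child (or a helper) can move the UI and the content between
// simplification levels independently. Attribute names and levels are invented.

type Level = 1 | 2 | 3; // 1 = simplest, 3 = full complexity

interface SimplificationSettings {
  uiLevel: Level;
  contentLevel: Level;
}

function applyLevels(root: ParentNode, settings: SimplificationSettings): void {
  // Assume each adaptable element declares which dimension it belongs to
  // (data-dimension="ui" or "content") and the minimum level at which it
  // should appear (data-min-level).
  root.querySelectorAll<HTMLElement>("[data-min-level]").forEach((el) => {
    const minLevel = Number(el.dataset.minLevel);
    const dimension = el.dataset.dimension === "ui" ? "uiLevel" : "contentLevel";
    el.hidden = settings[dimension] < minLevel;
  });
}

// Example: simplest UI, but content at an intermediate level.
applyLevels(document, { uiLevel: 1, contentLevel: 2 });
```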
Lionel_Wolberger: ACK SuzanneTaylor's note that items in the doc below this point were discussed less. Let's try to get through some more...
1.2.10 Parents of different ability
from the doc: "A child may be deaf/hard of hearing, mute, blind, depressed and their parents may have neither of those experiences, or vice-versa."
SuzanneTaylor: We want to have two people working together, and allow them to collaborate.
… Sounds like "next" bucket stuff, but important use case.
Lionel_Wolberger: We're interested in the concepts of guardians etc.
matatk: My Ph.D. was in this area (accessible collaboration) so very interesting to hear about what you're doing here.
janina: We're doing work on collaborative editing environments; curious as to your thoughts on these, and whether they're related.
Bob: We've not gone down that path, but I think there is something there; it's an interesting area.
1.2.11 Children
SuzanneTaylor: These were notes from our meeting, on the ethics of interface.
… This is about coming up against dead-ends in digital systems. E.g. if you see a word you don't understand, what do you do?
… Some people may go away and look it up. A robust interface would provide means to address the problem within the UI. The problem is where the presentation doesn't match where you are today.
… This is work that the CG decided to dive into for the next year (2023-2024).
1.2.12 Trauma-informed adaptations
SuzanneTaylor: You might think that some content is not triggering, but if we know someone is triggered by particular things, is there some way to adapt for this? It could be anything, including things we don't know about.
matatk: Notes COGA also interested here
matatk: May be important to prevent triggers
matatk: Also there seems to be evidence in the literature that trigger warnings aren't all that effective, though this work is more about removing things that we know are problematic before they are presented.
1.2.13 Help to empathize (social skills)
from the doc: "In a collaborative exercise or when reading information about social activities, a child might not know how to react or have more questions about the content. This might discourage them."
Next steps
Lionel_Wolberger: We're out of time—please could you return?
Lionel_Wolberger: Also, some things we have been looking for. We have some work on AAC. Wondering what you have found regarding this.
… One example is videos, which people increasingly use just to absorb information. They have automatic transcription and chapter headings; however, chapter headings don't support AAC.
Lionel_Wolberger: What could your group do, if you had the ability to add more images/symbols to content?
Lionel_Wolberger: Our first bucket is AAC stuff (in our current draft).
bob: +1
bob: I love it, and would like to start our next conversation on this.
janina: Our innovation: we believe we have the Rosetta Stone of AAC.
Lionel_Wolberger: If we can get more velocity on the case we have now (AAC), it will help us to move all the rest forward.
janina: In order to get AAC into chapter headings, we would need changes to HTML, so we're building the case we need to make to the TAG and WHATWG: "are you sure you want to leave these people out?" We have a lot of cool stuff that we'd like to do.
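[ Editor's illustration - a hedged TypeScript sketch of what "AAC symbols on chapter headings" could look like with an adapt-style data-symbol annotation and a user agent or extension doing the rendering; the concept IDs, symbol URLs, and rendering approach are placeholders for illustration, not spec or implementation text ]

```ts
// Headings carry a symbol concept reference; a user agent or extension renders
// a matching symbol from the user's preferred AAC symbol set next to the text.
// The IDs and URL below are placeholders, not real Blissymbolics references.

const chapterMarkup = `
  <h2 data-symbol="00000">Photosynthesis</h2>
  <h2 data-symbol="00001">The water cycle</h2>
`;

// Hypothetical lookup from concept ID to a symbol image.
function symbolUrlFor(conceptId: string): string {
  return `https://example.org/symbols/${conceptId}.svg`; // placeholder URL
}

function renderSymbols(root: ParentNode): void {
  root.querySelectorAll<HTMLElement>("[data-symbol]").forEach((heading) => {
    const img = document.createElement("img");
    img.src = symbolUrlFor(heading.dataset.symbol ?? "");
    img.alt = ""; // decorative: the heading text already carries the meaning
    heading.prepend(img);
  });
}

const container = document.createElement("div");
container.innerHTML = chapterMarkup;
renderSymbols(container);
```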
bob: We can also put on the agenda for next time: why that, as opposed to relying on ARIA?
Lionel_Wolberger: If we have solid arguments as to how this would help students' lives, it would really help our case.
SuzanneTaylor: We can discuss in our CG.