
Innovative Adaptation, Personalization and Assistive Technologies

Facilitator: Matthew Atkinson

The world of research has developed a number of super-helpful, user-tailored and fairly transparent adaptations that could help a wide range of people more easily access devices, computers and the web. This session will introduce a few of these, but the primary goal is to discuss them and seek feedback on how we might incorporate them into the web. Get a flavour of the adaptations.

Slides

Minutes (including discussions that were not audio-recorded)




Transcript

So my name is Matthew.

Welcome, everyone.

I'm an accessibility consultant with The Paciello Group.

And I'm also a member of the W3C's Accessible Platform Architectures or APA Working Group.

My pronouns are he and him.

If you like, you can include yours in your Zoom name via the rename function, if you need to use that.

This is a slide about Zoom and IRC, but I think that my colleagues have already covered everything.

Essentially, there is an IRC channel, but if you want to speak you can use the Raise Hand feature or the Zoom chat, and we'll be monitoring that after the session, after the presentation I mean.

And we already have a scribe, I believe, so thank you Anina.

So an overview.

I've got about a 20 minute presentation to set the scene.

And we're gonna be talking about a bit of background, assistive technologies, adaptations, and human capabilities, and then the rest of the session will be a discussion about all of those things.

And there's a few suggestions I have, but it really depends on what your interests are as the audience, as to where we go with that.

So some background.

What are the challenges that we're aiming to address, here?

Well, first of all, there is awareness of access needs.

Now what I'm talking about there is that often people with minor to moderate impairments don't notice that they have a particular impairment, and they develop workarounds for dealing with them.

So, for example, it might be somebody putting on their reading glasses to look at their iPad or other tablet.

There's also an issue of discoverability of the sort of help that is on offer.

So if somebody knows that they're finding an access barrier, they might not know how to find help with it.

A really good example of this is the magnifier that's built into every iOS device.

It's really helpful, but I don't know many people that are actually aware of it.

And the other challenge that I'd like to talk about is how we might increase the level of personalization and adaptivity, particularly on the web, but generally from the system, to meet the needs of the user.

First of all, let's talk about assistive technologies.

These assistive technologies are hardware or software.

Often designed for a particular access need.

So that could be a screen reader, for example, which reads out in voice what is present on the screen.

So that's used by people who can't necessarily see the screen, or who struggle with reading.

There's also screen magnification which is like zooming into the screen to aid with low vision.

Alternative keyboards, which could be hardware or software.

Also switch input for people with motor difficulties.

Maybe alternative user interfaces.

So a more streamlined user interface, for example, the AlwaysInMind web app.

Or different interaction modalities such as conversational user interfaces and smart speakers and things like that.

Something about assistive technologies: they often make really significant changes to the output or input from a system.

Let's have a look at one of the downsides of that.

I used to use screen magnification quite a lot, and this is an example here.

So first of all, let's just consider browsing the web and zooming in.

So we have a representation of a web page here, and when you're zoomed in you're effectively seeing just a small part of that page.

And often that means horizontal and vertical scrolling.

Now that's distracting enough, but if you also have a screen magnifier running and you're zooming into not only the webpage but the screen itself, you end up with two levels of scrolling in two dimensions.

And that is really difficult, in terms of physical and cognitive load.

So that's quite a downside of one particular assistive technology, there.

Obviously it's great that you can read something, but there's a lot of effort involved.

Now let's talk about adaptations.

And by adaptations, I mean generally more focused and often smaller changes to a user interface or content.

So some examples, and these are all written down in the slides.

It could be a font size setting, either in the OS or the browser.

It could be a responsive layout.

It could be using text to speech to read one paragraph on a page, 'cause maybe it's quite a big paragraph and you just find that easier.

Or maybe to read out an email before you send it.

It could also be visually highlighting key characters or objects in a game, for example, to make them more obvious.

Or perhaps muting the background music but not speech or other sounds, to just help with cognitive load.

And there are many other examples.

I'm sure you could think of more.
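A couple of these adaptations can actually be sketched with standard browser APIs. Below is a minimal TypeScript sketch of the read-one-paragraph and mute-the-music examples; the Web Speech and Web Audio interfaces are real, but the element handling, gain values and routing are illustrative assumptions, not a recipe from the talk.

```typescript
// Adaptation 1: read a single paragraph aloud using the Web Speech API.
function readAloud(paragraph: HTMLElement): void {
  const utterance = new SpeechSynthesisUtterance(paragraph.textContent ?? "");
  speechSynthesis.cancel(); // stop any speech already in progress
  speechSynthesis.speak(utterance);
}

// Adaptation 2: mute background music but not speech or other sounds,
// by routing only the music through its own GainNode. This assumes the
// app mixes its audio via Web Audio rather than bare <audio> elements.
const audioCtx = new AudioContext();
const musicGain = audioCtx.createGain();
musicGain.connect(audioCtx.destination);

function setMusicMuted(muted: boolean): void {
  musicGain.gain.value = muted ? 0 : 1;
}
```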

The thing about adaptations is they often make smaller changes to the output or input, and you might have more than one at a time.

And that's really key.

So, going back to our example about text size, instead of using a magnifier to zoom in to the screen, maybe you could just change the font size.

And here are two pictures of the accessibility settings in iOS.

One with the default font size, and one with the larger font size.

And of course, the trade-off is that you have to scroll, but it's only in one dimension so it's easier.

And on the larger font size, the user interface has changed a little bit.

Options that were on one line have now wrapped on to two so that they fit.

So that's really nice.
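A web page can respond to enlarged text in a similar spirit. As a rough sketch (the 16px baseline, the 1.25 threshold and the class name are my assumptions, not standards), a script could notice that the user's default font size is larger than usual and switch to a wrap-friendly layout:

```typescript
// Detect an enlarged default font size by comparing the root element's
// computed size against the common 16px browser default. This only
// works if the page hasn't forced an absolute root font-size itself.
const rootFontPx = parseFloat(
  getComputedStyle(document.documentElement).fontSize
);

if (rootFontPx > 16 * 1.25) {
  // Hypothetical class enabling a single-column layout where options
  // wrap onto extra lines instead of clipping.
  document.body.classList.add("large-text-layout");
}
```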

But there are some challenges with this type of API, regardless of platform.

One of the challenges is clipping.

Unfortunately, lots of apps can't cope with large font sizes, and you end up with clipped content, which can make them unusable.

Sometimes you get a great deal of scrolling, as well.

Which can be difficult, even though it's only one dimensional.

And also, they're quite hard to implement.

Probably hence the clipping problems.

Although modern declarative approaches like Flutter and SwiftUI are trying to make that easier.

So what could we learn, perhaps, from the world of research about these sorts of things?

Well, there's an automated interface adaptation project called SUPPLE.

And that recognizes that different users have different devices, capabilities, and preferences.

And that there's no one interface to fit everyone in every situation.

So what SUPPLE does is it has the user interface expressed in an abstract way, so think data types instead of particular concrete widgets.

And then the user's capabilities and preferences are taken into consideration, and an optimal user interface for a given situation is found.

And the advantage of this is that you can consider multiple constraints.

So you could have, for example, an interface which is suitable for somebody with a vision impairment as well as motor difficulties as well.
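To make that idea concrete, here is a toy sketch of SUPPLE's approach in TypeScript. All the names are invented for illustration, and the real SUPPLE treats rendering as a constraint-optimisation problem rather than the simple rules below; the point is only that the interface is modelled abstractly and concrete widgets are chosen per user.

```typescript
// The interface is expressed as data types, not concrete widgets.
type AbstractControl =
  | { kind: "choice"; label: string; options: string[] }
  | { kind: "number"; label: string; min: number; max: number };

// A (much simplified) model of the user's capabilities and preferences.
interface UserCapabilities {
  minTargetSizePx: number; // larger for motor difficulties
  minFontSizePx: number;   // larger for low vision
}

// Pick a concrete widget that satisfies the user's constraints.
function chooseWidget(c: AbstractControl, caps: UserCapabilities): string {
  switch (c.kind) {
    case "choice":
      // Few options and small-target tolerance -> radio buttons;
      // otherwise a compact drop-down list.
      return c.options.length <= 4 && caps.minTargetSizePx <= 32
        ? "radio-group"
        : "drop-down";
    case "number":
      // Sliders demand fine motor control; prefer a stepper when
      // large targets are needed.
      return caps.minTargetSizePx > 32 ? "stepper" : "slider";
  }
}
```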

I've got a couple of pictures as examples.

These are actually taken from the SUPPLE paper, which is linked from the presentation, so you'll be able to read it later.

The first example just shows a particular interface.

It's the same interface, in terms of its abstract model, but it's been rendered for mobile phones, very small screen mobile phones, as was the case in the early 2000s.

A PDA, different native platforms, Mac and Java I think, and also a webpage.

So it's the same interface, but rendered in different ways.

And then another example, which again is in the same paper.

Here is a print dialogue from Word.

It's modeled on Word's print dialogue.

And there's the standard version of it, and then a version for somebody with dexterity difficulties.

And the difference is the buttons are really big and it's spaced out differently, in the latter case.

So I've got some further information for you on SUPPLE, that's just a glimpse of what it can do.

I've got some links here, but I'm not going to read them out because they are in the slides.

The address is in the chat and in the session information, so hopefully you'll be able to find those.

And I'll move on to the next adaptation, which is called Daltonization.

And what this does is it recognizes that people see different spectrums of color.

Their color sensitivity differs.

A popular, huh, "popular", perhaps not the best phrase.

A reasonably common form of color deficit is red/green.

So you may have heard of that one.

It affects quite a large number of people.

I think about 10% of males have that color blindness, as it's also called.

There are many different forms of it, and the Daltonization process basically recognizes that the spectrum needs to be adjusted so that people can still get the maximum amount of information out of your content.

So here we have the classic color test plates, I believe they're called Ishihara plates.

The one on the left is the initial version, in which you may be able to see a number 29.

And then there are two alternative versions for different types of color perception deficit.

And they've been adjusted so that people with those conditions should be able to discern the same information, that being that there's a number 29 in there.

And the clever thing about this is that it adjusts the spectrum, and the point of it is to give you the same level of discerning ability, so you're not losing information from the image.

Now, again, I've got some more links on this.

Some historic links and some modern implementations of it.

The interesting thing is, this has actually made it into mainstream OSes, from Apple and Android as well.

Although, anecdotally, there does seem to be some concern as to how effective it is for certain people.

And I think those concerns come down to two different things.

One is that performing this adaptation does alter the aesthetics of the content.

So if you're not used to seeing it in its different form, it is gonna look different to the source material.

Even if it's helpful in some cases, some people may not like that aspect of it.

So that's something to consider.

Another aspect is that everyone's different, and even within different types of color perception deficit, there are severities and slight differences.

And I think the options that have been implemented so far maybe are not sufficiently tunable.

But I think more research is really needed; I just looked for some anecdotal feedback on it.

There's another adaptation I wanted to discuss, which is very simple.

And it's about making things easier on the eyes.

So here is the page on The Paciello Group's website about the Color Contrast Analyser.

And it's predominantly a white background with a blue accent color.

If we were to perform a classic invert colors adaptation on this, we get a dark background, which makes it easier to read for some people, but the blue color has turned orange.

Which is definitely not what the designer of the page had intended, because blue is the accent color for The Paciello Group.

What we actually probably wanted was invert brightness, rather than invert colors.

So I'm now showing a picture with the brightness inverted, but the hue kept as close as possible to the original.

So we see a dark background, but still get the blue accent color.

And that's really helpful, because changing the colors can change the meaning, as you've seen.

So you might think this sounds quite a lot like the so-called "Dark Mode" media query, prefers-color-scheme.

And you're right, it is quite similar.

And there are some really good things about the dark mode CSS media query.

It's there out of the box, it's supported.

And it's easily discoverable.

If you set dark mode in your system it will be picked up and it will be reflected, so it just works.

Again these points are on the slide.
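For reference, the same preference is available to scripts as well as stylesheets. A minimal sketch, where the theme class name is hypothetical:

```typescript
// Track the user's prefers-color-scheme setting from script and react
// when it changes, e.g. if they switch their OS to dark mode at dusk.
const darkQuery = window.matchMedia("(prefers-color-scheme: dark)");

function applyScheme(dark: boolean): void {
  document.documentElement.classList.toggle("dark-theme", dark);
}

applyScheme(darkQuery.matches);
darkQuery.addEventListener("change", (event) => applyScheme(event.matches));
```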

The bad points, however, are that you need to provide a color scheme for it.

So it can't just be automatically discerned, which is where invert brightness might help as a fallback.

So it depends on the web content author.

And another thing is that it also just works, automatically.

Some people actually prefer to have some control over whether the alternative theme is used on a particular site.

So for a lot of people, they might like it being completely automated.

But other people may not.
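As an aside on the invert-brightness fallback mentioned above: its core can be sketched as flipping lightness in HSL space while leaving hue and saturation alone, so a blue accent stays blue on a dark background. This per-color sketch glosses over images and real rendering pipelines, and is only one plausible way to do it.

```typescript
// Invert brightness, not color: flip L in HSL, keep H and S.
// Inputs and outputs are sRGB channels in the range 0-255.
function invertBrightness(
  r: number,
  g: number,
  b: number
): [number, number, number] {
  // RGB -> HSL
  const rn = r / 255, gn = g / 255, bn = b / 255;
  const max = Math.max(rn, gn, bn);
  const min = Math.min(rn, gn, bn);
  const l = (max + min) / 2;
  const d = max - min;
  let h = 0;
  let s = 0;
  if (d !== 0) {
    s = d / (1 - Math.abs(2 * l - 1));
    if (max === rn) h = ((gn - bn) / d) % 6;
    else if (max === gn) h = (bn - rn) / d + 2;
    else h = (rn - gn) / d + 4;
    h *= 60;
    if (h < 0) h += 360;
  }

  // Flip lightness only; hue and saturation are untouched.
  const l2 = 1 - l;

  // HSL -> RGB
  const c = (1 - Math.abs(2 * l2 - 1)) * s;
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));
  const m = l2 - c / 2;
  const [r2, g2, b2] =
    h < 60 ? [c, x, 0] :
    h < 120 ? [x, c, 0] :
    h < 180 ? [0, c, x] :
    h < 240 ? [0, x, c] :
    h < 300 ? [x, 0, c] : [c, 0, x];
  return [
    Math.round((r2 + m) * 255),
    Math.round((g2 + m) * 255),
    Math.round((b2 + m) * 255),
  ];
}
```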

A couple more examples of adaptations, just to round off.

The W3C's Personalization Task Force, which is within the APA Working Group, has an example of a page here on wikiHow, which is called "How to Make a Good Cup of Tea".

This is what it normally looks like.

And then if we employ the prototype adaptation, what happens is the page content is reformatted.

The extraneous content, so anything other than the article itself, is hidden.

And symbols are inserted to help the user understand the meaning of the content.

And I'd urge you to follow the link and learn more about that.

Really interesting work going on there.

That's another example of content adaptation.

So we have seen some examples of adaptations; some are applied automatically, and others are out there that need to be further developed or just employed by the user.

So how would we orchestrate that?

I have on the page a picture of the Global Public Inclusive Infrastructure, or GPII, homepage.

This is a project that seeks to provide that plumbing for connecting users to adaptations, and helping with that process of discovery, through standardization and development of various software tools.

And just a last point about this and how it works.

How best should our preferences be expressed?

Well, we could say, for example, that we have a favorite font size.

But is that a font size on a computer, or a phone, or maybe a TV?

It would probably be different for each person depending on the platform.

So perhaps a nice way to represent this, or at least bootstrap this process of finding the font size on a platform, would be to look at the human capabilities of the people involved.

Because visual acuity isn't gonna change as much as the font size across different platforms.

You may have a condition where your visual acuity changes through the day, but at any given point it's going to be a certain value.

And you could use that visual acuity, plus an expected distance from the device, for example, to suggest a reasonable starting point for the font size that the user might want.

And that's quite a portable way of dealing with capabilities.

Because when new devices are invented, or the user moves to a new platform, they could take more of their settings with them.

And that was a proposal made in a paper from a project that I used to work on called Sus-IT; there's a rough sketch of that arithmetic below.

And I've linked to the paper there for more information.

So that was a whirlwind tour of adaptations and assistive technologies.

And I'd like to discuss the implications of that, and your thoughts, for the rest of the session.

It's really up to you as to where we take this.

So I've got some suggestions here; these are on the slide.

Do you know of any other interesting adaptations?

How can we make content more adaptive, or browsers more adaptive?

Are there any relevant community groups that you're aware of?

Are there any related specifications?

So that's all I have for the presentation.

I'm happy to answer questions about the presentation as well, of course.

But I would also like to think about these discussion topics and see if there's anything that we can learn from you.
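Here is the rough arithmetic behind that acuity-plus-distance idea, as mentioned. This is illustrative only: the optotype geometry (a 20/20 letter subtends about 5 arcminutes) is standard optometry, but the comfort factor and the letter-height-as-font-size shortcut are simplifications of my own, not the Sus-IT model.

```typescript
// Suggest a starting font size (in CSS px) from visual acuity and an
// expected viewing distance.
const ARCMINUTE = Math.PI / (180 * 60); // one arcminute in radians

function suggestedFontPx(
  decimalAcuity: number, // 1.0 = 20/20, 0.5 = 20/40, and so on
  distanceM: number,     // expected eye-to-screen distance in metres
  comfortFactor = 3      // read comfortably above the bare threshold
): number {
  // A just-legible letter must subtend 5 arcminutes, scaled up as
  // acuity decreases.
  const angle = (5 * ARCMINUTE) / decimalAcuity;
  const letterHeightM = 2 * distanceM * Math.tan(angle / 2);
  const letterHeightPx = (letterHeightM / 0.0254) * 96; // CSS px = 1/96 inch
  return comfortFactor * letterHeightPx;
}

// The same acuity yields very different sizes per device:
suggestedFontPx(0.5, 0.5); // laptop at 50 cm: about 16 px
suggestedFontPx(0.5, 3.0); // TV at 3 m: about 99 px
```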

