This pack uses the template for slides for TPAC 2022.
To start the slide show: Press ‘A’. Return to the index by pressing ‘A’ or ‘Esc’. On a touch screen, use a 3-finger touch. You can also double click to open a specific slide. In slide mode, press ‘?’ (question mark) to get a list of available commands.
If it doesn't work: Slide mode requires a recent browser with JavaScript. If you are using the ‘NoScript’ add-on (Firefox or the Tor Browser), or changed the ‘site settings’ (Chrome, Vivaldi, Opera, Brave and some other browsers), or the ‘permissions for this site’ (Edge), you may have to explicitly allow JavaScript on these slides. Internet Explorer is not supported.
Breakout Session:
Architecting for Privacy, Media Accessibility and Product development: the video element
Nigel Megitt, BBC
14 September 2022, 13:30-14:30 America / Vancouver
Think about architectural models for allowing user accessibility choices while maintaining privacy and providing data to support product development, with reference to the video element in particular.
Session goals
Understand what architectural pattern(s) can allow user choices without exposing additional fingerprinting vectors or increasing privacy risks, while also providing usable product data.
Context
People with barriers to accessibility need to adjust the presentation of content to suit their needs
Every distinct platform, page and application has a different mechanism for doing this. Lack of consistency is an accessibility problem.
Users’ settings are not portable between systems.
ITU IRG-AVA is working on a “common user profile” for capturing user needs in a shareable and reusable way
The FCC is also asking for something similar from CTA, for TV media: captions, audio description, signing, etc.
Proposed Solution
The solutions being looked at seem to have these properties:
A standard format document expresses user accessibility requirements
The document is somehow made available to apps, web pages, devices, etc.
The document can be edited somehow, and shared by the user
Potentially, different settings can be recorded for different scenarios, e.g. device, screen size, etc. (a purely illustrative sketch follows)
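To make this concrete, here is a sketch of the kind of document being described. It is purely illustrative: the shape and field names are invented for this slide, not taken from the IRG-AVA or CTA work.

// Purely illustrative profile document; not an actual IRG-AVA or CTA schema
const userAccessibilityProfile = {
  captions: { enabled: true, language: "en", fontScale: 1.5 },
  audioDescription: { enabled: true },
  signing: { enabled: false },
  // Different settings for different scenarios, e.g. per device or screen size
  overrides: [
    { device: "tv", screenSize: "large", captions: { fontScale: 2.0 } }
  ]
};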
The problem(s)
The proposed solution is very powerful, but its power also comes with a significant risk
The risk is that any document describing a user’s accessibility needs on many dimensions has high entropy and is sensitive data, so privacy is a concern
It’s not just about users, though: content providers also have a legitimate interest in knowing, at a macro level, how accessibility features are used
Polyfill implementations of accessibility features need to be possible to allow new solutions
Core question for this session
What architecture can allow users to set preferences in a reusable way without exposing them to unwanted fingerprinting, while allowing content/app/page providers to get useful product data about accessibility feature usage, and allowing for creative/improving solutions for accessibility?
The <video> element
Let's step through some key features of the <video> element...
<video> does not render its children (apart from <source> and <track> elements, child content is fallback only):
<video controls>
  This text will not be rendered
  by browsers that
  support the video element
</video>
<video> is a shadow host but accepts no shadow tree
It's a built-in element that already contains a shadow DOM and doesn't accept an additional one.
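You can see this in practice: <video> is not on the list of elements for which attachShadow() is allowed, so attempting to attach an author shadow root throws (the selector here is illustrative).

const video = document.querySelector('video');
try {
  video.attachShadow({ mode: 'open' });
} catch (e) {
  console.log(e.name); // "NotSupportedError"
}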
No parent-child layout
Pages cannot use traditional parent-child layout to put overlays on top of video: they are forced to create a separate element and use CSS to position it in the same location (see the sketch below).
Ever tried tracking video element moves when the page layout changes?
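A minimal sketch of that workaround (class names and the source URL are illustrative): wrap the video, then absolutely position a sibling overlay over it.

<div class="video-wrapper" style="position: relative">
  <video controls src="programme.mp4"></video>
  <!-- separate element, co-located with the video via CSS -->
  <div class="caption-overlay"
       style="position: absolute; left: 0; right: 0; bottom: 0; text-align: center">
    Page-rendered captions go here
  </div>
</div>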
No general purpose polyfills
It is essentially impossible to create a general purpose polyfill to provide, say, caption rendering, because the introduction of a new container element to the DOM may have unknown effects on the code in the page.
It is possible to do it if the page creates a specific container element and arranges for positional alignment.
But that's what the polyfill should do!
Even if it were possible, there would be no way to access the user’s accessibility settings, even if the user wanted to allow it, or if the site could already track the user by other means (so exposing the settings would add no incremental fingerprinting).
Usage tracking is limited
Basic usage data can be retrieved by using cue entry and exit handlers in JS (see the sketch below)
No data about user accessibility setting choices can be collected because they are opaque to the page.
Good for privacy, bad for product improvement.
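A minimal sketch of that cue-level tracking (the selector and logging are illustrative). It reveals when cues are active, but nothing about how the user has chosen to present them.

const track = document.querySelector('video').textTracks[0];
const cues = track.cues; // may be null until the track has loaded
for (let i = 0; cues && i < cues.length; i++) {
  const cue = cues[i];
  cue.onenter = () => console.log('cue shown', cue.startTime);
  cue.onexit = () => console.log('cue hidden', cue.endTime);
}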
Both the current choices are non-ideal
Choice 1: Use <track> children with VTT captions and hand rendering over to the browser (see the sketch below):
no information about any accessibility preferences, e.g. what if 90% of users modify the size/colour/font of the text? Should the content provider modify the defaults? They have no way to know.
Browser/native caption rendering is often weak.
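A minimal sketch of Choice 1 (file names and labels are illustrative). Everything from here on, including how the captions look, is in the browser’s hands.

<video controls src="programme.mp4">
  <track kind="captions" src="captions.vtt" srclang="en" label="English" default>
</video>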
Both the current choices are non-ideal
Choice 2: Create a separate container element as part of the page and use that to render captions; use CSS to colocate it positionally with the video; customisation is whatever the page offers (see the sketch below).
Can get good data
Cannot honour centrally set accessibility preferences.
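A minimal sketch of Choice 2 (element names are illustrative), building on the overlay shown earlier: set the text track’s mode to 'hidden' so the browser keeps firing cue events without drawing anything, then render the active cues into the page’s own overlay.

const video = document.querySelector('video');
const overlay = document.querySelector('.caption-overlay');
const track = video.textTracks[0];
track.mode = 'hidden'; // cues still become active, but the browser does not draw them
track.addEventListener('cuechange', () => {
  const lines = [];
  for (let i = 0; i < track.activeCues.length; i++) {
    lines.push(track.activeCues[i].text); // VTTCue text; styling is whatever the page offers
  }
  overlay.textContent = lines.join('\n');
});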
We seem to be able to satisfy user needs or product owner needs but not both!
Can we do any better? (1/2)
Here are some ideas, not mutually exclusive (maybe co-dependent):
Allow <video> element shadow host to accept shadow tree components for controls and captions
Allow user to relax strict constraints on access to user’s settings for pages where the user is already signed in (i.e. the site tracks the user anyway, by agreement, so fingerprinting provides no further identification)
Can we do any better? (2/2)
Here are some ideas, not mutually exclusive (maybe co-dependent):
Allow trusted / signed polyfill web components to provide shadow DOM implementations attached to the <video> element, give them access to the user’s accessibility settings, and allow them to report to a trusted party, who can then collate data that’s anonymous for both users and sites.
Is there any organisation you’d trust to be that 3rd party?!
Assume the DOM implementation would be signed and hosted by the UA in a sandbox, and the user may need to authorise its use.
Maybe users could choose their favoured caption implementation provider…
What can native apps offer?
By way of comparison, what do native apps do?
Android and Apple OSes provide an API to fetch caption settings.
Is privacy not important after all?!
In native apps there may be a moderation process that could add a layer of user safety for apps that use those APIs, but app developers tell me they doubt that any usage of such data is actually being checked.
Perhaps there is a way to provide additional safety for web components?