W3C

– DRAFT –
Accessible Platform Architectures Working Group Teleconference

03 March 2021

Attendees

Present
janina, jasonjgw, joconnor, jpaton, scott_h, shadi, SteveNoble
Regrets
-
Chair
jasonjgw
Scribe
joconnor

Meeting minutes

RAUR and XAUR: any updates?

<jpaton> joconnor: we're in a position to be able to publish

<jpaton> joconnor: do we try to publish together?

<jpaton> janina: would recommend doing it sequentially

<Zakim> joconnor, you wanted to say I'm not sure we need wide review for XAUR

Accessibility of natural language interfaces.

<jpaton> jasonjgw: currently in the scoping phase with some fruitful discussions taking place

https://www.w3.org/WAI/APA/wiki/Voice_agent_user_requirements

<jpaton> joconnor: work is progressing nicely

<jpaton> joconnor: the current issue is that the scope is very broad, so there is a decision to make: narrow the scope, or keep it wide and choose focuses carefully

<jpaton> jasonjgw: I contacted a colleague working on this in an educational setting and they were interested

<jpaton> janina: Good question raised: will w3c be writing standards in this area?

<jpaton> janina: if not, then APA creating user requirements would be new and uncharted territory. Do we want that?

I thought Jason's point about WCAG 3 defining voice, if there is a dearth of other W3C standards work, was noteworthy.

<jpaton> scott_h: is the WCAG 3 task force aware that we may be looking to feed this into their work?

<jpaton> joconnor: If this isn't tackled as a separate project then WCAG 3 may be taken as the guidance for this topic

<jpaton> Judy: it's fine to take up a new topic and lay groundwork for a new area of work. This has been referenced as a priority piece of work in a year or two.

<jpaton> shadi: agree this should not be driven just by WCAG or other work happening in W3C.

<shadi> https://www.w3.org/WAI/about/projects/wai-coop/

<jpaton> shadi: driving factor should be on what is happening in the world and what guidance may be needed on products being designed in future

<jpaton> shadi: WAI-coop could be used to gather input from other groups

<jpaton> shadi: question on width of scope could be opened up for input from other groups

<jpaton> joconnor: on the question of whether we have a mandate: one answer could be to do it in a modular way. Maybe start with speech, then extend to background services, etc.

<jpaton> joconnor: shadi's suggestion of a gap analysis would be great

<Zakim> Judy, you wanted to comment on feasible scope of activities, as well as extent of time in naming discussions

<jpaton> Judy: putting a lot of work into scoping may be ambitious for the capacity of the RQTF

<jpaton> shadi: aim would be for external work to support the team rather than all the work to happen in team

<Zakim> MichaelC, you wanted to talk about voice interaction vs agent functionality vs other interaction modalities

<shadi> +1 to Michael!

+1 to Michael

<jpaton> MichaelC: there's a bit of overlap. Voice interaction has its own set of a11y issues; smart agents are one thing that uses this tool. The scope needs to be crisply defined. Suggest focusing on voice interaction first, then smart agents.

<shadi> +1 to Michael (again)

<jpaton> janina: the core of a smart agent uses some text processing based on a speech recognition interface. This could have a different interface.

<Judy> https://github.com/w3c/strategy/issues/221

<jpaton> MichaelC: the voice interface itself has a11y issues before handing data to the smart agent.

<jpaton> jasonjgw: much of the work here will be modality independent, but there will be aspects confined to the individual modalities.

<jpaton> jasonjgw: work could cover NLP interactions with subdivisions on concerns for the interaction modalities.

<Zakim> joconnor, you wanted to talk about timing and to say there is an opportunity here to be part of a wider move towards VUIs

<jpaton> Judy: need to ensure the needs of deaf and hard of hearing users are taken into account, so the focus should not be just on voice interactions

<Zakim> MichaelC, you wanted to say speech will be used in emerging technologies, we should anticipate that and to say a smart agent with only a voice interface is not accessible

<jpaton> MichaelC: a smart agent or any tool that only offers voice interaction is not accessible and would not meet WCAG

<jpaton> shadi: a smaller scope may be easier to handle. Voice agents are a specific class of device which needs accessibility guidance.

+1 to Shadi

<jpaton> Judy: by framing an accessibility user requirements document we create a conceptual anchor. We can focus on one class of device but should try to balance that with the aim to set that anchor to influence thought.

<Zakim> MichaelC, you wanted to say I came in to look at scope creep and to say W3C is not a shining example of avoiding scope and confounding problems

<jpaton> joconnor: I feel we should start with voice agents and set the expansion of that as further work.

<jpaton> shadi: potentially set title as "accessibility of voice agents with cross-disability considerations"

SAZ: I may be missing something, but what is wrong with the term voice?

JW: I think it refers to a particular output modality.

SAZ: So does Television.

JB: No, it's multimodal.

JW: My concern is that many don't have those broad multimodal capabilities.

Defining your software as a topic sends a particular message about how it is defined.

You are using a modality-specific term, independent of implementation. A more inclusive term is needed.

SH: Let's keep this on email.

I have thoughts on how the Amazon Echo works and supports deaf users, etc.

Minutes manually created (not a transcript), formatted by scribe.perl version 127 (Wed Dec 30 17:39:58 2020 UTC).

Diagnostics

Succeeded: s/creating guidelines/framing an accessibility user requirements document/

Succeeded: s/advice on accessibility through other modalities/cross-disability considerations

No scribenick or scribe found. Guessed: joconnor

Maybe present: JB, JW, SAZ, SH