W3C

- DRAFT -

Silver XR Subgroup

08 Oct 2020

Attendees

Present
jeanne, Crispy_, bruce_bailey, mikecrabb, Joshue108
Regrets
Chair
MikeCrabb
Scribe
bruce_, Joshue108

Contents


<bruce_bailey> scribe: bruce_

<bruce_bailey> two XR sessions at TPAC

<bruce_bailey> plus one Silver session

<jeanne> https://www.w3.org/WAI/GL/task-forces/silver/wiki/Silver_at_W3C_TPAC_2020

<jeanne> 13 Oct Accessible Platform Architectures (APA) WG & Timed Text WG & Silver (XR subgroup) 10:00AM-11:00AM ET (14:00-15:00 UTC)

<jeanne> 15 Oct Accessible Platform Architectures (APA) WG & Immersive Web WG and Silver (XR subgroup) 12:00PM- 1:00PM ET (16:00-17:00 UTC)

<bruce_bailey> Silver session is with EPUB

<bruce_bailey> TTML and Immersive Web

<bruce_bailey> for TTML, we will want to address metadata

<bruce_bailey> metadata for captions and XR

<bruce_bailey> We have something in the how-to that we can send to TTML

<bruce_bailey> Jeanne: for metadata we should specifically call out caption location

<bruce_bailey> ... they might have a different idea of what is meant by metadata

<mikecrabb> https://github.com/w3c/silver/tree/master/methods/xr-audio-metadata

<bruce_bailey> Concern that some recent edits have not been captured in the most current drafts; Jeanne will look into it

<bruce_bailey> Mike cites the folder that has the information in it

<mikecrabb> Card sort: https://github.com/w3c/silver/projects/2

<bruce_bailey> Jeanne thinks the card sorting exercise might be more useful than the GitHub branch

<bruce_bailey> Mike: We are hoping that this metadata would provide additional datapoints for TTML

<bruce_bailey> ... unusual to have older technology surface issues

<bruce_bailey> ... location for everything is not even possible, consider "voice of god"

<bruce_bailey> ... have to think quite a bit about where to get started

<bruce_bailey> Mike Crabb: The other TPAC meeting is Thursday with Immersive Web

<bruce_bailey> ... we need to use the opportunity to discuss the work we have done so far, but really we need input

<bruce_bailey> Jeanne: We can show them where we are with the first draft, and ask where we should go for the 2nd draft

<bruce_bailey> MC: might help with gap analysis

<bruce_bailey> MC: 4 to 5 UK time

<bruce_bailey> MC: EPUB meeting is right before, 3 to 4 UK time

<bruce_bailey> ... so we won't have THIS meeting next week

<bruce_bailey> ... Silver meeting also overlaps on Tuesday

topics for TPAC

<bruce_bailey> finish discussion on TPAC

missing scoring example for xr

<bruce_bailey> Jeanne: let's focus on the first outcome

<jeanne> https://w3c.github.io/silver/guidelines/#captions

<bruce_bailey> Jeanne: 2nd outcome not in draft

<bruce_bailey> from draft: Translates speech and non-speech audio into alternative formats (e.g. captions) so media can be understood when sound is unavailable or limited. User agents and APIs support the display and control of captions.

<bruce_bailey> Jeanne: we need to focus on critical failures

<bruce_bailey> ... so must it be every video?

<bruce_bailey> ... educational videos would be covered

<bruce_bailey> ... if main content is video, then captions required

<bruce_bailey> MC: WCAG 2x had exception for decorative video

<bruce_bailey> Jeanne: 2x exception is for media alternatives for text

<bruce_bailey> ... so does this change outcome?

<bruce_bailey> MC: Critical errors are not on the method?

<bruce_bailey> Jeanne: no, critical errors attach to outcomes now, following conversations about conformance

<bruce_bailey> Jeanne: so let's write it up now, here in IRC, then I will email Rachelle and Michael Cooper

<jeanne> Exception: When the media is a media alternative for text and is clearly labeled as such.

<bruce_bailey> Discussion about decorative videos and animations without audio

<bruce_bailey> Jeanne: 1st critical error is video on the path

<bruce_bailey> 2nd would be video that blocks, like under CR5

<bruce_bailey> 3rd would be the cumulative addition of small blockers that add up in net to a critical failure

<jeanne> Any media elements without an appropriate text alternative that stops people from completing a task.

<jeanne> Any media elements without an appropriate translation of speech and non-speech audio that stops people from completing a task.

<bruce_bailey> Jeanne's 1st pass, adapting the text alternatives wording

<Joshue108> +1

<mikecrabb> +1

<Crispy_> +1

<bruce_bailey> need to add audio or a reference to hearing

<Joshue108> scribe: Joshue108

JS: I think it's okay, as there should be a translation.
... Looks at scoring
... Let's look at the method.

We have the outcome but the methods are linking to the old Google docs

MC: I made them green to track what was or wasn't in HTML

<jeanne> If we don’t accept Open Captions, then each multimedia encountered is Pass/Fail (100%/0%)

<jeanne> If we do accept Open Captions, then the rating will be:

<jeanne> 0 is no captions

<jeanne> 1 is Open Captions (also called Burned-in Captions) OR captions have not been corrected for accuracy.

<jeanne> 2 Captions are Closed Captions and accurate.

<jeanne> 3 Captions are accurate and follow best practices
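
A minimal sketch of how this 0-3 scale might be applied to a single piece of multimedia; the function name and inputs are hypothetical, not part of the draft method:

    def caption_score(has_captions, closed, accurate, best_practices):
        # 0: no captions at all
        if not has_captions:
            return 0
        # 1: open/burned-in captions, or captions not corrected for accuracy
        if not closed or not accurate:
            return 1
        # 3: closed, accurate, and following best practices
        if best_practices:
            return 3
        # 2: closed and accurate, but not yet following best practices
        return 2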

JS: This is in the method

<gives overview>

MC: Did you say that has a critical failure?

<jeanne> Video that is on the path that is essential for completing the task that does not have captioning is a critical failure. For example, an education site with a video that a student will be tested on or a shopping experience of previewing movies. If they do not have captioning (closed or open captioning), they fail.

<bruce_bailey> Mike Crabb notes that draft has placeholder for critical error

MC: Our XR metadata doesn't have a critical failure

<mikecrabb> https://github.com/w3c/silver/blob/master/methods/xr-text-equiv/scoring.html

<mikecrabb> https://github.com/w3c/silver/blob/master/methods/xr-caption-reflow/scoring.html

MC: The others do
... Caption reflow is the one Josh did

JS: Do we have a critical failure for this?

<bruce_bailey> MC shares last bits for scoring and caption reflow

MC: We have 3

JS: Please paste it in.

<mikecrabb> XR Text Equivalent Critical Failure: Video that is on the path that is essential for completing the task that does not have captioning is a critical failure. For example, an education site with a video that a student will be tested on or a shopping experience of previewing movies. If they do not have captioning (closed or open captioning), they fail.

<mikecrabb> XR Caption Reflow Critical Failures:

<mikecrabb> 1 The user cannot associate a caption with its source.

<mikecrabb> 2 The user cannot distinguish or tell apart one caption from another.

<mikecrabb> 3 There is no clear semantic relationship that allows transformation/personalization.

MC: There we are.

<bruce_bailey> From the scoring blob:

<bruce_bailey> 0 is no captions

<bruce_bailey> 1 is Open Captions (also called Burned-in Captions) OR captions have not been corrected for accuracy.

<bruce_bailey> 2 Captions are Closed Captions and accurate.

<bruce_bailey> 3 Captions are accurate and follow best practices

JS: These failures may not be critical errors.

They are test fails but not critical

<bruce_bailey> Jeanne, these reflow criteria are about scoring, not critical errors (as we are using the term)

JS: This is related to the player?
... We should mark caption reflow as XR caption reflow UAAG

We need to clarify this is about the player

<bruce_bailey> Jeanne: xr caption reflow becomes caption reflow uaag and we make clear this is about the player

<bruce_bailey> Josh: accuracy is not about the player

JS: Are there critical errors in the one Josh wrote?

<bruce_bailey> Jeanne: Do we have critical errors for reflow?

JS: They should come out?

MC: No, they're ok

JS: This doesn't work - there may be some structural error

MC: It may not be worth taking them out, the GL is quite large

may need to be split

<bruce_bailey> Mike: we might need to split critical errors for caption creation and another about caption consumption

JS: You can create captions as the author and not care what the player does

<bruce_bailey> Jeanne: author probably does not have control over player

<bruce_bailey> JS: seems like no critical errors for reflow

<bruce_bailey> ... but we want those in the testing area or method

<bruce_bailey> Mike Crabb working on pull request during meeting

<bruce_bailey> some starter content is in test.html but Michael Cooper commented it out, asking for additional detail

<bruce_bailey> Jeanne: so long as we don't lose it. Tests for UAAG need to be different from tests for authors

<bruce_bailey> ACTION: Jeanne to talk to Michael Cooper about how to include UAAG testing

<mikecrabb> Completed with Pull Request #194

<bruce_bailey> Josh clarifies that this is our only UAAG item for now. More can be expected as we go along.

<bruce_bailey> All agree.

<bruce_bailey> Jeanne: Do we have a 3rd method?

<bruce_bailey> Mike Crabb: Audio metadata is probably the one

<mikecrabb> Metadata method: https://github.com/w3c/silver/tree/master/methods/xr-audio-metadata

<bruce_bailey> ... associates an on-screen speaker with particular captions

<bruce_bailey> ... this is the one we need to talk to TTML about as well

<bruce_bailey> Jeanne: We need Outcome for this one.

<mikecrabb> Conveys information about the sound in addition to the text of the sound (for example, sound source, duration, and direction) so users know the necessary information about the context of the sound in relation to the environment it is situated in
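
To illustrate the outcome above, a minimal sketch of per-caption metadata; the field names are hypothetical, assumed for illustration only:

    from dataclasses import dataclass

    @dataclass
    class CaptionMetadata:
        text: str             # the caption text itself
        source: str           # who or what produced the sound
        duration_s: float     # how long the sound lasts, in seconds
        direction_deg: float  # direction of the sound relative to the viewer

    # Example: a line of dialogue from a guide standing to the user's left.
    line = CaptionMetadata(text="Over here!", source="Guide",
                           duration_s=1.2, direction_deg=-90.0)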

<mikecrabb> List of all our outcomes: https://w3c.github.io/silver/subgroups/xr/captioning/functional-outcomes.html

<bruce_bailey> Jeanne: no critical errors for 2nd outcome

<mikecrabb> Scoring for MetaData:

<mikecrabb> https://raw.githack.com/w3c/silver/master/methods/xr-audio-metadata/scoring.html

<bruce_bailey> Jeanne: scoring for metadata looks good

<bruce_bailey> ... just add an editor's note that it's a work in progress

<bruce_bailey> ... 6 axes is probably overkill

<bruce_bailey> Bruce raises a concern that scoring for basic captions seems stricter than other scores so far

<bruce_bailey> https://github.com/w3c/silver/blob/master/methods/xr-text-equiv/scoring.html

<bruce_bailey> Jeanne: open captioning at 1 is probably okay

<bruce_bailey> Jeanne: having a one-to-three scale is probably okay

<bruce_bailey> ... overall rating is an average for content on the path

<bruce_bailey> ... ratings 3 / 4 do not exist

<bruce_bailey> Video that is on the path that is essential for completing the task that does not have captioning is a critical failure.

<bruce_bailey> Bruce: change "critical failure" to "critical error"

<bruce_bailey> For example, an education site with a video that a student will be tested on or a shopping experience of previewing movies. If they do not have captioning (closed or open captioning), they fail.

<bruce_bailey> Bruce asks how scoring applies to missing captions for video off the path.

<jeanne> Rating Criteria

<jeanne> Rating 0 Average score 0-.7 or a critical error

<jeanne> Rating 3 Average score .8-1.5 and no critical error

<jeanne> Rating 4 Average score 1.6 to 2 and no critical error
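
A minimal sketch of how these draft rating criteria might be computed; the names are hypothetical, and the pasted bands leave small gaps (.71-.79, 1.51-1.59) that this sketch folds into the higher band:

    def rate_outcome(scores, critical_error):
        # Average the per-media caption scores (0-2 each) for content on the path.
        average = sum(scores) / len(scores)
        if critical_error or average <= 0.7:
            return 0
        if average <= 1.5:
            return 3
        return 4

    # Example: three on-path videos scoring 2, 1, and 2 average about 1.67,
    # so with no critical error the outcome rates 4.
    print(rate_outcome([2, 1, 2], critical_error=False))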

<bruce_bailey> Mike Crabb describes how a university might have non-academic videos that are not part of the curriculum, but maybe cover student services

<bruce_bailey> Discussion about scoring videos that are somehow present but not "on the path"

<bruce_bailey> Example of video advert when shopping is better example of video not on the path

<bruce_bailey> Another example might be videos in a timeline that aren't really the reason for viewing the timeline

<bruce_bailey> This lets us score (up/down) "incidental" videos

<jeanne> https://docs.google.com/document/d/1YyH_VHcEzeKCK7vsNqRCwApJEsomr7KJliYhj6VAe2s/edit#heading=h.lbp67a4fv4ed

<bruce_bailey> Jeanne: we need editor's notes that help parse these issues

<bruce_bailey> MikeCrabb: Is the weighting of captioning for advert videos not on the path the same as for the main content?

<bruce_bailey> Updated draft at:

<bruce_bailey> https://docs.google.com/document/d/1YyH_VHcEzeKCK7vsNqRCwApJEsomr7KJliYhj6VAe2s/edit#heading=h.jy8mkh5zjlox

<bruce_bailey> Jeanne: please be encouraged to comment on doc

Summary of Action Items

[NEW] ACTION: Jeanne to talk to Michael Cooper about how to include UAAG testing
 

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version (CVS log)
$Date: 2020/10/08 15:20:52 $
