See also: IRC log
<janina> agenda: this
<scribe> scribe: JF
<silvia> close Action-98
<trackbot> ACTION-98 Create a statement with geoff to forward need for caption and description techniques for wcag closed
JF: re Action 98, posted draft to the list for CFC, and no feedback received
should forward to the appropriate stakeholders
<silvia> Action-88?
<trackbot> ACTION-88 -- Sean Hayes to review Media Fragment URI 1.0 http://www.w3.org/TR/2010/WD-media-frags-20100624/ -- due 2010-11-24 -- OPEN
<trackbot> http://www.w3.org/WAI/PF/HTML/track/actions/88
<silvia> Action-96?
<trackbot> ACTION-96 -- John Foliot to media Sub Team to revisit bug 11395 (Use media queries to select appropriate <track> elements) -- due 2011-01-06 -- OPEN
<trackbot> http://www.w3.org/WAI/PF/HTML/track/actions/96
Re: Action 88 - will leave as is, needs to go back to PF
<Sean> can you make the due date on 88 end of March
Action 96 - reassign to Eric Carlson
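[For context on bug 11395, a minimal sketch of the idea under discussion, assuming a media attribute on <track> that mirrors the existing media attribute on <source>; the attribute, file names, and breakpoints are illustrative assumptions, not an agreed feature.]
  <!-- Sketch of the bug 11395 idea (illustrative only): select a caption track via a
       media query, the way source selection already works. The media attribute on
       track is an assumption of this sketch. -->
  <video controls>
    <source src="clip.webm" type="video/webm">
    <track kind="captions" srclang="en" label="English (large screens)"
           src="captions-full.vtt" media="(min-width: 800px)">
    <track kind="captions" srclang="en" label="English (small screens)"
           src="captions-compact.vtt" media="(max-width: 799px)">
  </video>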
<silvia> close Action-97
<trackbot> ACTION-97 Follow up on bug #9673 closed
Action 97 - to be closed
<silvia> Action-99?
<trackbot> ACTION-99 -- Janina Sajka to annotate 9452 with clear audio discovery and selection, as well as independent control of multiple playback tracks -- due 2011-01-19 -- OPEN
<trackbot> http://www.w3.org/WAI/PF/HTML/track/actions/99
Action 99
Added agenda item - overview of FCC status/situation
<Judy> http://www.fcc.gov/cib/dro/VPAAC/
Judy: VPAAC - Video Programming Accessibility Advisory Committee
recommends looking at the Mission Statement (Word doc: http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-303943A1.doc)
meetings and actions with tight timelines around video accessibility - captioning and descriptive audio
some awareness of work that is happening at W3C
Janina: interested in understanding what this applies to, what the penalties are, etc.
Geoff: there will also be rules about amount of video description as well as requirements for emergency information
they are also looking at requiring that television shows already captioned for on-air broadcast must also be captioned when they move to the web
this now involves SMPTE
and SMPTE TT will likely emerge as a recommendation from the committee
Janina: unless we find accessibility issues with this
this will potentially involve massive amounts of programming (TV shows)
including older content as well as future content
+q
<silvia> +q
Judy: can we get the differences between SMPTE TT (which is a derivative of TTML) and TTML?
it adds the ability to include background images, as well as binary data
also some additional metadata content
JF: are broadcasters aware of what the browser vendors will or won't support?
Sean: we can already support it - it doesn't require native support for this to work. Will likely wait to see how the market plays out
Silvia: SMPTE TT is a new format - how much content is currently available?
Geoff: there is not yet a lot of implementation, but there is one major supporter - UltraViolet - which is a DRM-like solution to view content from the cloud
since SMPTE TT is based on TTML, there is potential for growth
Eric: is the full TTML profile a subset of SMPTE TT?
Sean: yes
Judy: given the superset nature of SMPTE TT, to what extent are the added features things that align with the accessibility user requirements we've uncovered?
Sean: the addition of images came from a request from Asian territories
they would rather not use actual fonts, preferring images for more 'hand-drawn' character sets
the binary data is mostly for commercial requirements, for set-top boxes, etc.
not really for user-benefit, but rather operator-benefit
Janina: one of the other things coming from the FCC work is requirements for devices being sold in the US market; there will be more of these types of devices, and more regulations to follow
<kenny_j> Need to drop off the call for another meeting. bye all.
Synopsis of questions re: timed text tracks
Silvia: the track element allows us to associate external caption files, subtitle files and other text files with videos
Judy: is there a mechanism that can discover those assets?
+q
Eric: the track element is for things that have timing with them
so if the description has timing info that needs to be displayed in sync with the video, then it is appropriate to use the track element
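[Illustration of Eric's point; the file names below are invented for this sketch. The track element ties timed text to a video so it is rendered in sync with playback.]
  <video src="lecture.webm" controls>
    <track kind="captions" srclang="en" src="lecture-captions.vtt" label="English captions">
    <track kind="descriptions" srclang="en" src="lecture-descriptions.vtt" label="Text descriptions">
  </video>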
Sean: we've identified that there is no mechanism for labeling a transcript as such - there is no semantic link-up at this time
<gfreed> geoff needs to go-- will read the minutes later this evening.
Judy: a case can be made that access to a transcript would serve certain user needs for a11y
+q
Janina: we've identified that if there is timing data, it should be linked to the video, but even if a transcript has no timing it may need to be programmatically associated with the video nonetheless
Judy: the order of presentation/positioning
that has been a problem in the past
if we are trying to support multiple media formats - foolproof discoverability and shareability
discussion about discoverability versus mechanisms for delivery
Eric: the discussion is not about disagreement on this, but about how we deliver it - in sync (with time)
it makes no sense to try and repurpose track and source for non-time-aligned content
how does the content author package it
Judy: so do we need another element?
given that we are under a very tight timeline at this point?
Eric: don't think we need a different/new element
echoes Silvia's observation that a transcript would be available for all users
+Q
<Judy> eric: you could just do the association with an attribute
<Judy> jf: that would take us down the same path as with longdesc
<Judy> ...we need to be able to package the transcript in some way that makes it available to users, not just visible on screen
<silvia> http://www.w3.org/WAI/PF/HTML/wiki/Media_Multitrack_Media_API
Janina: bottom line is that we do not have a means of associating a transcript with the video resource
whether an element or an attribute
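[To illustrate the two approaches being weighed; the attribute name and file names below are invented for illustration, and neither approach is a defined HTML feature in this discussion.]
  <!-- Option 1: a visible link near the video. Discoverable by everyone, but with no
       programmatic association to the video element itself. -->
  <video src="lecture.webm" controls></video>
  <p><a href="lecture-transcript.html">Read the full transcript</a></p>

  <!-- Option 2: a hypothetical attribute on the video element pointing at the
       transcript, analogous to longdesc on img. The attribute name "transcript"
       is invented here purely for illustration. -->
  <video src="lecture.webm" controls transcript="lecture-transcript.html"></video>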
Silvia, are you on mute?
<janina> Silvia, we don't hear you
<Sean> try redialling. not hearing you
Judy: we should record everything we can in terms of what is still open
Silvia: we should have an email discussion on transcript
(JF will check for that bug and post to the list)
Eric: regarding when the durations are not the same - the issue is not that the overall durations differ, but rather that the internal timing information differs
when segments of one don't exactly overlap segments of the other
there is no way of describing those associations
Silvia: on the multi-track API
will summarize the discussions and an email thread from last fall into a wiki page for further discussion
then we will restart a new mail thread
Janina, another issue is if the user wants to control the secondary content - change font size, colors, adjust audio levels, etc.
Janina: on one hand, this is very specific to Operating Systems
but what we should be discussing is a systematic way for authors to create content, and signify this to the browser
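[Sketch of the kind of author-exposed control being discussed; the audioTracks collection and its properties are assumed here for illustration and were not a settled design in this discussion.]
  <video id="player" src="lecture.webm" controls></video>
  <script>
    var video = document.getElementById('player');
    // Enable the described-audio track and disable the others, so the user can
    // switch to the secondary audio content.
    function enableDescribedAudio() {
      for (var i = 0; i < video.audioTracks.length; i++) {
        var track = video.audioTracks[i];
        track.enabled = (track.kind === 'descriptions');
      }
    }
  </script>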
rrsagent, make log public
rrsagent, make minutes