W3C

Timed Text Working Group Teleconference

13 May 2021

Attendees

Present
Atsushi, Chris_Needham, Cyril, Gary, Nigel, Pierre
Regrets
Rob_Smith
Chair
Gary, Nigel
Scribe
nigel

Meeting minutes

This meeting

Nigel: Today we have a couple of TTML2 issues to circle back on, and an agenda item on WebVTT requirements gathering for possible syntax changes.
… And in AOB, TPAC 2021
… Is there any other business, or anything to make sure we cover?

group: [no other business]

Shear calculations and origin of coordinate system. w3c/ttml2#1199

github: https://github.com/w3c/ttml2/issues/1199

Cyril: The initial issue is about clarifying what happens.
… I think we came up with a possible clarification for some writing mode.
… We need to propose some text.

Nigel: Yes, that's great, do that!

Cyril: Okay I'll propose text maybe for next time.
… The discussion veered a bit to how to map to CSS, which won't be solved easily.
… But better documenting what we have is important for interoperability.

Nigel: From https://github.com/w3c/ttml2/issues/1199#issuecomment-802057127 I summarised that we need to know
… what we want it to do. Do you think that's clear now?

Cyril: I think we said it does depend on the writing mode

Pierre: I'm not sure about that actually
… Clarifying the current text is a good idea.
… I'm hesitating only because CSS was about to do something. Do we know if they are planning
… to address it soon? It would be a shame if we come to a different conclusion from what they are planning to do.

Cyril: Maybe we could say it's not defined in horizontal writing mode, which we don't need for now.

[css-font-4] oblique angle for vertical text with text-combine-upright #4818

Nigel: I noticed today that the issue above got a useful comment 13 days ago.

Pierre: Yes, but what that says is "to the side"

Nigel: Yes, it's vertical for vertical text, i.e. in the inline direction, which some could consider "to the side"

group: [amusement]

Cyril: I can propose for the TTML spec an update that says the behaviour is undefined for anything
… other than top-to-bottom, right-to-left, and that behaviour would match what CSS implementations do.
… [thinks] Maybe the solution will be different for fontShear, lineShear and shear. I'll think about it.

Nigel: Thank you.

Pierre: Cyril, a bigger question is: today IMSC supports only blockShear. Is that really the right thing?

Cyril: It's a difficult question. I can tell you that ideally what we would like at Netflix is the behaviour of fontShear, with vertical
… and tate-chu-yoko and ruby being handled correctly, where correctly here is still subject to interpretation.
… I think we understand that lineShear is complicated in terms of line layout and reflow, and blockShear is the simplest we came up with
… in terms of implementation.

Pierre: That's how it's done but it's not clear if that's right. It has the potential of overflowing.
… It sounds like we don't have an answer there.

Nigel: I'm surprised by your view Cyril, I thought lineShear would be the preferred option, as it is simpler for layout and retains the alignments.

Cyril: But line lengths can change with lineShear.

Nigel: I think that's blockShear.

Pierre: For lineShear you can predict the line length in advance and layout once.
… For blockShear you need to know the height of the block, but then there might be overflow causing a change to the block height.

Cyril: Ok I thought that it was simpler, I need to think about it.
… I know what we want to achieve with shear in subtitles, it's complex because of what is implemented.

Pierre: I'm fairly sure we want lineShear, but we couldn't adopt it because of lack of implementation in CSS.

Cyril: The difference between lineShear and fontShear is subtle: they're essentially the same, and if you combine the glyphs before shearing for tate-chu-yoko they come out the same.

Nigel: What about ruby alignment?

Pierre: The alignment changes - I have heard it argued both ways.

Cyril: The difference is subtle, I wonder if we need to worry about it.

Pierre: If tomorrow all browsers supported fontShear with the tate-chu-yoko hack then I suspect we'd use it,
… rather than CSS shear. I don't disagree with you.
… Treating tate-chu-yoko as a single glyph is kind of weird though.

Pierre: I like your plan to provide a clarification for vertical text. That would be helpful, even if we
… leave the rest undefined for now.
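
For reference, a minimal illustrative TTML fragment of the case under discussion (namespace declarations omitted, values illustrative only): a vertical tbrl region whose paragraph uses tts:shear, the block shear that IMSC currently permits; tts:lineShear and tts:fontShear are the per-line and per-glyph alternatives compared above.

    <region xml:id="r1" tts:writingMode="tbrl"
            tts:origin="80% 10%" tts:extent="15% 80%"/>
    ...
    <p region="r1" tts:shear="16.67%">
      <span tts:textCombine="all">13</span>日
    </p>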

SUMMARY: @cconcolato to propose text

Mention fingerprinting vectors in privacy considerations. w3c/ttml2#1189

github: https://github.com/w3c/ttml2/issues/1189

Nigel: Just to note I opened a pull request for this yesterday.

Pull Request: Add further fingerprinting considerations w3c/ttml2#1231

Nigel: That's open for review, please take a look.
… The original commenter, Jeffrey Yasskin, gave a thumbs-up to the analysis I did 2 weeks ago, and this pull request implements that.
… I see Glenn has already approved it.
… It'd be good to get this merged within our normal 2-week period if we can, to get this done and dusted.

SUMMARY: Group to review as per normal process

WebVTT - Requirements-gathering for syntax changes to support unbounded cues, cue updating etc

Chris: This is a use case and requirements gathering exercise for unbounded cues in WebVTT
… specifically. It's been discussed here, and in the Media Timed Events Task Force that the MEIG is running.
… We brought it to an IG meeting on Tuesday where we decided to do the use case and requirements work as part of the
… Media Timed Events activity that we have, and then use the information that we gather there to help with design decisions
… around support for unbounded cues in WebVTT.
… To that end I created an initial (very initial) use case and requirements document that we can use as a basis.

<cpn> https://github.com/w3c/media-and-entertainment/blob/master/media-timed-events/unbounded-cues.md

Chris: Link pasted above.
… What I'm looking for in terms of use cases, I think, are the quite detailed ones: the specific actions that we need.
… For example, if I pick the WebVMT example, we can distil a lot of what Rob is looking for to the idea that we have timed measurements,
… be it location or whatever, that are aligned to the video, and those get updated at points in time in the video, and he's choosing to
… represent those as unbounded cues.
… Then the application can receive and respond to those, or, in his case, do interpolation. I'm not sure if that level of detail matters
… from the point of view of how it may affect the syntax.
… We also have the use cases around captions that span multiple VTT documents, like in DASH or segmented media delivery in general.
… I'm hoping we can gather those handful of use cases and capture and explain them, and make sure we have everything covered.
… That leads us towards being able to consider how the syntax may need to change.
… I include the backwards compatibility requirement in there.
… All of those are captured in the document as it stands. There is a list of to-do comments where information still needs to be written.
… I don't know if it is complete. I'm hoping that contributors will be able to help fill in the details.
… I think Cyril, in the last meeting you mentioned that there's an MPEG document that talks about how this may be carried in MP4.

Cyril: Yes, there was a proposal to update the carriage of WebVTT in MP4 for unbounded cues, but it was mentioned that since
… there was no syntax for unbounded cues you could not carry them.
… So the proposal was to remove the amendment to 14496-30, but the resolution of the comment
… is currently "if there is a way to specify unbounded cues then here's how you deal with it". It's shifting sands.

Chris: The dependency is on us?

Cyril: Yes

Pierre: I've not been following this closely. Are we talking about unbounded cues in the file format, or the API?

Cyril: The API problem is solved, it's merged.
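
For context, a minimal sketch of what the merged API change allows, assuming it is the change that lets a cue's endTime be Infinity, and assuming a browser that implements it (element lookup and timings are illustrative):

    // Sketch only: the updated cue API is assumed to accept Infinity as endTime,
    // which is the "unbounded" signal at the API level.
    const video = document.querySelector("video") as HTMLVideoElement;
    const track = video.addTextTrack("captions", "Live captions", "en");

    // The cue starts at a known time; its end is not yet known.
    const cue = new VTTCue(12.0, Infinity, "Speaker is still talking…");
    track.addCue(cue);

    // Later, when the end time becomes known, the same cue is bounded in place.
    cue.endTime = 27.5;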

Pierre: I don't understand why there need to be unbounded cues in the serialisation.
… Especially in the case of ISOBMFF wrapping or segmentation.

Cyril: It's a valid point, I don't fully understand it either.

Pierre: I'm 99% certain that they want something other than what they're asking for.

Cyril: Think of a cue serializer separate from the packager.
… Let's say a cue is produced whose end time is unknown, but you still want to package and send it.
… One approach is to assign some time.
… Another is to do it unbounded, and then update it later.
… I think that's the use case.

Gary: Yes. I think the key with the proposal is to be able to mark a cue as "we don't know what the end time is".
… It may never get an end time, but you should be able to specify an end time at a later date.

Pierre: I agreed with the first statement, not the second.
… My understanding of how implementations have been designed and built is to allow the last cue to have no end time.

Gary: VTT doesn't care right now - everything requires an end time.

Pierre: Right, but I don't think there's a model that allows _any_ cue to be unbounded.
… Allowing the _last_ cue to be unbounded would be the least impact on the WebVTT model.

Gary: Right, that's the question: why is this needed? Once we know that, we can work on the solution.

Pierre: I think you can do it today so I don't think you really need it.
… Going to unbounded is a Pandora's box. I'm not a proponent of WebVTT, but when you go into live subtitling and
… captioning, people type in real time. Sometimes they backspace. If a cue can be updated later, then is it the same one, or being replaced?

Gary: Right now the proposal is only about the end time, but it has been brought up that we could allow updating everything.
… It's worth discussing.

Cyril: One thing that's important to clarify is if there is a use case for more than one unbounded active cue at a time.
… That would shape the solution.

Gary: Yes that has also been discussed, how to match cues, or only allow one.

Chris: This is the level of detail I want to get to.

Cyril: In terms of packaging in MP4, there's the notion of a sync sample which can be randomly accessed without knowing previous data.
… If you have unbounded cues then you'd have to duplicate them at sync samples and aggregate them, which gets complicated.
… Frankly I think it should be the job of the serializer to do this.

Pierre: I think you can do it today without changing anything.
… It might not do what you want semantically, but with richness comes complication, like causality, how far back do you have to go.
… It's really complicated.

Nigel: This is a specific question related to the broader point that there is an API that is not fully supported by the syntax, and
… we are wondering what parts of the API need to be opened up within the syntax.
… Also anything that requires statefulness in the receiver is a recipe for different clients having different experiences, in a bad way.

Pierre: Yes, one of the advantages of TTML and WebVTT over 608, say, is the lack of statefulness.

Gary: Yes, one of the issues now is that you can't have both the proposed new syntax and the fallback syntax in the same file,
… because new clients will show two cues while older ones that don't support the new syntax will show only one, which is not good.

Chris: Next steps: we have a monthly meeting for media timed events. The next meeting would be Monday 17th May, so I propose
… we use that as the place to discuss. Same hour as this call now.
… I'm aware that there's another strand around DASH and emsg events that is being covered in that activity. I need to be careful to allow
… enough time to cover both. It could be a dedicated separate call.

Cyril: I favour both (but may not attend both).

Chris: I'm open to suggestions for when.

Pierre: For the issues related to the syntax of WebVTT I think this call is the best one.

Nigel: I support the wider scope of MEIG gathering requirements.
… Our calls are every 2 weeks so there's a potential slot, say on 20th May, in this hour, that might work for people.

Chris: Happy to do 20th. We should use that meeting to decide a frequency.
… Gary and Cyril, you've both mentioned knowledge of people with use cases, so that would be really useful input.
… Otherwise, aside from the WebVMT use case, I'm less aware of who the proponents are.

Nigel: It's surprisingly low-key in the discussion so far, but I think live delivery of captions is a use case, and
… it may be worth understanding and describing a working model for how to deliver live captions in a VTT context.
… I'm 100% confident we know how to do that in a TTML context, but it could be that there's a different model for it in VTT.

Meeting close

Nigel: Thanks everyone, we're out of time.
… Apologies again for the difficulty joining at the start. I hope nobody was excluded because of that.
… [adjourns meeting]

Minutes manually created (not a transcript), formatted by scribe.perl version 131 (Sat Apr 24 15:23:43 2021 UTC).