23:55:02 RRSAgent has joined #mediawg 23:55:02 logging to https://www.w3.org/2019/09/19-mediawg-irc 23:55:12 RRSagent, make logs public 23:55:26 horiuchi has joined #mediawg 23:55:32 Meeting: Media WG F2F - Day 2/2 23:55:39 Chair: Jer, Mounir 23:55:54 cyril has joined #mediawg 23:55:58 Agenda: https://www.w3.org/wiki/Media_WG/TPAC/2019#Friday 23:56:36 markw has joined #mediawg 23:57:04 present+ 23:57:43 Present: Yongjun_Wu, Glenn_Adams, Jer_Noble, Tess_O_Connor, David_Singer, Cyril_Concolato, Mark_Watson, Chris_Cunningham, Andreas_Tai, Chris_Needham, Francois_Daoust, Mounir_Lamouri, Richard_Winterton, Narifumi_Iwamoto, Greg_Freedman 23:59:23 takio has joined #mediawg 00:00:00 richw has joined #mediawg 00:00:11 mattwoodrow has joined #mediawg 00:00:18 niwamoto has joined #mediawg 00:00:35 scribe: tidoust 00:00:52 MasayaIkeo has joined #mediawg 00:01:25 Topic: DataCue and TextTrackCue / joint session with TimedText 00:01:57 cyril: There have been a few discussions this week on DataCue, TextTrackCue, I put together a few slides to summarize them. 00:02:10 ... DataCue, goal is to expose in-band data to applications. 00:02:14 ak has joined #mediawg 00:02:19 ... The major use case is emsg 00:02:30 cpn: I would say it's not only about in-band events. 00:02:45 suzuki has joined #mediawg 00:02:51 wolenetz_ has joined #mediawg 00:03:00 ... For user agents that feature native support of HLS or DASH, events could be in the manifest. 00:03:14 ... We also want to be able to support arbitrary objects that applications may want to synchronize. 00:03:31 cyril: DataCue essentially flows from the media to the application. 00:03:41 ... TextTrackCue flows the other way around. 00:04:04 ... You let the browser do the synchronization, where the cue should be displayed, but the application prepares the cue. 00:04:27 ... And then MSE for TextTrack is enabling end-to-end synchronized processing and rendering, from the container to the display. 00:04:40 ... 
Browser vendors do not like additional parsing, which may be an issue here. 00:04:55 ... [showing diagram of MSE/EME pipeline] 00:05:16 rrsagent, this meeting spans midnight 00:05:49 cpn: Different points of view of how much parsing would be done by the user agent. 00:06:18 ... For certain well-known event formats, the user agent could expose a structured object, or just the raw thing. 00:07:58 cyril: TextTrack for MSE would hand the parsing to the JS but the user agent would still have the cues and handle the synchronization. We don't even need to expose the time to the JS app, because it's the same time when the event is handed over to the app and when it comes back for rendering. 00:08:51 glenn: The call to the app would be synchronous? 00:08:59 cyril: Either way. 00:09:38 glenn: Asynchronous may be simpler, you'd need a handle. 00:09:51 cyril: The whole thing seems similar to WebCodecs. 00:10:00 padenot: In fact, that's the exact opposite. 00:10:20 ... Things are still fuzzy though 00:11:06 present+ Gary_Katsevman 00:11:08 padenot: We did some experiments, that went well. 00:11:12 present+ Nigel_Megitt 00:11:16 chcunningham has joined #mediawg 00:11:23 cyril: Yes, you don't depend on "time marches on" 00:11:26 present+ Scott_Low 00:12:05 gkatsev has joined #mediawg 00:12:07 michael_li has joined #mediawg 00:12:28 scottlow has joined #mediawg 00:12:31 jer: In a way, for custom parsing, the user agent could just produce a DataCue and the JS app would create the right cue that gets fed back into the rendering 00:12:40 present+ 00:12:46 present+ 00:12:58 nigel has joined #mediawg 00:13:05 Present+ Nigel_Megitt 00:13:12 yuki_ has joined #mediawg 00:13:19 ... Strawman: push data in MSE, get metadata samples exposed as DataCue, without needing to expose any specific interfaces to define. 00:13:30 ... I'm just throwing that out as an alternative. 
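For context on the "you don't depend on 'time marches on'" point above, an application that receives cues ahead of time can do its own active-cue lookup against the media clock. This is a minimal illustrative sketch, not a proposed API; `findActiveCues` and the cue object shape are hypothetical names.

```javascript
// Hypothetical sketch: application-side lookup of cues active at a given
// media time, independent of the browser's "time marches on" algorithm.
function findActiveCues(cues, time) {
  // cues: array of { startTime, endTime, payload }, sorted by startTime
  const active = [];
  for (const cue of cues) {
    if (cue.startTime > time) break;          // sorted: nothing later can match
    if (time < cue.endTime) active.push(cue); // cue is active at `time`
  }
  return active;
}

const cues = [
  { startTime: 0, endTime: 5, payload: "a" },
  { startTime: 3, endTime: 4, payload: "b" },
  { startTime: 10, endTime: 12, payload: "c" },
];
console.log(findActiveCues(cues, 3.5).map(c => c.payload)); // ["a", "b"]
```

In a real player this lookup would be driven from a `timeupdate` handler or a `requestAnimationFrame` loop reading `video.currentTime`.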
00:14:09 rrsagent, pointer 00:14:09 See https://www.w3.org/2019/09/19-mediawg-irc#T00-14-09 00:14:31 [discussion on putting decryption out of the picture since events are not encrypted] 00:15:02 jer: In summary, mechanism to add custom support for currently unsupported timed events? 00:15:06 cyril: Yes 00:15:52 padenot: Metadata has been a problem for years. People routinely demux things in JS, e.g. to get ID3 out of it. Easy for MP3s. 00:16:02 ... The load is not really complex in that case. 00:16:18 jer: It does require specific knowledge about timed events formats. 00:16:39 padenot: Yes, and the UA already parses the data, so double-parsing happens here. 00:17:52 cyril: Question is what's next? 00:18:07 cpn: We have a DataCue repo in the WICG. This would be great input there. 00:18:28 ... In the IG, we ran use cases and requirements. This proposal gives us much more of what we're looking for. 00:18:44 ... The ability to get events ahead of time is baked in. Separated from the triggering of the cue. 00:18:59 ... I'm suggesting you use the explainer in the repo to iterate on this design and shape the API. 00:19:51 andreas: I think it could be one option. Your slides show that everything is connected. Possibly something that needs to be discussed together. I'm just worried that if it's just in the DataCue repository, it might be limited to the topic. 00:20:11 nigel: What's needed there is that the architectural components need to be separated. 00:20:36 ... I have a similar point: it would be a real shame if we did all of this and didn't solve the synchronization aspects. 00:21:45 ... One thing I'm conscious of is that the rendering side for audio sends samples to your digital-to-analog converter. The rendering for video puts pixels in your video buffer. The rendering model for TextTrack is parse JSON, create DOM fragments, apply styles, which seems to take longer. I'm wondering if we need to do that earlier to prepare things in advance. 
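The JS-side demuxing padenot mentions might look like the following for a DASH `emsg` (DASHEventMessageBox, version 0) found in an ISO BMFF segment. This is an illustrative sketch of the double-parsing being discussed; `parseEmsgV0` is a hypothetical helper name, and it assumes ASCII string fields and a byte array that starts at the box boundary.

```javascript
// Hypothetical sketch: parse a version-0 `emsg` box out of a segment in JS.
function parseEmsgV0(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const size = view.getUint32(0);
  const type = String.fromCharCode(bytes[4], bytes[5], bytes[6], bytes[7]);
  if (type !== "emsg" || view.getUint8(8) !== 0) return null; // not emsg v0
  let pos = 12; // skip size, type, version, flags
  const readString = () => {
    // null-terminated string field (ASCII assumed for this sketch)
    let s = "";
    while (bytes[pos] !== 0) s += String.fromCharCode(bytes[pos++]);
    pos++; // skip the terminating NUL
    return s;
  };
  const schemeIdUri = readString();
  const value = readString();
  return {
    schemeIdUri,
    value,
    timescale: view.getUint32(pos),
    presentationTimeDelta: view.getUint32(pos + 4),
    eventDuration: view.getUint32(pos + 8),
    id: view.getUint32(pos + 12),
    messageData: bytes.subarray(pos + 16, size),
  };
}
```

A player would run this over appended segments (in a worker, ideally) and turn the result into cues, which is exactly the duplicated work a native DataCue would avoid.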
00:22:11 David: Isn't that a bit tricky? 00:22:31 cyril: If the only place that is allowed to change the CSS comes from the in-band data. 00:22:41 jer: There's always a validation and caching problem. 00:22:56 ... The web browsers are meant to render things very quickly. 00:23:03 ... I don't know what the requirements are. 00:23:34 nigel: We've discussed thresholds. We sort of ended up with 20ms. 00:24:00 ... How do you know that the text is there on the screen. 00:24:15 jer: One of the problems we have is JS. 00:24:38 q+ 00:24:50 ... One of the points is that TextTrackCue v2 absorbs JS to process the cue. 00:24:53 ack cpn 00:24:53 present+ 00:25:19 cpn: It's not only for text track cue placement, so we need a general solution 00:25:33 s/How do you know that/The metric is to measure when 00:25:41 jer: We do as much as we can to keep things out of JS for this reason, not to have to get back to the main thread. 00:26:16 ... That said, I ran an experiment. The average latency is 4ms on main browsers between when the event is triggered and when it's handled by JS. 00:26:26 ... So things may already have been addressed. 00:26:41 q+ 00:26:46 ack wolenetz_ 00:27:19 wolenetz_: Couple of questions. Trying to get out reliable synchronization between in-band events to what? JS? 00:27:38 nigel: The strong sync requirement is related to output. 00:27:56 ... There's good sync for handling of the input. 00:28:02 ... The metric we need is for the output. 00:28:14 ... Changing display of a subtitle caption. 00:28:42 ... Or real-time sync with audio handled with Web Audio API. 00:28:58 q? 00:28:58 cpn: There's also the ad-insertion case where the event triggers a switch in the video. 00:29:21 jer: It's not the time required to parse the metadata. It's more about display and rendering. 00:30:25 wolenetz_: If I understand correctly, to get a strong sync, we need to offload the custom processing of data cues to JS on separate threads, or media types that the MSE parser should understand. 
There's a large gap between the two. 00:30:52 jer: At the MSE level, the sync issue seems not as important as rendering 00:31:38 GregFreedman: There seem to be two things here. TextTrackCue v2 and this thing. If it's all done in advance, do we really need MSE? 00:32:00 jer: The thing is that there may be timed events in the media stream already and you don't want to do the demux twice. 00:32:50 nigel: Alternately, it would also make sense to push all of our components to the same pipeline. 00:33:30 jer: Yes, we are kind of combining 2 discussions in one use case. It's good to have an overview picture. 00:34:31 nigel: Wondering if there's a model we can think of to change the firing of cues. 00:34:48 ... Instead of going through the "time marches on" algorithm and the browser's idea of where the time probably is. 00:35:10 chcunningham: Through the MSE proposal, you'd have a separate source buffer for the text track data? 00:35:15 cyril: I would think so 00:35:41 chcunningham: Is this imagining new types of metadata that don't exist yet, or exposing metadata that already exist? 00:35:57 cyril: All the specs exist to do that. In practice, not a lot of people do that. 00:36:07 ... I don't know about others. 00:36:32 jer: One possible use case is 608 captions. 00:37:06 ... 608 will carry things in-band. Currently, in Safari, it shows up as a TextTrack. 00:37:19 ... That would be one type of currently existing text track. 00:37:40 cpn: On that note, there is this sourcing in-band document that is in an unofficial state. 00:37:47 ... Is it something that we'd want to rationalize? 00:38:02 -> https://dev.w3.org/html5/html-sourcing-inband-tracks/ Sourcing of inband tracks draft 00:38:19 ... From my reading, there are a number of things referenced from HTML where it is not clear whether they are supported. 00:38:37 ... E.g. Media fragments, advanced media fragments. 00:38:51 ... The fact that it is not REC is a concern for me. 
00:39:14 wolenetz_: We in Chrome have not widely shipped in-band parsing through MSE. Behind a flag. 00:39:23 jer: Neither WebKit. 00:39:42 s/Behind a flag./Was behind a flag, now the old code is removed./ 00:39:54 https://www.w3.org/TR/media-frags/ 00:40:12 https://www.w3.org/TR/2011/WD-media-frags-recipes-20111201/ 00:40:36 Yongjun: We always use out-of-band. In-band has never worked in our experience. 00:41:12 wolenetz_: Getting back to the DataCue use case, seen some use cases around emsg. 00:41:33 ... If there could be agreement about specific types, would that satisfy most needs? 00:42:57 jer: It seems that it's more difficult to implement than the naive approach to expose events when you get them. 00:43:43 nigel: I feel that the distinction between in-band and not in-band is not always clear. 00:44:00 ... MPEG-DASH is fetching audio, video and text tracks from separate URLs for instance. 00:44:06 ... Is that in-band or not in-band? 00:44:31 ... There are schemes in DVB for putting TTML in the transport stream. Mandated for set-top boxes in Nordic areas. 00:44:45 ... If you fragment that and send that through, you'd like that to be exposed. 00:45:01 ... On the other hand, the BBC always does its captioning stuff out of band. 00:46:26 chcunningham: Two worlds. Broadcast use cases would like to leverage in-band. With the DASH question, it seems it should have a clear answer. 00:46:49 yongjun: Both are possible in DASH. You can put things in-band but no one does that in practice. 00:47:18 David: DASH manifest is essentially in-band. The fact that it's processed in JS is secondary. 00:47:28 q? 00:48:09 [side discussion on the definition of in-band and out-of-band] 00:48:42 andreas: Appropriate rendering of cues is a priority. For me TextTrackCue v2 goes in the right direction. 00:48:57 NJ_ has joined #mediawg 00:49:06 chcunningham: Yes, I'm trying to understand what the priority needs are. 
00:49:17 andreas: The question is how to separate the different activities in different groups. 00:49:42 ... WICG DataCue repository. A TextTrackCue proposal for WICG. 00:50:15 ... PAL mentioned yesterday that he wanted to make responsibilities clear. 00:50:35 cpn: The new generic cue proposal would go in WICG. 00:50:57 mfoltzgoogle has joined #mediawg 00:51:02 hober: Yes, the main goal is to end up with updates on WebVTT specs 00:51:33 cpn: We kind of need a place where we can do the overall architectural piece. 00:51:45 ... I guess we could use one or the other to do that. 00:52:15 jer: It seems that the architectural discussion does not need to generate technical spec. It could belong in Media & Entertainment IG. 00:52:28 ... The DataCue portion, end goal is to do it here. 00:52:45 ... For generic TextTrackCue, end goal is Timed Text 00:53:51 cpn: How would the interaction with WHATWG work? 00:53:52 hober: The currently envisioned working mode with WHATWG is to have them react when needed on CG repos. 00:54:11 ... This room seems like a good place to discuss effective changes. 00:54:31 q? 00:54:43 q+ 00:54:49 q+ 00:55:00 ack nigel 00:55:00 ack nigel 00:55:39 nigel: I'm just recognizing that there is a lot of media activity going on in different groups. No group chartered to do horizontal reviews of media specs. 00:55:56 hober: Unofficially, the Media WG is the right group to do that. 00:56:12 chcunningham_ has joined #mediawg 00:56:28 jer: It's going to be the job of Mounir and I to coordinate discussions. 00:56:46 ... To make sure that the different groups are aware of discussions when needed. 00:56:47 ack tidoust 00:57:07 q+ tidoust 00:57:43 richw has joined #mediawg 00:57:47 yongjun: CMAF, WebM, other file formats, what's the integration story? Do we cover all of them? 00:58:09 Guest19 has joined #mediawg 00:58:10 jernoble has joined #mediawg 00:58:10 nigel: In Timed Text, we have liaisons with a bunch of external organizations. 00:59:28 ... e.g. 
CMAF might say "subtitles shall have IMSC1" 00:59:32 ack tidoust 00:59:34 ack tidoust 00:59:46 scribenick: cpn 00:59:54 q+ 01:00:00 tidoust: regarding the sourcing in-band tracks, is anyone interested in working on it? 01:00:10 Guest19 has joined #mediawg 01:00:31 jer: it would naturally fit in scope for this group 01:00:44 mfoltzgoogle has joined #mediawg 01:02:03 tidoust: I'm wondering if there are people willing to update the document, and whether there is implementer interest, should updates be made. 01:02:12 [silence heard] 01:03:01 [Media WG charter allows for group to take the spec on board through DataCue] 01:03:05 RRSAgent, make minutes 01:03:05 I have made the request to generate https://www.w3.org/2019/09/19-mediawg-minutes.html tidoust 01:03:29 i/tidoust: I'm wondering if there are/scribe: tidoust 01:03:33 RRSAgent, make minutes 01:03:33 I have made the request to generate https://www.w3.org/2019/09/19-mediawg-minutes.html tidoust 01:03:45 cyril has joined #mediawg 01:03:55 the slides I presented https://docs.google.com/presentation/d/1Oir_gRhleMSpR850KZlxnz20xnvYnJoNk-ZlsMVrbIY/edit?usp=sharing 01:08:07 jernoble has joined #mediawg 01:13:25 wolenetz_ has joined #mediawg 01:19:38 nigel has joined #mediawg 01:20:06 dsinger has joined #mediawg 01:20:07 horiuchi has joined #mediawg 01:23:14 michael_li has joined #mediawg 01:26:29 horiuchi_ has joined #mediawg 01:27:05 horiuchi_ has joined #mediawg 01:30:02 tidoust has joined #mediawg 01:30:30 present+ 01:30:57 Youngsun has joined #mediawg 01:31:39 atai has joined #mediawg 01:32:03 jernoble has joined #mediawg 01:32:12 ericc has joined #mediawg 01:34:12 horiuchi has joined #mediawg 01:34:46 cyril has joined #mediawg 01:35:35 steveanton has joined #mediawg 01:35:44 scribenick: cpn 01:36:13 takio has joined #mediawg 01:36:19 Topic: Media Source Extensions 01:36:29 kajihata has joined #mediawg 01:36:30 MasayaIkeo has joined #mediawg 01:36:43 yuki has joined #mediawg 01:36:43 s/Extensions/Extensions v.Next/ 01:37:20 q+ 
01:37:27 GregFreedmanq has joined #mediawg 01:37:29 q- nigel 01:37:36 ack wolenetz_ 01:37:47 chcunningham has joined #mediawg 01:38:04 Matt: For MSE v.Next, we currently are trying to figure out the editors. I'm happy to edit MSE 01:38:26 ... Netflix will find someone, also Microsoft will try to find someone 01:38:42 ... Is anyone else interested? 01:39:33 q+ 01:39:58 ... How to discuss MSE on calls? Last time, we had dedicated calls 01:40:03 ack jernoble 01:40:23 Jer: We rotate which specs need attention for the monthly calls, and can have topic-specific calls 01:40:30 ... If MSE needs more time, we'll figure it out 01:41:01 Matt: Some maintenance work is happening on the W3C repo, and incubation in the WICG repo with branches for each v.Next feature 01:41:28 ... Would like a better idea of the process for incubating v.Next features, and how to merge upstream 01:41:58 Jer: I suggest upstreaming the existing WICG work as the starting point, then we can do PRs against the newly upstreamed spec 01:42:05 ... Versioning will be the hard part 01:42:26 Matt: It's more complex for MSE than EME, as there are some old things 01:42:51 ... How can we simplify? Will follow up with the team about how to manage the branches and v.Next 01:43:07 ... The only incubation feature with a shipped implementation is codec switching 01:43:20 ... We have some tests in WPT for that feature 01:43:49 ... There's clarification added to MSE for codec parameters for addSourceBuffer and canChangeType, browser not required to accept them, we're relaxing Chrome's requirements around that 01:44:25 ... Are there IPR considerations around v.Next MSE? 01:44:43 ... Can we bulk upstream everything from WICG into the W3C spec? 01:45:22 Francois: No problem to merge upstream. At some point we'll publish FPWD. We have full IPR commitment with the Rec 01:45:41 ... So don't worry for now. 
Publishing FPWD will trigger a call for exclusions 01:46:18 Matt: The tests are in the same media source repo, is this expected procedure? Do we want a folder for v.Next features? 01:47:02 Mounir: We could keep that, for backwards compatibility. Want to avoid breaking stuff. Can put things into MSE, don't see the need to separate them 01:47:17 Francois: I think the tests people prefer to avoid versioning 01:47:24 scottlow has joined #mediawg 01:47:35 ACTION: mounir to talk to foolip to double check 01:47:56 s/to double check/to double check whether versioning is needed for MSE v2 WPT/ 01:48:08 Matt: We'll continue using ReSpec. What were the problems with EME regarding ReSpec? 01:48:12 Mounir: There's no problem 01:48:43 ACTION: tidoust to exchange with wolenetz on setting up MSE repo, updating boilerplate, ReSpec, etc. 01:49:04 Guest19 has joined #mediawg 01:49:21 Matt: There was some related MSE discussion around reducing overhead for applications that take media, containerise it, only for MSE to decontainerise and play 01:49:42 ... There's a proposal for having metadata around a basic format for a new byte stream format 01:49:43 https://discourse.wicg.io/t/proposal-allow-media-source-extensions-to-support-demuxed-and-raw-frames/3730 01:49:59 Matt: We can follow this up after the meeting 01:50:11 markw has joined #mediawg 01:50:36 ... Yesterday, we discussed the latency hint. It seems this is meant to describe what happens after decode. These actions don't need to depend on what the source was 01:51:09 ... I think a latency hint on the media element is for playback. We could think of use cases wanting to tie this hint to MSE behaviour, e.g., play through gaps 01:51:27 ... Don't want to bind that hint to any of those things. Also garbage collection regimes 01:51:37 ... 
Prefer to see this done on the media element, rather than MSE 01:52:21 Mounir: This seems to agree with the conclusion from yesterday 01:52:52 JohnRiv has joined #mediawg 01:53:26 Matt: The next proposal, working on a prototype, is using MSE from a dedicated or shared Worker context 01:53:43 ... Had a demo at FOMS, found some severe problems with it, related to implementation rather than the spec 01:54:10 q+ 01:54:24 ... As it stands, the prototype doesn't use a new MediaSourceHandle object, it uses a URL to communicate the identity from the worker context to the main context 01:54:37 ... Should have more to say, and an improved demo, by FOMS 01:54:55 ack mounir 01:55:09 Mounir: Do you have service worker in scope? Is it working in the prototype? 01:55:28 Matt: Service worker is different, it's about intercepting requests and servicing from a cache 01:55:45 ... It's unrelated to Workers which is about threading 01:56:03 Mounir: I think it would make sense to use it from Service Worker 01:56:07 scottlow_ has joined #mediawg 01:56:22 Jer: SW are shared across pages, what's the use case? 01:57:09 Mounir: I see the SW as the thing that does networking for the page, it's alive when the page is closed. Enables some offline use cases. When the page tries to play, it can serve segments from the SW cache 01:57:42 ... I think there's benefit of doing that. From a spec point of view, SW is a kind of Worker 01:58:01 ... Is SW out of scope for the spec? What do other implementers think? 01:58:15 Paul: I think it's not good, I don't see MSE in SW 01:58:25 q? 01:58:44 q+ 01:58:49 ... I'd have to see real use cases, I'm skeptical 01:59:14 Jer: There are implementation restrictions. The difficulty would be connecting the two processes 01:59:32 ... I agree with Paul, we'd have to have concrete use cases to judge this proposal against 02:00:00 Mounir: Do we have many APIs not available in SW in the platform? 
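The pattern discussed here (doing the network work off the main thread while MSE itself stays on the main thread) typically ends with appends that must be serialized, because `SourceBuffer.appendBuffer` throws while `updating` is true. A minimal illustrative sketch of that serialization; `AppendQueue` is a hypothetical helper, not a proposed API.

```javascript
// Hypothetical sketch: serialize appendBuffer calls around the `updating`
// flag. Chunks arrive (e.g. via postMessage from a fetching worker) and are
// queued; the next append happens on `updateend`.
class AppendQueue {
  constructor(sourceBuffer) {
    this.sb = sourceBuffer;
    this.queue = [];
    this.sb.addEventListener("updateend", () => this.pump());
  }
  push(chunk) {
    this.queue.push(chunk);
    this.pump();
  }
  pump() {
    // appendBuffer would throw InvalidStateError while updating is true
    if (this.sb.updating || this.queue.length === 0) return;
    this.sb.appendBuffer(this.queue.shift());
  }
}
```

In a page, a worker's `onmessage` handler on the main thread would call `queue.push(event.data)` with each transferred segment.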
02:00:09 Paul: Yes 02:01:00 yajun has joined #mediawg 02:01:07 Matt: There seems to be consensus that exposing MSE in SW increases complexity, source buffers and GC issues. I agree about seeing use cases that can't be polyfilled 02:01:29 q+ 02:01:34 ack wolenetz_ 02:02:05 Mounir: Can you fetch from a DedicatedWorker? 02:02:07 ack mounir 02:02:20 Jer: Yes, then the fetch is interposed by a SW if it exists 02:02:35 Paul: Yes, but not a lot of code uses it 02:03:00 Matt: I believe Facebook does the fetching from a Worker context and hands it off to the main context for MSE 02:03:18 https://github.com/wicg/media-source/blob/mse-eviction-policies/mse-eviction-policies-explainer.md 02:03:19 Matt: I'm also working on eviction policies 02:03:23 s/but not a lot of code uses it/and that does not mean there are going to be a lot of memory copying/ 02:03:42 ... Please read the proposal 02:04:15 ... A use case is game streaming, where low latency live is critical, and want to minimise delay by having a single keyframe, infinite GOP 02:04:44 ... MSE doesn't work well with that, it has to buffer everything, a keyframe and all dependent frames are treated as one unit for buffering and GC 02:05:13 ... I'm working on simplifying this to the core. Should seeking be allowed? What about seekable and buffered ranges 02:05:30 ... I'll have a prototype implementation in Chrome to look at in more detail at next FOMS 02:05:37 Paul: What's the use case for seeking? 02:06:15 Matt: Seeking to the live head, if you've got behind. 02:06:23 q+ 02:06:51 ... If you're using the infinite GOP mode, the keyframe may already have been dropped from the buffered range, so it may not be available for seeking 02:07:05 mattwoodrow has joined #mediawg 02:07:17 ... There's potential for race conditions between seek, decode, and playback 02:07:47 Jer: Could solve this in the spec by disallowing seek with infinite GOP 02:08:12 ... Could set playbackrate to Infinite to catch up 02:08:23 ... 
Decode as fast as possible 02:08:28 chcunningham-mobile has joined #mediawg 02:08:46 Matt: Could disallow it, or allow but stall if seeking to a range without a nearby keyframe 02:08:54 ... I'm investigating the complexities in Chrome 02:09:25 Matt: There's a policy that could collect everything before the currently playing GOP 02:09:32 GregFreedman has joined #mediawg 02:09:50 ... GOP is codec specific, so we'll need to update the proposal and spec to be less specific 02:10:17 ... I'd like to get help with purgeable or pre-emptive GC 02:10:46 ... This could be used to prevent the UA from running out of memory, by not waiting for an explicit remove call 02:10:54 ... I would like help with the spec for that 02:11:14 ... Not all implementations may be able to do that, and we wouldn't want it to become the default mode 02:11:38 ... Jer, would appreciate your help with that 02:11:40 Jer: OK 02:11:42 q? 02:11:49 q- 02:11:56 https://github.com/w3c/media-source/issues/156 02:12:43 Jean-Yves: It's hard to know before implementing, there can be nasty surprises 02:13:25 ... so it's hard to comment on what we may need. I'm looking forward to seeing the prototype 02:14:12 Matt: What's a keyframe, something that's signalled as such, or something that actually is a keyframe? 02:14:28 NJ__ has joined #mediawg 02:14:56 Matt: Issue 156. When MSE was first worked on, createObjectURL created URLs that were auto-revoking 02:15:19 ... The implementation would revoke immediately, so you couldn't use them in a later event handler 02:15:39 ... From discussion at FOMS, Firefox still does this. 02:15:55 ... Now there's a createFor method for auto-revoking object URLs 02:16:28 ... If we're using these from a Worker context to communicate to a main thread media element, there's a race condition if we use the original form of object URLs 02:17:09 ... Chrome doesn't do auto-revocation currently, so there's an issue with media elements and objects being kept alive 02:17:58 ... 
Working on a new form where things can be removed, and delay auto-revocation. 02:19:30 ... One complexity from auto-revocation, it's diverged from the MSE spec, will need to coordinate with the File URL folks 02:19:52 https://github.com/w3c/media-source/issues/160 02:20:30 ... Issue 160 discusses ways to solve how an app can tell an implementation what to do when it hits a buffered range gap 02:21:10 ... Solving interop issues, as well as trying to prevent stalling, and make the implementation more relaxed with respect to gaps 02:21:46 ... And seeking forward in infinite GOP. Would like this in v.Next implementation, but not at top of my priority list 02:22:31 q+ 02:23:11 q+ to talk about MSE in Workers a bit more 02:23:17 Jer: The new editors could take a pass through the issue list, and bring the list to the group for further triage 02:23:44 Matt: Sounds good to me 02:23:52 ack jya 02:24:23 Jean-Yves: For MSE v.Next, the most requested feature we see is dealing with missing data or gaps, and eviction policy for low latency video 02:25:08 ack mounir 02:25:08 mounir, you wanted to talk about MSE in Workers a bit more 02:25:32 rrsagent, draft minutes 02:25:32 I have made the request to generate https://www.w3.org/2019/09/19-mediawg-minutes.html cyril 02:26:16 Jean-Yves: There was a bug from David at BBC, it would stall on one browser and not on another 02:26:46 ... having a uniform approach to dealing with gaps, so should we wait for data to be appended, or should we skip over it 02:27:09 ... In HLS.js, if they see a gap, they seek over it 02:27:52 ChrisN: Want to keep all viewers at the live playhead as much as possible 02:28:22 Jer: How does it interact with I frames? 02:30:05 Jer: The spec says you must pause at the end of the buffered range. 
Could specify a time limit 02:30:11 horiuchi has joined #mediawg 02:30:12 q+ 02:31:00 ack wolenetz_ 02:31:25 Matt: Some kinds of gaps may not be full gaps, maybe the audio could play through but not have enough video 02:31:33 Jer: there are two kinds of gaps: known by the application, and unknown. one potential way to solve the application case would be to allow the application to explicitly coalesce ranges. 02:31:53 Matt: Should we coalesce the buffered ranges? The app would have to poll for unexpected buffered ranges 02:32:19 Jean-Yves: If the gap is small and will be ignored, should we reflect this in the buffered ranges? 02:32:40 Jer: I think we do already, it's a CPU problem to poll for the buffered ranges 02:33:27 ... If we decide to add spec language on which ranges to skip, we'll also specify how the buffered ranges would reflect them. 02:33:52 Matt: Two ways of looking at it. One idea is to let the media element continue to describe what the playback behaviour would be 02:34:22 ... Or maybe the sourceBuffer is the place to look at the gaps and see how they've been coalesced. 02:34:41 ... The proposal didn't allow apps to report the gaps 02:35:30 Jean-Yves: If you have no video but audio can play through, you don't want to have to wait for the video 02:35:56 Jer: Should we have different gap skipping behaviour for audio vs video tracks? 02:36:41 Jean-Yves: With gaps within the same sourceBuffer, reflected in the source buffered range. Then gaps due to missing data at the intersection of two buffered ranges, this is data that will not come 02:36:58 scottlow has joined #mediawg 02:37:18 Jer: The ability for a client to bridge gaps on a source buffer basis... 02:37:58 Matt: Most MSE players use one track per sourcebuffer, but there's no per-track view in a multi-track source buffer, so you'll see gaps 02:38:20 ... Should file an MSE issue to get some notion of track buffered 02:38:33 .. 
Useful for implementations using muxed content 02:39:03 Jer: CPU usage was high due to requirement to create new objects from a polling loop 02:39:21 s/new objects from/new buffered range objects from/ 02:39:21 richw has left #mediawg 02:39:54 richw has joined #mediawg 02:40:09 Jer: HTML seems to have changed such that bufferedRanges doesn't require a new object to be created, may want this in MSE as well 02:40:20 Matt: That would help 02:40:52 q? 02:41:24 -> https://html.spec.whatwg.org/#dom-media-buffered Definition of the buffered attribute in HTML 02:41:26 ... There's room for improvement for the app to tell the implementation what to do. Should it stop, or let time march forward, or skip to the earliest buffered thing, lots of options. 02:41:54 ["The buffered attribute must return a **new** static normalized TimeRanges object"] 02:41:56 ... I'd like some concrete use cases. Keeping up with the live edge is a good one 02:42:25 ... May not be solved by what's proposed so far. I haven't had time to look at this, concrete proposals are welcome. 02:42:44 mattwoodrow has joined #mediawg 02:42:44 [+ warning: "Returning a new object each time is a bad pattern for attribute getters and is only enshrined here as it would be costly to change it. It is not to be copied to new APIs."] 02:43:00 Jean-Yves: Eviction policy, can only evict when you get new data 02:43:38 Jer: It's bad at the end of video playback where we hold onto the buffered data unnecessarily. It has been requested by people at Apple concerned by memory usage on limited memory devices 02:44:11 ... 
This one might be worth prioritising by the editors 02:44:33 Matt: We experimented in Chrome with pre-emptive eviction, but didn't see much improvement in the playback metrics 02:45:01 Jean-Yves: Also out-of-band evictions 02:45:27 Jer: We can't change behaviour of existing applications 02:45:55 Matt: Bad for apps already tuned to the existing eviction policy 02:46:48 for information: sourcebuffer.buffered needing to return the same object if it hasn't changed: https://github.com/w3c/media-source/issues/16 02:47:21 Matt: The newer eviction policies would certainly be more aggressive 02:47:51 dsinger has joined #mediawg
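The "explicitly coalesce ranges" idea from the gap discussion earlier in the session (issue 160) can be sketched as a small pure function over buffered ranges. This is illustrative only, under the assumption that ranges arrive as sorted `[start, end]` pairs in seconds; `coalesceRanges` is a hypothetical helper, not a proposed API.

```javascript
// Hypothetical sketch: merge buffered ranges separated by gaps smaller than
// a threshold, so tiny gaps don't drive stalling or seek decisions.
function coalesceRanges(ranges, maxGap) {
  // ranges: array of [start, end] pairs, sorted by start (seconds)
  const merged = [];
  for (const [start, end] of ranges) {
    const last = merged[merged.length - 1];
    if (last && start - last[1] <= maxGap) {
      last[1] = Math.max(last[1], end); // bridge the small gap
    } else {
      merged.push([start, end]);
    }
  }
  return merged;
}

console.log(coalesceRanges([[0, 4.99], [5, 10], [20, 30]], 0.1));
// → [[0, 10], [20, 30]]
```

A player polling `sourceBuffer.buffered` could convert the TimeRanges object to such pairs and use the coalesced view when deciding whether to seek over a gap.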