Meeting: Media and Entertainment IG f2f meeting at TPAC 2019 in Fukuoka
Agenda: https://www.w3.org/2011/webtv/wiki/Face_to_face_meeting_during_TPAC_2019#Agenda_Monday_16_September_2019
Chair: Chris, Igarashi, Pierre
scribenick: kaz
-> https://www.w3.org/2019/09/16-me-irc IRC log

topic: Welcome and introduction

cpn: welcome, and let's get started
... this is a one-day meeting today
... starting with a general view of the MEIG
... [W3C Code of Ethics]
... [Media & Entertainment IG: Mission]
... apply Web technology in general to media services
... new use cases/requirements to drive the discussion
... [History of Major Initiatives]
... 1. 2011-2014: HTML5 Media Pipeline
... 2. 2011-2017: Adaptive streaming & content protection
... 3. 2017-2019: Media Web App Platform
... [Charter]
... the scope covers almost everything:
... the end-to-end pipeline
... continuous experience
... increasingly interactive media
... including games
... [Charter (cont.)]
... Tasks:
... identify requirements
... incubation of technical proposals
... review media-related deliverables
... coordinate with other media-related groups, e.g., MPEG, HbbTV, ...
... internationalization, accessibility, etc., are very important
... [Work Flow]
... new ideas & issues come from Members and SDOs
... use cases, requirements and gap analysis
... but not specs themselves
... because this is an IG
... but some of the results could promote new features for other WGs
... we're encouraged to work on more and more new features
... [WICG]
... there is a Discourse forum there
... get implementer support for your idea
... then a GitHub repo for your proposed feature
... [Contributing to HTML and DOM]
... HTML WG and WHATWG
... [Task Forces]
... 2018-19: Media Timed Events TF
... 2016-17: Cloud Browser API TF (dormant)

remote+ Kazuhiro_Hoya, Lei_Zhai

cpn: [Monthly conference call]
... topics covered in 2019/2018
... (list of topics)
... [Activities for 2020]
... a whole bunch of topics for the Media WG
... potential candidates for adoption
... the MEIG can input use cases and requirements
... [Activities for 2020 (cont)]
... what will be the new topics for 2020?
... what would be the main things?
... we would like to capture ideas
... in the afternoon, we'll have a more open discussion
... [Schedule]
... (shows the agenda)
... any additions?
... [Resources]
... various links here
... btw, we have a new co-Chair, Pierre, here

pal: I've been involved in standards activity, e.g., at IETF
... started with the HTML WG at W3C
... co-editor of TTML 1.0
... feel free to contact me and Chris (and Igarashi-san)

cpn: I would mention that Mark Vickers is stepping down as a co-Chair

remote+ Mark_Vickers

cpn: he has been leading the group successfully
... really getting the Web to be the platform for media
... Mark will continue to participate in the MEIG

mv: I'm one of the founding co-Chairs
... I plan to stay involved as an Invited Expert
... this group is the best source of consolidated expertise in media, video and audio
... since before HTML5 media support in the Web
... we've been a good source
... for W3C, WHATWG, Khronos, etc.
... we don't write specs ourselves
... but see what's the priority for media on the Web
... provide expertise
... communicate with media companies, etc., which are not in the W3C as well
... it takes a lot of work
... glad to help
... a lot of leadership in the studio world
... and so on
... we have three very strong co-Chairs
... and Pierre is joining
... aiming for HTML5 media 2.0
... this is really the time for better media support

mv: Comcast, my company, provides a new rep

jr: yes, I'm here

topic: Hybridcast update

-> https://app.box.com/s/gdgzwpnfqyzvjgts9me104ce6c3zfujv slides

ikeo: welcome to Japan!
... I would like to talk about Hybridcast
... [Today's outline]
... recent achievements of Hybridcast
... [Deployment status]
... [History of standardization and experiments]
... 2014-2019
... Hybridcast Connect is deployed on some of the TV sets
... [Shipment of Hybridcast receivers]
... the number of receivers is over 10 million
... [Trial deployment "Hybridcast-Connect"]
... what is Hybridcast Connect?
... new functions/services use new APIs
... the new APIs are experimentally implemented in some of the TV sets
... I brought an example here
... a number of companies are involved
... [Typical Sequence by additional APIs]
... 5 functions:
... 1. MediaAvailabilityAPI
... 2. ChannelsInfoAPI
... 3. StartAITAPI
... 4. TaskStatusAPI
... 5. ReceiverStatusAPI
... [Hybridcast Connect demo]
... I will show a demo here
... (brings a TV set in front of the screen)
... [Hybridcast-Connect demos]
... two demos:
... 1. emergency alert
... 2. smooth guidance to catch-up content
... [Supposed use cases demo (1)]
... (Kaz adds a webcam to WebEx)
... there are embedded buttons here on my PC
... press a button on the PC and get a notification on the smartphone
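[A minimal sketch of how the five APIs listed above might chain together from a companion device. Endpoint paths and payload shapes are invented for illustration; the actual protocol is defined in the IPTV Forum Japan Hybridcast-Connect specification.]

```typescript
// Hypothetical companion-side sequence; the receiver address would come
// from prior discovery/pairing on the local network.
const tv = "http://192.168.11.20:8080";
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function launchOnReceiver(appUrl: string): Promise<void> {
  // 1. MediaAvailabilityAPI: check that the receiver can accept requests.
  const avail = await (await fetch(`${tv}/mediaAvailability`)).json();
  if (!avail.available) throw new Error("receiver not available");

  // 2. ChannelsInfoAPI: list the broadcast channels the receiver can tune to.
  const channels = await (await fetch(`${tv}/channelsInfo`)).json();

  // 3. StartAITAPI: ask the receiver to launch the HTML5 application.
  const task = await (await fetch(`${tv}/startAIT`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: appUrl, channel: channels[0]?.id }),
  })).json();

  // 4. TaskStatusAPI: poll until the launch task settles.
  let state = "running";
  while (state === "running") {
    await sleep(500);
    state = (await (await fetch(`${tv}/taskStatus?id=${task.id}`)).json()).state;
  }

  // 5. ReceiverStatusAPI: confirm what the receiver is now presenting.
  console.log(await (await fetch(`${tv}/receiverStatus`)).json());
}
```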
@@: any accessibility considerations?
... that would be important

ikeo: agree

pal: is the API to the TV? or the smartphone?

ikeo: there is pairing between the TV and the smartphone

iga: it depends on the TV's implementation
... possibly implemented as an application on the TV side
... the Hybridcast Connect specification itself just specifies the protocol

yongjun: is there additional latency?

iga: remote control for playback, etc.?

ikeo: let me go back to [Typical Sequence by additional APIs]
... (explains the sequence of communication)
... using WebSocket

yongjun: how much latency is there?

ikeo: it's TV-set dependent

iga: are all the functions supported?

ikeo: some specific functions are supported

iga: are arrow keys supported?

ikeo: yes
... we would like to support all the keys included on the TV remote

iga: but there are too many buttons

ikeo: we also need to consider security
... e.g., to avoid unexpected changes of volume
... here, all the 5 APIs are implemented over HTTP

sudeep: infra-red remote vs this API?
... what kind of value is added?

pal: the TV implements some specific capabilities

iga: TV vendors have to implement the APIs

lilin: how many TVs could be controlled?

ikeo: more than two
... we'd like to handle more than one TV
... but TV vendors say hundreds of mobiles can't be connected
... maybe 2-3

ikeo: the TV is used within a local network
... the user selects which one would be the best to connect to

david: the system detects the devices available
... are children notified?

ikeo: the application keeps that stored in the session information
... the user doesn't have to worry about it

pal: the emergency notification itself is not included in the protocol
... it's separate

ikeo: right
... these 5 APIs are implemented within the device
... so they are device APIs
... not Web APIs

cpn: are you looking at a secure protocol?

ikeo: we have some solution
... the two devices share some key

cpn: the Second Screen WG works on a secure protocol
... so you're aware of that

ikeo: right
... the problem is HTTPS in the local network
... thanks for your comments!
... [Supposed use cases demo (2)]
... implemented as a service like Netflix or Amazon Prime Video
... the application selects a program from a list
... using just one API
... (selects a program on his smartphone)

Q: What would the Hybridcast group like from W3C? New specs? Changes to specs?

ikeo: it launches the HTML5 app
... using dash.js

iga: can you control playback, forward/backward?

ikeo: that can be done using WebSocket

While it's always interesting to see what other groups are doing, we have to focus on our goals to drive changes into W3C and increase adoption of W3C standards outside of W3C.

yongjun: can device features be controlled?

ikeo: some of the features

yongjun: how many subscribers?

ikeo: 30% of the TV sets include this feature

cpn: what would the Hybridcast group like from W3C?
... new specs, gap analysis?

ikeo: we would like to bring some Hybridcast APIs into W3C standards
... e.g., a playback API, as Igarashi-san mentioned
... between a mobile and a TV

kaz: kind of like the formerly proposed TV Control API?

ikeo: yeah...

cpn: or something like what's proposed by the Second Screen WG?

ikeo: we have to select web standard APIs
... we can't create other APIs ourselves
... that's the second demo
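[Playback control in the second demo goes over a WebSocket, per the exchange above. A small sketch, with the endpoint and message vocabulary assumed for illustration; the deployed wire format is receiver-dependent.]

```typescript
// Hypothetical control channel to the paired receiver.
const ws = new WebSocket("ws://192.168.11.20:8080/control");

type Action = "play" | "pause" | "seek" | "forward" | "backward";
function sendCommand(action: Action, seconds?: number): void {
  ws.send(JSON.stringify({ action, seconds }));
}

ws.onopen = () => sendCommand("seek", 120); // e.g. jump two minutes in
ws.onmessage = (e) => console.log("receiver state:", e.data);
```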
ikeo: [Conformance Test and verification]
... [Conformance Test for Hybridcast-Connect]
... IPTV Forum Japan provides the Hybridcast Connect standard
... and also a test kit
... this is the overview
... (shows a diagram)
... an emulator serves as the test environment

cpn: does it cover the Web application?

ikeo: it's an end-to-end test
... similar to the tests by HbbTV, etc.
... [Service Verification by MIC project and others]
... MIC is the Ministry of Internal Affairs and Communications of the Japanese Government
... service verification with Hybridcast-Connect
... in 2018, 19 companies
... in 2019, 23 companies
... that's it for the Hybridcast update
... thank you!

cpn: what other specific things are needed to address the gaps?
... the relationship with web-platform-tests, etc.?

ikeo: we require some functions from Web APIs
... TV vendors sometimes want them and sometimes not

cpn: ok
... let's move on to the next topic

topic: Media Timed Events in Hybridcast

ikeo: [Service Patterns with Hybridcast Connect]
... broadcasters in Japan need a trigger message to switch to the broadcast service
... pattern 1: from a mobile app to broadcasting on the TV
... pattern 2: from another app on the TV to broadcasting
... [Media Timed Events with Hybridcast-Connect]
... JP broadcasters are interested in media timed events (MTE)
... the same function as the trigger message
... there are two possible choices:
... (MTE data in the video resource + push emergency alert notification) to the smartphone
... another option: MTE data in the video resource to another app on the TV
... those are the two possible patterns

cpn: is this emsg?
... in the ISO container
... in the DASH format

ikeo: yes

iga: the upper pattern can be realized using the mobile API
... but what about the bottom pattern?
... is the TV device at the bottom the same as the one on the right?

ikeo: yes
... in the case of the Android platform, the mechanism is something like an intent

iga: a general question about MTE
... it's unclear why you want to embed events within the video stream

ikeo: the main reason is the cost of accessing the message API from mobile

iga: the cost of notification servers

ikeo: right
... and also the accuracy

iga: do we share the same requirements?

yongjun: at which layer should it be handled?
... it should be fragment or manifest

iga: a manifest-embedded event?

ikeo: it depends on the needs
... in the out-of-band case, MTE might be written in the manifest
... and there would be possible delay

iga: it could be updated frequently

ikeo: that's related to the cost of access and transfer
... a trade-off of accuracy and cost
... let me show another demo on MTE
... (selects an app on his mobile)
... it sends a message using Hybridcast-Connect to the TV
... this is an embedded event
... an emergency alert is shown on the upper-right of the TV

cpn: is this intended for synchronization with media?

ikeo: this mechanism just sends an alert
... and the Hybridcast application on the TV can handle how to display it
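[The in-band trigger discussed above is the DASH 'emsg' box. For reference, a sketch of the version-0 field layout from ISO/IEC 23009-1; the field names follow the spec, while the TypeScript shape itself is only illustrative.]

```typescript
interface EmsgBoxV0 {
  schemeIdUri: string;           // identifies the event scheme, e.g. a
                                 // broadcaster-defined emergency-alert scheme
  value: string;                 // sub-scheme discriminator
  timescale: number;             // ticks per second for the two time fields
  presentationTimeDelta: number; // event start, relative to the segment
  eventDuration: number;         // duration in ticks (0xFFFFFFFF = unknown)
  id: number;                    // lets clients de-duplicate repeated boxes
  messageData: Uint8Array;       // opaque payload, e.g. the alert text
}
```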
ikeo: [Use cases of MTE]
... switch to the broadcasting service from OTT, triggered by an emergency message
... superimpose time-dependent metadata, e.g., a weather icon and event information
... new styles of ad insertion on a broadcasting service
... [MediaTimedEvents demos]
... demo implementations
... use case 1: switch to a live news program on the broadcasting service from an OTT service via an emergency-warning message
... use case 2: superimpose a weather icon on the Internet video

-> https://app.box.com/s/twutrrnnognzueecgcz1ojc7bqe5q9s9 slides

cpn: what are the requirements?

ikeo: we would like to show the picture of the warning
... but it sometimes overlaps with important content (e.g., people's faces)

iga: it depends on the apps
... for some apps, accuracy is not important

ikeo: we have to consider the accuracy of timing for many cases
... that's all

cpn: thanks!

ikeo: btw, we would like to control other devices, e.g., drones, from the app on the TV
... during the WoT demo, we'll show home appliance demos

kaz: at the lunch place, Argos on the 1st floor
... and at the Wednesday breakout

cpn: excellent

ikeo: we'd like to use MTE as the basis

[break till 11am]

topic: Media Timed Events Task Force

-> https://docs.google.com/presentation/d/1f8LVFY3shrUsksKWLyBVQk3icDN4zEsgV0NX9oqPXNw/edit slides

cpn: [Topics]
... in-band timed metadata and timed event support
... out-of-band timed metadata
cpn: improving synchronization of DOM events triggered on the media timeline
... also MPEG carriage of Web resources in ISO BMFF
... [History]
... our TF started in 2018
... Giri Mandyam from Qualcomm presented work at ATSC and MPEG on eventing
... we published a use cases and requirements document early this year
... [Use cases for timed metadata and in-band events]
... MPEG-DASH specific use cases
... notification to the media player
... another use case is about getting metrics during playback
... ID3 tags: title, artist, image URLs
... ad insertion cues: SCTE 35, SCTE 214-1, -2, -3

ds: keeping a web page in sync with media
... you've got slides and you're talking about a slide
... flip the slide deck and show it

cpn: we have something like that in the explainer

pal: we heard another use case in the morning

cpn: multiple contents and multiple events

pal: do you know if the cues are tied to the entire content?
... somebody may remove the trigger

cpn: emsg is handled separately

pal: you can remove part of the content and it's still relevant

cpn: right
... [Recommendations]
... allow web applications to subscribe to event streams by event type
... there was discussion on the type of event
... maybe some concern
... something we can discuss
... also allow web applications to create timed event / timed metadata cues
... including start time, end time and data payload

iga: in the morning, we had some discussion on in-band messages
... wondering if the current W3C standards support them
... could the scope be in-band events only?

cpn: there are some implementations
... e.g., for HbbTV
... exposing MPD events
... W3C specs don't say anything about the type of events
... next, actual triggering:
... when cues are parsed from the media container by the UA
... when the current playback position reaches the cue start/end on the media timeline
... allow cues with unknown end time
... and finally
... improving synchronization (within 20 msec on the media timeline)

ds: does this cover seeking?
... the duration of events needs to be understood
... what would happen on a jump?
... it's very hard to handle spike events

cpn: there are some use cases of that kind for DASH
... you're absolutely right

iga: a requirement might be being able to detect that kind of delay
... applications would know the difference between the specified timing and the actual firing time
... we need to improve the timing (if possible)
... but should identify the gap
... e.g., based on the timestamp

cpn: [Current status]
... the Task Force use cases and requirements are almost complete
... the WICG DataCue explainer is in progress
... the API spec is not started yet

yongjun: do we need to revise the DASH spec, etc.?

cpn: we need to discuss what kind of mechanism is needed first
... do we ask the UA to give structured data, etc.?
... there's a question of how the different formats should match the need

mv: the issue is how to map a particular data format
... and how to present it
... the reference in HTML5
... needs to be updated
... based on the newest MPEG spec

-> https://dev.w3.org/html5/html-sourcing-inband-tracks/

mv: maybe in another form

cpn: really interesting
... other things reference it
... the URL spec
... it's not really standardized
... you're definitely right that we need to handle it
... in a more standardized shape

mv: another question
... DataCue was implemented in WebKit before HTML5

-> https://www.w3.org/TR/media-frags/

mv: my concern is about syntax and semantics

cpn: I don't know the answer now
... a session by the Media WG will be held
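[A sketch of what the recommendations above could look like for application-created cues, using the HTML 5.1-era DataCue shape (startTime, endTime, ArrayBuffer payload). The real API surface is still being incubated in WICG and may well differ.]

```typescript
// DataCue is declared here because it is not in every browser's lib.dom.
declare class DataCue extends TextTrackCue {
  constructor(startTime: number, endTime: number, data: ArrayBuffer);
  readonly data: ArrayBuffer;
}

const video = document.querySelector("video")!;
const track = video.addTextTrack("metadata", "timed events");

// A timed metadata cue: start time, end time and data payload.
const payload = new TextEncoder().encode('{"type":"weather","icon":"rain"}');
const cue = new DataCue(30.0, 35.0, payload.buffer);

// Fired when the current playback position reaches the cue boundaries
// on the media timeline ("time marches on").
cue.onenter = () => console.log("cue start reached");
cue.onexit = () => console.log("cue end reached");
track.addCue(cue);
```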
mv: sounds like the right place

cpn: [MPEG Carriage of Web Resources in ISO-BMFF Containers]
... we saw the TAG advice
... since then people have been working on it at MPEG as ISO/IEC FDIS 23001-15

ds: it's probably public

cpn: this topic is welcome in the MEIG

ds: it would be good to have a workshop including the MEIG, ISO, etc.
... trying to get the users of the technology at the same time in the same place
... including security experts

pal: what is the use case?

ds: two things:
... carriage of web pages
... synchronization with media

pal: but what would be the actual business cases?

FYI, we have previously gotten permission from MPEG to host MPEG documents on the W3C member-only website. We could ask MPEG for permission to host the CMAF spec for this purpose.

pal: btw, the community draft is available

iga: the benefit of web resources embedded in MPEG?
... it could possibly reduce the cost for the web servers
... that could be beneficial

pal: the offline case is weird to me

iga: it's one of the use cases to be addressed
... there are some offline use cases
... for packaged delivery

cpn: [Browser support for DataCue]
... current support by browsers:
... Edge: HTML 5.1 DataCue (attribute ArrayBuffer data;)
... Chrome: no support
... Safari: supported
... Firefox: no support
... HbbTV: HTML 5.1 (8 Oct 2015 ED) DataCue with native handling of player-specific events
... [Player support for DASH and HLS events]
... Shaka Player: shaka.Player.EmsgEvent, no internal handling of manifest refresh events
... (some more examples)
... [Next steps]
... breakout session on Wednesday at 11:00am about "DataCue API and time marches on in HTML"
... raise issues against WHATWG HTML to propose changes to time marches on

mw: we may have people aware of the algorithm
... during the breakout

iga: is this in scope for the Media WG?

cpn: right
... the Media WG would take on standardization if the direction is correct
... we started a TF within the MEIG and are now looking at WICG

iga: any friction?

cpn: we're looking at the possible API design

iga: what I remember from the previous TPAC
... is that we were looking at WICG
... but what would be the direction now?

cpn: we don't have enough input
... we need more concrete feedback on the possible API
... it's in JavaScript at the moment
... it would be good to have more involvement
... also good to have more browser vendors
... we need to have a wider discussion
... if we propose issues, those should go to WHATWG
... increasing timing accuracy
... [References]
... (links to resources)

iga: WICG will have their meeting?

cpn: Thu/Fri
... we also have a breakout session ourselves

iga: it's good timing to have a discussion with them
... we should ask the other participants for opinions as well
... we need to get opinions from the MEIG folks

pal: when will our final report be available?
... is more input needed?
... does anybody have any specific objections?

iga: we have not specifically asked the MEIG for opinions
... the report itself is about requirements
... it's an IG Note, right?

cpn: yes

pal: the report says something is missing and to be added?
... shouldn't it say that explicitly?

cpn: solution design is to be done by WICG
... our TF could continue with editorial changes
... everybody, please join in

topic: CTA WAVE update

(we're delayed by 30 mins)

-> https://drive.google.com/file/d/1-mAhZe8s2TRDygCW1fPc-aJkvA1yCTH1/view?usp=sharing slides

jr: John Riviello from Comcast
... a quick update on CTA WAVE
... [The Web Application Video Ecosystem Project]
... aims, focuses, ...
... [Supporting a fragmented OTT world]
... fragmentation impacts content providers and device makers
... [Brief history]
... CEA initiated the GIVE project in 2015
... CEA became CTA in Nov. 2015
... [Steering Committee]
... technical WGs:
... CSTF for the content specification
... DPCTF for testable requirements
... HATF for the reference application framework
... [WAVE bridges media standards & web standards]
... [Current WAVE Membership]
... many members
... overlapping with W3C Members
... [What is the Common...]
... [WAVE Content Spec & Published CMAF Media Profiles]
... [Media Profile Approval]
... profiles are added
... typically updated once a year
... [WAVE Content Specification 2018 AMD 1 - Video Profiles]
... [WAVE Content Spec 2018 AMD 1 - Audio Profiles]
... [WAVE Programs and Live Linear...]
... [Anticipated WAVE Content Spec 2019 Updates]
... [Test Suite: Content Verification Tool]
... verification content
... shared with the DASH-IF conformance validator
... [CSTF - Specification Process]
... annual f2f meeting
... [Links]
... links to resources
... [HATF: HTML5 API...]
... [What We Do in the HATF]
... playback of audio-video media
... [HATF Work Plan]
... W3C Web Media API CG
... [HTML5 APIs: Reference Platform]
... one content format but multiple devices
... [HATF Specs]
... snapshots
... Web Media API Snapshot (WMAS)
... CTA and W3C co-publishing

iga: what do you mean?

jr: working on the same document

iga: not a WG but a CG?
... so it's not a "W3C Recommendation" but a "CG Report"

fd: fyi, there will be discussion about the W3C Process during this week

ab: as part of the plenary on Wednesday

jr: [Anticipated Web Media API 2019 Snapshot Updates]
... update to ECMAScript 7
... CSS Snapshot 2018
... [HATF Testing Framework]

FYI on referencing WAVE specs: ATSC references the WAVE WMAS as published by CTA, which is referencable.
The W3C version of the WMAS spec, like all CG specs, includes boilerplate language saying that it should not be referenced.

jr: [WMAS Testing Suite Updates]
... [Abstracted Device Playback Model]
... (skips some slides)
... [Spec Highlights and Outline Dec 2018]
... [Promises in Spec for 2019 and beyond]
... [Test Suite: RFPs]
... [Q&A]
... questions?

iga: what are you going to say about the "type 1 player"?
... any room for W3C standardization?
... if you have any specific requirements, the MEIG can discuss them
... btw, what is the "Content Model Format"?

cpn: a question around testing
... is that work related to web-platform-tests?

pal: should we put that on the agenda for the afternoon?

all: ok

topic: Review open issues

cpn: we use GitHub to manage issues
... most of the issues will be covered in the afternoon jointly with the other WGs
... but there's one specific issue here about frame-accurate synchronization and seeking

fd: [Related GitHub Issues]
... issues 4, 5 and 21
... the main issue is #4, frame-accurate seeking of the HTML5 MediaElement
... [Categories of Use cases]
... 2 different use cases:
... seeking and rendering
... [Main Seeking Use Cases]
... non-linear editing in a browser
... it can be cloud-based
... collaborative review
... evidence playback from cameras and video
... [Seeking Gaps]
... currentTime is not precise enough to identify individual frames
... there's also no way to seek to the next/previous frame in the generic case
... it's just a matter of time
... when is the next frame going to be?
... [Main Rendering Use Cases]
... dynamic content insertion (splicing)
... video overlays
... media playback synchronized with map animations
... synchronization between audio and timed text, e.g., karaoke
... synchronized playback across users/devices

iga: requirements for time seeking?

fd: these are rather rendering issues

pal: sample alignment and duration
... the current web platform doesn't allow frame-accurate timing

fd: [Rendering Gaps]
... currentTime is not precise enough to identify individual frames
... timestampOffset is also not precise enough to identify frame boundaries
... it's hard to track the media timeline frame by frame
... in any case there is no mechanism to handle frame accuracy
... also synchronization between video and audio
... if you look at global synchronization
... there's no way to tie the rendering of a video frame to the local wall clock

Following up on earlier question: It has always been the intention of WAVE to contribute back to W3C any new tests and also any changes to the W3C test runner. WAVE representatives met with the W3C test group at TPAC 2018.
There was an issue opened on April 2, 2019: https://github.com/web-platform-tests/wpt/issues/16214
There was a PR entered on June 12, 2019: https://github.com/web-platform-tests/rfcs/pull/23

[lunch till 1:30pm]

present+ Kaz_Ashimura, Tatsuya_Igarashi, Andreas_Tai, John_Riviello, Chris_Needham, Pierre_Lemieux, Gary_Katsevman, Scott_Low, Greg_Freedman, Mark_Watson, Eric_Siow, Sudeep_Divakaran, Li_Lin, Xu_Song, Youngsun_Ryu, Hiroshi_Fujisawa, Masaya_Ikeo

topic: Frame accuracy synchronization (contd)

fd: continuing the slides
... [Rendering Gaps that Remain]
... currentTime is not precise enough
... timestampOffset is not precise enough
... the three following requirements were deleted
... [Seeking Gaps that Remain]
... [Next Steps?]
... what do we want to do?
... follow up on the MTE recommendations around synchronization? who?
... write a UCR document on frame-accurate synchronization? who?
... feed the needs back into WHATWG and the Media WG? who?
... there are different possible groups to bring ideas to
... possibly the machine learning group?
... discussion with different people on Wednesday

cpn: production use cases?

pal: I've put together a presentation
... on what's happening with a lot of professional assets
... I have some demos as well

mw: the problem is probably how to identify an individual frame
... we could end up with overlaps

cpn: we need a rationale

iga: (asks newcomers to sign up on the attendees list)
... btw, this proposal includes two different points
... because the difficulty of realization is quite different
... depending on the performance of the browsers and the hardware

present+ MarkVickers

fd: maybe write two documents, or simply continue the discussion
... there are different use cases
... some of them might be out of scope

pal: sound and video synchronization is a use case
... it's not even possible currently
... there is no API for that purpose today

iga: that requirement is related to time seeking
... which is different from synchronization itself
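[The seeking gap above, made concrete. This is only an approximation under an assumed constant frame rate: nothing in the platform confirms which frame is actually displayed, which is exactly the gap being described.]

```typescript
// Common workaround: aim at the midpoint of the target frame's display
// interval, so that rounding inside the media pipeline is less likely to
// land on a neighbouring frame.
function seekToFrame(video: HTMLVideoElement, frame: number, fps: number): void {
  video.currentTime = (frame + 0.5) / fps;
}

// Only an estimate: currentTime is not specified to be frame-accurate,
// and variable-frame-rate content breaks the constant-fps assumption.
function currentFrame(video: HTMLVideoElement, fps: number): number {
  return Math.floor(video.currentTime * fps);
}
```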
pal: when you say "seeking", that's at the API level, right?

iga: currently there is no way for requirements to specify how quickly browsers should behave

pal: so it's largely implementation-dependent?

iga: yeah
... the currentTime issue and synchronization are different issues
... I wonder if any other W3C specs handle that kind of performance
... how fast browsers are expected to render the data
... we need to talk about performance

pal: sounds like you're interested in making a contribution :)

yongjun: we need some mechanism to handle frame jumping
... not only at the beginning
... if we care about one case, we may miss another case

iga: seamless ad insertion?
... accuracy of seeking is important for many use cases
... but we should distinguish time-seeking accuracy

pal: if there is a stream and another stream starts, in that case you need frame accuracy

iga: that's true

topic: Professional media workflows on the web

pal: a very timely topic
... a proposal by MovieLabs
... increasing popularity
... [Web applications are coming to professional media workflows]
... why?
... web applications have become mainstream
... web platform media capabilities are tantalizingly close
... professional audiovisual assets are moving to the cloud
... [Why move audio-visual assets to the cloud?]
... instead of using UPS
... assets are now available immediately on the cloud
... it's actually more secure
... and of course more efficient
... [Today]
... previsualization, visual effects, grading, editing, localization, mastering, quality check, archival, distribution
... [Tomorrow]
... all of them will be on the cloud (and can be accessed via web applications)
... [Demo]
... OWNZONES
... the content is already on the cloud
... there is an editor here
... (for audio and timed text)
... (going back to the presentation)
... [Some steps of the workflow remain out of reach of web applications]
... gaps exist in the web platform
... what's missing?
... that's it

cpn: we have many items, so I don't want to dive into the details

yongjun: as far as I know, people use MSE

iga: video editing using the browser
... requirements for rendering related to multiple video clips
... handling frames seamlessly

pal: we need volunteers

mw: not volunteering myself, but I support the use cases

scott: folks here might want to consider local content

iga: local content using the browser?

scott: not necessarily on the cloud
... how to handle frame accuracy on local devices

pal: help document the issues?
... take a first step
... somebody needs to take the lead
... this is listing the current issues

gary: interested

cpn: thank you

samira: we're gathering data
... we have a few ideas
... one of them is adding an attribute to video tags
... somebody from Google also proposed a media container approach
... my first question is
... whether you have any thoughts
... I will host a session on Wednesday

cpn: related to content representation
... quite a lot of variables
... we'll talk about captions later

andreas: where to standardize 360 video, etc.
... we have a presentation on that later in the afternoon
... also a session on Wednesday
... and possibly tomorrow as well

cpn: anybody aware of MPEG format updates?

ds: there's a whole bunch of work

-> https://mpeg.chiariglione.org/standards/mpeg-i/omnidirectional-media-format
-> https://mpeg.chiariglione.org/standards/mpeg-i
-> https://mpeg.chiariglione.org

andreas: TTWG has a liaison with MPEG
... but that's just one part of the scenarios
... in-band information
... it doesn't sort out the issues around out-of-band captioning

There are also accessibility requirements around how 360 is standardised.

andreas: possibly discuss that tomorrow?

samira: possible
... how many content producers and providers are here?
... what blocks you?

song: China Mobile

iga: VR content protection?

samira: it can be represented as VR
... the magic window scenario
... I just wanted to bring this discussion up

cpn: what's the natural home for this discussion?
... the first candidate is Timed Text

samira: I just wanted to share the ideas, since this is an IG

Josh: There are accessibility requirements if 360 is to be standardised, around an architecture that will support accessibility and multimodal requirements.

andreas: I would like to come back to this later in the afternoon
... where to do it
... it's really difficult to find the right place

cpn: it's related to accessibility

sudeep: I'm a chair of the Web & Networks IG
... we will have our meeting tomorrow
... please drop by
... we're interested in Media Timed Events as well
... network latency
... very happy to give input

cpn: interesting questions
... a very close relationship with this group
... having a Web interface
... WebRTC streams from multiple different sources
... it is stuff we've been implementing
... not necessarily synchronized with each other

sudeep: how should we bring this back?

cpn: GitHub issues
... we also have monthly IG calls
... with media-related topics

josh: there's a particular accessibility issue in sync with video streams

cpn: yeah

josh: a bunch of stuff
... can point to the resource on what I'm working on:

-> https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases

josh: it's related to this group
... different modality channels based on the user's preference, TTS, braille, etc.

(kaz remembers the MMI Architecture and SCXML :)

cpn: any other issues?

iga: local packaging?
... the Publishing groups are working on packaged media
... playback locally
... from local storage
... which might need a very high resolution of time

cpn: it seems we need another gap analysis

[Note the breakout session on Web Packaging planned on Wednesday: https://w3c.github.io/tpac-breakouts/sessions.html#wpack ]

topic: Bullet Chatting

song: Song Xu from China Mobile
... I would like to give a presentation about bullet chatting
... Michael from Dwango is here as well

-> https://w3c.github.io/danmaku/index_en.html proposal

scribenick: tidoust

song: It's an interactive tool for video broadcasting over the Internet. Use cases: see reviews of group users. Real-time interaction, engagement for the young generation, to show social presence.
... Implementation is difficult because you need to compute the positioning and animation of bullet chatting, rendered in DOM or Canvas and overlaid on top of the video.
... There is strong demand for this type of application, particularly in Asia.
... Standardization would improve UX and reduce the difficulty of implementation.
... We suggest defining a standard format for bullet curtains.
... We started an analysis to identify gaps. No specific API is introduced for the time being.
... Bullet chatting is basically floating text over the screen with four attributes:
... mode, basic properties, timeline, and container (typically the video)
... [going through the Bullet Chatting Proposal document]
... During streaming, there are two main ways to present: a chat room or bullet chatting.
... Advantages of the bullet chatting display are that there is a wider display area and it does not require users to move their eyes.
... The movement from right to left allows users to read content quickly (and again without moving their eyes).
... Sometimes, it's not only about comments; it can be text to improve the feeling of horror videos, for instance.
... It's also used to share messages in stadiums on a big wall.

Michael: I'm from Dwango. Use cases and requirements from our current service, Niconico.
... Niconico is a streaming Web site launched in 2006. Since its inception, its unique feature has been its comment system.
... [showing a demo]
... it allows creating a user experience.

pal: Who specifies at what vertical position the bullet curtain appears?
... Do you foresee that to be done on the client side?

song: No, it's done on the server side.

pal: So the format has all the positioning information.
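[A sketch of the four attributes just listed; all field names are illustrative, not taken from the proposal document.]

```typescript
type BulletMode = "scroll" | "top" | "bottom"; // right-to-left, or pinned rows

interface BulletComment {
  mode: BulletMode;
  text: string;
  color: string;       // basic properties: color, font size, ...
  fontSizePx: number;
  time: number;        // point on the media timeline (seconds) when it appears
}

interface BulletContainer {
  width: number;       // typically the video viewport, in CSS pixels
  height: number;
  comments: BulletComment[];
}
```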
Michael: In the current implementation, clients do the rendering, and they all have the same algorithm, so it's deterministic.

present+ Mamoru_Takagi, Daiki_Matsui, Hiroaki_Shimano, Shinya_Abe, Yuki_Yamakami, Masayoshi_Onishi, Yongjun_Wu, David_Fazio, Florent_Castelli, Nonoka_Jinushi

pal: If things were standardized at W3C, would the positioning be imposed by the server?

Michael: Currently, we'd like the client to have the ability to position the comments.

pal: So the client receives the comments and decides where to lay them out.

igarashi: You want to let the browser do the whole rendering?

Michael: No, the Web application.
... The goal of standardization is to have a shared format for bullet curtains, because many providers have a similar comment system (Niconico, Bilibili, etc.)

song: The first step is to define an interoperability format. If there is a way to involve the browser vendors, then great, that's a second step.

markw: Browsers would want to know why something cannot be done in JS.

dsinger: And you could possibly do it with WebVTT / TTML.

song: For advanced features, there are things that TTML does not address. Happy to talk with the TTML folks though.

Michael: We're at the use cases and requirements level for now. Possible solutions are still at a very early stage.
... The bullet curtain allows creating feelings such as sharing content with friends.
... Comments can be used to improve the video with artwork, or even to flood the video with comments.
... Comments have become an important part of Niconico's culture.
... They are part of the on-demand and live-streaming services of Niconico.
... Comments move right to left across the screen at set times, based on the media timeline.

cpn: If I pause the video, do the comments pause?

Michael: Yes.
... Comments are clipped to the edge of the player (or to an arbitrary region).
... When the video loads, comments are loaded from the server and rendered.
... If a user submits a comment, it appears immediately to the user, and gets shared with other viewers.
... Seeking to the same time in the same video will have the same comment appear at the same time and at the same position.
... It's as if the comments were part of the video; in particular, comments scale with the video.
... Comments can be interactive (e.g. context menu)

markw: There's a layout problem (HTML is good at it) and an animation problem (Web Animations), but the thing is that Web Animations ties animations to the wall clock, whereas here animation is tied to the media clock.
... That may be a useful gap to identify.

cpn: This came up earlier during Francois' presentation: tying non-media content rendering to the media timeline.

igarashi: Some requirements are about positioning the subtitles.
... The client decides arbitrarily where to position the comments.

Michael: Yes.

igarashi: The content provider does not care about the positioning of subtitles.

sangwhan: Aside from the Web, do you also want to handle support for native players?
... That would change perspectives.

Michael: We do have native apps, so we'd be interested in a solution that covers that space too.
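[markw's gap, made concrete: a scrolling comment's position must be a pure function of video.currentTime, so that pause, seek and rate changes all behave consistently, which wall-clock-driven Web Animations cannot express today. A minimal sketch:]

```typescript
function driveComment(
  el: HTMLElement,
  video: HTMLVideoElement,
  startTime: number,    // media time at which the comment enters on the right
  durationSec: number,  // media time it takes to cross the container
  containerWidth: number
): void {
  function frame(): void {
    const progress = (video.currentTime - startTime) / durationSec;
    if (progress >= 0 && progress <= 1) {
      // Deterministic: the same currentTime always yields the same position,
      // so every viewer sees the comment in the same place after a seek.
      const x = containerWidth - progress * (containerWidth + el.offsetWidth);
      el.style.transform = `translateX(${x}px)`;
    }
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```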
sangwhan: Per Mark's idea, if it's tied to the animation timeline in browsers, you're restricting yourself to the Web environment.

kaz: When I talked to Koizuka-san from Niconico, he mentioned an extension mechanism named "Niko-script", which has the capability of specifying the style and position of captions, so that capability could also be considered at some point, maybe not now, though.

I'm not staying connected for the joint meetings. Have a good TPAC all! -mav

topic: Joint meeting with Second Screen WG/CG

cpn: The Second Screen WG/CG has made a lot of progress on the Open Screen Protocol for discovering, authenticating and controlling remote displays on the local network.

present+ Anssi_Kostiainen

mfoltzgoogle: I work for Google. I've been involved in Second Screen since 2015. Second screen for the Web is the way we want to enable Web applications to take advantage of connected displays/speakers and render different types of content.
... Content can be a full Web page or specific media.
... The Presentation API enables a web page, called the controller, to request display of a URL on a remote display on the LAN.
... An example is a photo app that displays the loaded picture on a large display. You can play media, do gaming, use collaboration tools. It's pretty agnostic, but our experience shows that it's mainly used for media playback.
... The Remote Playback API allows a web page on which there is a media element to remote the playback of that media element to a second screen, either through media flinging, where the URL to play gets sent to the remote device, or media remoting, where the media gets streamed to the second screen.
... Both APIs are in Chrome.
... The APIs were designed to take advantage of proprietary protocols. To get broad adoption, we decided to develop an open set of protocols so that implementers could all support the APIs in an interoperable way.
... We hope to converge at the end of the Second Screen F2F meeting this week on v1.0 of the Open Screen Protocol.
... One use case for the future: enabling Web applications to generate their own media and present it to a connected display, e.g. for gaming.
... The Open Screen Protocol supports all sorts of use cases that we hope to expose to Web applications in the future.

Yongsun: Is QUIC supported in smart TVs? UDP is not supported on some TVs.

sangwhan: UDP is supported at the kernel level.
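[For reference, the two APIs just described, in minimal form. Both interfaces are real and ship in Chrome; the receiver URL and message payloads are placeholders.]

```typescript
// Presentation API: the controller page requests a remote display and gets
// a bidirectional message channel to the presented page.
const request = new PresentationRequest(["https://example.com/receiver.html"]);
request.start().then((connection) => {
  connection.onmessage = (e) => console.log("from receiver:", e.data);
  connection.send(JSON.stringify({ cmd: "show", photo: "cat.jpg" }));
});

// Remote Playback API: remote an existing media element instead; the UA
// picks flinging or remoting depending on the target device.
const video = document.querySelector("video")!;
video.remote.prompt().then(() => {
  // Playback continues on the selected second screen.
});
```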
remote+ Yajun_Chen

mfoltzgoogle: In our library implementation, we expose UDP, but that's pretty much the same thing as what you get at the system level.

cpn: One of the questions that came up in our previous F2F meeting is around synchronization, e.g. the ability to provide audio description on the user's device while they are sharing a media element on a second screen.

present+ Takio_Yamaoka, Akihiko_Koizuka, Keiichi_Suzuki, Michael_Li, Takahiro_Kumekawa, Chris_Cunningham, Taki_Kamiya, Jeff_Jaffe, David_Singer, Glenn_Adams

cpn: Within that, there is the question of how close the synchronization needs to be.
... We worked on close synchronization between the main screen and a companion device in HbbTV.

mfoltzgoogle: Does the HbbTV specification rely on clocks?

cpn: Yes, clock synchronization, and then the devices can make adjustments to playback to stay in sync.

mfoltzgoogle: We need a mechanism for the two sides to agree on a wall clock for presentation.
... If HbbTV covers all of that, we can have a look for OSP.

cpn: Yes, it does.

present+ Hyojin_Song, Francois_Daoust, Ken_Komatsu, Toshiya_Nakakura, Jonathan_Devlin, Amit_Hilbuch, Steve_Anton, Sebastian_Kaebisch, Daniel_Peintner

-> https://github.com/webscreens/openscreenprotocol/issues/195 Open Screen Protocol issue: Requirements for multi-device timing while streaming

cpn: Some implementers have found it difficult to achieve that level of synchronization. It's not so widely implemented for now.
... I can provide information on how that has been done.

mfoltzgoogle: Collaboration between the protocol and the application levels.

cpn: And also something that exposes the pipeline delays.

mfoltzgoogle: One of the things that seems very important is the establishment of secure communication between devices, which could have broader implications, such as connected home scenarios.
... It could be a good foundation for that. Part of the OSP focus has been on authenticating devices, currently based on SPAKE2.
... We're not currently focused on enabling one piece of software to find out attributes of another, for instance who manufactured it or what it does.

-> https://datatracker.ietf.org/doc/draft-irtf-cfrg-spake2/ SPAKE2

mfoltzgoogle: You could take the chapter on authentication and use it elsewhere.
... We did anticipate that there may be other use cases than the ones we foresee, so we have landed an extensibility mechanism.

sangwhan: Is there a registry for these capabilities?

mfoltzgoogle: Yes, it's on GitHub.
... You can be a presentation controller or receiver, send or receive media; that's all negotiable in the OSP.

cpn: I suspect remote playback of encrypted content is a use case shared by different members here.

mfoltzgoogle: The API is pretty much agnostic. At the protocol level, we haven't tried to add support for messages to exchange to support encrypted media.
... That seems more to be a use case for the Presentation API, where the application can create and exchange application-specific message commands.
... Remote playback of encrypted media is closely tied to credentials, and that's application level.
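[On the wall-clock agreement mentioned above: a common approach, similar in spirit to HbbTV's companion synchronization, is an NTP-style exchange that estimates the offset between the two devices' clocks. The transport and message shape here are assumptions; any bidirectional channel (e.g. an Open Screen Protocol stream) would do.]

```typescript
interface ClockReply { serverTime: number } // peer's clock at reply time (ms)

async function estimateOffset(ask: () => Promise<ClockReply>): Promise<number> {
  const t0 = performance.now();
  const reply = await ask();
  const t1 = performance.now();
  // Assume symmetric network delay: the peer read its clock ~RTT/2 after t0.
  return reply.serverTime - (t0 + (t1 - t0) / 2);
}

// Both sides can then schedule "media time M at shared time T" and adjust
// their playback rate slightly to stay within the agreed tolerance.
```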
markw: The thing that you don't have here is the streaming model where the controlling device has the decryption key and wants to stream the content to the receiver device.
... What happens to the media stream when it reaches the receiver? Does it go to a media element or through JS processing?

Peter: The receiver is handling the decoding.

cpn: Is there an IG recommendation that we'd want to make?

markw: The most likely model for us for doing this would be to have a receiving web application that handles the user's credentials.

cpn: That would make the sync issue interesting, because it is then at the application level.
... One of the issues we have with Remote Playback is that we want to provide a custom UI, which means that we'd rather use the Presentation API for that.
... Didn't we discuss having a media element through the Presentation API that gets automatically synchronized with local content?

mfoltzgoogle: I believe that's correct. I don't recall the status of it. It came up in May 2018, I think.

-> https://www.w3.org/wiki/Second_Screen/Meetings/May_2019_F2F Second Screen May 2019 F2F

mfoltzgoogle: I think we probably agreed that it should be possible. It probably requires a few tweaks to the protocol so that it knows that the remoting is part of a shared presentation.
... We discussed whether everything could be done in script. Same recommendation for synchronization. What you might be missing is the latency of the media rendering pipeline.

cpn: I have seen implementations that manage to do synchronized playback across devices through a timing server.

igarashi: I don't follow the discussion on encrypted media. You are not going to define how keys are exchanged in the protocol?

mfoltzgoogle: Someone with more experience with EME might be able to shed some light on what would be required.
... One reason we designed an extension system is that people interested in new features can propose them, prototype implementations, and then we can incorporate them into the spec if all goes fine. We don't have the expertise in the group.
... We're not defining the path for encrypted media from one device to another. It might work if both devices support HDCP.
... I think there is an open issue in our GitHub about remote playback and encrypted media.

igarashi: Is arbitrary application message passing supported?

mfoltzgoogle: Yes.
... In the spec, you'll see bindings between the API and the messages exchanged in the protocol.
... For instance, video.remote.prompt() requires exchanging messages between devices.

markw: Could the protocol work on TCP?
07:17:25 glenn has joined #me
07:17:40 igarashi: [question on security during remote playback]
07:18:22 mfoltzgoogle: The Remote Playback API does not require the receiver to be a user agent in the usual sense, but it does require the receiver to support media playback as in the HTML spec.
07:19:14 markw: The Presentation API requires the receiver to be able to render the URL, but the URL could be a non-HTTP URL; custom schemes may be supported instead.
07:20:10 mfoltzgoogle: The spec defines processing of HTTPS URLs; the rest is undefined.
07:20:30 Open Screen Protocol https://github.com/webscreens/openscreenprotocol/
07:20:30 ... We have a writeup of how the protocol interacts with custom schemes in the GitHub repo.
07:21:07 cpn: That has been one of the extension mechanisms that we've been interested in for opening a Web page that has broadcast capability in HbbTV (perhaps Hybridcast has similar needs).
07:22:25 Custom Schemes and Open Screen Protocol https://github.com/webscreens/openscreenprotocol/blob/gh-pages/schemes.md
07:22:40 [discussion on second screen support in Hybridcast]
07:24:05 mfoltzgoogle: Regarding authentication, we looked at J-PAKE and request/response challenges, but we had memory concerns there, so we switched to SPAKE2 following internal discussion with security experts at Google.
07:24:20 Peter: The protocol allows for more authentication mechanisms in the future.
07:24:31 ... Devices can support their own mechanisms.
07:25:17 igarashi: I co-chair the HTTPS in Local Network CG, which meets on Thursday morning. We haven't reached the discussion on authentication yet. It would be good to align with the Open Screen Protocol.
07:25:29 sangwhan: Is there a prototype?
07:26:28 mfoltzgoogle: We recently decided to add streaming to the OSP, which complicated things. We have a first implementation of Presentation API commands. No crypto yet, because we've kept changing that.
07:27:00 ... The library is coming. It implements the protocol. It does not do media rendering, it does not have JS bindings, etc.
07:27:52 Open Screen Library implementation https://chromium.googlesource.com/openscreen/
07:28:01 igarashi: If we want to apply the OSP to the broadcast protocol, we need to consider the case where the remote device is not a browser. For instance, channel change is done by the system, not the application.
07:28:57 mfoltzgoogle: Capabilities like channel tuning are not in the OSP. If you think that the communication channel needs to be terminated on channel change, that can be added.
07:29:40 igarashi: In the case that some arbitrary message protocol is still necessary, you'd use the Presentation API, but the receiver may not be a browser agent.
07:29:50 mfoltzgoogle: Seems like something for an extension.
07:30:03 cpn: OK, thank you for the discussion.
07:30:37 mfoltzgoogle: Mostly, we want input on use cases that we haven't considered yet. We'd love to get feedback on the extension mechanism as well.
07:30:41 pal: Thank you.
07:31:35 Topic: Joint meeting with Timed Text WG
07:32:19 andreas: We could start with 360 standardization.
07:32:42 nigel: In TTWG, we're in the final stages of rechartering.
07:33:01 ... There are some things that we're considering, such as karaoke.
07:34:34 https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases
07:34:41 ... Quick agenda bashing: any topic you'd like to cover?
07:34:54 Josh: Accessibility use cases? See the accessible RTC use cases document.
07:35:04 cpn: TTML and MSE?
07:35:48 nigel: Yes, opinions about exposing TextTracks from MSE.
07:37:09 Josh: [apologises] for throwing a curve ball to Nigel; I'm here for the XR bit, but think this doc may still be useful as an FYI.
07:37:19 angel has joined #me
07:37:31 rrsagent, draft minutes
07:37:31 I have made the request to generate https://www.w3.org/2019/09/16-me-minutes.html angel
07:38:03 andreas: Let's focus today's discussion on standardization of 360 subtitles. Most of this comes from an EU research project.
07:39:01 ... To make it short, there have been extensive user tests. For captions, the main requirement is to have subtitles that are always in the field of view. It's enough to have them on a 2D plane; there is no need to position them in 3D.
07:39:22 ... There should be some indication of where the audio source is positioned.
07:39:38 samira has joined #me
07:39:48 ... Of course, you also need features present in TTML, the TTML IMSC profile being a good example.
07:41:00 ... [demo of an application to test subtitle positioning]
07:42:55 ... Lots of activity starting last year at TPAC. We started with a discussion in the Immersive Web CG, then discussion within the TTWG and the Media & Entertainment IG.
07:43:16 ... In the end, we realized we needed more people from immersive and browser vendors.
07:43:30 ... We wrote a proposal to be discussed in the WICG.
07:43:47 ... There has been no comment on the WICG forum yet, so the question is how do we proceed?
07:44:45 ... Two additional activities are worth noting: a colleague from Google proposed the creation of an Immersive Caption Community Group, and there is an XR accessibility W3C workshop in November.
07:44:58 ... There is awareness that something needs to be done.
07:45:12 ... It is hard to get enough resources to get started, though.
07:45:31 ... How do we get time and resources from implementors?
07:45:33 Inclusive Design for Immersive Web Standards, W3C Workshop, Seattle, Nov 5-6
07:45:35 https://www.w3.org/2019/08/inclusive-xr-workshop/
07:45:47 ... Everything is evolving, nothing is really fixed.
07:45:48 q+ to ask what is the state of documentation of the requirements right now
07:45:56 ... Is it really a web platform topic?
07:46:18 ... It is important to know when to stop if there is not enough interest.
07:46:28 glenn has joined #me
07:46:43 ... Apart from which group should deal with it, the question is also where does this solution fit?
07:46:46 yajun_ch_ has joined #me
07:47:37 ... Authoring environments (Unity, Unreal), Web applications, the WebXR API (linked to OpenXR), and 360 / XR devices.
07:48:25 ... How to follow up? I thought WICG would be the right place, but with so little participation there, there is still the question of whether that's the right place. Not sure about the Immersive Caption CG, since it does not exist yet.
07:48:37 angel_ has joined #me
07:48:38 ... TTWG is the right group, but we need more expertise from the XR world.
07:49:00 ... Another solution is to continue the work in a "private" repository.
07:49:02 ack nigel
07:49:03 nigel, you wanted to ask what is the state of documentation of the requirements right now
07:49:11 q+
07:49:20 nigel: What is the state of documentation in terms of the requirements?
07:49:21 q+
07:49:22 q+
07:49:40 ... Describing positioning in 3D space: can I do it with audio?
07:50:38 Takio has joined #me
07:50:39 andreas: There are documented user tests, as part of a European project deliverable.
07:51:33 nigel: I was thinking about requirements documentation: what is the problem that you're trying to solve, what are the user needs?
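[To make the two requirements above concrete (captions on a 2D plane locked to the field of view, plus a hint of where the audio source is), a hedged sketch follows. The vector helpers, threshold, and function names are illustrative assumptions, not taken from the project's proposal.]

  type Vec3 = { x: number; y: number; z: number };

  const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

  // Each frame: keep the caption quad centred a fixed distance along the
  // camera's forward vector, so it always stays in the field of view.
  function placeCaption(cameraPos: Vec3, cameraForward: Vec3, distance = 2): Vec3 {
    return {
      x: cameraPos.x + cameraForward.x * distance,
      y: cameraPos.y + cameraForward.y * distance,
      z: cameraPos.z + cameraForward.z * distance,
    };
  }

  // Indicate where the audio source is relative to the viewer: a left/right
  // cue when off to the side, nothing when roughly ahead.
  // Assumes all vectors are unit length; cos(45 degrees) is roughly 0.707.
  function audioSourceHint(cameraForward: Vec3, cameraRight: Vec3, toSource: Vec3): "left" | "right" | "ahead" {
    if (dot(cameraForward, toSource) > 0.707) return "ahead";
    return dot(cameraRight, toSource) > 0 ? "right" : "left";
  }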
07:52:16 ack samira
07:52:38 q+
07:52:38 samira: Who was the person who started the Immersive Caption Community Group?
07:52:54 andreas: Christopher Patnoe at Google.
07:53:13 samira: OK. Another comment is that WebXR is becoming more stable.
07:53:43 andreas: Yes, the question for me is where this should go.
07:54:21 ... The WebXR API does not know anything about what's inside the WebGL right now.
07:54:38 cpn: Is all that's needed a delivery format, so that some library can place the captions in the immersive environment?
07:55:28 igarashi: Do we need to extend APIs in the browser to support this?
07:55:44 -q
07:56:03 Manishearth has joined #me
07:56:04 andreas: OMAF defines a way to multiplex IMSC subtitles with MP4, but then it's all bound to that content format. I'm not sure it's sufficient for interoperability scenarios.
07:56:16 Manishearth has left #me
07:56:17 +q
07:56:42 ack kaz
07:57:26 kaz: I'm wondering about the possible relationship with WebVMT (because 360 video could be mapped with some kind of map image).
07:57:54 Francois: WebVMT is about tracks positioned on a map, not in 360 videos.
07:58:12 cpn__ has joined #me
07:59:00 ack me
07:59:02 q+
07:59:03 suzuki has joined #me
07:59:22 andreas: It would be an option to have a subtitle format, but burning captions into a frame does not provide a good user experience.
07:59:55 josh: Looking at things from an accessibility perspective, APA would seem a good group to talk to.
08:00:05 andreas: We talked a lot with Judy, Janina and so on.
08:00:10 https://www.w3.org/WAI/APA/wiki/Xaur_draft
08:00:19 q?
08:00:27 josh: We created a list of requirements for XR in APA.
08:01:15 [from IRC] The Immersive Web group is also discussing DOM overlays, so this is another option for subtitles.
08:01:51 pal: How many people in this group are doing 360 videos and XR content?
08:02:08 ... One possibility is that this group is not the best group to get feedback from.
08:02:20 andreas: I don't know, that's what all groups say ;)
08:02:34 yy has joined #me
08:02:41 ... We need a critical mass to do it.
08:03:09 pal: People that build apps for Oculus, are they around?
08:03:34 andreas: I spoke to some of them. They always say that they don't provide subtitles.
08:03:52 ... There has been some discussion in Khronos with Unity and Epic.
08:04:51 ... I talked with the Immersive Web folks. We'll talk about that on Wednesday at 11:00 during Samira's breakout session.
08:05:03 q?
08:05:35 ... The issue is that there is not endless time to deal with this. The project is running out; it stops next year. Pushing a standard will take 2-3 more years.
08:06:36 ack igarashi
08:06:41 [from IRC] There is very little testing with people with disabilities in this space, so this is very interesting.
08:06:59 igarashi: From a content production perspective, I'm interested in a format, but I'm not sure about browser support for this.
08:07:38 ack tidoust
08:08:05 https://github.com/immersive-web/dom-overlays
08:08:28 q+ to wonder what the smallest thing is that we need to standardise first - is it a syntax for expressing a 3D location?
08:09:28 Francois: It's not clear to me what you want standardized. DOM overlays could be one building block.
08:10:18 andreas: Yes, DOM overlays may be a better way forward for rendering captions than burning things into WebGL.
08:10:23 ack nigel
08:10:23 nigel, you wanted to wonder what the smallest thing is that we need to standardise first - is it a syntax for expressing a 3D location?
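[Following the dom-overlays link above, a hedged sketch of how a DOM overlay could carry captions instead of burning them into the WebGL frame. The feature was an incubation at the time of this meeting, so the exact API shape may differ; the element id is a hypothetical, and WebXR type declarations are assumed.]

  const captionRoot = document.getElementById("captions")!; // hypothetical caption container

  async function startSessionWithCaptions(): Promise<void> {
    // "dom-overlay" and the domOverlay option follow the immersive-web
    // dom-overlays explainer; treat this as a sketch of the proposal,
    // not a settled API.
    const session = await navigator.xr.requestSession("immersive-ar", {
      optionalFeatures: ["dom-overlay"],
      domOverlay: { root: captionRoot },
    });

    // Captions are then ordinary DOM text: styleable with CSS and composited
    // by the browser on top of the immersive view.
    captionRoot.textContent = "Example caption";
  }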
08:10:50 [from IRC] +1 to Nigel
08:11:09 nigel: Same point. Do we have agreement that it's about a syntax for expressing a 3D location?
08:11:37 andreas: Actually, that's not what we need, since we want the subtitles to appear on a 2D plane; that is what the users want.
08:12:02 ... We need a way to indicate where in the 3D space the audio source is coming from.
08:12:06 q+
08:12:31 Gary: So you need some positioning in 3D to make that possible.
08:12:54 andreas: Defining a good container is another issue.
08:13:06 ack me
08:13:21 q+
08:13:34 Josh: In the user requirements document I showed you, we took a modular approach.
08:13:48 ... This architecture does not exist yet.
08:14:08 https://www.w3.org/WAI/APA/wiki/Media_in_XR
08:14:23 ... We're also looking at media requirements in XR. Not vetted by the APA WG yet.
08:15:09 andreas: There is lots of 360 content for the time being, and a lot of it is without captioning.
08:16:02 Gary: A WebVTT update: I joined TTWG half a year ago and am trying to get WebVTT to progress. One of the big things is that an implementation report now exists.
08:16:14 ... There are something like 6-7 issues with it.
08:16:15 Link to 360 subtitle requirement https://github.com/immersive-web/proposals/issues/40
08:16:55 ... Basically, we're looking at features implemented in browsers and in VLC, then identifying features at risk, and possibly removing them to get a V1 out.
08:17:24 ... Then we hope to convince browser vendors to implement the features that we may remove.
08:17:36 yajun_chen has joined #me
08:17:40 -> https://www.w3.org/wiki/TimedText/WebVTT_Implementation_Report WebVTT Implementation Report
08:17:54 q?
08:18:05 ack glenn
08:18:38 glenn: Is there any SMPTE spec that includes 3D positions of audio sources?
08:18:44 nigel: That's a good question.
08:19:26 ... One of the things we're doing around TTML2 is adding new functionality in extension modules. We're trying to constrain the core, and then provide the rest in extensions.
08:19:48 ... There are a few that are ongoing.
08:21:10 ... [details of extensions]
08:21:43 ... Right now, audio/video comes to MSE but not text.
08:22:16 markw: My personal position is that things should be symmetrical across media types.
08:22:37 ... At least in our application, we prefer to do the rendering of text tracks ourselves.
08:22:56 ... It would be advantageous if the browser were aware of text tracks.
08:23:13 nigel: You said my sentiment much better than I could.
08:23:40 Gregg: I would argue that we don't want to render them ourselves, but we still want to control the rendering with our styles.
08:24:05 markw: Yes, we want to have enough control of the rendering, but if we could offload the rendering to the browser, that would be great.
08:24:47 nigel: It's been hard to get statistics about user customization, or about people who play back content with captions.
08:25:10 markw: In terms of rendering, you would still want the site to control enabling/disabling.
08:25:34 +1
08:25:42 markw has joined #me
08:26:21 kumekawa has joined #me
08:26:21 Gary: We shouldn't try to do the same thing twice. If there's more support to do the new generic TextTrack thing, then that's good.
08:27:02 pal: Two different questions: is there any objection to enabling symmetry in MSE? And are you going to use it?
08:27:27 markw: The first question is whether people think that could be harmful.
08:28:11 takio__ has joined #me
08:28:29 nigel: OK, I just wanted to raise it to get feedback.
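[A purely hypothetical sketch of the symmetry being discussed: text segments appended through MSE just like audio/video. MSE did not define text SourceBuffers at the time of this meeting; the codec string for IMSC1 in fragmented MP4 ("stpp.ttml.im1t") is plausible but an assumption, as is the fetchSegment helper.]

  const video = document.querySelector("video")!;
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener("sourceopen", async () => {
    const videoBuf = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    // Hypothetical call: a text SourceBuffer, appended symmetrically with
    // audio/video, with rendering left to the UA's TextTrack machinery.
    const textBuf = mediaSource.addSourceBuffer('application/mp4; codecs="stpp.ttml.im1t"');

    videoBuf.appendBuffer(await fetchSegment("video-init.mp4"));
    textBuf.appendBuffer(await fetchSegment("captions-init.mp4"));
  });

  // Stand-in helper for fetching a media segment.
  async function fetchSegment(url: string): Promise<ArrayBuffer> {
    const res = await fetch(url);
    return res.arrayBuffer();
  }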
08:29:12 [No concerns expressed regarding the question on whether people think that could be harmful]
08:30:00 Josh: About accessibility in WebRTC use cases: there's the challenge of synchronizing some of these things together when switching to a different modality. That's one.
08:31:15 nigel: It would make sense to talk about live contribution to see where that fits. How do live contributions actually work, what's the mental model?
08:31:33 ... Alright, I think we covered all topics.
08:31:40 Topic: Closing and wrap-up
08:32:59 cpn: Thinking about Media Timed Events, some editorial work. Planned discussion on DataCue. Around bullet chatting, more conversation will happen this week.
08:33:25 ... Some possibility to go to the Timed Text WG.
08:33:53 nigel: It feels to me that this IG could be the best place to give guidance for that if there's no clarity in TTWG on Friday about it.
08:34:08 andreas: Can you explain again how you want to proceed?
08:34:22 ... The draft was published in the Chinese IG; what would the ideal next step be?
08:35:11 song: Initially, contributors were from China. Now that Niconico is engaged in discussions, work could go to TTWG, or perhaps to another group.
08:35:47 ... We want the use cases to be approved by the IG; afterwards, we'd like to push standardization work on identified gaps.
08:36:06 ... Within the next few weeks, we'll have a final version of the use cases.
08:36:39 andreas: OK, so this week would be a good opportunity to decide where this should go.
08:36:58 cpn: We had a lot of discussion around synchronization today. Frame-accurate rendering.
08:37:18 ... The ability to seek accurately within videos.
08:37:31 ... There is some interest in following up, although no one has volunteered.
08:37:59 ... The media production use case that Pierre presented would be a good perspective from which to address this.
08:38:18 pal: With an action on Gary to follow up with Garrett Singer on that.
08:40:05 cpn: On secure communication between devices, we heard interesting things from Hybridcast, HTTPS in Local Network, and Second Screen. An interesting set of approaches that could be compared.
08:40:23 ... Seems like a good fit for HTTPS in Local Network CG discussions.
08:41:49 ... Clearly the immersive captioning work is interesting, but I'm not sure what the next step in this group should be. Maybe the Immersive Captioning CG could be the right forum.
08:43:16 ... We talked about 360 videos. That's something that the IG could follow up on. We have a liaison with MPEG. Unless you feel that the immersive group would be a better home.
08:43:26 Samira: Possibly. At this point, I'm gathering input.
08:44:06 cpn: Finally, there's the timed text in MSE proposal. Would that sit in TTWG?
08:44:13 markw: It would be in scope for the Media WG.
08:44:22 cpn: Have I missed anything from the summary?
08:44:56 pal: One encouragement for you to clarify the scope of Media Timed Events.
08:45:27 cpn: And also possibly make more specific recommendations.
08:45:36 pal: I think it helps to have something concrete.
08:45:50 cpn: OK, I think that's everything. Thank you all for being here today!
08:45:58 RRSAgent, draft minutes
08:45:58 I have made the request to generate https://www.w3.org/2019/09/16-me-minutes.html tidoust
08:57:44 MasayaIkeo has joined #me
08:59:17 yajun_chen has joined #me
09:30:07 atai has joined #me
09:32:12 horiuchi has joined #me
09:49:50 horiuchi has joined #me
10:10:12 atai has joined #me
10:11:06 horiuchi has joined #me
10:41:18 horiuchi has joined #me
10:43:35 horiuchi has joined #me
11:16:25 Zakim has left #me
11:25:06 yajun_chen has joined #me
12:25:27 pal has joined #me
12:35:42 cpn has joined #me
12:50:26 cpn has joined #me
12:50:37 rrsagent, draft minutes
12:50:37 I have made the request to generate https://www.w3.org/2019/09/16-me-minutes.html cpn
12:50:44 rrsagent, make log public
13:44:17 atai has joined #me