13:56:59 RRSAgent has joined #me
13:57:04 logging to https://www.w3.org/2024/09/03-me-irc
13:57:04 Zakim has joined #me
13:59:37 ohmata has joined #me
14:00:09 tidoust has joined #me
14:00:30 present+ Chris_Needham, Thomas_Stockhammer, Bill_Rose, Louay_Bassbouss, Jon_Piesing, Hisayuki_Ohmata, Tatsuya_Igarashi
14:00:46 igarashi5 has joined #me
14:00:52 present+
14:00:53 rrsagent, make log public
14:01:06 rrsagent, draft minutes
14:01:07 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
14:01:14 present+ Francois_Daoust
14:01:23 present+ Ryo_Yasuoka, Hiroki_Endo, Piers_O'Hanlon, Alicia_Boya, Javier_Arellano
14:01:35 scribe+ cpn
14:01:35 chair: ChrisN, Igarashi
14:01:44 present+ Kaz_Ashimura
14:01:45 present+ Yan_Jiang
14:01:53 present+ Paul_Hearty
14:02:02 present+ Javier_Arellano
14:02:23 present+ Alicia
14:02:23 present+ Francois_Daoust
14:02:29 present+ Hiroki_Endo, Hisayuki_Ohmata
14:03:00 present+ Paul_Hearty, Piers_O'Hanlon
14:03:17 present+ Ryo_Yasuoka
14:03:17 louay has joined #me
14:03:23 rrsagent, draft minutes
14:03:24 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
14:03:51 alicia has joined #me
14:04:02 meeting: Media and Entertainment IG + CTA WAVE Joint Meeting
14:04:22 jpiesing has joined #me
14:04:28 present+ Wolfgang_Schildbach
14:04:41 present+ William_Rose
14:04:43 hiroki_endo has joined #me
14:04:50 Topic: Introduction
14:05:07 Chris: Welcome. Collaboration with CTA WAVE and W3C
14:05:08 present+ John_Riviello
14:05:29 present+ Fritz_Heiden
14:05:51 present+ Alexandra_Blasgen
14:05:54 Topic: CTA WAVE Streaming Media Test Suite - Devices
14:06:04 rrsagent, draft minutes
14:06:05 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
14:06:26 Louay: Jon Piesing and I will present. Jon will give the first part, then I'll go into technical details
14:06:33 agenda: https://lists.w3.org/Archives/Public/public-web-and-tv/2024Aug/0008.html
14:06:35 ... We have a demo I'll show
14:07:21 Jon: CTA is the Consumer Technology Association. ANSI-approved standards and trade association in the US market
14:07:46 present+ Ilja_Gavrylov
14:07:48 ... WAVE is Web Application Video Ecosystem, video on CE devices
14:08:15 ... Here's a list of specs that WAVE has delivered. Point specifications that solve real-world pain points
14:08:24 present+ Peter_Shorrock
14:08:30 rrsagent, draft minutes
14:08:31 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
14:08:38 ... Device Playback Capabilities spec, CMSD, CMCD, Web Media API Snapshot, and the Content spec
14:08:49 ... We have test suites and tools, developed for CTA by Fraunhofer
14:09:09 Chris has joined #me
14:09:11 ... WAVE has been a contributor of updates to the DASH validator, with DASH-IF, DVB, HbbTV
14:09:38 ... The Device Playback Capabilities test suite exercises the Device Playback Capabilities spec
14:09:46 ... How you'd deploy MSE and EME in a real-world situation
14:10:03 ... The majority is in sections 8 and 9, you can see the points that are covered
14:10:15 ... It's a slightly old version of the document, some things have been added
14:10:33 ... Playback of fragments, switching sets, and more. It's all CMAF
14:10:39 present+ Chris_Lorenzo
14:10:45 ... We have audio and video media profiles
14:11:00 chair+ ChrisL
14:11:00 rrsagent, draft minutes
14:11:01 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
14:11:27 ... We went back and forth on subtitles. But the reality is they're handled in JS, not browsers. We focused on things like timing, interested if we missed anything there
14:11:41 ... [slide 5]
14:12:14 Louay: Many partners have contributed to the project. This slide shows the high-level architecture of the test suite
14:12:22 ... We'll share slides, with links to the GitHub repos
14:12:30 ... And give some hints on how to get started
14:12:53 ... Start with mezzanine content. Annotated audio and video content. CMAF content based on the mezzanine content
14:13:06 ... We have a test runner based on the WPT test runner. HTML and JS templates
14:13:13 ... And the observation framework
14:13:29 ... [slide 6]
14:13:54 ... Mezzanine content. It's annotated video content in many resolutions
14:14:17 ... It gives you the opportunity to do automatic observation. So no need for a human to do that
14:14:44 ... Annotated in the video is a QR code, with some information about the current frame: resolution, frame rate, timestamp. Human-readable text for debugging
14:15:07 ... A flashing square for A/V sync. The red triangle is for checking all the content is visible on screen
14:15:14 ... Audio content based on pseudo-random noise
14:15:27 ... [slide 7]
14:15:45 ... The CMAF test content is generated from the mezzanine content. Metadata in the form of a DASH MPD
14:16:11 ... Groups of CMAF streams, content option combinations, options for debugging, resolutions for testing CMAF switching sets. Also tests for splicing and encryption
14:16:26 ... We have a sparse matrix of test content and options
14:16:45 ... We have validation of the content as valid CMAF using the DASH-IF validator
14:17:03 ... [slide 8]
14:17:21 ... After creating the content, we need to create test instructions. This is done in HTML and JS
14:17:35 ... Sequential track playing, random access to times
14:17:58 ... There's a template, using MSE and EME. Implements instructions for the DPC spec
14:18:30 ... We're not using open source players like dash.js, just a basic MSE player. We don't want additional features not relevant for doing the tests
14:18:46 ... So we can see the test really implements what's in the DPC spec
14:19:01 ... For sequential track playback, make sure the content plays from beginning to end
14:19:21 ... How to do that? In a JS implementation, you can rely on events triggered by the video element
14:19:57 ... This isn't enough. If there are issues with integration of the browser in a device, the media element tells you the video is playing, but you see skipped frames or just a black screen
14:20:15 ... Things you can't discover from a JS API, so you need external observation
14:20:48 ... An important aspect: while we're playing the content we show information on top of the content, showing the mezzanine annotations, and QR codes from the JS app
14:21:20 ... So when recording, and in the observation framework, you can see differences between what's shown on the screen and what's reported in JS
14:21:38 ... Check whether it follows the DPC or not, and if the test assertion passes or fails
14:21:39 i|After creating the|-> https://github.com/cta-wave/dpctf-tests cta-wave/dpctf-tests|
14:21:48 rrsagent, draft minutes
14:21:50 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
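[Illustration: a minimal sketch of the sequential-track-playback approach described above — a bare MSE player that logs media element events for later comparison with the external observation. This is not the actual dpctf-tests template; the codec string and segment URLs are placeholder assumptions.]

```typescript
// Minimal sketch, not the actual dpctf-tests template: append CMAF segments
// sequentially with MSE and record media element events, so the JS-side log
// can later be compared against the camera-based observation.
const video = document.querySelector('video') as HTMLVideoElement;
const mimeType = 'video/mp4; codecs="avc1.640028"';             // assumed codec string
const segmentUrls = ['init.mp4', 'chunk-1.m4s', 'chunk-2.m4s']; // placeholder URLs

// Event log that the test page would report back to the test runner.
const observedEvents: { type: string; mediaTime: number; wallClock: number }[] = [];
for (const type of ['playing', 'waiting', 'ended', 'error']) {
  video.addEventListener(type, () =>
    observedEvents.push({ type, mediaTime: video.currentTime, wallClock: performance.now() })
  );
}

async function playSequentially(): Promise<void> {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  await new Promise(r => mediaSource.addEventListener('sourceopen', r, { once: true }));

  const sourceBuffer = mediaSource.addSourceBuffer(mimeType);
  for (const url of segmentUrls) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    await new Promise(r => sourceBuffer.addEventListener('updateend', r, { once: true }));
  }
  mediaSource.endOfStream();

  await video.play();
  await new Promise(r => video.addEventListener('ended', r, { once: true }));
  // The JS events alone cannot prove frames were rendered on screen; that is
  // what the camera recording and observation framework add.
  console.log(JSON.stringify(observedEvents));
}

playSequentially().catch(console.error);
```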
14:21:57 ... [slide 9]
14:21:59 ... We have a test runner, software that can do test automation
14:22:29 ... We use web platform tests, not only in the DPC but the Web Media API Snapshot. We extended WPT to be able to run on TV and STB devices
14:22:57 JohnRiv has joined #me
14:23:03 ... We've contributed changes back to WPT. The difference is that the test logic is outsourced to a test server
14:23:18 ... WPT can be run on a device under test, but you can't do that on a TV device
14:23:23 i|We have a test runner|-> https://github.com/cta-wave/dpctf-test-runner cta-wave/dpctf-test-runner|
14:23:34 ... So we extended the runner, we have a server that manages the tests and results
14:23:50 ... There's an option to configure the tests. Difficult to customise just using a TV remote
14:24:07 ... This is why we have a companion app: scan a QR code, then configure the tests from the companion page
14:24:23 ... We have a REST API for test automation. This is used in HbbTV test tools
14:24:41 ... Also, when a test session completes, you can export results to HTML and JSON
14:24:53 ... These are compatible with the WPT test runner
14:24:56 ... [slide 10]
14:25:18 ... The observation framework can run on a PC or tablet, any browser platform that supports MSE
14:25:36 ... Record all the test runs. We use a camera, e.g., a smartphone, to record the test session
14:26:06 ... The OF analyses the recording automatically, extracts the QR code from each frame, processes the data and compares it to the DPC spec
14:26:49 ... You can deploy the OF to the same device as the test runner, or to a more powerful machine, because you're doing video decoding and media processing, so it needs a device with good performance
14:27:14 ... You can run it in a Docker container, but make sure the performance fits your requirements, e.g., to produce fast results
14:27:20 ... [slide 11]
14:27:37 ... There's a landing page for the test suite. This should be your starting point
14:28:08 ... Go to the webpage, there's an explainer and a link to the GitHub repository from where you can deploy the test runner
14:28:28 ... [slide 12]
14:28:34 ... I'll show a demo video
14:28:49 i|[slide 12]|-> https://github.com/cta-wave/dpctf-deploy cta-wave/dpctf-deploy|
14:29:05 rrsagent, draft minutes
14:29:06 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
14:29:21 ... You see the device under test in our lab. It's in HbbTV mode
14:29:49 ... There's a smartphone camera that records the test run, and a cable to the smartphone carrying the audio output from the TV
14:30:12 ... If you just use the smartphone mic, it doesn't work, it needs cleaner audio
14:30:23 ... And you have the audio and video in the same file
14:30:56 ... Scan the QR on the TV, then you can open a companion page where you can configure and select the tests to run on the TV
14:31:25 ... We have long-duration playback tests, content of more than 2 hours. It gives you a fast way to select the relevant tests
14:32:01 ... Start the test run, and between each test you'll get a screen showing what the test is
14:32:17 ... Tests are 30 or 60 seconds, or the longer 2-hour tests
14:32:40 ... You can see annotations from the mezzanine content on the screen. A QR code generated in JS
14:33:07 ... Encodes what the last action is, and a timestamp. Used in the OF
14:33:46 ... The OF detects that the first frame is displayed, to check the content is played from the beginning to the end
14:33:55 ... The test runner then starts the next test
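[Illustration: a minimal sketch of the JS-generated status QR code mentioned in the demo, which encodes the last action and a timestamp for the observation framework. The `qrcode` npm package is used here only as an example encoder; it is an assumption, not necessarily what the test suite uses.]

```typescript
// Minimal sketch, not the actual dpctf-tests code: overlay a QR code on top of
// the video that encodes the current playback state, so the camera recording
// can correlate on-screen frames with the JS timeline.
import QRCode from 'qrcode';

interface StatusPayload {
  action: string;     // last test action, e.g. "play" or "seek"
  mediaTime: number;  // video.currentTime when the code was generated
  wallClock: number;  // performance.now() timestamp
}

const video = document.querySelector('video') as HTMLVideoElement;
const overlay = document.querySelector('#status-qr') as HTMLCanvasElement; // hypothetical canvas element

async function renderStatusQr(action: string): Promise<void> {
  const payload: StatusPayload = {
    action,
    mediaTime: video.currentTime,
    wallClock: performance.now(),
  };
  // Re-rendered on every state change; the observation framework decodes this
  // from the recording alongside the mezzanine QR codes burnt into the content.
  await QRCode.toCanvas(overlay, JSON.stringify(payload));
}

video.addEventListener('playing', () => void renderStatusQr('playing'));
video.addEventListener('seeked', () => void renderStatusQr('seeked'));
```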
14:35:03 ... From human observation, we can see it's fine. But some things are harder to detect, which is why we need the OF
14:35:21 ... On the last page, it shows the session completed. The companion screen shows all the info about the test run
14:35:42 ... Number of passes and fails, buttons to download the results as JSON or an HTML report
14:36:15 ... Now copy the recording from the camera and run the OF to check for passes/fails
14:36:26 igarashi has joined #me
14:36:29 ... The report is automatically updated based on the OF results
14:36:35 ... [slide 13]
14:37:22 ... Technical requirements. We support all OSs: Linux, Windows, Mac. We recommend Linux as it has native Docker support, it's the fastest and easiest way to deploy
14:37:33 ... Docker Desktop means additional steps, but that's documented
14:37:52 ... Need TLS server certs, it's important for the EME tests, which require HTTPS
14:38:13 ... The camera for recording: to analyse 50 or 60 Hz content, you need double the frame rate, so 120 Hz or higher
14:38:31 ... We use Samsung S23+ smartphones, happy with the results
14:38:47 ... [slide 14]
14:38:56 ... The test report is the same format as WPT
14:39:18 ... The OF adds additional assertions to the test results. It shows the reason for any test failures
14:39:41 ... [slide 15]
14:40:04 ... We did validation in our lab, and in HbbTV plugfests. We tested on the latest TVs coming onto the market in the next year
14:40:31 ... At DTG in London, at Fraunhofer, next event in October
14:40:37 ... [slide 16]
14:41:08 ... HbbTV test setup. At plugfests we take a DVB modulator, where we configured a transport stream with an HbbTV app pointing to the test suite landing page
14:41:25 ... HbbTV developers will be familiar with this
14:41:43 ... [slide 17]
14:42:06 ... Testing on many devices. For each result session, we have tools to get results across test runs and summarise the information into a single table
14:42:24 ... Easy to see which tests are performing well, and which need checking
14:42:32 ... [slide 18]
14:42:40 ... Thank you. Happy to take questions
14:42:41 q?
14:44:23 Topic: Discussion
14:44:51 Alicia: You mentioned MPEG-TS, that part is separate from ME? You have tests for broadcast applications?
14:45:13 Louay: This part is just to launch the landing page on the TV. Tests only rely on MSE and EME, no broadcast dependency
14:45:37 I run the tests on a mobile phone when exercising them.
14:45:41 ... If you have a smart TV app, you could have another way to launch the landing page
14:45:54 Alicia: What protocol is used to get the tests onto the TV?
14:46:32 Louay: The test runner uses a REST API to the test server, over HTTP. When the test runs on the TV, results are reported locally, then sent over the REST API to the server
14:46:41 Some SmartTVs allow URLs to be pushed to the TV using a home network protocol like DIAL. That can also be used to push the URL of the test runner to the TV.
14:46:58 Alicia: Does the DVB modulator just send the URL?
14:47:01 Louay: Yes
14:47:48 Wolfgang: You said you run the test suite at the HbbTV plugfest. Any area where you find failures, any hot spots?
14:48:10 Louay: Focusing on the test runner and the process for running tests, it's stable, no issues on the TVs we tested
14:48:40 ... But for a group of tests, initial frames being skipped or not displayed in the video is something we've observed
14:49:02 ... Working on audio observations. One challenge is getting the audio from the TV. We use line-out, but it's not supported on all TVs
14:49:08 ... Now use HDMI for capture
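[Illustration: the results exported per session are WPT-compatible JSON (slide 14), and slide 17 describes summarising them across test runs into a single table to spot the tests that need checking. A minimal sketch of such an aggregation follows; the report shape and field names are assumptions, not the exact schema.]

```typescript
// Minimal sketch, not a CTA WAVE tool: tally pass/fail counts per test across
// several exported result sessions. The JSON shape below is an assumption
// modelled on WPT-style reports; the real export may use different field names.
import { readFileSync } from 'node:fs';

interface SubtestResult { name: string; status: string; } // e.g. "PASS", "FAIL"
interface TestResult { test: string; subtests: SubtestResult[]; }

// test name -> [passes, fails], aggregated over all sessions given on the CLI.
const summary = new Map<string, [number, number]>();

for (const file of process.argv.slice(2)) {
  const session: TestResult[] = JSON.parse(readFileSync(file, 'utf8'));
  for (const result of session) {
    const counts = summary.get(result.test) ?? [0, 0];
    for (const sub of result.subtests) {
      if (sub.status === 'PASS') counts[0]++; else counts[1]++;
    }
    summary.set(result.test, counts);
  }
}

// One row per test: tests with failures stand out for closer inspection.
for (const [test, [pass, fail]] of summary) {
  console.log(`${test}\tpass=${pass}\tfail=${fail}`);
}
```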
14:49:46 Piers: Have you tried using DIAL for launching the app?
14:50:12 Louay: You can also use DIAL, but we use the DVB modulator, as at the plugfests we run other tests as well
14:50:30 ... If you have control of the local network. There might be limitations on device discovery at plugfests
14:50:41 ... So we rely on having our own setup
14:51:09 Piers: Is the media configurable, whether it comes from the CTA media streaming site, or can you host it locally?
14:51:32 Louay: We try to avoid dependencies on the open internet. Streams are available on a CDN, but we download them and run locally
14:52:13 ... It's important when you deploy that the device under test and test runner are as close as possible. We're not testing content delivery over the internet. We want to avoid dependencies on network conditions
14:52:46 ... It can be a server in your organisation. We have a dedicated server for the TVs in the lab. You could use a laptop
14:53:04 ... One challenge is having a TLS cert, for JS tests
14:53:04 rrsagent, draft minutes
14:53:06 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
14:53:16 ... and a domain name
14:54:59 Chris: I found some limitations with the WPT tests, e.g., EME persistent usage record. Any suggestions for improving WPT?
14:55:33 Louay: You can't use JS to check the results, e.g., the video plays but is delayed, you don't get this from the API. So you need an external system to check the playback starts from a given event
14:56:04 ... It's something we solved with the test framework and mezzanine content, and external observations. Things you can't capture via APIs
14:56:30 ... Needs an external system to give more detail
14:57:32 ... How to do more automatically? Currently you need to copy the file and upload it to a server; we could make a fully automated pipeline, but that needs additional resources
14:57:47 Chris: Any specific tests for VideoPlaybackQuality, e.g., dropped frames?
14:58:51 Louay: Not specifically. The QR code gives you info about the quality level with MSE, but analysing that it's displayed as it should be, e.g., when upscaling is applied, could be a future discussion. For now we just make sure the bitrate representation is displayed, from the info in the QR code
14:59:56 Alicia: I've had this problem with audio when writing WPT tests, it didn't work well. I put in single tones, to see if the audio changed. But then you also depend on the Web Audio API, and the test ended up flaky
15:00:37 ... Not sure if it was an MSE bug or a Web Audio one. You could do something similar, where you can make beeps that are easy to analyse with FFT
15:01:26 Louay: We rely on the external framework to measure those things
15:02:11 Jon: The audio is a pattern based on pseudo-random noise. The OF can analyse the pattern and confirm it's the audio that's expected
15:02:18 Alicia: That could test if it's not in sync
15:03:32 ... Also about the domain for TLS certs. Is that configurable?
15:03:50 Louay: You can configure it, but you need to create your own domain
15:03:59 Alicia: That's very cool
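[Illustration of Alicia's suggestion above: route the media element's audio through a Web Audio AnalyserNode and check for a known test tone via FFT. The WAVE suite itself instead relies on external observation of pseudo-random noise, as Jon and Louay noted; the 1 kHz target below is a placeholder assumption.]

```typescript
// Minimal sketch of the tone-and-FFT idea, not part of the WAVE suite: analyse
// the media element's audio with an AnalyserNode and check whether a known test
// tone is the dominant frequency.
const video = document.querySelector('video') as HTMLVideoElement;
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
audioCtx.createMediaElementSource(video).connect(analyser);
analyser.connect(audioCtx.destination); // keep the audio audible

const bins = new Float32Array(analyser.frequencyBinCount);

// Frequency (Hz) of the strongest FFT bin at the moment of the call.
function dominantFrequency(): number {
  analyser.getFloatFrequencyData(bins);
  let maxIndex = 0;
  for (let i = 1; i < bins.length; i++) {
    if (bins[i] > bins[maxIndex]) maxIndex = i;
  }
  return (maxIndex * audioCtx.sampleRate) / analyser.fftSize;
}

// True if the expected beep is present within one FFT bin of the target.
function tonePresent(targetHz = 1000): boolean {
  const binWidth = audioCtx.sampleRate / analyser.fftSize;
  return Math.abs(dominantFrequency() - targetHz) <= binWidth;
}

console.log('tone present:', tonePresent());
```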
15:05:18 Chris: Lots of value here. WPT focuses on automatic execution, API interactions, things not needing human review
15:05:47 Louay: Some need user interaction, which can break the automation of the tests
15:07:02 Chris: Improvements in WPT or web APIs?
15:07:46 Thomas: It's about dissemination of the information. Make a bit of a promotion. We're open to collecting feedback, e.g., do we need new tests? We focused on basic codecs, so look at more advanced codecs?
15:08:25 ... Things we could stimulate. It needs people to understand the value. Any W3C platforms we can use for dissemination, including demos
15:08:58 q+
15:09:07 Alicia: I can use something like this. We've been trying to get more tests running on WPE. There's a need for tooling. Personally, I'll advocate to have people look at this in my company
15:09:09 q-
15:09:32 q+ to suggest we wrap up and have another call to talk about the details
15:11:07 ChrisL: I put a link to the BBC meetup, could be another way to promote
15:11:19 Kaz: Could we have a TPAC breakout session?
15:11:27 Louay: I'll try to join remotely, but happy to support
15:12:09 q-
15:12:59 rrsagent, draft minutes
15:13:00 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
15:13:17 Chris: Look forward to future meetings to continue the discussion
15:13:25 [adjourned]
15:13:47 rrsagent, draft minutes
15:13:48 I have made the request to generate https://www.w3.org/2024/09/03-me-minutes.html kaz
17:30:26 JohnRiv_ has joined #me
17:39:47 Zakim has left #me
18:05:27 JohnRiv has joined #me
18:32:08 JohnRiv has joined #me