W3C

Second Screen WG/CG - 2023 Q1 virtual meeting

08 March 2023

Attendees

Present
Anssi Kostiainen, Brad Triebwasser, Brent Gaynor, Chris Needham, Francois Daoust, Fritz Heiden, Hakan Isbiliroglu, Mark Foltz, Mike Wasserman
Regrets
Louay Bassbouss
Chair
Anssi
Scribe
anssik, tidoust

Meeting minutes

Welcome

anssik: Welcome!

anssik: Welcome Brent, invited today to provide dev experience feedback.

Brent_Mometic: I've been building apps over the years. Along the way, I built Mometic, which encapsulates all capabilities in terms of front-end, back-end, UX, etc.

https://mometic.com/

Brent_Mometic: It's hard to blow up in this competitive market. I started building MOMO in Polymer.
… A couple of years ago, we moved into React, with a single page application.
… For trying to do analytics with heads-up displays, this does not work very well.
… In trading circles, people may have multiple screens. We tried to do something more special.
… That is how we bumped into APIs developed by the Second Screen WG.

anssik: I'm Anssi, working for Intel, chair of the group

anssik: Francois is our W3C Staff contact

msw: Mike, working at Google on Chrome.

anssik: Mark covers a lot of ground in this group, editing 3 specifications.

mfoltzgoogle: I've been with the group since the beginning, starting with the Presentation API. I'm the editor of that spec, available in Chrome for some time.
… The second API that was incubated was the Remote Playback API. It is now available in Chrome and Safari. That was originally edited by Mounir, I've inherited that one.
… Finally, third incubation is Open Screen Protocol, targeted at providing a common platform on which the APIs may be implemented interoperably.
… The means of communication between devices is based on proprietary protocols right now. The Open Screen Protocol is an attempt to bridge that gap.
… We will also look at Matter later today, which could fill a few of those needs as well.
… DLNA was also in the same space, with a different approach.

btriebw: Also at Google, working with Mike, also on Fullscreen Popups that we'll introduce later today.

cpn: Working at BBC. Interested in second screen support. Also co-chair of the Media WG and Media & Entertainment IG

Fritz: Working at Fraunhofer as a student. I joined last year to develop tests for the Remote Playback API. Also involved in the CTA WAVE Project.

Hakan: Working at Google. Joined the group recently. New to standards, will be listening in.

anssik: First a few quick updates from W3C ecosystem

W3C Workshop on Permissions

W3C Workshop on Permissions report

anssik: Workshop held in Dec '22, was well attended, diverse topics. I presented on Permissions UX Across Form Factors including Multi-Screen Window Placement API. We discussed interesting solutions to permission prompting relevant to this WG. See "push" vs "pull" permission flows in the report.

Screen Capture Community Group

Screen Capture Community Group

anssik: This new CG had its first telcon last month. Scoped on new screen capture APIs and extending existing ones. Mark also participated in the CG kick-off and we can help coordinate between that CG and Second Screen WG/CG.

Service Discovery Community Group proposed

Service Discovery Community Group proposed

anssik: The purpose of this CG is to define a browser API allowing service discovery via mechanisms such as mDNS. Possibly relevant to OSP that does discovery with mDNS.
… CG is still looking for supporters
… any other updates from others?

mfoltzgoogle: We've had a few conversations in the past about sites that may want to capture or generate media and stream that to other devices. If there is a way to make these use cases work with the capture API, I could see that as a path forward.
… That's the main thing that comes to mind in terms of coordination with the Screen Capture CG.
… Possibly also with the Remote Playback API.

Multi-Screen Window Placement API

Repository: w3c/window-placement

Production use of multi-screen layout in a web-based stock market tool

anssik: Honored to get your feedback, Brent.
… Anything you would like to hear from Brent, Mike?

msw: I would love to hear what this API made possible that wasn't possible before on the Web, for user convenience.

Brent_Mometic: Generally, I'm the product UX guy and then go to my developers to turn that into code.

Slideset: PDF

[Slide 1]

Brent_Mometic: MOMO is a stock scanner for day traders, enabling traders to take actions in real-time.

[Slide 2]

Brent_Mometic: I covered a bit of the background already. It's a web app, using React with a Node backend.
… Ultimately, we want to turn the single page application approach into a layout that spans multiple tabs, screens and devices.
… The use of multiple screens is pervasive.
… I wanted to have primary and secondary layout options. E.g. desktop layout and mobile layout.
… I also wanted to maintain this "dynamic" responsive layout.
… Also wanted the ability to drag and drop and resize components.

[Slide 3]

Brent_Mometic: Just two screenshots: on the left side, a multi-layout picker to change the layout. A layout can be locked so that you can get back to it at any point in time.
… Useful for traders to zoom in different parts, then reset.
… On the right-hand side is a zoom on the component toolbar, shown on mouseover. You can drag it or expand it.

anssik: This toolbar leverages the multi-window placement API, right?

Brent_Mometic: Yes.
… We tell users that, for best experience, they should enable the window placement API.

[Slide 4]

Brent_Mometic: Here is a dump of the code we use to get screen details.
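
[The slide's code is not reproduced in these minutes. As a rough illustration only, not MOMO's actual code, a page typically obtains screen details along these lines:]

```js
// Minimal sketch of requesting multi-screen details (not the code from the slide).
async function getScreens() {
  if (!('getScreenDetails' in window)) {
    // API not supported: fall back to single-screen information.
    return [window.screen];
  }
  try {
    // Triggers the window-placement permission prompt on first use.
    const details = await window.getScreenDetails();
    details.addEventListener('screenschange', () => {
      console.log(`Now ${details.screens.length} screen(s) attached`);
    });
    // ScreenDetailed objects expose left, top, isPrimary, label, etc.
    return details.screens;
  } catch (err) {
    // Permission denied: degrade gracefully to the current screen.
    console.warn('Multi-screen details unavailable:', err);
    return [window.screen];
  }
}
```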

anssik: Any feedback on the permission prompt?

Brent_Mometic: I guess I'm pretty annoyed by the prompts. Users just want to get things going. It would be good to have a simple and cross-browser experience.

anssik: In general, it is annoying but required.

Brent_Mometic: Honestly, we've had great feedback on the implementation. Nobody's complained. Maybe people in the finance space are familiar with multi-screen issues and more patient.

[Slide 5]

Brent_Mometic: Second area, getting precise layout positions is hard. We noted that some of the different window title bars needed to be tricked.

msw: Is the problem restricted to multiple windows of MOMO or to multiple windows of MOMO and other applications?

Brent_Mometic: Both. These guys may have 5 different screens and they want precise positioning.

[Slide 6]

Brent_Mometic: Here are examples of trader settings.

anssik: What setup is typical for traders? Typical positioning?

Brent_Mometic: I'd say 3. Usually left and right display, with one on the top.

[Slide 7]

Brent_Mometic: [showing a demo video]
… The demo shows the ability to create layouts across displays and save that for future use.
… It's easy to run out of screen real estate with a single window, even on a 5K monitor. Creating multiple windows allows us to optimize things for users. When you're trading, oftentimes you'll be in a frenzy exploring through windows and you'll want to reset things at some point.
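
[A hypothetical sketch of the save/restore pattern described in the demo; the function name, storage key and URLs are illustrative, not MOMO's actual code:]

```js
// Reopen saved component windows on their saved screens.
// Call from a user gesture, otherwise popup blocking may interfere.
async function restoreLayout() {
  const layout = JSON.parse(localStorage.getItem('saved-layout') || '[]');
  const { screens } = await window.getScreenDetails();
  for (const item of layout) {
    const screen = screens[item.screenIndex] ?? screens[0];
    // Position each component window relative to its saved screen.
    window.open(item.url, item.name,
      `popup,left=${screen.availLeft + item.x},top=${screen.availTop + item.y},` +
      `width=${item.width},height=${item.height}`);
  }
}
```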

[Slide 8]

Brent_Mometic: [demoing window placement permission under the lock in the address bar]

anssik: So users are not annoyed by the permission bar?

Brent_Mometic: Well, it is annoying. It would be good to register our domain or something like that to become a more trusted entity and get rid of the notification.

anssik: Permission prompting is still something that is being explored today.

Brent_Mometic: I think it should be handled more collectively.

msw: Certainly, saving and restoring window placements was one of the original use cases, so it's great to see it in action here.

anssik: This feedback also helps other browsers look into the API.

Brent_Mometic: This works on Safari too.

msw: I suppose you can do same-display placement, but not across displays.

Brent_Mometic: My developers managed to code some workaround in practice.

[Slide 9]

Brent_Mometic: Are we the first ones to do that?

msw: First full-fledged application I've seen, indeed.

Use cases - Quick Distraction, Idle Distraction, and Social Watching Scenarios

Use cases: Quick Distraction, Idle Distraction, and Social Watching Scenarios

anssik: I talked with our UX designers
… In the interest of time, I'm not going to go into details.
… 4 use cases are described.

Document Picture-in-Picture (Specification)

Document Picture-in-Picture (Explainer)

anssik: I believe that they can inform some APIs, including the Picture-in-Picture API (and the Document Picture-in-Picture proposal developed in the WICG) and the Presentation API.
… Are you working together with Google folks involved in that project (Tommy and Frank)?

mfoltzgoogle: I know a bit of the background and history of that feature.

anssik: I think it intersects with some of our group work.

Fullscreen Popups

Explainer: Creating Fullscreen Popup Windows

btriebw: I don't have a formal presentation, but the explainer describes the idea. Brief overview: right now on the web, you can create a popup on a single screen, and with multi-screen window placement on a second screen.
… However, you cannot create a fullscreen window on a second screen without requiring two user gestures.
… After some back-and-forth, we settled on adding a flag to window.open. That seems like the easiest method.
… Some use cases for this include a financial app wanting to open a chart view fullscreen on a secondary display.
… Another example: a security app launching video feeds on an array of 6 displays.
… We have started implementing this as a prototype in Chrome to get a demo out there.
… There are quite a few open questions, even on the explainer.
… One of the biggest questions right now is on focusing the window.
… You may open multiple fullscreen popup windows and we haven't really specified what happens, i.e. which one takes focus.
… Another big open question is feature detection.
… One of the drawbacks is that there is no way to detect the feature. The developer would have to call window.open, check whether the popup is fullscreen, and provide a fallback if not.
… Another thing: when the popup is created, we need to make sure that there is no delay.
… We need to make sure that a malicious server cannot leverage delays.
… One thing we're considering is using capability delegation for the new window after creating a popup.
… But that adds another drawback with transient user activation.
… So we thought a flag on window.open was a better path.
… Another alternative we considered was to allow a target-screen fullscreen request after opening a cross-screen popup, but that seemed awkward.
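
[A sketch of the flag-on-window.open approach and the feature-detection drawback discussed above; the exact "fullscreen" feature token and its semantics were still open questions in the explainer, so this is illustrative only:]

```js
async function openFullscreenChart() {
  // Pick a secondary display using the Multi-Screen Window Placement API.
  const { screens } = await window.getScreenDetails();
  const target = screens.find(s => !s.isPrimary) ?? screens[0];

  // Single user gesture: open a popup on that screen and request fullscreen.
  const popup = window.open(
    '/chart.html', '_blank',
    `popup,fullscreen,left=${target.availLeft},top=${target.availTop}`);

  // Feature-detection drawback: the opener can only check after the fact
  // whether the window actually became fullscreen (same-origin popup assumed).
  popup?.addEventListener('load', () => {
    if (popup.innerHeight < target.height) {
      // Fallback path, e.g. show UI in the popup asking for a second gesture
      // to call popup.document.documentElement.requestFullscreen().
    }
  });
}
```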

msw: The ability to show fullscreen content on another display was one of the main requirements that we heard when we explored the space.
… The semantics of working with content on another display are grounded on top-level windows. At TPAC, I mentioned exploring using a single user activation signal for multiple actions.
… The need for fullscreen support keeps recurring, so we're exploring solutions in that space.
… We're looking for early feedback on this.
… We'll request TAG review soon, but sooner feedback would be welcome. We don't expect to move fast on this, we want to make sure that we're doing it right and that UX is suitable.
… The window.open API is functional (despite the serialized options argument). The error path works.
… We've re-imagined a way that this would work well with the existing API. As Brad said, we were rather thinking about capability delegation initially.
… If people propose a replacement for window.open, we'd jump on that for sure.

cpn: Some of that seems to relate to the 1UA case on the Presentation API. Is that something you looked at?

msw: Replacing the 1UA mode is something we looked into, indeed. This would definitely bring the API closer to what the Presentation API 1UA mode would allow.
… It would be the difference between having a handle on the window versus having a communication channel.

mfoltzgoogle: The two main differences are: 1) the scope of window placement does not currently include wireless displays. 2) Because we designed the API to be agnostic to where the content is rendered, we only allow messages to be exchanged.
… The window handle gives much more flexibility.
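
[A sketch of the contrast being drawn: the Presentation API hands the controller a message channel, whereas a window placement popup returns a window handle; "presentation.html" is an illustrative receiver page:]

```js
async function present() {
  // Presentation API: the controller only gets a communication channel.
  const request = new PresentationRequest(['presentation.html']);
  const connection = await request.start(); // user picks a display
  connection.addEventListener('connect', () => {
    connection.send(JSON.stringify({ cmd: 'play' }));
  });
  connection.addEventListener('message', e => console.log('receiver:', e.data));

  // A (fullscreen) popup instead hands back a window handle, giving the
  // opener direct same-origin access to the other document, e.g.:
  //   const win = window.open('presentation.html', '_blank', 'popup');
  //   win.document.title = 'Chart';
}
```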

cpn: That sounds similar to what they're doing with Document Picture-in-Picture.

anssik: I suppose you're looking into a custom media player with accessibility features for the Document Picture-in-Picture API.

cpn: Yes.

Remote Playback API v1

#35827

<ghurlbot> Pull Request 35827 Adding tests for remote playback API (FritzHeiden) infra, wg-secondscreen, remote-playback

Fritz: I tried to fix all the issues from the feedback I received a few days ago. I think things are now fine.
… Once I receive further feedback, I don't think that there's much else to do. Once that is merged, we will provide test results.

anssik: Thanks Mark for picking up this one as well.

mfoltzgoogle: Thanks for processing the feedback quickly. The two items for discussion are: 1) do we want to use display availability at the beginning of the test? You seemed to prefer atomic tests, that's fine; 2) do we want to document what users need to run these manual tests?

Fritz: Yes, we'd need clarity on what browsers and devices we can test.

mfoltzgoogle: I can certainly speak for Chrome. We can maybe get feedback from Safari and Edge.
… I'm not aware of support in Firefox.

anssik: Have you done any recent work on this API in Chrome?

mfoltzgoogle: Not on the API itself. The API is exposed on Chrome for Android, not on desktop, except for the disabled attribute.
… Further down the stack, there may have been some changes.
… Latest stable version of Chrome should be fine.
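
[A minimal sketch of Remote Playback API usage, for context on the manual tests discussed above; "castButton" is an illustrative page element:]

```js
async function setUpRemotePlayback(video, castButton) {
  // Note: a page can opt an element out entirely with the
  // disableremoteplayback content attribute (the "disabled attribute"
  // mentioned above); in that case video.remote calls will throw.
  if (!('remote' in video)) return;

  // Watch for availability of a compatible device, e.g. a Chromecast.
  await video.remote.watchAvailability(available => {
    castButton.disabled = !available;
  });

  // On a user gesture, prompt the user to pick a device and start remoting.
  castButton.addEventListener('click', () => video.remote.prompt());
  video.remote.addEventListener('connect', () =>
    console.log('Remote playback started'));
}
```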

Fritz: What about devices to use it with?

mfoltzgoogle: I would probably recommend a Chromecast for a TV. It's the current in-market device, so it tends to have the most recent release of the software.
… There are other devices that are compatible but I would start with that one.

anssik: Thank you for your work on this.

Presentation API v1

Repository: w3c/presentation-api

#507

<ghurlbot> Issue 507 PresentationRequest.getAvailability() could always return a new Promise (mfoltzgoogle)

getAvailability() algorithm

mfoltzgoogle: Trying to fix a test failure in our implementation, we realized that we did not respect the first step, which requires the user agent to return the same Promise as from a previous call.
… It turns out that it is complicated to implement.
… I noticed that few other APIs follow this pattern; they return new Promises in all cases.
… I'm proposing that we drop this step and return a new Promise each time.
… This is simpler from an implementation perspective and more consistent with other APIs.
… I'm proposing to prepare a PR to align the spec with what our implementation does.
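
[An illustration of the proposed change, not normative text; today's algorithm reuses a still-pending Promise from a previous call, while the proposal returns a fresh Promise every time:]

```js
const request = new PresentationRequest(['presentation.html']);

const p1 = request.getAvailability();
const p2 = request.getAvailability();
// Current spec: p2 is the same (still-pending) Promise as p1.
// Proposed behaviour: p2 is a new Promise; both settle the same way.
console.log(p1 === p2);

p1.then(availability => {
  // Availability changes are still reported through a single "change" event
  // stream regardless of how many times getAvailability() was called.
  availability.addEventListener('change', () => {
    console.log('presentation display available:', availability.value);
  });
});
```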

anssik: you can also look at https://www.w3.org/TR/battery-status/#the-getbattery-method for a similar design

anssik: Is it a premature optimization?

mfoltzgoogle: Probably. You shouldn't need to retrieve the availability more than once and this will generate a single event no matter how many calls you make.
… Promises are pretty cheap at the end of the day. I would call it a premature optimization.

cpn: I think Mark answered the question I was about to ask, about handlers attached to multiple promises. But you're suggesting that this would be a badly written application. I'm wondering about the potential impact.

mfoltzgoogle: Different resolvers may resolve in different micro-tasks. I don't anticipate any compatibility issue but may need to check with JS experts.
… Also, practically speaking, that's what we've been shipping for some time.

Matter/Connectivity Standards Alliance coordination

Repository: w3c/openscreenprotocol

mfoltzgoogle: In our previous F2F, I presented an overview of Matter based on what was publicly available.

Slideset: HTML / PDF

[Slide 2]

[Slide 3]

mfoltzgoogle: Matter is a set of specifications to allow smart home devices to inter-operate.
… I think the standards organization has been around for some time, working on Zigbee. They re-branded a little bit and developed Matter.

[Slide 4]

mfoltzgoogle: A lot of the devices are for adjusting your home environment, like lightbulbs. Media devices are not the core focus although they are supported.

[Slide 5]

Matter 1.0 specifications

Matter 1.0 reference implementation

mfoltzgoogle: Spec was published in December 2022, along with certification tools.

[Slide 6]

mfoltzgoogle: Over 600 products have been certified according to their web site. Not sure how many products are available on the market. Many of the members of the alliance have added support to their platform: Google, Apple, Amazon, LG.

[Slide 7]

[Slide 8]

mfoltzgoogle: Matter tends to be a full stack. There is an application layer. Underneath that, there is a networking layer with IPv6 as the basic foundation.
… On top of that, applications can communicate through TCP and UDP.
… Underneath, they support different link layers, including Thread.

[Slide 9]

mfoltzgoogle: The application layer consists of a data model and an interaction model.
… The messages communicated between devices use "action framing".
… Below that, there's transport management: how devices put bytes on the network, etc.

[Slide 10]

mfoltzgoogle: Some devices will connect directly to the network, some devices may take part in the Thread network. Some may serve as a bridge.
… Matter is agnostic to that.

[Slide 11]

mfoltzgoogle: Apart from the actions, the other big part of Matter is how to add a new device to the set of controlled devices.
… There is a ceremony to go through that they call commissioning.
… A few different paths.
… It's interesting because it parallels in some respects the work we've done in the Open Screen Protocol to pair devices.
… First, discovery, with BLE, DNS-SD.
… Like we do in OSP, they use SPAKE, through SPAKE2+
… Couple of additional steps, challenging the device to prove that it's authentic. They assume that there is a root commissioner certificate.
… The commissioner will authenticate the device.
… The device gets a node ID.
… In Matter terms, that's called a fabric.
… Then an operational certificate is used to authenticate to other devices.
… This is a high-level overview; I would prefer to view these steps as a black box.

[Slide 12]

[Slide 13]

mfoltzgoogle: Trying to map the two, this slide shows how both protocols relate.
… For action framing, we decided to use CBOR. For security, we decided to use TLS.
… At the lower level, they have their own TCP protocol. We decided to use QUIC to manage transport between devices.
… Because this maps up nicely, we can see what we can take.

[Slide 15]

mfoltzgoogle: The first possible approach is a layer cake. We keep CBOR, and we tunnel the rest. The OSP agent could run on top of the Matter stack if we can access a slightly lower level in that stack.
… The pro here is that we can reuse Matter for a lot of tricky issues around authentication (which we're still working on in OSP).
… If we can make CBOR the interface, not many changes to OSP are needed.

[Slide 16]

mfoltzgoogle: The second approach is more of a "bootstrap" approach.
… OSP certificates and IP ports get exchanged through Matter nodes, and OSP agents take care of the rest without knowing that Matter was used.
… That feels like a good approach. Some details to look at, though.
… Practically speaking, media devices will be IP-based devices, not Thread devices, so maybe corner cases do not matter a lot.

[Slide 17]

mfoltzgoogle: Some big questions are listed on this slide.
… How to use Matter transport to convey CBOR messages?
… Is Matter transport suitable for streaming use cases?
… Also for the "bootstrap" approach, how to leverage Matter?
… Finally, integration with the Video player that Matter includes

[Slide 18]

mfoltzgoogle: In OSP, there are 10 non-security-related v1-spec issues. The security issues all have a related PR. If we land those, I think that we should be in good shape.
… There is a privacy related issue, which may or may not need a PR.
… And a couple of meta issues.
… We're down to a fairly reasonable number of issues on the spec.

[Slide 19]

mfoltzgoogle: My work plan for the spec is to land the PR for security-related issues in OSP.
… Then explore the tunneling and bootstrap approaches.
… And if we can leverage the Matter SDK, that would be great.

anssik: The Matter specification is quite long.

mfoltzgoogle: Yes, it tries to be almost an entire distributed OS.
… It covers a whole bunch of use cases that are not related to second screen.
… That's also why I'm wondering whether writing some code might be easier.

anssik: I wonder about interest from TV vendors

cpn: Not something that we've discussed in M&E IG. If we could get someone from one of the vendors who could come and present, that would be great.

mfoltzgoogle: We have folks internally that worked on Matter. LG has participated in this group in the past and are one of the vendors that added support. Perhaps we could ask them to join us at a future meeting.

anssik: That's a great point.
… Have you found other community efforts around Matter?

mfoltzgoogle: The Matter GitHub is probably the best way to reach developers who are working hands-on on the protocol.
… There's still an open question as to whether it makes sense to expose Matter more directly to the web, or Matter-like functionality directly to the web, to allow web apps to interact with smart home devices. I don't know whether this is the right group to discuss that.

anssik: I believe someone raised that as an issue in the repository.

Browser support in Matter

mfoltzgoogle: When I reviewed the spec, there was not a clear way for a device to give permission to a web app. That may be a gap

mfoltzgoogle: Re. OSP, at our next meeting, I think we'll need to work out how to get wide review on the spec.

anssik: Many thanks for the meeting. We went through the entire agenda, so no need for day 2/2 meeting. I'll cancel it!

Minutes manually created (not a transcript), formatted by scribe.perl version 210 (Wed Jan 11 19:21:32 2023 UTC).