15:53:55 RRSAgent has joined #webscreens
15:53:59 logging to https://www.w3.org/2023/03/08-webscreens-irc
15:54:00 Zakim has joined #webscreens
15:54:05 RRSagent, make logs public
15:54:33 Meeting: Second Screen WG/CG - 2023 Q1 virtual meeting - Day 1/2
15:55:48 Agenda: https://github.com/w3c/secondscreen-wg/issues/7
15:57:20 mfoltzgoogle has joined #webscreens
15:57:39 anssik has joined #webscreens
15:59:27 msw_ has joined #webscreens
16:00:59 present+ Francois_Daoust, Anssi_Kostiainen, Mike_Wasserman, Mark_Foltz, Brent_Gaynor
16:01:23 Chair: Anssi
16:03:16 RRSAgent, draft minutes
16:03:47 I have made the request to generate https://www.w3.org/2023/03/08-webscreens-minutes.html anssik
16:03:57 present+ Brad_Triebwasser
16:03:57 brad has joined #webscreens
16:03:57 Regrets: Louay_Bassbouss
16:03:57 btriebw has joined #webscreens
16:04:04 scribe+ tidoust
16:04:13 present+ Chris_Needham
16:04:26 present+ Fritz_Heiden
16:04:38 cpn has joined #webscreens
16:05:02 scribe+ anssik
16:05:20 Topic: Welcome
16:05:38 anssik: Welcome!
16:06:02 anssik: Welcome Brent, invited today to provide dev experience feedback.
16:06:46 Brent_Mometic: I've been building apps over the years. Along the way, I built Momentic, which encapsulates all capabilities in terms of front-end, back-end, UX, etc.
16:06:51 https://mometic.com/
16:06:59 s/Momentic/Mometic
16:07:29 ... Hard to blow up in this competitive market. I started to do Momo in Polymer.
16:07:57 ... A couple of years ago, we moved to React, with a single-page application.
16:08:21 ... For trying to do analytics with heads-up displays, this does not work very well.
16:08:47 ... In trading circles, people may have multiple screens. We tried to do something more special.
16:09:04 ... That is how we bumped into the APIs developed by the Second Screen WG.
16:09:43 anssik: I'm Anssi, working for Intel, chair of the group.
16:09:51 anssik: Francois is our W3C Staff contact.
16:10:26 msw: Mike, working at Google on Chrome.
16:11:03 anssik: Mark covers a lot of ground in this group, editing 3 specifications.
16:11:34 mfoltzgoogle: I've been with the group since the beginning, starting with the Presentation API. I'm the editor of that spec, available in Chrome for some time.
16:12:11 ... The second API that was incubated was the Remote Playback API. It is now available in Chrome and Safari. It was originally edited by Mounir; I've inherited that one.
16:12:43 ... Finally, the third incubation is the Open Screen Protocol, targeted at providing a common platform on which the APIs may be implemented interoperably.
16:13:25 ... The means of communication between devices is based on proprietary protocols right now. The Open Screen Protocol is an attempt to bridge that gap.
16:13:50 ... We will also look into Matter later today, which could fill a few of the needs as well.
16:14:21 ... DLNA was also in the same space, with a different approach.
16:15:32 btriebw: Also at Google, working with Mike, also on Fullscreen Popups, which we'll introduce later today.
16:16:01 cpn: Working at BBC. Interested in second screen support. Also co-chair of the Media WG and Media & Entertainment IG.
16:16:39 Fritz: Working at Fraunhofer as a student. I joined last year to develop tests for the Remote Playback API. Also involved in the CTA WAVE Project.
16:16:45 present+ Hakan_Isbiliroglu
16:17:39 Hakan: Joined the group recently. New to standards, will be listening in.
16:17:58 anssik: First, a few quick updates from the W3C ecosystem.
16:18:05 Subtopic: W3C Workshop on Permissions
16:18:12 -> W3C Workshop on Permissions report https://www.w3.org/Privacy/permissions-ws-2022/report
16:18:44 anssik: Workshop held in Dec '22, was well attended, diverse topics. I presented on Permissions UX Across Form Factors, including the Multi-Screen Window Placement API. We discussed interesting solutions to permission prompting relevant to this WG. See "push" vs "pull" permission flows in the report.
16:18:52 Subtopic: Screen Capture Community Group
16:18:58 -> Screen Capture Community Group https://www.w3.org/community/sccg/
16:19:24 anssik: This new CG had its first telcon last month. Scoped on new screen capture APIs and extending existing ones. Mark also participated in the CG kick-off and we can help coordinate between that CG and the Second Screen WG/CG.
16:19:42 Subtopic: Service Discovery Community Group proposed
16:19:46 -> Service Discovery Community Group proposed https://www.w3.org/community/blog/2023/02/08/proposed-group-service-discovery-community-group/
16:19:59 anssik: The purpose of this CG is to define a browser API allowing service discovery via mechanisms such as mDNS. Possibly relevant to OSP, which does discovery with mDNS.
16:20:03 ... The CG is still looking for supporters.
16:20:22 ... Any other updates from others?
16:21:12 RRSAgent, draft minutes
16:21:14 I have made the request to generate https://www.w3.org/2023/03/08-webscreens-minutes.html anssik
16:21:50 mfoltzgoogle: We've had a few conversations in the past about sites that may want to capture or generate media and stream that to other devices. If there is a way to address these use cases with the capture API, I could see that as a path forward.
16:22:19 ... That's the main thing that comes to mind in terms of coordination.
16:22:26 ... Possibly also with the Remote Playback API.
16:22:59 s/coordination/coordination with the Screen Capture CG/
16:23:02 Topic: Multi-Screen Window Placement API
16:23:08 ghurlbot, this is w3c/window-placement
16:23:08 anssik, OK. But note that I am not currently expanding issues. You can change that with: ghurlbot issues on
16:23:51 Subtopic: Production use of multi-screen layout in a web-based stock market tool
16:24:42 anssik: Honored to get your feedback, Brent.
16:24:56 ... Anything you would like to hear from Brent, Mike?
16:25:28 msw: I would love to hear what this API made possible that wasn't possible before on the Web, for user convenience.
16:26:18 Brent_Mometic: Generally, I'm the product UX guy and then go to my developers to turn that into code.
16:27:28 [Brent presents slides on MOMO Pro]
16:27:58 Brent_Mometic: MOMO is a stock scanner for day traders, enabling traders to take actions in real time.
16:28:54 ... I covered a bit of the background already. It's a web app, using React with a Node backend.
16:29:51 ... Ultimately, we want to turn the single-page application approach into a layout that spans multiple tabs, screens and devices.
16:30:09 ... The use of multiple screens is pervasive.
16:30:49 ... I wanted to have primary and secondary layout options, e.g. a desktop layout and a mobile layout.
16:31:13 ... I also wanted to maintain this "dynamic" responsive layout.
16:31:29 ... Also wanted the ability to drag and drop and resize components.
16:32:45 ... Just two screenshots: on the left side, a multi-layout picker to change the layout. A layout can be locked so that you can get back to it at any point in time.
16:33:03 ... Useful for traders to zoom in on different parts, then reset.
16:33:47 ... On the right-hand side is a zoom on the component toolbar, using mouseover. You can drag it or expand it.
16:34:14 anssik: This toolbar leverages the multi-window placement API, right?
16:34:17 Brent_Mometic: Yes.
16:34:48 ... We tell users that, for the best experience, they should enable the window placement API.
16:35:14 ... Here is a dump of the code we use to get screen details.
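[Scribe note: the code dump itself was not captured in the minutes. For context only, below is a minimal sketch of how a page typically requests screen details with the Multi-Screen Window Placement API; the function name and fallback behavior are illustrative, not MOMO's actual code.]

```js
// Illustrative sketch: enumerate connected displays, prompting for the
// window placement permission if the browser requires it.
async function getScreens() {
  if (!('getScreenDetails' in window)) {
    // API not supported: fall back to the single current screen.
    return [window.screen];
  }
  try {
    // May show a permission prompt on first use.
    const details = await window.getScreenDetails();
    return details.screens; // one ScreenDetailed entry per connected display
  } catch (err) {
    console.warn('Multi-screen details unavailable:', err);
    return [window.screen];
  }
}
```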
16:35:21 anssik: Any feedback on the permission prompt?
16:36:13 Brent_Mometic: I guess I'm pretty annoyed by the prompts. Users just want to get things going. It would be good to have a simple and cross-browser experience.
16:36:52 anssik: In general, it is annoying but required.
16:38:02 Brent_Mometic: Honestly, we've had great feedback on the implementation. Nobody's complained. Maybe people in the finance space are familiar with multi-screen issues and more patient.
16:38:53 ... Second area: getting precise layout positions is hard. We noted that some of the different window title bars needed to be tricked.
16:39:53 msw: Is the problem restricted to multiple windows of MOMO, or to multiple windows of MOMO and other applications?
16:40:15 Brent_Mometic: Both. These guys may have 5 different screens and they want precise positioning.
16:40:41 ... Here is an example of trader settings.
16:40:59 anssik: What setup is typical for traders? Typical positioning?
16:41:20 Brent_Mometic: I'd say 3. Usually a left and a right display, with one on top.
16:43:02 Brent_Mometic: [showing a demo video]
16:44:13 ... The demo shows the ability to create layouts across displays and save them for future use.
16:46:44 ... It's easy to run out of screen real estate with a single window, even on a 5K monitor. Creating multiple windows allows us to optimize things for users. When you're trading, oftentimes you'll be in a frenzy exploring through windows and you'll want to reset things at some point.
16:46:49 q+
16:47:49 ... [demoing the window placement permission under the lock in the address bar]
16:48:44 anssik: So users are not annoyed by the permission bar?
16:49:24 Brent_Mometic: Well, it is annoying. It would be good to register our domain or something like that to become a more trusted entity and get rid of the notification.
16:49:53 anssik: Permission prompting is still something that is being explored today.
16:50:09 Brent_Mometic: I think it should be handled more collectively.
16:50:52 msw: Certainly, saving and restoring window placements was one of the original use cases, so it's great to see it in action here.
16:51:42 q-
16:51:56 anssik: This feedback also helps other browsers look into the API.
16:52:10 Brent_Mometic: This works on Safari too.
16:52:30 msw: I suppose you can do same-display placement, but not across displays.
16:52:49 Brent_Mometic: My developers managed to code some workaround in practice.
16:53:48 ... Are we the first ones to do that?
16:54:00 msw: First full-fledged application I've seen, indeed.
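[Scribe note: to illustrate the save/restore placement use case discussed above, here is a minimal sketch; it is not MOMO's implementation, and the panel path, window name, and storage key are hypothetical.]

```js
// Illustrative sketch: open a companion panel on a chosen display and
// remember its placement so the layout can be restored later.
async function openAndRememberPanel() {
  const { screens } = await window.getScreenDetails();
  const target = screens.find(s => !s.isPrimary) ?? screens[0];
  const placement = {
    left: target.availLeft,
    top: target.availTop,
    width: Math.floor(target.availWidth / 2),
    height: target.availHeight,
  };
  window.open('/panel.html', 'momo-panel',
    `popup,left=${placement.left},top=${placement.top},` +
    `width=${placement.width},height=${placement.height}`);
  // Persist the placement; restoring the layout later replays these values.
  localStorage.setItem('panel-placement', JSON.stringify(placement));
}
```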
16:55:25 Subtopic: Use cases - Quick Distraction, Idle Distraction, and Social Watching Scenarios
16:55:52 -> Use cases: Quick Distraction, Idle Distraction, and Social Watching Scenarios https://github.com/anssiko/virtual-display/blob/main/secondscreen-2023-q1-vf2f-use-cases.pdf
16:55:55 anssik: I talked with our UX designers.
16:56:16 ... In the interest of time, I'm not going to go into details.
16:56:31 ... 4 use cases are described.
16:56:53 -> Document Picture-in-Picture (Specification) https://wicg.github.io/document-picture-in-picture/
16:56:59 -> Document Picture-in-Picture (Explainer) https://github.com/WICG/document-picture-in-picture
16:57:26 ... I believe that they can inform some APIS, including the Picture-in-Picture API (and the Document Picture-in-Picture proposal developed in the WICG) and the Presentation API.
16:58:14 ... Are you working together with the Google folks involved in that project (Tommy and Frank)?
16:58:32 RRSAgent, draft minutes
16:58:34 I have made the request to generate https://www.w3.org/2023/03/08-webscreens-minutes.html anssik
16:58:40 mfoltzgoogle: I know a bit of the background and history of that feature.
16:59:50 anssik: I think it intersects with some of our group's work.
17:00:27 Subtopic: Fullscreen Popups
17:00:27 -> Explainer: Creating Fullscreen Popup Windows https://github.com/bradtriebwasser/fullscreen-popup/blob/main/EXPLAINER.md
17:00:40 btriebw: I don't have a formal presentation, but the explainer describes the idea. Brief overview: right now on the web, you can create a popup on a single screen, and with multi-screen window placement, on a second screen.
17:01:04 ... However, you cannot create a fullscreen window on a second screen without requiring two user gestures.
17:01:42 ... After some back-and-forth, we settled on adding a flag to window.open. That seems like the easiest method.
17:02:25 ... Some use cases for this include a financial app wanting to open a chart view fullscreen on a secondary display.
17:02:55 ... Another example: a security app launching video feeds on an array of 6 displays.
17:03:13 ... We have started implementing this as a prototype in Chrome to get a demo out there.
17:03:23 ... There are quite a few open questions, even in the explainer.
17:03:35 ... One of the biggest questions right now is on focusing the window.
17:04:21 ... You may open multiple fullscreen window popups and we haven't really specified what happens. Which one takes the focus?
17:04:31 ... Another big open question is feature detection.
17:05:31 ... One of the drawbacks is that there is no way to detect support. A developer would have to call window.open, check whether the popup is fullscreen, and provide a fallback if not.
17:05:57 ... Another thing: when the popup is created, we need to make sure that there is no delay.
17:06:39 ... We need to make sure that a malicious server cannot leverage delays.
17:07:18 ... One thing we're considering is using capability delegation for the new window after creating a popup.
17:07:45 ... But that adds another drawback with transient user activation.
17:08:04 ... So we thought a flag on window.open was a better path.
17:08:23 RRSAgent, draft minutes
17:08:25 I have made the request to generate https://www.w3.org/2023/03/08-webscreens-minutes.html anssik
17:08:55 ... Another alternative we considered was to allow a target-screen fullscreen request after opening a cross-screen popup, but that seemed awkward.
17:09:33 msw: The ability to show fullscreen content on another display was one of the main requirements that we heard when we explored the space.
17:10:49 ... The semantics of working with content on another display are grounded on top-level windows. At TPAC, I mentioned exploring using a single user activation signal for multiple actions.
17:10:50 s/APIS/APIs
17:11:26 ... The need for fullscreen support keeps recurring, so we're exploring solutions in that space.
17:11:32 ... We're looking for early feedback on this.
17:12:08 ... We'll request TAG review soon, but sooner feedback would be welcome. We don't expect to move fast on this; we want to make sure that we're doing it right and that the UX is suitable.
17:13:46 q+ to ask about relationship to the presentation API 1UA mode
17:13:53 ... The window.open API is functional (despite the serialized options argument). The error path works.
17:14:50 ... We've re-imagined a way that this would work well with the existing API. As Brad said, we were rather thinking about capability delegation initially.
17:14:56 ack cpm
17:15:11 ... If people propose a replacement for window.open, we'd jump on that for sure.
17:15:38 cpn: Some of that seems to relate to the 1UA case in the Presentation API. Is that something you looked at?
17:16:38 msw: Replacing the 1UA mode is something we looked into, indeed. This would definitely bring the API closer to what the Presentation API 1UA mode would allow.
17:17:07 ... It would be the difference between having a handle on the window versus having a communication channel.
17:18:12 mfoltzgoogle: The two main differences are: 1) the scope of window placement does not currently include wireless displays; 2) because we designed the API to be agnostic to where the content is rendered, we only allow messages to be exchanged.
17:18:36 ... The window handle gives much more flexibility.
17:19:11 cpn: That sounds similar to what they're doing with Document Picture-in-Picture.
17:19:41 anssik: I suppose you're looking into a custom media player with accessibility features for the Document Picture-in-Picture API.
17:19:43 cpn: Yes.
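[Scribe note: a minimal sketch of the explainer's proposed window.open flag, added here for illustration only; the exact feature string and detection behavior are still open questions in the explainer.]

```js
// Illustrative sketch: request a fullscreen popup on a secondary display in a
// single user gesture, per the proposal discussed above.
async function openFullscreenChart(url) {
  const { screens } = await window.getScreenDetails();
  const target = screens.find(s => !s.isPrimary) ?? screens[0];
  const win = window.open(url, '_blank',
    `popup,fullscreen,left=${target.availLeft},top=${target.availTop}`);
  // There is no up-front feature detection: if the flag is not honoured the
  // result is an ordinary popup, and a same-origin opened page can check its
  // own state and fall back (e.g. ask the user for another gesture).
  return win;
}
```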
17:22:39 Topic: Remote Playback API v1
17:22:49 ghurlbot, this is web-platform-tests
17:22:49 anssik, OK. But note that I am not currently expanding issues. You can change that with: ghurlbot issues on
17:22:54 #35827
17:23:05 https://github.com/web-platform-tests/wpt/pull/35827
17:23:12 ghurlbot, issues on
17:23:12 tidoust, OK.
17:23:22 #35827
17:23:23 https://github.com/w3c/web-platform-tests/issues/35827 -> Pull Request 35827 Adding tests for remote playback API (FritzHeiden) infra, wg-secondscreen, remote-playback
17:23:55 Fritz: I tried to fix all the issues from the feedback I received a few days ago. I think things are now fine.
17:24:34 ... Once I receive further feedback, I don't think that there's much else to do. Once that is merged, we will provide test results.
17:25:03 anssik: Thanks Mark for picking up this one as well.
17:26:36 mfoltzgoogle: Thanks for processing the feedback quickly. The two items for discussion are: 1) do we want to use display availability at the beginning of the test? You seemed to prefer atomic tests, that's fine; 2) do we want to document what users need to run these manual tests?
17:27:02 Fritz: Yes, we'd need clarity on what browsers and devices we can test.
17:27:40 mfoltzgoogle: I can certainly speak for Chrome. We can maybe get feedback from Safari and Edge.
17:27:54 ... I'm not aware of support in Firefox.
17:28:15 anssik: Have you done any recent work on this API in Chrome?
17:29:10 mfoltzgoogle: Not on the API itself. On Chrome, the API is exposed on Chrome for Android, not on desktop, except for the disabled attribute.
17:29:26 ... Further down the stack, there may have been some changes.
17:29:37 ... The latest stable version of Chrome should be fine.
17:29:56 Fritz: What about devices to use it with?
17:30:29 mfoltzgoogle: I would probably recommend a Chromecast for a TV. It's a current in-market device, so it tends to have the most recent release of the software.
17:30:39 ... There are other devices that are compatible, but I would start with that one.
17:31:28 q?
17:31:40 anssik: Thank you for your work on this.
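[Scribe note: for context, a minimal sketch of the API surface these manual tests exercise; element selectors are illustrative.]

```js
// Illustrative sketch: watch for remote playback devices, then prompt the
// user to pick one from a click handler (a user gesture is required).
const video = document.querySelector('video');
const castButton = document.querySelector('#cast');
if ('remote' in video) {
  video.remote.watchAvailability(available => {
    castButton.hidden = !available; // show the button only when a device exists
  }).catch(() => { /* availability monitoring not supported */ });
  castButton.onclick = () => {
    video.remote.prompt()
      .then(() => console.log('Remote playback state:', video.remote.state))
      .catch(err => console.warn('No remote playback device selected:', err));
  };
}
```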
17:31:42 Topic: Presentation API v1
17:31:47 ghurlbot, this is w3c/presentation-api
17:31:47 anssik, OK.
17:31:49 q- cpn
17:31:53 #507
17:31:54 https://github.com/w3c/presentation-api/issues/507 -> Issue 507 PresentationRequest.getAvailability() could always return a new Promise (mfoltzgoogle)
17:33:06 -> getAvailability() algorithm https://www.w3.org/TR/presentation-api/#getting-the-presentation-displays-availability-information
17:34:15 mfoltzgoogle: Trying to fix a test failure in our implementation, we realized that we did not respect the first step, which requests the user agent to return the same Promise from a previous call.
17:34:22 ... It turns out that it is complicated to implement.
17:34:40 ... I noticed that few of the other APIs follow this pattern; they return new Promises in all cases.
17:35:05 ... I'm proposing that we drop this step and return a new Promise each time.
17:35:21 ... This is simpler from an implementation perspective and more consistent with other APIs.
17:35:51 q+
17:36:06 ... I'm proposing to prepare a PR to align the spec with what our implementation does.
17:36:07 anssik: You can also look at https://www.w3.org/TR/battery-status/#the-getbattery-method for a similar design.
17:37:34 anssik: Is it premature optimization?
17:38:10 mfoltzgoogle: Probably. You shouldn't need to retrieve the availability more than once, and this will generate a single event no matter how many calls you make.
17:38:27 ... Promises are pretty cheap at the end of the day. I would call it a premature optimization.
17:38:42 ack cpn
17:39:38 cpn: I think Mark answered the question I was about to ask, about handlers attached to multiple promises. But you're suggesting that this would be a badly written application. Wondering about potential impact.
17:40:20 mfoltzgoogle: Different resolvers may resolve in different micro-tasks. I don't anticipate any compatibility issue but may need to check with JS experts.
17:40:46 ... Also, practically speaking, that's what we've been shipping for some time.
17:41:40 q?
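[Scribe note: a minimal sketch of the usage pattern discussed above. Under the proposed change each getAvailability() call would return a new Promise, but a page only needs one call plus the change event; the receiver URL is illustrative.]

```js
// Illustrative sketch: check current display availability once, then listen
// for change events instead of calling getAvailability() repeatedly.
const request = new PresentationRequest('https://example.com/receiver.html');
request.getAvailability().then(availability => {
  console.log('Presentation displays available:', availability.value);
  availability.onchange = () => {
    console.log('Availability changed:', availability.value);
  };
}).catch(() => {
  // The platform cannot monitor availability continuously; the page can still
  // call request.start() from a user gesture and let the user pick a display.
});
```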
17:41:54 Topic: Matter/Connectivity Standards Alliance coordination
17:41:59 ghurlbot, this is w3c/openscreenprotocol
17:41:59 anssik, OK.
17:42:56 RRSAgent, draft minutes
17:42:58 I have made the request to generate https://www.w3.org/2023/03/08-webscreens-minutes.html anssik
17:44:06 mfoltzgoogle: In our previous F2F, I presented an overview of Matter based on what was publicly available.
17:44:26 Slideset: TBD
17:44:31 [slide 2]
17:44:36 [slide 3]
17:45:03 mfoltzgoogle: Matter is a set of specifications to allow smart home devices to interoperate.
17:45:39 ... I think the standards organization has been around for some time, working on Zigbee. They re-branded a little bit and developed Matter.
17:45:48 [slide 4]
17:46:43 mfoltzgoogle: A lot of devices are for adjusting your home environment, like lightbulbs. Media devices are not the core focus, although they are supported.
17:46:48 [slide 5]
17:47:01 -> Matter 1.0 specifications https://csa-iot.org/developer-resource/specifications-download-request/
17:47:07 -> Matter 1.0 reference implementation https://github.com/project-chip/connectedhomeip/releases
17:47:10 mfoltzgoogle: The spec was published in December 2022, along with certification tools.
17:47:12 [slide 6]
17:48:11 mfoltzgoogle: Over 600 products have been certified according to their web site. Not sure how many products are available on the market. Many of the members of the alliance have added support to their platforms: Google, Apple, Amazon, LG.
17:48:15 [slide 7]
17:48:19 [slide 8]
17:48:53 mfoltzgoogle: Matter tends to be a full stack. There is an application layer. Underneath that, there is a networking layer with IPv6 as the basic foundation.
17:49:07 ... On top of that, applications can communicate through TCP and UDP.
17:49:35 ... Underneath, they support different link layers, including Thread.
17:49:43 [slide 9]
17:50:02 mfoltzgoogle: The application layer consists of a data model and an interaction model.
17:50:22 ... The messages communicated between devices use "action framing".
17:50:41 ... Below that, there's transport management: how devices put bytes on the network, etc.
17:50:55 [slide 10]
17:51:30 mfoltzgoogle: Some devices will connect directly to the network, some devices may take part in the Thread network. Some may serve as a bridge.
17:51:38 ... Matter makes that agnostic.
17:51:43 [slide 11]
17:52:06 mfoltzgoogle: Apart from the actions, the other big part of Matter is how to add a new device to the set of controlled devices.
17:52:17 ... There is a ceremony to go through that they call commissioning.
17:52:26 ... A few different paths.
17:52:47 ... It's interesting because it parallels in some respects the work we've done in the Open Screen Protocol to pair devices.
17:53:02 hober has joined #webscreens
17:53:06 ... First, discovery, with BLE, DNS-SD.
17:53:28 ... Like we do in OSP, they use SPAKE, through SPAKE2+.
17:54:09 ... A couple of additional steps, challenging the device to prove that it's authentic. They assume that there is a root commissioner certificate.
17:54:22 ... The commissioner will authenticate the device.
17:54:35 ... The device gets a node ID.
17:54:45 ... In Matter terms, that's called a fabric.
17:55:05 ... Then an operational certificate to authenticate to other devices.
17:55:21 ... High-level overview; I would prefer to view them as a black box.
17:55:24 [slide 12]
17:55:43 [slide 13]
17:56:11 mfoltzgoogle: Trying to map things, this slide shows how both protocols relate.
17:56:45 ... For action framing, we decided to use CBOR. For security, we decided to use TLS.
17:57:06 ... At the lower level, they have their own TCP protocol. We decided to use QUIC to manage transport between devices.
17:57:28 ... Because this maps up nicely, we can see what we can take.
17:57:32 [slide 15]
17:58:40 mfoltzgoogle: The first possible approach is a layer cake. We keep CBOR, and we tunnel the rest. The OSP agent could run on top of the Matter stack if we can access a slightly lower level in the Matter stack.
17:59:14 ... The pro here is that we can reuse Matter for a lot of tricky issues around authentication (which we're still working on in OSP).
17:59:28 ... If we can make CBOR the interface, not many changes to OSP.
17:59:34 [slide 16]
17:59:48 mfoltzgoogle: The second approach is more of a "bootstrap" approach.
18:00:40 ... OSP certificates and IP ports get exchanged through Matter nodes, and OSP agents take care of the rest without knowing that Matter was used.
18:01:01 ... That feels like a good approach. Some details to look at though.
18:01:34 ... Practically speaking, media devices will be IP-based devices, not Thread devices, so maybe the corner cases do not matter a lot.
18:01:39 [slide 17]
18:01:50 mfoltzgoogle: Some big questions are listed on this slide.
18:02:00 ... How to use Matter transport to convey CBOR messages?
18:02:36 ... Is Matter transport suitable for streaming use cases?
18:02:59 ... Also, for the "bootstrap" approach, how to leverage Matter?
18:03:18 ... Finally, integration with the video player that Matter includes.
18:03:22 [slide 18]
18:04:12 mfoltzgoogle: In OSP, there are 10 v1-spec non-security-related issues. The security issues all have a related PR. If we land those, I think that we should be in good shape.
18:04:26 ... There is a privacy-related issue, which may or may not need a PR.
18:04:30 ... And a couple of meta issues.
18:04:40 ... We're down to a fairly reasonable number of issues on the spec.
18:04:51 [slide 19]
18:05:08 mfoltzgoogle: My work plan for the spec is to land the PRs for the security-related issues in OSP.
18:05:33 ... Then explore the tunneling and bootstrap approaches.
18:05:48 ... And if we can leverage the Matter SDK, that would be great.
18:05:50 q?
18:06:32 Slides: https://docs.google.com/presentation/d/e/2PACX-1vRFILfznQ6t5Am4bnTEpe7AAWFCGuqwpR3mxl9pYkleZJLhlXztPoqxDhHd62e8qm-KVLYry0GbI7Iw/pub?start=false&loop=false&delayms=3000&slide=id.g1321cc3f2f9_0_151
18:07:11 anssik: The Matter specification is quite long.
18:07:27 mfoltzgoogle: Yes, it tries to be almost an entire distributed OS.
18:07:45 ... It covers a whole bunch of use cases that are not related to second screen.
18:07:59 ... That's also why I'm wondering whether writing some code might be easier.
18:08:36 anssik: I wonder about interest from TV vendors.
18:09:17 cpn: Not something that we've discussed in the M&E IG. If we could get someone from one of the vendors who could come and present, that would be great.
18:09:59 mfoltzgoogle: We have folks internally that worked on Matter. LG has participated in this group in the past and is one of the vendors that added support. Perhaps we could ask them to join us at a future meeting.
18:10:10 anssik: That's a great point.
18:10:46 ... Have you found other community efforts around Matter?
18:11:15 mfoltzgoogle: The Matter GitHub is probably the best way to reach developers who are working hands-on on the protocol.
18:12:52 ... There's still an open question as to whether it makes sense to expose Matter more directly to the web, or Matter-like functionality directly to the web, to allow web apps to interact with smart home devices. I don't know whether this is the right group to discuss that.
18:13:09 anssik: I believe someone raised that as an issue in the repository.
18:13:26 -> Browser support in Matter https://github.com/project-chip/connectedhomeip/issues/4270
18:13:35 ghurlbot has joined #webscreens
18:13:38 mfoltzgoogle: When I reviewed the spec, there was not a clear way for a device to give permission to a web app. That may be a gap.
18:14:46 q?
18:16:26 mfoltzgoogle: Re. OSP, at our next meeting, I think we'll need to look at how to get wide review on the spec.
18:18:11 anssik: Many thanks for the meeting. We went through the entire agenda, so no need for the day 2/2 meeting. I'll cancel it!
18:21:14 RRSAgent, draft minutes
18:21:15 I have made the request to generate https://www.w3.org/2023/03/08-webscreens-minutes.html tidoust
18:50:08 s|Slideset: TBD|Slideset: https://lists.w3.org/Archives/Public/www-archive/2023Mar/att-0001/SSWG_-_Matter_and_Open_Screen_Protocol__Updated_March_2023_.pdf
18:50:13 RRSAgent, draft minutes
18:50:15 I have made the request to generate https://www.w3.org/2023/03/08-webscreens-minutes.html tidoust
20:32:26 Zakim has left #webscreens