W3C

Second Screen WG F2F - Day 2/2

16 September 2019

Attendees

Present
Alex Russell (Google)
Anssi Kostiainen (Intel)
Chris Needham (BBC)
Daniel Libby (Microsoft)
Eric Carlson (Apple)
Francois Daoust (W3C)
Grisha Lyukshin (Microsoft)
Jer Noble (Apple)
Josh O'Connor (W3C)
Mark Foltz (Google)
Masaya Ikeo (NHK)
Mike Wasserman (Google)
Mounir Lamouri (Google)
Peter Thatcher (Google)
Raymes Khoury (Google)
Staphany Park (Google)
Takio Yamaoka (Yahoo)
Takumi Fujimoto (Google)
Thomas Nattestad (Google)
Victor Costan (Google)
Yuki Yoshida (ACCESS)
Chair
Anssi
Scribe
Chris, Francois

Meeting minutes

See also:

Anssi: We covered all day 1 topics yesterday. Recent additions to day 2 are around accessibility and display enumeration and positioning.
… Starting the day with new API features for Remote Playback API.
… A bit of planning at the end as we need to recharter by the end of this year.
… We should wrap up at noon, as I have a hard stop.

Remote buffer state for Remote Playback + MSE

See Remote buffer state for Remote Playback + MSE (PDF, p83-86)

takumif: There are some conditions under which the media playback on the receiver side may not be smooth, for instance if the buffer on the receiver is too small and the controller keeps pushing.
… Or if the bandwidth is smaller than the media the controller is pushing onto the buffer.
… There are two ways to solve these: add a new API or use existing APIs; either way, it also needs to be solved at the protocol level.
… If the controller knows that the receiver buffer is small, it can limit the transmission to the receiver. For the bandwidth issue, we can use the MediaElement.buffered and readyState attributes. That is, we can synchronize the buffered state and readyState state on the two devices, so that the controller knows that this happens.
… Alternatively, I thought about adding a new state attribute to the remote playback object.
… "remotingBufferState" attribute that tells whether it has enough data or too much data.
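
A minimal sketch of the app-side logic such an attribute would enable: the controller stops feeding its SourceBuffer while the receiver reports backpressure. The attribute name and the "too-much-data" / "enough-data" values come from the slides; everything else (function names, the `sink` stand-in for a SourceBuffer) is hypothetical, and this is plain logic runnable without a browser.

```javascript
// Throttle appends when the (proposed, hypothetical) remotingBufferState
// attribute reports that the receiver buffer is overfull.
function shouldAppendNextSegment(remotingBufferState) {
  return remotingBufferState !== "too-much-data";
}

// Hypothetical MSE player loop: `remote` mimics a RemotePlayback object
// carrying the proposed attribute; `sink` stands in for SourceBuffer.appendBuffer().
function pumpSegments(remote, sink, segments) {
  const appended = [];
  for (const segment of segments) {
    if (!shouldAppendNextSegment(remote.remotingBufferState)) break; // backpressure
    sink.push(segment);
    appended.push(segment);
  }
  return appended;
}
```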

takumif: To know whether the buffer is full, we need some info at the protocol level.

Peter: If part of the solution of the first problem is to cause the sender not to send as much data, then from a JavaScript perspective, it looks like the two problems are the same.

anssik: Wondering how AirPlay and other implementations handle this situation.

ericc: It's different because it doesn't load the data on the sender, unless you're doing screen scraping, where I don't know how that works.

mfoltzgoogle: We don't have these features implemented in our current implementation.
… we don't hit the problem of buffer running out of space in the mirroring case in practice.
… Here, we need a backpressure signal.

Mounir: Why not expose the buffer size?

Peter: I'm not sure we're going to expose that at the API level.
… A new API is a possibility though.
… I agree we should add something to the protocol.

ericc: The application may not even know that it is remoting.

mfoltzgoogle: If we can model that in a way that complies with the MSE API, that would be good.

Peter: I assume that when applications call appendBuffer, things go to the buffer directly.
… How can we expect people to be sophisticated enough to use the existing API?
… "Do you know that sometimes you call appendBuffer it won't go to the buffer?"
… Remoting may happen today without the application knowing it?

ericc: Yes.
… With AirPlay, you know when remote playback is active, whether you requested it or not.

Peter: You can check remote.state or whatever.

ericc: Yes.

mfoltzgoogle: When transition happens from local to remote playback, will there be a change in buffered and so on?

ericc: It would be better if applications could be aware that things could be made much more efficient.
… Have a remote state and fire an event when the state changes.

[looking at the remotingBufferState proposal]

Mounir: If buffered range is exposed, why would you need that on top of it?

takumif: The sender doesn't know about the decode time or how long it takes to push things over.

anssik: What is the common case for most developers?

Peter: To make it easy, we should add something like this.

Mounir: I don't see the need for "insufficient-data" and "enough-data".

Peter: It can be about network bandwidth.

mfoltzgoogle: Two variables: buffer size and network conditions.
… "insufficient-data" would be one way to specify that network bandwidth is not enough.

Mounir: Make sure, as much as we can, that we want things to work automatically without requiring changes in existing applications.

takumif: Maybe we just need "too-much-data".
… Too little data can be handled by the buffered time range.

Mounir: Could that just be an event?

mfoltzgoogle: If we do the protocol change, do we need to expose that to the API?

Peter: I think that's the question. And if we don't make it easy, will it be used?
… Roughly, it seems we want to add something but smaller than that.

PROPOSED RESOLUTION: Add a backpressure signal to the OSP

Resolved: Add a backpressure signal to the OSP

mfoltzgoogle: I think we want more feedback on the second part.
… What's the best way to expose the info to developers? How to make the mechanism consistent with existing APIs?

takumif: The Remote Playback API says that which of the media element attributes is sync-ed is up to the application. This would require that buffered time range is sync-ed.

Peter: [details typical computation on buffered time range]

Mounir: Isn't it the case that if you care that much, you would use the Presentation API?

Peter: Stepping back a bit, this whole conversation started trying to find a way to do MSE + Remote Playback API.

One prompt for Presentation and Remote Playback APIs

See Proposal: One prompt for Presentation and Remote Playback APIs (PDF, p87-90)

takumif: Each of those APIs has its own way to prompt, and the prompts may list different devices. It would be nice to show a single prompt with the whole list and use either the Remote Playback API or the Presentation API depending on the selected display.
… [showing example code]
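
The slide's example code is not reproduced in the minutes. As a rough sketch of the routing such a single prompt implies, with every name (the function, the capability flags, the dictionary keys) invented for illustration:

```javascript
// Hypothetical routing step behind a unified prompt: the dictionary argument
// says which APIs the page supports, and the user's selected display is
// dispatched to whichever API it can handle. None of these names are spec'd.
function routeSelectedDisplay(display, { remotePlayback = false, presentation = false } = {}) {
  if (remotePlayback && display.supportsRemotePlayback) return "remote-playback";
  if (presentation && display.supportsPresentation) return "presentation";
  return "unsupported";
}
```

A dictionary parameter (rather than an array) matches the preference voiced below, since it leaves room for future options without positional ambiguity.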

anssik: What is the concrete use case behind this proposal?

takumif: Some receivers may only support Presentation API or Remote Playback API. If a site wants to support both of those, right now the site would need to show two buttons.

Jer: And no way for the user to know which button to click.

mfoltzgoogle: The "prompt" is permission to use the device once?

takumif: Yes, we would want this permission to expire almost immediately.

anssik: Is this a new pattern on the Web platform?

mfoltzgoogle: There is user gesture, and user activation. This seems a little bit different.

ericc: I'm not sure. It's data that is valid for the duration of the Promise callback.

Jer: Why do we want this to expire this quickly?

anssik: So that it happens in context, not one hour after the user selected a display.

Jer: Problem is that if people use weird JS frameworks that post messages all over the place, you'll end up with things broken.

ericc: But if that's the behavior from the ground up, then that's fine. It breaks when you change behavior.
… We can restrict it at start, and if we find out that there are too many problems, we could relax things later on.

Jer: I'd like to see prompt take an array. In the future, we may have more things that can be remoted.

mfoltzgoogle: A dictionary might be better.

anssik: I'm hearing support for the proposal with a dictionary parameter.
… I'm slightly surprised that no one used the same design.

mfoltzgoogle: A use case that comes to mind is media capture.

tidoust: Do we need to get feedback from other groups on this design? Privacy perhaps, WebRTC on media capture, etc.

Jer: Yes, we may be missing some context in which this is used.

anssik: A Pull Request would help get more feedback.

Mike: Folks in WebXR are talking about bundling permissions. Similar question about requesting access to device capabilities simultaneously as part of XR request.

takumif: OK, I'll just make a pull request on the Remote Playback API.

mfoltzgoogle: My suggestion would be to add it to the Presentation API and reuse the namespace there.
… It's fine to have a pull request for now and decide afterwards whether it's V1 or V2.

Mounir: More a Chrome question, is there a real use case from Web developers?

takumif: Not aware of specific feedback.

Mounir: Just wanted to point out that we probably don't want to specify features that are not triggered by actual needs.

Peter: Need will become more important as we expose devices through the APIs.

Mounir: No concern about the use case, more concerned about whether it's pressing.

PROPOSED RESOLUTION: Create a pull request for a common prompt along the lines presented and gather feedback on the design from other groups

Resolved: Create a pull request for a common prompt along the lines presented and gather feedback on the design from other groups

Presentation receiver friendly name

See Proposal: Presentation receiver friendly name (PDF, p91-93)

takumif: We can add the friendly name to either the controller or receiver side, on PresentationConnection or PresentationReceiver.
… It is only necessary to expose the friendly name on one side, since both sides can communicate. It makes more sense to expose it on the controller side.

ericc: It is only available after the user chooses the device?

takumif: Yes.

ericc: In AirPlay, we replace the video element with a message that says "playing on [foo]" with the name of the device, so I can imagine that being useful to Web applications.
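
A sketch of that UI pattern using the proposed attribute. "receiverName" is the proposal under discussion, not a shipped API, and the fallback reflects the privacy mitigation mentioned below (a user agent may return a generic or empty name):

```javascript
// Build the "playing on <name>" message shown in place of the video surface
// once remote playback starts. Falls back to a generic label when the UA
// withholds the friendly name.
function remotePlaybackMessage(receiverName) {
  const name = receiverName || "a remote display";
  return `Playing on ${name}`;
}
```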

cpn: Is there a privacy issue even after you gave permission to use the device?

ericc: For media capture, it's possible to enumerate the capture devices that are attached to a machine. As originally written, a script could just enumerate the devices and get the display names of the devices without a user prompt. Sites were using that for fingerprinting. We changed the spec.
… Names are always empty until the user grants permission through a prompt, and we lie about devices.
… Point is we expose localized string provided by the device only after the user grants permission to capture, which I think is similar to what is being proposed here.

Mounir: For the Presentation API, you may already have access to that name in 2-UA mode.

ericc: Definitely something that needs to be reviewed by privacy folks, but my guess is that it is OK after prompt.

anssik: It seems that we have support for the "receiverName" and that we should refer back to the media capture enumeration API and reach out to PING.

PROPOSED RESOLUTION: Add "receiverName" to "PresentationConnection" and seek privacy review from PING noting similarity to "enumerateDevices"

ericc: Note in enumerateDevices, attribute is named "label".

Jer: If you're concerned about privacy, you can add a mitigation that user agents can use a generic name.

PROPOSED RESOLUTION: Add "receiverName" to "PresentationConnection" and seek privacy review from PING noting similarity to "MediaDeviceInfo.label" in "enumerateDevices"

Resolved: Add "receiverName" to "PresentationConnection" and seek privacy review from PING noting similarity to "MediaDeviceInfo.label" in "enumerateDevices"

RemotePlaybackState enum can become misleading when changing media.src (#125)

Issue #125

Jer: On both YouTube and Netflix, different behaviors result in broken experiences for users who want to use AirPlay.
… YouTube has a single video element and waits for an event indicating that the user picked an AirPlay display. Netflix listens for the wireless presentation changed event.
… Even big web sites get this wrong when the display is not compatible with the current media stream.
… With AirPlay, we fire connecting then disconnected, but applications need to understand that this means not compatible.
… Straw man proposal would be to have an unsupported state.
… Mounir made a counter proposal.

Mounir: Counter proposal is to provide some information in the disconnected event that explains why you got disconnected. If we create a new state, you have to handle state transitions.

Jer: From our perspective, it's a very common use case, if not 100% of use cases.
… If you add a message to disconnected, you have to prompt the user again, and the user may select the same display. Not a great user experience.
… The new state would allow a retry without prompting again.

mfoltzgoogle: In Chrome, we mark remote playback as unavailable when no compatible devices are available (e.g. when MSE is used).

Mounir: If you are an MSE player, you have to be aware that some devices don't support remote MSE, so you need to have a file fallback.

Jer: All clients have to create multiple video elements to support all of these?

Mounir: If you want to do that as a fallback, yes.
… The benefit for us with the current API is that you make all decisions before prompting. You check availability.

ericc: The way it works in WebKit right now is that we fire an event saying we'll try to remote the playback. If the application knows it won't work, it is responsible for changing the src.

Jer: That will fail with ads and playlists. My point is that the scenario is complex even for the biggest video providers.

ericc: Also, they may not have a second encoding. They need to be able to detect that user wanted to play remotely and that it failed.
… The expectation is that, once a remote session is started, everything you play locally actually plays remotely.

mfoltzgoogle: No fallback to screen rendering when playback does not work?

ericc: No.
… On Safari, when remote playback does not work, playback plays locally.

Jer: Which triggers a lot of feedback from users.

ericc: Assumption is that the system plays the video. Not really the case with MSE.
… Vimeo is another example where it doesn't always work right for the same issue.

[Some providers use two video elements to control the ability to switch between sources and preserve MSE state]

Mounir: Same issue as codec capabilities. Remote Playback API solves this before connection.

Peter: Problem is that you don't have a way to filter the list?

ericc: Right. Before the page is even loaded, the list is established.

Mounir: Could we have an event that just says "unsupported codec"?
… You don't want to switch to disconnected because you're not disconnected; the event would solve the problem.

Peter: You might be in a remote session even before the page is loaded?

ericc: Correct. The way we handle it is by firing an event.

Peter: Issue is you want to convey that you're switching from connecting to connected but cannot play.

ericc: Yes.

Peter: Use case is somehow the user engages remote playback from the system while or before the page loads.

Jer: Also transition between e.g. an MP4 file to an MSE-based source.

Mounir: I wouldn't be surprised if we went to disconnected whenever a video stops.

Jer: Nothing that says that state should go to connecting when source changes. Unspecified.

Peter: So question is about the meaning of connected.

ericc: Yes, from my perspective, we are connected. Connected but incompatible.

Peter: We use the term "unavailable".

Mounir: The spec has NotSupportedError and NotFoundError.

mfoltzgoogle: The main question I had is what can the app do about it.

Jer: You could imagine a separate API that tries to resume playback but doesn't require a prompt.

mfoltzgoogle: If the application needs to do more than changing the src, how would it know that the new src would be compatible with remote playback?

Jer: That's a good question.

[discussion on monitoring display availability as specified in the spec]

anssik: I think we have understanding that it is a wide issue.

Peter: Option A would be to add a state that is connected-but-unavailable. To get from there to connected, you change the src.
… Option B would be disconnected state with a reason. To get back to connected, there's some reconnection mechanism without prompt.
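
A sketch contrasting the two options from the app's point of view; every state and reason name here is hypothetical, since neither shape is specified:

```javascript
// Option A: a new state ("connected-but-unsupported") tells the app to
// change src. Option B: "disconnected" carries a reason, and the app uses
// a promptless reconnection mechanism. Names are invented for illustration.
function nextAppAction(event) {
  if (event.state === "connected-but-unsupported") {
    return "switch-src"; // Option A: swap in a remoting-compatible source
  }
  if (event.state === "disconnected" && event.reason === "unsupported-source") {
    return "reconnect-with-fallback"; // Option B: reconnect without re-prompting
  }
  return "none";
}
```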

Jer: Either one would be good. I note it's not even clear that changing src is supported by the spec. Is it unspecified on purpose?

Mounir: Instead of creating a new state, could we go back to connecting?

Jer: You can just imagine people showing a spinner forever if we stay at connecting. A bit weird to go to connected then immediately back to connecting.
… If you're stuck in connecting, you'd need a separate event to tell the application that it needs to do something.

Peter: So Option C is a new event

[Note that availability in the spec is for the union of all devices]

Peter: With Option C, you'd stay connected, and an app that doesn't listen to the new event would just think everything's fine.

Jer: Yes.

Peter: Fine with that?

Jer: That matches current behavior.

Peter: Some preference for Option C.

mfoltzgoogle: That seems like a media error state.

Peter: But you also need to change that when you change the src.

PROPOSED RESOLUTION: Look at how the user agent should behave in the case of source switching in the Remote Playback API

Mounir: We may have to keep in mind that it may be optional.

Resolved: Look at how the user agent should behave in the case of source switching in the Remote Playback API

Window placement & Screen Enumeration

See:

Mike: We're exploring window placement & screen enumeration APIs.
… Native apps often use multiple screens (lots of use cases)

Mike: Sites want to be able to control placement on the second screen
… The OS desktop comprises multiple physical displays
… [photos of examples]
… Displays can be high DPI, wide color gamut
… The Screen interface will describe the current display that the content is presented on
… Pages don't have a way to introspect the other connected displays in a similar way
… The existing APIs for opening, moving, resizing windows are limited, due to implementation specific behaviour around moving windows between displays etc
… There's a gap between what native apps and web apps can do
… Want to provide the ability to introspect connected displays via screen enumeration API
… Also support window placement across any display

Mark: What about virtual desktops?

Mike: Not considered yet, initial reaction: we wouldn't want sites to be able to move pages to other workspaces
… Also thinking about how pages want to be able to control opening maximised, child, or modal windows
… Some of this is exploration, opening multiple windows as in a dashboard display
… And events
… [possible screen enumeration API]
… Returns an array of objects similar to the current Screen object, with width, height, orientation, scale factor, etc
… Sites could use this together with a window placement API, to allow opening of presentation on one display with notes on another
… Also, show an advert across 5 displays
… Show medical imaging app on high bit depth display
… [possible window placement API]
… similar to window.open, but more structured and helpful
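
A hedged sketch of the combination just described: enumerate screens, pick one other than the current display, and build coordinates for `window.open()`. The screen objects are mock data mirroring the attributes mentioned above (left, top, width, height); the enumeration call itself is the proposal, not an available API:

```javascript
// Choose a screen for the presentation window: prefer any screen other than
// the one the controller window is on (mock screen-info objects).
function pickPresentationScreen(screens, currentScreen) {
  return screens.find((s) => s !== currentScreen) || currentScreen;
}

// Build a window.open() features string; coordinates are in the shared
// cross-screen coordinate space discussed above.
function openFeatures(screen) {
  return `left=${screen.left},top=${screen.top},width=${screen.width},height=${screen.height}`;
}
```

For the slides-plus-notes use case, the page would open the slides with `openFeatures(secondary)` and keep the notes on the current screen.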

Mounir: How does it help move across displays?

Mike: Looking for input and feedback on that. Existing APIs could be sufficient, could be accessed with permission

Mounir: Are left and top here the absolute position of each display?

Mike: It's in screen space, already exposed to the web

Mounir: Could pass in a display object instead of passing left and top

Mike: Do you have to specify the screen space or display space coordinates?

Mounir: You can't moveTo a different display today

Mike: You can't, but the coordinates are in screen space

Mike: Have considered this in the explainer

Mounir: Want to avoid incompatibility, with existing content, and between implementations

Mark: What window types would this API apply to?

Mike: We've mostly been thinking about popup windows

?: Also stand-alone windows

Mike: Could add a new API for disambiguating display specific coordinates
… [Additional explorations]
… A number of issues come up when movement is disjoint from sizing, could be an opportunity to address
… Similar to requestFullScreen, could have a request maximized state
… Important to know when a window has been moved, so the web app could store and then restore their window placement

Jer: Risk in enumerating devices

Mike: Permission request prompt

Mounir: Don't like use of permission prompts except for privacy and security
… It'll make using the API a pain

Mike: [demo of Chrome OS with two virtual displays]
… Can specify coordinates to open on current or second display
… Simple use case is to open a presentation on the second display

Anssi: Please share the slides with the group

Mike: We're working on the explainers, have sent intent to implement for screen enumeration

Staphany: There's a WICG Discourse thread on Screen enumeration

Anssi: WICG is the appropriate next step for this, or we could recharter the Second Screen CG to add this

Mark: I'd like to incubate this further, then look at group consensus on whether to adopt into the CG

Mounir: The Screen interface is in CSS, so it could go there, and the Window stuff is in HTML

Thomas: Does this group think this is a reasonable addition to the web?

Eric: The privacy implications need to be considered
… Want to consult with PING and privacy experts

Mounir: Want to avoid having the top of the window off screen

Eric: Also windows that are too small, so the user is unaware

Mounir: Also always on top windows

Mark: Getting input from Philip Jägenstedt and Avi Drissman would be helpful

Anssi: Thank you for this, we'll provide feedback in WICG

Accessible RTC use cases

See Accessible RTC Use Cases

Josh: Working in APA WG, have use cases in WebRTC, but relate to other groups, e.g., Second Screen
… Some things for consideration here

Josh: Scenarios and user needs for real-time communication
… Some specific user needs
… A screen reader is actually a navigation and reading device. The user may have many audio devices to manage and want to route output to them
… We're using the term Second Screen to refer to any output device, e.g., braille
… A user may have multiple sound cards to manage

Josh: [detailing mixing modalities]

mfoltzgoogle: Web browser consuming content marked up for accessibility, then screen reader and accessible tools, and output devices.

Josh: Yes, and there might be different types of displays and content.
… [going through other scenarios]
… These are some of the use cases that we think are related to some of the work you're doing.
… We have a table on the page where we map scenarios to specifications and groups.

anssik: Want us to consider these use cases? Feedback from our side?

Josh: Both would be great.

anssik: Maybe we can revise accessibility review with these new use cases in mind.

mfoltzgoogle: The Remote Playback API has the most overlap with these use cases.

<anssik> Remote Playback API a11y review

mfoltzgoogle: Media has different tracks. If you want to consider different routing for different tracks, then we'd need to consider that.

anssik: Let's consider that as part of wide review for the upcoming revised CR release of Remote Playback API.

Josh: That's great, thank you.

ACTION: group to consider new applicable accessible RTC use cases as part of wide review for the upcoming revised CR release of the Remote Playback API.

Planning

See OSP 1.0 Wide Review (PDF, p96-101)

mfoltzgoogle: We discussed earlier on having some kind of explainer for the OSP. No major update since Berlin as I didn't have time to finish it before TPAC.
… Problems OSP is trying to solve, key items that the TAG might be interested to hear about.

anssik: yes, including CBOR.

mfoltzgoogle: Yes. I think I can handle it, but happy to take feedback on the document.
… If we finish all of the 1.0 issues, the question I have is "what is the next step"?

[discussion on publishing a final CG report]

mfoltzgoogle: Review from TAG, WebAppSec, PING, and accessibility would be good.

anssik: We shouldn't feel blocked by lack of review, not a mandatory step for CGs and no guarantee we'll get reviews.

mfoltzgoogle: Also Media WG comes to mind
… For accessibility, it may be a level below what they usually look at.

tidoust: Josh would be the right point of contact there

anssik: Yes, we can ask for review. Again, not blocking.

mfoltzgoogle: So end of November could be a target date for publication.
… If there are issues to look at and resolve, we can schedule a telco.
… Moving on to SSWG rechartering
… We need to go through the rechartering process.
… Two pull requests: an update of the language to reflect what the group is actually doing (new terms) with no change of scope.

<anssik> Copy editing: https://‌github.com/‌w3c/‌secondscreen-charter/‌pull/‌10

<anssik> Material changes: https://‌github.com/‌w3c/‌secondscreen-charter/‌pull/‌11

mfoltzgoogle: The second one contains material changes to the scope. Among the features I think we should consider, I'd like to explore presentation of part of an HTML document.
… Similar to fullscreen that can work with any element.

anssik: We may want to add that as a concrete example.

ACTION: mfoltzgoogle to add similarity to fullscreen to the charter as a concrete example for presentation of part of an HTML document.

mfoltzgoogle: The second thing is remote playback features for OSP. Encompasses some of the proposals that we reviewed here today. I can tweak that a bit, for instance to mention receiverName.
… Also single prompt for both presentation and remote playback

<anssik> HTML diff of material changes

mfoltzgoogle: Question about whether to integrate the OSP in the scope of Second Screen WG

tidoust: We may get feedback that protocols are IETF and APIs W3C. More importantly, what matters is what people want to do.

mfoltzgoogle: The application-level protocol is really only of interest to folks at W3C. My gut feeling is to continue this in the WG and split the parts that are not application-level out to IETF if there's interest there. Most of it is applying existing protocols to our use case.

anssik: Practically speaking, we're talking about the same people, so it would make sense. Transitioning to the Rec track would make sense.

tidoust: Companion question if we take that in scope of the WG is whether to mandate support for OSP

mfoltzgoogle: Cannot tell whether that's a shared goal by everyone.

tidoust: Let me have a discussion with Strategy team on taking the OSP in scope of the WG.

mfoltzgoogle: Can we have feedback before we need to take a decision, end of October?

tidoust: Definitely.

mfoltzgoogle: We may need a short extension to bring OSP on board before we recharter properly

tidoust: Doable without going through the membership if justified and scoped to 2-3 months.

Peter: Going to dispatch in IETF would be a good way to gauge interest either on the entire OSP, or IoT-type pairing stuff.
… Next meeting is in November. In March 2020 after that.

anssik: General question on what we would like to gain.

Peter: Reviews from experts, e.g. on mDNS, QUIC, CBOR.

anssik: OK, we need reviews, but not necessarily an IETF-track document.
… If we are seeking feedback from IETF individuals, it would be easier if the OSP remained in the CG as they may not be members.

Peter: The IETF has a review process, you'll have more chance to get feedback going that route than pulling people in.
… It may be good to have feedback from a W3C/IETF perspective, at the organizational level.

tidoust: OK, will work on that.

mfoltzgoogle: We need the feedback before end of October so that we can adjust things correctly.

mfoltzgoogle: Another change is that I consider content mirroring to no longer be out of scope, since that seems to be what presentation of part of an HTML page is about.
… If we can decide on the network protocol question, I would support two year extension.

tidoust: Usually good practice not to create too short charters. Group can recharter before the end of its charter.

Action: mfoltzgoogle to update proposed charter end date to end of 2021

Action: anssik to make sure that draft charter gets reviewed by WG participants.

Action: tidoust to run internal discussions with Strategy team about possible onboarding of the Open Screen Protocol.

Window segments

See Window Segments Enumeration API explainer

Daniel: Not really second screen, but kind of. The main scenario we want to enable is positioning elements on specific views. The rectangle of your main viewport may not all be on the same display.
… The other problem we're trying to solve is things like keyboard that pop up. Currently no way for applications to reposition their content, e.g. to scroll things into view.
… We came up with a proposal for getWindowSegments() that returns the list of segments: areas of your layout viewport that exist in different regions.
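
A sketch of how a page might react to such a segment list, placing primary content on one segment and auxiliary content on the other when a fold splits the viewport. The segment rects are mock data; getWindowSegments() comes from the explainer and is not a shipped API:

```javascript
// Assign content panes to viewport segments (mock rects standing in for
// the values a hypothetical getWindowSegments() would return).
function assignPanes(segments) {
  if (segments.length < 2) {
    return { main: segments[0], aux: null }; // single viewport: no split layout
  }
  return { main: segments[0], aux: segments[1] };
}
```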

Thomas: Does that include devices with two screens (one on the front and one on the back)?

Daniel: Not necessarily; we're capturing the case where your browser viewport spans these different displays.
… One question is how it could combine with the screen enumeration API.

Alex: What's new is knowing that these segments exist, because that's not exposed for now.

Daniel: That's correct.
… Foldable displays.
… Logical thinking about what is a screen and what is a segment.

Mike: What I've been looking at is existing APIs and what they can bring for screen enumeration. I'm curious about the info that is already available.

Daniel: I can't say, but being able to handle these scenarios in a global way would be good.

Thomas: Does this go beyond foldable displays?

Daniel: That's the main use case.

mfoltzgoogle: Any event on the Web platform that surfaces changes? For tablet mode, there's a hack, but for foldable, you'd want something as well.

Daniel: We do have an eventing model.

Mike: Similarly, for screen enumeration, we feel that an eventing model is needed.
… Screen enumeration and window placement originated from Service Worker. It could be exclusive to a Service Worker.

Thomas: I would hope not to have to install a Service Worker.

Mike: There is a lot of history of abuse with window.open
… I almost see a parallel between these two thoughts and Remote Playback API and Presentation API doing similar things.

mfoltzgoogle: Presentation will allow targeting wired displays on top of wireless displays, and that's the main overlap with the scenarios that you're considering.
… The goals are slightly different, I think.

Daniel: The window segment proposal is more a reactive thing than a proactive thing.

mfoltzgoogle: Can you learn that you're on a foldable now without this API?

Daniel: I don't think so.

mfoltzgoogle: If you're on a mobile and get a window resize, you can probably quickly infer that you have a foldable.
… Gut feeling is that it doesn't add much info from a fingerprinting perspective.

Mike: Curious what happens today when the keyboard pops up.

mfoltzgoogle: viewport change event.

Mike: Would you expect a resize event when the keyboard appears in the bottom left?

Daniel: No, since it's not a regular shape.

Mike: If there's no way that your site can know that it's been occluded by a virtual keyboard, that seems like a gap to fill.

Daniel: Right, I think that's the issue.
… There is an issue in the CSS WG tracker on declarative. We're leaving that on the side for now.

mfoltzgoogle: Two cases, tiles and foldable. Different mechanisms, although the tile one should cover most of the needs.
… As long as the Web application can derive the tiling from other sources, we don't have so much of a fingerprinting issue.

Alex: I would like to push back on solving all problems in user land.

Mike: With regards to split screen devices, what would the system expose?
… Separate displays?
… What does it look like for one application frame to span multiple displays there?
… I know Chrome OS is kind of strange in this regard, it creates one virtual display that spans the physical displays.

Daniel: Breakout session scheduled tomorrow to go deeper into details.

Thomas: I'm excited to know what native is doing, because I'm sure Samsung had to pull it into Android.

Daniel: Another option looking at screen enumeration was that a display could be represented as a set of segments. Some overlap.

Mike: There is this complete lack of capabilities to see the available displays.
… If we were to explore something along the lines of segments without screen enumerations, it would not capture use cases such as overlapping windows, difference of resolutions between displays, etc.
… I'm curious about how displays work on a Samsung DeX.

Summary of action items

  1. group to consider new applicable accessible RTC use cases as part of wide review for the upcoming revised CR release of the Remote Playback API.
  2. mfoltzgoogle to add similarity to fullscreen to the charter as a concrete example for presentation of part of an HTML document.
  3. mfoltzgoogle to update proposed charter end date to end of 2021
  4. anssik to make sure that draft charter gets reviewed by WG participants.
  5. tidoust to run internal discussions with Strategy team about possible onboarding of the Open Screen Protocol.

Summary of resolutions

  1. Add a backpressure signal to the OSP
  2. Create a pull request for a common prompt along the lines presented and gather feedback on the design from other groups
  3. Add "receiverName" to "PresentationConnection" and seek privacy review from PING noting similarity to "MediaDeviceInfo.label" in "enumerateDevices"
  4. Look at how the user agent should behave in the case of source switching in the Remote Playback API
Minutes manually created (not a transcript), formatted by Bert Bos's scribe.perl version Mon Apr 15 13:11:59 2019 UTC, a reimplementation of David Booth's scribe.perl. See history.