Second Screen on the Web

Video

Slides

Video hosted by WebCastor on their StreamFizz platform.

Transcript

Mark Foltz: So thank you for inviting me to speak. My name is Mark Foltz, I'm with Google, and I'm going to speak about efforts going on primarily within the Second Screen Working Group and Community Group.

I've been involved with those groups for about four years now and I'm going to talk a little bit about what problems we're trying to solve and what progress we've made thus far.

So what do I mean when I say Second Screen?

Really the group was formed to make connected displays and also speaker devices accessible to the web.

There are hundreds of millions of these devices now available, mostly in homes but also in businesses as well.

They encompass everything from smart TVs, to HDMI dongles that you plug into an existing TV, to connected speakers. All of these are connected to the internet, and we wanted to make sure that web applications and users were able to share web content from the devices they use today, like laptops and mobile phones, to these devices as well.

Often you can use them to create a better experience, especially for media.

So you find a great cat video on the web and you want to see it on your big screen; we wanted to make that possible.

The first area we really focused on, the first API that we incubated in those groups, was called the Presentation API, and the basic concept here is that one web document would be able to ask a different device, like a smart TV, to present a separate web document.

So we designed an API that allows a web page to find out if there is a device available nearby that can render a web document; if so, the user has to give permission and select a device before the document is shared with it.
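
As a minimal sketch of that controlling side (my illustration, not code from the talk), where the receiver URL and the showCastButton UI hook are placeholders and the awaits run inside an async function:

    // Sketch of the Presentation API from the controlling page.
    const request = new PresentationRequest(['https://example.com/receiver.html']);

    // Find out whether a compatible display is available nearby.
    const availability = await request.getAvailability();
    availability.onchange = () => showCastButton(availability.value); // hypothetical UI hook

    // start() must be triggered by a user gesture; the browser shows a
    // device picker so the user grants permission and picks the display.
    const connection = await request.start();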

In most cases we send the URL of that other document to the other device and it renders it natively, but we allow a messaging channel, kind of like postMessage, between the page that initiated the presentation, which we call the controlling page, and the page that's showing this new document, the receiving page.
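
For illustration, a sketch of that channel, reusing the connection from the previous sketch on the controlling side and assuming a video element on the receiving side:

    // Controlling page: the connection behaves like a postMessage channel.
    connection.send(JSON.stringify({ type: 'play', url: 'https://example.com/cat.webm' }));
    connection.onmessage = (event) => console.log('from receiver:', event.data);

    // Receiving page: pick up connections from controllers as they arrive.
    const list = await navigator.presentation.receiver.connectionList;
    list.connections.forEach(watchConnection);
    list.onconnectionavailable = (event) => watchConnection(event.connection);

    function watchConnection(conn) {
      conn.onmessage = (event) => {
        const msg = JSON.parse(event.data);
        if (msg.type === 'play') video.src = msg.url; // 'video' is an assumed element
      };
    }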

And a common scenario is that you find some media on the web, or maybe navigate through several pages and find media you want to present on another device. We allow that initial web page to create the presentation, connect to it, and send it either binary data or messages to tell it to render media, and we allow the controlling page to reconnect to that presentation even after navigation.

So if you're browsing a site, you'll be able to maintain that state on the other device.
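
A sketch of what that reconnection could look like; the storage key and id handling here are illustrative:

    // Before navigating, remember the presentation id.
    sessionStorage.setItem('presentationId', connection.id);

    // On a later page of the same site, reconnect to the running presentation.
    const request = new PresentationRequest(['https://example.com/receiver.html']);
    const connection2 = await request.reconnect(sessionStorage.getItem('presentationId'));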

We wanted to allow a flexible set of use cases, so not just allowing one user, for example, to interact with the presentation; we have a way for multiple users or multiple pages to interact with and control the presentation.

So that kind of opens it up to use cases like gaming and collaboration; think of a shared playlist of videos, for example, that you might want to view.

The second API we worked on was narrower in scope: instead of general web pages, we focused on media specifically.

So we added an API called the Remote Playback API, and this allows a web page to do something similar, but just for the content of a single media element.

So there's now an attribute on HTML media elements called remote, and through that API you can find out if there's a compatible playback device nearby that can play back the media in that element; often it's a video file, for example.

If there is a compatible device, again the user has to give the page permission to send the media to the connected screen.
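
A minimal sketch of that flow, assuming a video element on the page and a hypothetical castButton:

    const video = document.querySelector('video');

    // Watch for compatible remote playback devices coming and going.
    await video.remote.watchAvailability((available) => {
      castButton.hidden = !available;
    });

    // prompt() must be called from a user gesture; the browser asks the
    // user to pick a device, which is how the page gets permission.
    castButton.onclick = () => video.remote.prompt();

    video.remote.addEventListener('connect', () => {
      console.log('remote playback state:', video.remote.state); // "connected"
    });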

Once you initiate the connection between the page and the remote screen, in most cases the browser will send the URL of that video file or media resource to the other screen, which can then fetch it and render it locally, but the browser and the screen are responsible for keeping the media element state synchronized.

So the nice thing about that is that video sites don't really have to worry too much about their video being remoted; the video element, in most cases, should work about the same.

It also allows us, for example, to provide this through the default media controls, so that website authors don't necessarily have to do as much to enable this kind of functionality.
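
So a plain video element with the controls attribute can surface a remoting button with no extra script, and a site that does not want a particular piece of media remoted can opt that element out:

    // Opt one media element out of remote playback entirely; the default
    // controls then stop offering a remoting button for it.
    video.disableRemotePlayback = true;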

So we designed these APIs and they're great; they've been implemented in Chrome and other browsers, and they've seen pretty consistently good usage.

The big downside we've found in getting these as broadly adopted as we'd like is that the protocols spoken between the browser and the connected device are generally proprietary, so what browser you use or what website you're visiting determines what devices you can use.

So after doing the API-level work, we really focused more on the Community Group side of our effort, and for the last three years we have been incubating essentially a suite of network protocols, which we call the Open Screen Protocol, that we hope will provide the foundation for broad adoption of these APIs across a variety of devices and vendors.

We have done a bunch of research figuring out what people were doing in this space.

We looked at what kinds of building blocks we could build on, and we really didn't want to invent new low-level network protocols, so we chose things that are already standardized or being standardized at the IETF, like multicast DNS (mDNS) and DNS-SD, QUIC, and CBOR, as the foundation.

And on top of that we've built a set of application- or API-specific protocols that allow two pieces of software to implement these APIs.
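
Roughly, as a sketch of how those pieces layer (the DNS-SD service type is from the draft spec and could change):

    Presentation, remote playback, and streaming messages  (API-specific)
      encoded as CBOR messages
      carried over a QUIC connection (authenticated with TLS 1.3)
      between peers discovered via mDNS / DNS-SD ("_openscreen._udp")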

Our current plan is to wrap up the 1.0 draft of this protocol, hopefully by the end of this year or early next year. We're working on implementations in parallel, through an open source library, to help vendors incorporate this into their products, and we hope this will pave the way for broad adoption of both the protocol and these APIs.

We're also working on future use cases as well. We've started seeing situations where, for example, sites might not have a URL for the media they want to present; instead they might want to generate the media themselves, using things like WebGL or Canvas, or maybe eventually they'll have access to codecs as well, using ideas like WebCodecs, which I think there might be a breakout on tomorrow. But in general, the underlying protocol often involves media streaming from the browser to the connected device, and eventually we'd like to open that up to the web application as well.

So if it can create its own media, or fetch it however it wants, it can just stream it back directly to the remote device.
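
That streaming path isn't exposed to pages yet, but the generation side already exists; as an illustrative sketch, a page can already capture a canvas it draws into (with 2D or WebGL) as a media stream:

    // Capture a canvas the page draws into as a 30-frames-per-second MediaStream.
    const canvas = document.querySelector('canvas');
    const stream = canvas.captureStream(30);

    // Today this stream can feed a local video element or a WebRTC connection;
    // the idea under incubation is letting the page send a stream like this
    // directly to the remote display over the Open Screen Protocol.
    document.querySelector('video').srcObject = stream;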

So these are some of the ideas we're starting to incubate as well, in terms of API functionality.

That's all I have, so if you're interested in this use case or just want to find out more about what we're doing (again, my name is Mark Foltz, and this is the Second Screen Working Group), feel free to grab me afterwards or later at TPAC.

Thank you.
