W3C

W3C Workshop on Web Performance

08 Nov 2012

See also: IRC log

Attendees

Present
Alexandru Chiculita, Ethan Malasky, Jared Wyles, Larry McLister, Michelangelo De Simone, Peter Flynn, Zoltan Horvath, Mike McCall, Alois Reitbauer, Glenn Adams, Arvind Jain, Ben Greenstein, Dominic Hamon, Ilya Grigorik, James Simonsen, Patrick Meenan, Robert Hundt, Ganesh Rao, Chihiro Ono, Tomoaki Konno, Anant Rao, Kunal Cholera, Ritesh Maheshwari, Seth McLaughlin, Viktor Stanchev, Aaron Heady, Derek Liddell, Gautam Vaidya, Jason Weber, Jatinder Mann, Mick Hakobyan, Andrea Trasatti, Tomomi Imura, Filip Salomonsson, Giridhar Mandyam, Yosuke Funahashi, Eric Gavaletz, Karen Myers, Philippe Le Hégaret, Matt Jaquish, Paul Bakaus
Regrets
Tobie Langel, Daniel Austin
Chair
Jason Weber & Arvind Jain
Scribes
Jatinder, Karen, Paul, Giri, Alois, Andrea, Mike, Ilya, Robert, James, Ritesh


<trackbot> Date: 08 November 2012

Introduction to the Workshop

Philippe's introduction

Comparing In-Browser Methods of Measuring Resource Load Times

Presenter: Eric Gavaletz

Eric: Consumers and web developers are interested in performance data for building faster applications. If you want representative performance data, you need to measure with the client most users are actually using: the web browser.
... There are many third-party binaries and browser add-ons that people have been using to measure performance within the browser. The problem with all of these approaches is selection bias ("geek bias"), because only a small portion of the population will install these tools.
... I am working on a net-score, an IQ-like score for network performance. This would be a generalized performance number that regular individuals, the FCC and others can use to understand performance. This data, again, needs to be representative and not "geek biased".
... We are using JavaScript to measure real sites, so we can reduce geek bias.
... Our lab environment uses a Windows 7 client connected through a 100 Mbps switch to a Linux server. We use Windows 7 because we need to measure Internet Explorer, which has a large market share.
... We measure response time, round-trip time and transfer time (the time it takes to transmit the bits from the server to the browser).
... The first method was to take Date.now() just before adding an image to the document, and again at the resource's load event. However, this doesn't tell us when the bits left the server.
... We ran this analysis on all modern browsers (Chrome, Firefox, IE9, Opera). With this technique, some browsers (IE and Opera) reported times longer than ground truth.
... Another technique was using an XHR to measure the latency of opening the connection and receiving the headers. All browsers did well on opening the XHR; the data was very close to ground truth. When we looked at XHR loading, we saw cross-browser differences: IE and Opera are closer to ground truth, whereas Chrome and Firefox take longer to fire the XHR load. When we looked at XHR DONE, we saw that browsers w[CUT]
... The next method was the Navigation Timing API, which is supported in most modern browsers. We use requestStart as the start time, responseStart as the start of the response and responseEnd as the end of the response. Looking at the navigation timing data, most browsers are close on responseStart, though with some differences. When we looked at responseEnd, we noticed that there were differences [CUT]
... with the data.
... We looked at three different placements for our measurement code: in the head, inline, and inline in onload. Looking at the IE data, the results differed depending on where we measured.
... In Firefox, we found issues with CSS visibility. When we ran these different methods across browsers, we saw differences between browsers in the data gathered.
... So why not just time the simple case of downloading individual objects?
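
The three methods compared above can be sketched as follows. This is a hedged reconstruction from the description in the minutes, not the study's actual harness; the URL, function names and reporting shape are placeholders.

```typescript
// Method 1: Date.now() before inserting an <img>, and again in its load
// handler. Measures wall-clock time to onload, not server-side timing.
function timeImage(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const img = new Image();
    img.onload = () => resolve(Date.now() - start);
    img.onerror = () => reject(new Error(`failed to load ${url}`));
    img.src = `${url}?bust=${start}`; // cache-bust so the network is measured
  });
}

// Method 2: XHR readyState transitions approximate header arrival
// (HEADERS_RECEIVED), the first body bytes (first LOADING event) and
// completion (DONE).
function timeXhr(url: string, report: (phase: string, ms: number) => void): void {
  const start = Date.now();
  let sawLoading = false;
  const xhr = new XMLHttpRequest();
  xhr.onreadystatechange = () => {
    if (xhr.readyState === XMLHttpRequest.HEADERS_RECEIVED) {
      report("headers", Date.now() - start);
    } else if (xhr.readyState === XMLHttpRequest.LOADING && !sawLoading) {
      sawLoading = true; // LOADING can fire repeatedly; only the first matters
      report("loading", Date.now() - start);
    } else if (xhr.readyState === XMLHttpRequest.DONE) {
      report("done", Date.now() - start);
    }
  };
  xhr.open("GET", url);
  xhr.send();
}

// Method 3: Navigation Timing, using the attributes named in the talk,
// for the main document only.
function navTimingMeasures() {
  const t = performance.timing;
  return {
    roundTrip: t.responseStart - t.requestStart, // request sent -> first byte
    transfer: t.responseEnd - t.responseStart,   // first byte -> last byte
    response: t.responseEnd - t.requestStart,    // request sent -> last byte
  };
}
```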

Question: Has anyone implemented Resource Timing yet?

Eric: I have not yet been able to find out whether Resource Timing has been implemented.

Jason: IE10 has resource timing available now publicly.

Arvind: Chrome will have it soon too.

Alois: With the timing specs, it feels like the specs are very advanced, but the implementations are taking a long time.

Philippe: This is probably the first time someone has said that a standards group is working too fast.

Eric: I think there is value in having this specification completed before the browsers implement the APIs; that will reduce cross-browser differences.

Jason: As browser vendors, we have internal implementation details. I am more than happy to provide more information to you.

HTTP Extension to provide Timing Data for Performance Measurements

Next Speaker: Mike McCall from Akamai Technologies

Mike: I work in the service performance group
... one of my jobs is to look at Akamai's performance, from internal and external perspectives
... and from the Edge, the Akamai servers, to the client
... have been working on @
... Eric gave a good demonstration of what we like to think of as RUM (real user monitoring)
... how the page loaded, the end user's actual experience of the site they were visiting
... Akamai in particular is interested in showing the value of our products, and letting customers see they are getting a benefit
... Thanks to specs like Navigation Timing and Resource Timing, it is easier to show that value
... Thanks, everyone!
... During the HTTP/2.0 process this summer, Akamai was interested in being part of the conversation
... We wanted to make sure our needs were met by the spec
... We had an internal call for drafts to submit to the IETF
... Some of my colleagues came up with this HTTP extension
... Some of these measurements are similar to what Eric just described
... Collect those as part of the HTTP session... without calling back into JavaScript
... Right now most of you are familiar with this
... We are concerned with two types of performance measurement
... Many commercial offerings
... wide, vast and really great
... Synthetic tests allow us to see page-level detail for a single transaction,
... look at spike times, content times
... and give us baseline data points
... They do have some problems
... they tend to run from well-connected data centers, on nice computers
... that tend not to have viruses
... so they provide a best-case scenario for a web-based performance test
... but from a measurement perspective, it's not what real people are seeing
... In the RUM space
... one of the better-known projects is Boomerang
... it has capabilities to test throughput
... test sizes and bandwidth
... and there is the User Timing spec from the W3C
... Episodes is doing great
... there is commercial support
... RUM is well received now, and people want to see what end users are seeing
... But there is one problem: it mostly requires JS to run
... so it has somewhat of an observer effect
... It's a start, but we thought: why not add this collection of data and beaconing into the HTTP session
... that is basically the draft I have written internally
... and what Akamai sent to the IETF
... to get people talking and thinking
... to couple the gathering and beaconing of measurements to the HTTP session
... One thing that might be interesting from Akamai's perspective
... there are two ways to implement a RUM solution... to show value to customers
... and to do experiments of our own, to understand what our end users see and improve our products
... Our servers talk to end users, so there are some cool opportunities there
... Possibly dynamically change the session
... If a user is running on a slow browser, give them a different version of the page
... We feel this could be a cool possibility
... The proposal goes into request and response headers
... There is a negotiation where the user agent sends the header... saying it is willing to send data
... and the server tells it which metrics it wants
... There might be further negotiation
... and the server is going to send a TTL, a timing interval
... The idea is to wait only so long
... before timing out or sending the data you have
... Assuming negotiation is successful, we will send that back
... using HTTP for this is not what most people do with RUM
... May embed a chip
... POST: as we have noticed in our RUM measurements, you want to collect as much data as you can
... with Resource Timing you collect so much data on a page that it gets out of control

POST seems like a good idea


Mike: On the left side of the slide is the user agent; the right-hand side is the server
... the header asks for timing measurements
... the server says OK and serves content
... we get the DNS request timing
... There may also be a case to use the Performance Timeline
... and the server also tells the client where to send those measurements
... Once the user agent has collected those measurements... it will send that data back to the server with an HTTP POST
... that is a high-level view of the use case
... Another thing we called out:
... as the spec is written
... sending the data back
... may break the business logic
... so add this other header and send the data back
... If you want to override the data, you can also add a meta tag
... especially if you do not have control over your own web server... to send the data for processing
... That's all
... a high-level overview
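
A hypothetical client-side sketch of the negotiation flow just described. The header names (X-Timing-Request, X-Timing-Report-To) are invented placeholders, not the draft's actual syntax, and the modern fetch API is used for brevity.

```typescript
async function loadWithTimingBeacon(url: string): Promise<void> {
  // Negotiation: "I am willing to send timing data".
  const res = await fetch(url, { headers: { "X-Timing-Request": "nav" } });
  const reportTo = res.headers.get("X-Timing-Report-To");
  if (!reportTo) return; // server declined; nothing to beacon

  // Once the page has loaded, POST the collected measurements out of
  // band, so reporting never blocks the session itself.
  window.addEventListener("load", () => {
    const t = performance.timing;
    fetch(reportTo, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        dns: t.domainLookupEnd - t.domainLookupStart,
        connect: t.connectEnd - t.connectStart,
        response: t.responseEnd - t.requestStart,
      }),
    });
  });
}
```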

<Zakim> andreatrasatti, you wanted to ask about the packaged data to send back to the server, when and how much data

Andrea: about the last slide and the previous one
... the client would say: I'm available to send measurements
... what sort of package?
... If I'm Yahoo, I want to collect data for my entire site
... how is the data packaged
... in terms of client and server costs

Mike: great question, but not something we have addressed
... to get the full client experience we would need to address it

Andrea: How do I know what resources are being used
... if @ is not being used... or if the Edge cache is coming from Europe

Mike: Our Edge server will still serve it

Andrea: But how would the client know... if I have a news article with 10 images
... 9 of 10 are not cached on the web server
... the client says it took ten seconds to analyze
... but the article image was slow; how would I know that?

Mike: to get that data we would have to log into the servers
... this spec is about collection of data from the client's perspective
... the client would not know whether a resource had to be fetched from the origin

Glenn: Has this been implemented in any user agent?

Mike: It has not

Alois: There is a general problem with beacons getting the data back to the server
... people do crazy stuff
... with the beaconing approach per se
... as you mentioned, Resource Timing is there... but you run into challenges
... the content has to be posted somewhere
... sending monitoring data to a different domain
... unless you are a big company, you are not doing this on your own
... you are not getting data back; you can't get the POST back
... you cannot get the RUM data back
... I don't want to go into technical details
... From a Compuware perspective, this is a problem

<gmandyam> gmandyam is Giri Mandyam (Qualcomm Innovation Center)

Alois: we run into serious problems getting data back to the service
... you have to wait for an unknown event; you have to wait for A/B testing
... I second that we need something like this, without going into technical details

Arvind: The existing JavaScript APIs... can you just use them?

Mike: which ones?

Arvind: the timing APIs

Mike: we absolutely do
... this is basically a wrapper around that
... going back to the send-measurements step
... we used high-level things
... it could be domain lookup time
... or request start
... we use the existing APIs

Arvind: You could use JS code
... so the question is: why use HTTP headers?

Mike: In addition to observer effect of running...

developers are reluctant to add new headers

scribe: customers are timid about this, especially from a third-party perspective
... we assume we have tested our code, but we are not sure how much the scripts interact

Giri: I had a question about the implementation of this feature
... think of streaming technologies such as DASH
... media player devices
... this may be a good feature there; such a device may not have a full JS engine
... if you compare a browser-based vs. a native app
... would there be performance differences between the two?

Mike: great question, I don't know answer

Giri: my guess is that the native media player could provide different measurement data than a browser running on the same device
... the user agent string... may show a different application running on the same device

Mike: You may be able to figure this out in the negotiation phase
... I'm not sure what is supported
... you could conceive that the browser would ask for this... or it could be part of the original request header

Giri: more like, I am not a browser request

@: Existing agents will not roll out without complete sets of data

PLH: Last question

Gautam: the hosting header could collect it; possibly have a check box

scribe: turn the check box off... possibly instrument it to collect data
... that could be a useful benefit of this approach

@: It would be nice to aggregate the data

scribe: the server gives an ID to the request
... the client aggregates... ten objects, and sends the ID

Mike: something like that would be an internal implementation within the larger protocol
... the request header leverages the other APIs to collect data

@: The reason I brought it up: it could be too much for the client to send

scribe: pages are too large
... lots of requests going back and forth

Mike: once per page, not per image
... the way Navigation Timing works is per page, per the document, and not everything on the document

@: maybe I misunderstood

Mike: that would be resource timing

@: Suppose you had ten objects in the cache and you want to know @ time and connect time

Mike: look in headers and ask for all the resource timing

Phil: On the JS side, you can report anything you want
... I have an app and it gets reported
... the JS solution has two gaps
... the beginning and the end of the page
... onload
... I want to trigger an event with my analytics vendor
... and make sure the event gets registered
... some out-of-band reporting; browser, please fire this; submit
... The other gap is at the end... firing the beacon... but people will abandon the page
... it would be nice to see that the user got DNS results and then got cancelled
... we have no visibility into that
... we want out-of-band browser support

Mike: I agree

Automatic Web Performance Measurement

Philippe: I am presenting for Radware because they could not be here
... The problem they're facing is that they're a hosting company, and Navigation and Resource Timing give them a lot of visibility into the performance of their sites
... the challenge they had is that inserting code into a page can change the page's functionality and performance
... the solution they've proposed is similar to the one proposed by Mike: they also propose an HTTP header, but additionally an HTML content attribute
... the HTTP header they're proposing is very simple: when you answer a GET request, you send back a response header saying where the timing information should be reported
... with the content attribute, you can say which kind of information you want to listen to
... whether it's the navigation timing information or measuring video performance; individual targets can be linked this way, it's very simple
... for each of the resources, in the HTTP header you get the URI, start and end timestamps, and additional properties through the request
... It's not the one and only solution, and I do have some open questions
... I don't like their x-www-form-urlencoded format; maybe a JSON format instead
... Should it be under the same-origin policy? What are the security considerations?
... does the APM vendor still have to modify the web page, and how do you activate automatic measurement for resources like XHR and @import?

Alois: sometimes you have to parse the whole content, and that would have a significant perf impact. The second issue with putting it into elements is that you might not know that you want to measure them, as they're created later on, after the request
... All the additional information that is sent back uses bandwidth that needs to be taken into account, which is especially important for mobile
... It would be nice if the page could say: I want to post back to a certain backend to report information, not the other way around

Paul: The same issue that was brought up with the previous proposal: at what point is the page done, and at what point is the user navigating away? If I stay on the same page for a long time, most data is sent with other requests. We've outgrown the abstraction of a "page"; maybe that term is holding us back from the right implementation. Pages don't exist as they used to.

Philippe: You shouldn't be hiding a POST within a GET

aaa: It's really easy to take the access log and pipe it

Philippe: any other questions?

Andrea: Connecting this with what Mike said earlier: one thing you could nearly do now is to build a HAR archive from JS and send it back to the server. In Chrome dev tools (not sure about IE), you can already generate it from the client. Another option is a spec that creates HAR via JS, and lets the developer control the info to report back. I am not convinced about this solution, as HAR already solves these issues.

Discussion: Performance Metrics

Jatinder: This part of the program is a discussion.
... (paraphrasing) We have collected feedback from all members.

Jatinder: It is divided into topics. First topic: expand Navigation Timing. Gautam proposed this. Is it of interest to anyone else?

Jatinder: Next topic: define a networkType (e.g. radio, wired). Navigation timing now has an attribute to indicate when the radio is awake.

<plh> http://dvcs.w3.org/hg/dap/raw-file/tip/network-api/Overview.html

Philippe: There already exists a spec Network Information. How is this different?

Ilya: An older version of the Network Info API is in Android, but it is not useful. Mozilla has a version that returns bandwidth, but the implementation is not verified.

Ritesh: This kind of API is useful.

Paul: There was a System Information API, but because of security concerns it was never implemented. Won't there be similar issues with this API?

Jatinder: Privacy/security is a concern, in order to prevent device fingerprinting.

<plh> http://www.w3.org/2012/09/sysapps-wg-charter

Paul: We should examine what failed in the Sys Info API, e.g. restrict the information being returned.

Andrea: Sys Info API did not attract a lot of interest, and some features were split off. Latency is of most importance to developers, not radio type.

Jatinder: What do you do with this information? Is it for real-time decision making or post-processing?

Andrea: It can be for both. e.g. high-latency means that smaller resources can be selected.

Dominic: Radio type can change frequently for mobile devices. It may not be very useful; for post-processing, part of a page load could happen under one radio type and another part under a different one.

Mike (Akamai): From a CDN perspective this information is useful.

Alois: The better your bandwidth is, the more resources you can consume. Latency is also important. We differentiate between bandwidth and actual connectivity (e.g. high throughput over the link, but connectivity to the server does not reflect it).

Jatinder: Next topic: define a round-trip value. Can't this already be determined from the timing values? Is there value here?

Alois: RTT is a latency measure, and direct access may not always be possible.

Jatinder: Next topic: define a chunked timing value (from Gautam).

Dominic: Could be used for burst transfer performance as well.

Jatinder: Next topic: define a first paint. It has been brought up before, but from a browser perspective this is difficult to measure.

Dominic: It is useful for measuring user experience. You could make runtime decisions on resource loading, layout and CSS changes. It would be useful as long as it is not a "race to the bottom."

Patrick: First paint is not as useful as when an object is first added to the DOM render tree.

Alois: First paint is useful. Regarding Pat's point, we need to know when a JS action results in a screen rendering event. We should nail down the use cases and what kind of measurements we need, but we cannot actually measure when the user sees the page.

Jason: From a browser perspective we'd like to solve it, but it is not possible. There can be 25 layouts and 4-5 paints during a single page load. Also, the event firing and the rendering are not synchronized.
... We need to start with use cases.
... There are inconsistent patterns between browsers as to when events are batched.

Jatinder: There are a lot of profiling tools that can provide this data.
... Next topic - new performance metrics. Measuring frame rate.

Paul: Very useful, but hard to implement in the browser. There is no single paint event; there are multiple paint events happening over time. The developer tools are not flawless: using the tools influences the frame rate.

Dominic: A game using animation frames has an easily-determined frame rate. The paint frame rate is not useful.

James: What motivated this was the scrolling use case and measuring frame rate.
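
A minimal sketch of Dominic's point above: an app driven by requestAnimationFrame can already derive its own frame rate, so what is missing is an equivalent for browser-driven painting such as scrolling. The one-second reporting window is an illustrative choice.

```typescript
let frames = 0;
let windowStart = performance.now();

function tick(now: DOMHighResTimeStamp): void {
  frames++;
  if (now - windowStart >= 1000) { // report roughly once per second
    console.log(`~${frames} fps`);
    frames = 0;
    windowStart = now;
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```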

Jatinder: Next topic - per-object latency.

Peter: I have not been able to measure object latency in the tools I have worked on. You can measure task time in JS, but not when the pixels hit the screen. We worked with requestAnimationFrame, but ended up using a high-speed camera to get accurate numbers (the error was 270%).

Jatinder: Next topic: graphics and GPU timing; timing on loading HTML video.

Pat: This is related to chunk timing.

Glenn: We should document use cases better.

Ganesh: You are using the UA as a measurement tool, and measurement tools require calibration. I recommend that calibration be considered as part of the test cases to ensure consistency across browsers.

Jatinder: End of discussion.

HTTP Client Hints for Content Adaptation without Increased Client Latency

Use Case: multiple devices; adapt content to the device without having multiple sites

User-Agent-based mobile detection is not reliable today

cannot use it for figuring out device capabilities

Media Queries are another solution, but browsers will download all CSS files

the browser cannot know which variant it needs: landscape, portrait, print...

Device databases: sniff the UA string and adapt on the server. The database needs to be maintained well

client-side detection with JavaScript works, and is reliable, but it hides resources from the browser and needs JavaScript to execute

User Equipment categories: something similar to browscap

Akamai mobile detection is mostly used for mobile redirects

Proposal: client-hint HTTP header
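
A hypothetical sketch of the server side of such a hint. The header name ("CH") and its fields follow the general shape of the proposal as recorded here; the exact syntax is not in these minutes, so everything below is a placeholder.

```typescript
import { createServer } from "http";

createServer((req, res) => {
  const raw = (req.headers["ch"] as string | undefined) ?? ""; // e.g. "dpr=2, dw=320"
  const hints = new Map<string, number>(
    raw.split(",").map((kv): [string, number] => {
      const [key, value] = kv.trim().split("=");
      return [key, Number(value)];
    })
  );
  const dpr = hints.get("dpr") || 1;    // device pixel ratio, default 1
  const width = hints.get("dw") || 960; // device width in CSS px, default 960
  // Choose a variant for this device instead of sniffing the UA string.
  res.end(`would serve an image ${Math.ceil(width * dpr)}px wide\n`);
}).listen(8080);
```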

Andrea: Maybe just on the first request

Mick: maybe add to user agent

Ilya: UA already overloaded

Glenn: use standard media query identifiers

Paul: in favour, concern: fingerprinting

Ilya: nothing new introduced. so no additional JS

Dominic: capabilities for games eg. WebGL
... be careful to use categories, keep as granular as possible

Gautam: MS has a similar idea.

What if resolution changes etc.

plh: not make available in incognito mode?

Glenn: not individual; use Do Not Track for not sending

Paul: dynamic switching: what about an event => API in JS for change of values

Ilya: already works for orientation change

Paul: where should this happen? IETF or the Web Perf group?

Jason: hard problem; let's look at use cases first. Pixels are not enough: screen size, distance from screen

Paul: CSS alone is not enough: different widgets, markup, etc.

Andrea: hard to define rules in advance for the browser to know what to do
... the vocabulary must be understood by browsers, CDNs and servers; what if it changes over time? We also have to wait for browsers to implement it

Ilya: Responsive images: how to avoid downloading additional images

Ethan: it's a hybrid problem; it cannot be solved on the client or server alone.
... Control which CSS breakpoints are downloaded?

Ilya: Client side will continue to exist. Eliminate UA detection

Gautam: How deep to go?


Browser Enhancements to Help Improve Page Load Performance using Delta Delivery

Robert: I work on the Gmail team
... The initial load, and that blue loading bar, is a source of stress for us
... my goal is: how do we bring the initial load down to below 1 second?
... I am going to share some ideas
... the initial load sequence of Gmail is very complicated, and we measure every phase
... there are 3 main phases: establish the connection, download resources, display on screen
... about 1.8 seconds for the first phase
... we measured it with the Navigation Timing API
... Phase 2 has 3 major steps
... main page, JS, JSON, stylesheets
... So what's the bottleneck?
... fast downloads are 1 s, slow are 60 s
... for the fast ones the bottleneck is the personal data; for the slow ones it's the JS
... one of the solutions was to reduce the size of the JS
... we looked at RFC 3229, SDCH, and others
... none worked for us
... DeltaJS is a proposed solution: send the JS once, store it locally, and on following requests only send the delta, based on a version number that the client declares to have locally
... we looked at the size of deltas over a month: each delta was about 2% of the original size, and in total they amounted to 9% of the original size
... we measured that if we reduced the size of the JS and stylesheets by 95%, initial load would be reduced by about 50%
... in order for this to be more effective, we are thinking of some enhancements in the browser
... SHA-256 is about 30x slower to compute in JS as opposed to C++
... so we propose exposing SHA-256 (a Crypto API?) to JS
... second, pre-heating
... meaning pre-load "all" of localStorage, pre-load and pre-parse JS, pre-compile JS
... maybe even pre-execute JS; interesting side effects
... third, delta application via HTTP, i.e. using HTTP headers
... the UA and server exchange version numbers and delta versions
... it would be transparent to the client, but complicated for us (Gmail) to implement on the server
... the fourth and last idea is streaming delta application
... pipeline parallelism between JS download, VCDIFF decoding and parsing
... add safe points to the delta changes
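
A hedged sketch of the DeltaJS idea as described: cache the full script locally keyed by version, declare that version on the next request, and apply the returned delta. applyDelta() stands in for a real VCDIFF-style patcher, and the header names and endpoint are invented; the 226 "IM Used" status comes from RFC 3229, which the talk mentions evaluating.

```typescript
declare function applyDelta(base: string, delta: string): string; // VCDIFF-style patch, assumed

async function loadScriptWithDelta(name: string): Promise<string> {
  const version = localStorage.getItem(`${name}:version`);
  const cached = localStorage.getItem(`${name}:source`);
  const res = await fetch(`/js/${name}`, {
    headers: version ? { "X-Have-Version": version } : {},
  });
  let source: string;
  if (res.status === 226 && cached) {
    // Server sent a delta against the version we declared.
    source = applyDelta(cached, await res.text());
  } else {
    // Full response: no usable base on this client.
    source = await res.text();
  }
  localStorage.setItem(`${name}:version`, res.headers.get("X-Version") ?? "");
  localStorage.setItem(`${name}:source`, source);
  return source;
}
```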

Andrea: when you talk about the protocol change for the delta, is it for each resource or for the web app as a whole?

Robert: each resource

Ritesh: wouldn't a better cache on the edge servers help?

Robert: the numbers I showed already take cache servers into account, and us forcing cache refreshes
... and Gmail has very frequent updates, so a lot of changes

Anant: how about the memory cache in the client? There is a lot of work to do there

Robert: well, the client could try to be smart: compare where it last downloaded the script from, and optimize

Alois: how about using AppCache? Isn't this caching of the deltas on edge servers resource-intensive?

Robert: there are teams at Google that use AppCache a lot, and others that don't want anything to do with it. I am not an expert in this specific area
... you have to be very selective about what you do with caching
... we are sometimes very aggressive with pre-fetching, and this moves load from the client to the server and sometimes overloads the server. You have to be careful

Alois: you could have part of the JS in AppCache

Robert: that's a possibility, I'm not religious

Aaron: what about the security of the code that you send? Could malicious code be injected via the deltas?

Robert: I know Microsoft does a lot of smart things in this space; we don't. We use checksums, and those should be safe

Aaron: the checksums might check out, but who knows what's inside?

Robert: it's a possibility, but we are not that concerned

Improving Performance Diagnostics Capabilities in Browsers: Error Logging

Alois: Every platform has a management API, Java, .NET, Ruby

None gives you all of the details you need

Analyzing perf across browsers: today, you need to use multiple tools to get metrics

Not useful for automated testing

Client-side monitoring of a web application - web page might be opened for hours or days

If you have a memory leak, there's no way to get access to the information

resolving user complaints: you try to reproduce the bug locally and need to install tools locally; it doesn't work

would like to have a way to remotely debug an application in a browser

looking at resource/nav timing, you need to load the page; there is no means to get information beyond when the page was initially loaded

you don't have access to a 3rd party's code - there is no way to say "I want to see the code" and determine what they're doing on your page

we want to optimize the network usage of a web app; it's hard to get information if a content delivery network is in the mix

finding execution hotspots: how do I know which code takes a long time to execute within the page?

Chrome has a way to profile performance; why can't we automatically trigger it?

No way to send back a heap dump from JavaScript

finding layout and rendering hotspots: modifying DOM has an effect, need to track time it takes to do it

Proposal: an API to gather detailed performance data as other platforms have

privacy and security: the hope is to provide a diagnostics interface; in doing so, the user would need to accept sending the information

Goal is trying to find execution hotspots

Ethan: you're doing diagnosis in the content itself; are you interested in having an outside interface?

Alois: want to be able to remotely enable it.

Philippe: what about web driver API?

Alois: getting an HTTP channel into the user's browser won't be possible. difficult to make work in a diagnostics use case in production

Profiling APIs already exist in browsers, let's expose them via a JavaScript API

Ethan: thinking about Chrome extension model

Alois: these APIs don't exist at all today; let's start by getting something

Alex: debugging the debugging code?

Alois: Heap data could get very large, separate environment for the debugging code

worker processes could be very useful here

Alex: security: you don't necessarily want to know about a Facebook widget's data

Alois: could be able to request permissions for third-party resources. make the user aware of this information

Peter: should also include rendering and layout performance. JS hotspots are important, but layouts are problematic

Alois: agree, covered in the document

Paul: talking about two different things, framework for profiling, on top of that are APIs

Ganesh: local storage option. data is dense, might prevent you from transmitting it remotely. might be able to analyze locally

Alois: if the API exists, you can determine which data you want to ship back

Is it time to start getting information out of the JIT?

If web devs have to worry about whether or not something was JITed, we're doing something wrong.

Improving Web Performance on Mobile Web Browsers - Ben Greenstein (Google)

Ben: Desktop browsing is fast, relatively speaking. Desktops and laptops are over-provisioned to display pages
... Mobile is 10x behind (average PLT ~9s), 10x less processing power, 10x less memory
... Why slow? High RTT, slow transfer rates
... world averages: 2.4 Mbps, 280 ms RTT; USA averages: 3.2 Mbps, 420 ms RTT -- US speeds are higher, but RTTs are disproportionately higher
... PLTs on mobile vary over an order of magnitude (3-15s is typical)
... location matters for LTE speeds - wide distribution of speeds; even small changes in location affect available bandwidth (ex, line of sight vs. ...)
... time of day affects bandwidth (in same location)
... as you would expect, performance varies by carrier
... on mobile, characterizing the PLT is very hard due to variability
... The CPU on the phone is sometimes the bottleneck - ex, javascript execution
... What can we do to improve the state of the art? We need lots of measurements with broad coverage.
... we need techniques to inform origins of expected performance -- e.g., allow the client to notify the server of current link conditions
... we need to help developers diagnose problems with better tooling
... Gary (Qualcomm): we already do some bandwidth estimation on the modem side. In practice, you need cooperation from lower layers in the stack to do reliable estimation. How do we get a consistent interface?
... the various mobile OSes should expose more information about the channel

Mike: bandwidth estimation through fetching additional resources results in extra power and bandwidth consumption

Derek (Microsoft): exposing more information to the client about current network conditions would definitely help

Improving Mobile Power Consumption with HTML5 Connectivity Methods

Presenter: Giridhar Mandyam

Giridhar: I am at the Qualcomm Innovation Center, a subsidiary that Qualcomm started so that we can more easily contribute to the open source community, for example drivers and WebKit contributions. The parent company is very active on the modem, cellular and application side.
... Today, I want to talk about mobile web power consumption.
... The paper may be a bit dense, but it provides examples of mobile power consumption. Web performance has made great strides on mobile, including JIT, graphics rendering and hardware-based co-optimizations.
... Mobile device power consumption is not sufficiently studied today. It is difficult to assess, and web developers don't always have a specific focus on mobile devices or power consumption.
... Developer research by Facebook found that the native battery APIs are only being used by battery makers, not real-world developers.
... If we go back in history, the iPhone was the first device to quote battery life for web use, in a similar way to talk time.
... Many new HTML5 features are potentially battery-draining, like video, WebGL, and new connectivity methods like WebSockets and WebRTC.
... We now have new persistent connectivity features that can impact battery life.
... WebSockets and mobile battery life: WebSockets are new in HTML5 as a communication method between web-based clients and servers. E.g., think of chat applications. All modern browsers (Chrome, Firefox, IE, Opera) support this API.
... The IETF defines the underlying wire protocol (RFC 6455).
... I'm a bit alarmed by some of the developer guidance that has been given on WebSockets. Some of these aspects have been difficult to implement (Google disabled the keep-alive in SPDY because of variability in the network). There is guidance that says not to rely on underlying mechanisms, but instead to check whether the connection is active with dummy data.
... There are scenarios where the modem can power down its radio when it detects that no data is coming. Keep-alives break this model, because the radio is kept awake unnecessarily.
... We ran a little test checking battery drain with a WebSocket on the page. A keep-alive message was sent every 3 seconds; we saw the rate of battery drain go from 0.5% to 0.2%.
... WebRTC also has implications for battery life. WebRTC is in the process of being standardized, so this is a bit more theoretical.
... WebRTC is web real-time communications, which includes VoIP, video and streaming.
... The API has two main pieces of functionality: the peer connection and getUserMedia (media capture). Peer connection is currently only available in Chrome, whereas getUserMedia is supported in most browsers.
... Ideally, a WebRTC session would leverage all the QoS mechanisms available, for example in LTE ("long-term evolution"), which uses the OFDM method (orthogonal frequency-division multiplexing).
... With LTE, if you have a real-time service with QoS, it gets special scheduling. No QoS is referred to as an "over-the-top" (OTT) service. We found that QoS has implications for battery life.
... The basic idea is that you can shut off your transmitter when you're not talking. When you are talking, that is when your power usage is at its highest. The way LTE is set up, you can minimize the number of timeslots you use when you don't need them. One approach is dynamic scheduling, where you request a time slot each time; the other is semi-persistent scheduling, which allocates slots at regular intervals.
... What we found is that with semi-persistent scheduling we can see up to a 20% reduction in power consumption.
... How does this impact the Web Performance WG? The most valuable thing the WG can provide is best-practice guidelines to share with web developers. Also, ensure a minimum level of performance for implementations of the new battery API, so that developers can leverage it at runtime.
... Also, indicate to web developers whether cellular QoS is being leveraged in a persistent connectivity session, and add explicit metrics regarding the state of the connection, propagated through JavaScript. Extend what we started with Navigation Timing to include things like packet loss percentage.
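
A sketch of the application-level keep-alive pattern criticized above: a timer sends dummy data every few seconds to hold the connection open, which is exactly what keeps the radio powered up. The endpoint and 3-second interval mirror the test described; both are placeholders.

```typescript
const ws = new WebSocket("wss://example.com/chat"); // placeholder endpoint
let keepAlive: number | undefined;

ws.onopen = () => {
  // Traffic every 3 s prevents the modem from dropping to a low-power
  // state, hence the battery drain measured in the test above.
  keepAlive = window.setInterval(() => ws.send("ping"), 3000);
};
ws.onclose = () => {
  if (keepAlive !== undefined) clearInterval(keepAlive);
};
```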

Paul: Last week, in the Device APIs working group, the thing that came up was that it is very hard to test implementations when it comes to battery. E.g., it is very hard to simulate batteries, like a driver that can simulate battery usage on a particular OS.

Giridhar: There is no easy way to do this today. Using the battery API, when I loaded the page I could get a perfect reading. After using the WebSocket, the results of the battery API would change; reloading the page would reset the socket. I agree that it would be useful to have a better way.

Anant: Do we need a finer-grained battery API that can point to what the culprit is?

Giridhar: The battery API just came out, and it hasn't even been uniformly implemented yet. That could be nice.

Anant: The battery usage is the result of a certain activity, e.g., my screen is too bright. It might be useful to get this data.

Alois: That's kind of an indirect way of looking at it. Typically, knowing that you are consuming more resources will have an impact on battery.

Paul: Do we know if there is a proposal in the W3C for power consumption?

Giridhar: I don't know if there is. I mentioned things like fast dormancy, different mobile vendors have different metrics to determine this.

Alois: It's perfectly valid for testing purposes. When you think about a user's phone, they have so much stuff running in the background. The readings you get may not have anything to do with your application; they may reflect another application in the background (another app downloading resources). This sounds interesting just from a testing point of view.

Giridhar: There are some things the Web Performance WG has done to help here, like the Page Visibility API, where you can throttle activity based on visibility. However, on mobile, we expect that few sites will be visible at a given time.
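
A minimal sketch of the Page Visibility tactic just mentioned: throttle a polling loop when the page is hidden so the radio and CPU can idle. The URL and intervals are illustrative assumptions, and the unprefixed document.hidden is used for brevity.

```typescript
let pollTimer: number | undefined;

function startPolling(intervalMs: number): void {
  if (pollTimer !== undefined) clearInterval(pollTimer);
  pollTimer = window.setInterval(() => fetch("/updates"), intervalMs); // placeholder URL
}

document.addEventListener("visibilitychange", () => {
  // Poll every 2 s in the foreground, every 60 s in the background.
  startPolling(document.hidden ? 60000 : 2000);
});
startPolling(2000);
```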

Pat: Do you know if there is anything going on on the networking side for transmitting data to a mobile device, where the mobile device is mostly in a listening state?

Giridhar: There was a proposal where websites could request a certain type of access, which KDDI will be discussing later. There is a point of view that connectivity is better managed in the aggregate: why send a bunch of packets one at a time, why not batch them up?

Pat: I'm thinking about not using TCP over the radio; use something else on the mobile side.

Giridhar: That's a pretty mature thought. IBM looked into this in the past, using mobile proxies. SPDY can to some extent be considered to do something like this, even though it is on the server side.

Jason: I was going to pose the question of what the browser should take on in terms of battery consumption, and what developers should start doing. For example, the aggregation you mentioned: we have been doing this in IE9 using coalesced timers, where developers get this for free. We would love to see other browser vendors help do some of these things in the background as well, including things like supporting[CUT]
... efficiently using the CPU. URL: https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/setImmediate/Overview.html
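
A sketch of how the setImmediate() draft linked above would be used: yield to the browser between chunks of work instead of spinning a 0 ms setTimeout that fights the power-saving timer coalescing Jason describes. The declare line mirrors the draft's shape; the chunking helper and 10 ms budget are illustrative.

```typescript
declare function setImmediate(cb: () => void): number; // per the draft spec above

function processInChunks<T>(items: T[], handle: (item: T) => void): void {
  let i = 0;
  function step(): void {
    const deadline = Date.now() + 10; // ~10 ms of work per chunk
    while (i < items.length && Date.now() < deadline) handle(items[i++]);
    if (i < items.length) setImmediate(step); // resume when the browser is ready
  }
  step();
}
```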

Igrigorik: I guess from a web developer's point of view, they're not really concerned with battery.

Jason: I agree, but should we have thousands of web developers make their web applications more battery-efficient, or should we have the four browser vendors help solve this problem?

Igrigorik: An example from a study where AT&T looked at Pandora: it was sending a ping every couple of seconds, and by changing this they improved battery life by 40%.

Jason: That study was done in Safari, which doesn't do the coalescing of timers. If you reran that study in IE9, you would not see that battery impact.

http://blogs.msdn.com/b/ie/archive/2011/03/28/browser-power-consumption-leading-the-industry-with-internet-explorer-9.aspx

Measuring Memory

Paul: We realized that on so many devices, especially mobile devices, measuring memory is _the_ most important thing to do

Equally important: load (makes players play) and runtime (makes players come back). Navigation timers help with load, but what about runtime?

Statement: The BIGGEST performance bottleneck on mobile is: MEMORY

Memory is taken by: Static assets (CSS, JS, uncompressed images) - how much memory do they take?

The correlation between images and the DOM is unclear; when they are unloaded is unknown

DOM node counts are not representative; what's the underlying memory?

Unloading images appears impossible in many situations.

Apparently, both compressed and uncompressed images are cached in memory.

No control over that

HW acceleration is unpredictable; we want to know when _exactly_ textures are created, destroyed, recomposited.

Optimally, users shouldn't have to care about this; for game devs, however, it's part of the job.

Insight: all modern browsers GPU-accelerate canvas

browsers and/or the GPU use an algorithm to determine when textures are still alive and needed

We want to understand when textures are buffered vs being released.

Statement: if ALL memory is consumed, your page performance goes to hell.

As a matter of fact, browsers do crash, also on Android and iOS

Developers don't know when apps reach the limit.

Observation: more images on canvas deteriorate paint performance.

Turning to GC

Wants:

- trigger GC manually

- GC timings

- disable GC

- understand execution "interval"

Knowing about GC rates and times would allow a game to throttle its FPS

Developers go through pain to ease GC pain:

- reuse array/objects

- object pooling

- replace native methods with custom implementations to avoid GC

- Favor Function#call
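
A minimal sketch of the object-pooling tactic in the list above: reuse a fixed set of objects instead of allocating per frame, so the GC has nothing to collect during the game loop. The Particle shape and pool size are illustrative.

```typescript
interface Particle { x: number; y: number; alive: boolean; }

const pool: Particle[] = Array.from({ length: 256 }, () => ({
  x: 0, y: 0, alive: false,
}));

function spawn(x: number, y: number): Particle | null {
  const p = pool.find((q) => !q.alive); // reuse a dead slot; no allocation
  if (!p) return null;                  // pool exhausted: drop, don't grow
  p.x = x; p.y = y; p.alive = true;
  return p;
}

function release(p: Particle): void {
  p.alive = false; // return to the pool instead of freeing
}
```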

QUESTIONS

Giridhar: Manual GC invocation might not "scale"

especially across multiple tabs

Alois: This gels with his earlier proposal; being able to push the runtime to the max is desirable. Manual GC triggering could be "scary" for people/devs

Response: It would be nice if GC could be parameterized, e.g., pick 1 of 5 different flavors...

Alois: Probably not workable, as every freaking VM implements a different GC

Microsoft's IE has GC exposed.

<scribe> Unknown/question: Why does this have to be exposed to begin with?

There is no real control over GC algorithms anyway

Answer: Still important to know

Suggests sort of an adaptive algorithm, to work around browser's GC behavior?

At least developer has control

Alois: Browsers are built for browsing, not for running apps. Behavior changes for long-running apps (rhundt: couldn't agree more)

Preserving Frame Rate on Television and Web Browsing

Speaker: Yosuke Funahashi, Tomo-Digi Corp., Japan
... I am one of the co-chairs of the W3C Web and TV Interest Group
... and I also chair the W3C Business Group for broadcast TV
... I would like to share my viewpoint and see if it helps
... see how we can do this work in W3C
... A brief introduction to the two groups
... This is from the last TPAC face-to-face meeting
... to show you the hot topics in this group
... the group is also in a rechartering period, gathering and studying what to do over the next two years
... As you may have noticed, the IG's role is not to develop specifications, but rather to gather and develop use cases
... the IG then provides these use cases and requirements to other Working Groups
... some members also join those groups to create the specifications
... that is how the IG works
... We had a Home Networking task force
... We also met with the Device APIs WG and are collaborating with them
... Another example is the Media Pipeline task force
... they are developing the encrypted media extensions and media source extensions as part of the HTML5 Working Group
... also looking at exposing content
... liaising with other groups
... following up
... stereoscopic 3D web
... TTML; ITU
... rechartering work
... So what is the TV perspective on the performance issue?
... performance is important for a web browser on a TV
... but real-timeness is more important, because a TV shows things in real time
... that is the TV experience
... faster and faster... the right frame, at the right timing
... I don't think we should have an RT-browser
... we should have adequate performance APIs
... to maximize UX
... I would like to see how this Working Group and the Web and TV groups can work together
... My motivation:
... A browser is a browser
... Focus on other performance issues on TV devices
... the biggest difference between smart phones and smart TVs is showing video continuously
... and the spectrum of devices and GPUs
... a TV can show video without dropping frames
... if the video stream stops frequently
... or drops frames frequently
... people think it's bad and complain
... So it's a core value proposition of TV
... a core difference
... On an expensive smart phone, your web page
... may be slow, but you don't think it's that bad
... but people don't think that way with TV
... an expensive TV is capable of showing continuous frame video smoothly
... Web apps on TV vary across a wide spectrum
... The gap between expensive and inexpensive TVs is big
... the gap between TV and web apps is a key factor
... you target devices across a very broad spectrum
... that is the situation in the TV industry
... This is the performance and interop lab at Tomo-Digi
... based on XHTML2.1
... extended regionally
... to synchronize with broadcast signals
... of course that specification does not have web APIs
... we have been doing projects
... to study and investigate the chips
... they are using
... and in some cases we ask
... about the secret performance measures of the devices
... and application developers create different UIs
... and check how they work on actual devices
... I think this is not a good way
... So when we use HTML5 on TV
... what we call connected TV, smart TV or next TV
... we should have some good performance APIs
... and should be able to collect all of this information from these devices
... So I picked out some issues
... or use cases, to clarify how to use them
... Issue one is preserving the frame rate of web apps on a TV set
... MPEG DASH works well for preserving frame rate
... DASH is Dynamic Adaptive Streaming over HTTP
... the client controls which stream it will use according to information about bandwidth
... if bandwidth narrows, it changes stream
... the image quality may get worse
... but no frames are dropped as a result
... the user can keep watching the video stream... that's DASH
... I would like a similar mechanism to select the UI of TV web apps, to preserve frame rate
... Developers need to know performance information, including frame rate
... they need to know the characteristics of each device when they design web applications
... and when they tune them up
... and we can use the same facility to dynamically change the user interface
... for example
... the fastest scenario is using WebGL to synchronize with the TV program
... in some situations
... for example a shortage of memory or processor
... WebGL will not work well
... so they fall back to another scenario
... to maintain the user experience
... Potential discussion spaces on this topic within W3C:
... the Web and Broadcasting Business Group is gathering and polishing business use cases
... and the Web and TV Interest Group is clarifying requirements and gaps
... So I would like to hear how we can cooperate
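
A hedged sketch of the DASH-like UI selection Yosuke describes: probe the achievable frame rate and fall back from the richest scenario (WebGL) to simpler ones. The tier names and fps thresholds are invented for illustration.

```typescript
type UiTier = "webgl" | "canvas" | "static";

// Short requestAnimationFrame probe of the sustained frame rate.
function measureFps(durationMs = 500): Promise<number> {
  return new Promise((resolve) => {
    let frames = 0;
    const start = performance.now();
    function tick(now: number): void {
      frames++;
      if (now - start >= durationMs) resolve((frames * 1000) / (now - start));
      else requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
  });
}

async function chooseUiTier(): Promise<UiTier> {
  const hasWebGl = !!document.createElement("canvas").getContext("webgl");
  const fps = await measureFps();
  if (hasWebGl && fps >= 50) return "webgl"; // device can hold near-60 fps
  if (fps >= 30) return "canvas";            // mid-tier device
  return "static";                           // preserve frame rate with a static UI
}
```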

PLH: Any questions or opinions?

Paul, Zynga: First, I think this is really important; we need it for games as well

scribe: we're running on a game loop
... there's the other side as well
... requesting a frame rate
... if not on a TV, but on a computer that runs at 60 Hz
... and the video is encoded at 24 fps
... a good way to request a certain frame rate would help
... but I don't know any browsers that can do that... kind of stupid

PLH: any other reactions or questions?
... It seems to me that the Web & TV IG can continue to narrow down the requirements
... that will help us to develop the specification
... there is no reason not to include you guys

Yosuke: thank you

Use Case of Smart Network Utilization for Better User Experience

several use cases for smart networks:

content flexibly chosen based on the user's environment

ex: wifi is high quality, 3G is low quality

alternately, choose the content type, like text for low-bandwidth connections

finally, adjust traffic volume

session at TPAC 2012: smarter apps for smarter phones

who should decide on smart network use? app devs or browsers?

what type of adaptation to make? choose content for the network, or choose the network for the content?

telecom operators have the ability to provide network condition information

do we need it? and why?

usage example: check with policy server and include current status, like wifi quality.

questions:

andrea: you talked about headers including information. what if operators injected it instead?

ilya: you can't inject a header in all situations
... it's a lot of work to plumb the data up and down the network stack

Open Discussion

Jatinder: any burning question?

Aaron Heady, Bing, MS: End-user monitoring. We've used Keynote, Gomez, etc. Still not enough.


scribe: availability @ the end user? DNS goes down, but for a low percentage of users. How can we use the browser as a monitoring agent, especially for the failure case?
... The browser should store all these failure event logs. On the next page load, the JS re-executes, polls that data and sends it back
... we don't see intermittent errors (1% failure); they slowly build up across other datacenters.
... the base error rate goes up. A successful load causes the browser to send the event log to the origin
... Basic proposal: browsers should become error monitoring infrastructure
... persist errors; maybe send them back in an async way
... Combine failure cases and performance together.
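
A minimal sketch of the proposal above: persist failure events locally, then on the next successful load beacon them back to the origin asynchronously. The storage key and endpoint are placeholders.

```typescript
function logFailure(event: object): void {
  const queue: object[] = JSON.parse(localStorage.getItem("errlog") ?? "[]");
  queue.push({ ...event, ts: Date.now() });
  localStorage.setItem("errlog", JSON.stringify(queue));
}

window.addEventListener("load", () => {
  const queue = localStorage.getItem("errlog");
  if (!queue || queue === "[]") return;
  // Async POST so reporting never holds up the page that did load.
  fetch("/error-beacon", { method: "POST", body: queue })
    .then(() => localStorage.removeItem("errlog"))
    .catch(() => { /* keep the queue; retry on a later load */ });
});
```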

Alois: Agreed.
... we really want to monitor when it doesn't work

Paul: During Hurricane Sandy, Gizmodo had an error page: can't access this site, other users are facing similar problems
... you could have a service that tells you which sites are down or not

Aaron: That's useful, but we need more
... especially for the low-error-rate case
... it took 10 hours to detect an HTTP error; we had to take a pcap to debug. At such a low rate, we couldn't find the issue with normal monitoring
... It was a proxy issue, affecting only some end users.

Paul: having a service is better, since the user may not come back to your site again soon

Ritesh: an HTTP POST can send this asynchronously

Mike: a beaconing API is good; fire and forget

Jatinder: async is good since it doesn't hold anything up

Mick: use a permanent piece of JS per origin in the browser

Gautam: similar to the Watson service in Windows
... when an app fails, a dump is saved; it is stored locally, batched, queued and sent to an MS server

Jatinder: Anything else? Error reporting is cool

Anant: a time-to-above-the-fold event

Jason: How to define it?

Anant: Templates, async loading etc

Pat: Why not use User Timing?
... which content do you care about? It gets complicated. Speed Index is tackling something like this.

Paul: Developer should decide

Jason: What is the loaded time for an image?
... when it is downloaded, when it is loaded in memory, etc.
... we would love to have above-the-fold, but it's hard; we don't know how to measure it
... we have a perf lab where we capture video and diff the frame sequence at the ms level. Even there it is hard to figure out above-the-fold

Paul: you do know first paint

Jason: yes, but that's all it is; it's not above the fold

Pat: devs might game the system if first paint becomes important

Jason: Web devs should control their own assets

??: What's the root cause of a perf problem? We need to know that.

scribe: tools are good enough for this; they don't need to be APIs
... GPU stuff, layout, etc.

Ilya: Chrome has some tools
... not user-friendly, but a work in progress
... Chrome tracing is heavyweight

Alex: after pressing a key, when was the first paint after that?
... it's better to have an API; we need that for automation

Alois: Having different approaches for different browsers makes it hard to do.
... testing is not perfect, since we need real-user measurements
... a JavaScript API would be better; the end user can approve or not
... the end user can decide whether to send it

Jason: Agreed that the Chrome tools are heavyweight, and we have the same for Windows
... but that level of tracing collects 2 MB/s. Too much data for real users
... dev tools are OK, but the lightweight piece for the real world is not clear

Alois: It could be somewhere in between. Better than nothing.
... useful for debugging at the end-user level

Alex: Even in the lab it is hard if you want to automate
... for perf testing

Jason: IE has good blogs on that
... in IE, measuring from a DOM change to when it shows up on the glass is possible

Alex: using JS is not always going to give the right answers

Jason: in production we don't know how many layouts occurred

Alois: different data in different browsers

Jason: maybe do something similar to the Resource Timing API, but for CPU
... all the work we have done is about the platform

Arvind(?), Intel: What's the use case for these APIs? We started with these use cases... show the roadmap for how we arrived at the solution

Jatinder: I think we should have done that. Let us know if not

Alois: I think the specs are clear as documentation. But sometimes they are more powerful than the specs say
... we need a best-practices doc.

PLH: We launched webplatform.org
... targeted at developers

<karen> http://docs.webplatform.org/wiki/Main_Page

Derek: Not every developer has an understanding of which features are slower, etc.
... so it should go into these docs

Arvind: Talking about what's next
... Good day, good ideas
... The challenge is that there are good ideas, but very few people to work on them
... write the spec, be an editor, and the group will help
... we need drivers for all the work we need to do

Jason: I'm excited about what we pulled off in 2 years or less.
... we have 8 specs and APIs across browsers

Arvind: It's amazing how cooperative the browser vendors have been
... No pushback... maybe because the problems were easy and straightforward
... now more tricky problems are coming up
... we have talked about these for a while

Jason: We're lucky, since no one complains about making the web faster
... we scope problems into bite-size pieces

Participate through email alias

PLH: We need more tests for APIs

Paul: Tobie and I are starting a perf breakout session
... Tobie is from Facebook
... we need automated tests
... maybe as part of the spec
... e.g., a browser says it supports audio, but that's useless if it supports it with 1 s latency

Jason: Hard, since it also involves native hardware
... audio is interesting: if we compress audio on the wire, then the device will spend too much time re-encoding

Derek: specify design constraints so that features adhere to them for better perf

Paul: Gyroscope and accelerometer APIs are promoted, but without screen orientation lock

Alois: We have seen negative DNS times in nav timing

Paul: Just collect yes/no data (works or not) for developers

[adjourned]