IRC log of webmachinelearning on 2019-09-20

Timestamps are in UTC.

06:32:19 [RRSAgent]
RRSAgent has joined #webmachinelearning
06:32:19 [RRSAgent]
logging to https://www.w3.org/2019/09/20-webmachinelearning-irc
06:32:25 [Zakim]
Zakim has joined #webmachinelearning
06:32:29 [dontcallmeDOM]
RRSAgent, make log public
06:32:31 [anssik]
RRSAgent, make logs public
06:32:35 [dontcallmeDOM]
Meeting: Web Machine Learning Community Group
06:32:38 [anssik]
Meeting: WebML CG F2F Day 2 – 20 September 2019
06:32:45 [dontcallmeDOM]
Chair: anssik
06:32:47 [anssik]
Chair: Anssi
06:32:51 [ningxin_hu]
ningxin_hu has joined #webmachinelearning
06:32:54 [anssik]
Agenda: https://github.com/webmachinelearning/meetings/blob/master/2019-09-17-fukuoka/
06:32:58 [wseltzer]
present+
06:33:04 [dontcallmeDOM]
Present+
06:33:07 [anssik]
Scribe: Anssi
06:33:12 [anssik]
scribeNick: anssik
06:33:48 [ningxin_hu]
Present+ Ningxin_Hu
06:35:00 [anssik]
Present+ Anssi_Kostiainen, Thomas_Steiner, Dominique_Hazael-Massieux, Dave_Singer, Nikhil_Thorat, Dean_Jackson, Ehsan_Toreini, Sushanth_Rajasankar
06:35:08 [anssik]
RRSAgent, draft minutes v2
06:35:08 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-webmachinelearning-minutes.html anssik
06:35:30 [Big_Screen]
Big_Screen has joined #webmachinelearning
06:35:33 [dontcallmeDOM]
s/que H/que_H/
06:35:35 [tomayac]
tomayac has joined #webmachinelearning
06:36:10 [anssik]
Topic: Operation set
06:36:20 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/17 Compatibility study of 14 operators of Opset 10 #17
06:37:11 [anssik]
anssik: Nikhil filed issues for tracking op compatibility resolution for matMul and conv2d. This is a start, more to come, I guess contributions are welcome!
06:37:17 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/27 [op compatibility] matMul #27
06:37:51 [anssik]
Nikhil: the compat study is to make sure that the API supports all of the JS libraries we care about
06:38:04 [anssik]
... the API should be compatible with the platform-provided underlying APIs
06:38:32 [anssik]
... filed two issues, bread and butter of ops, we want to start small and slowly grow
06:40:07 [anssik]
dom: TF.js is an open source project that can evolve, can it evolve to match what happens here, or the other way around?
06:40:47 [anssik]
nikhil: matMul will not change, hopefully conv2d also does not change, there are such basic primitives
06:41:39 [anssik]
... TF.js strict requirement is we have the same API with TF to allow model sharing
06:41:54 [anssik]
... breaking change for signature is hard, we can superset TF that is a possibility
06:42:15 [anssik]
dom: thanks, that helps, do you know if ONNX has similar constraints?
06:42:30 [anssik]
nikhil: I assume they are more flexible, but cannot speak on behalf of them
06:43:42 [anssik]
... worked with TF and other Google people to figure out how to find an abstraction that will be good for the next 10 years
06:44:17 [anssik]
... not sure ops is the right abstraction, 25% growth in ops YoY in TF alone
06:44:46 [anssik]
... maybe there's an abstraction below ops that could be standardized, with ops layered on top of that layer
06:45:07 [anssik]
... next, I'll introduce the matMul op compat study findings
06:45:35 [igarashi]
igarashi has joined #webmachinelearning
06:45:42 [anssik]
... matMul is easier, this signature is from numpy directly
06:46:35 [anssik]
... and much of this is from numpy docs
06:47:32 [anssik]
... two important notes for this op: there are other arguments for this op, e.g. transpose
06:47:51 [anssik]
... maybe transposing should be a graph optimizer's task
06:48:09 [anssik]
... that would allow us to make the API surface smaller
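The point above can be sketched with a toy pure-Python example (illustrative only, not any proposed WebNN API; numpy's matmul, which the study borrows its signature from, behaves analogously): transposition as a separate graph op rather than a matMul flag, which a graph optimizer could then fuse away.

```python
# Toy 2-D matMul and transpose, illustrating transpose as a separate
# graph op instead of a matMul argument. Names here are hypothetical.

def matmul(a, b):
    """Multiply matrix a (m x k) by b (k x n), both lists of rows."""
    assert len(a[0]) == len(b), "inner dimensions must match"
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def transpose(a):
    """Swap rows and columns; as a graph node an optimizer may fuse this."""
    return [list(col) for col in zip(*a)]

x = [[1, 2], [3, 4]]
w = [[5, 6], [7, 8]]
# Equivalent to a hypothetical matMul(x, w, transpose_a=True):
y = matmul(transpose(x), w)
```

Keeping transpose out of matMul's signature is what makes the API surface smaller, at the cost of relying on the optimizer to avoid a materialized copy.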
06:48:26 [anssik]
dom: how confident are we that this could be done?
06:49:14 [anssik]
... how's the implementation complexity from the implementers (web engine) perspective?
06:50:11 [anssik]
nikhil: the way that the graph APIs work in general, you stick together a graph, computation happens only when you feed the values into it, that's TF 1.0 style actually
06:50:45 [anssik]
... doing that from the user's perspective is complicated; TF 2.0 does eager mode instead, i.e. ops run when you call them, losing graph optimizations
06:51:18 [anssik]
... the hybrid approach is better for users, get graph optimizations also in this case
06:51:38 [anssik]
... discussion ongoing how to expose these to the browser, underlying there's always a graph
06:53:35 [anssik]
RRSAgent, draft minutes v2
06:53:35 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-webmachinelearning-minutes.html anssik
06:55:49 [sushrajaMSFT]
sushraja_MSFT: does graph rewriting result in losing the ability to use the API for training?
06:56:19 [anssik]
nikhil: good feedback from Benjamin/WebKit on this issue re CoreML & MPS
06:57:02 [anssik]
... need to understand how other accelerators deal with the concept of accumulator precision
06:58:00 [anssik]
ningxin_hu: it relates to our experiment on Android NN API
06:59:13 [anssik]
dom: should this be a capability that is exposed?
06:59:25 [anssik]
nikhil: the question becomes, what accelerators do we want to support?
07:00:57 [anssik]
... conv2d precision could be different between devices, e.g. mobile vs. laptop and could lead to severely different results, this is not theory, we see this happen in TF
07:01:57 [anssik]
Gabe: when is the operator going to be its own variant?
07:02:24 [anssik]
ningxin_hu: question is also, do you want to open a quantization issue or is the precision issue enough?
07:02:57 [anssik]
dom: broadly, how do you handle capabilities, do you want to allow different code path based on underlying capabilities
07:04:39 [anssik]
nikhil: in TF we let this happen, we don't throw, things just work, but I expect the model to work the same on phone and desktop/laptop
07:05:05 [anssik]
ningxin_hu: decision should be done by frameworks, API should expose the capabilities
07:06:04 [sushrajaMSFT]
sushraja_MSFT: something to think about is whether to expose hardware capabilities or have the UA automatically fall back to a slower code path
07:06:33 [anssik]
ningxin_hu: question: there's a TODO, want to know why you chose matMul over fully connected
07:06:43 [anssik]
nikhil: to get the discussion started :-)
07:07:08 [anssik]
... matMul is the simplest thing
07:07:36 [anssik]
ningxin_hu: we can contribute our input from POC to the compat study
07:08:20 [anssik]
nikhil: conv2d(x, filter, padding, strides, dilations)
07:08:40 [anssik]
... padding, everyone does this differently
07:09:46 [anssik]
... it gets fun with tensor layout: the shape of x has many ways to be represented, channels can be transposed, different hardware supports different layouts, e.g. CUDA has a different way to transpose
07:10:53 [anssik]
... thought about this with Daniel; from the user's POV, if the web developer does not need to think about the underlying platform we're in a better place, so the proposal is to choose just one format, even if the internal representation is different
07:11:11 [anssik]
... browsers have already chosen a particular format, channels not transposed outside
07:12:19 [anssik]
ningxin_hu: two re-layouts can happen: with constants, and when you feed images into your network, which is done for every frame
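The layout question being discussed can be made concrete with a small sketch, assuming NHWC ("channels last") versus NCHW ("channels first", common on CUDA paths); the helper name is hypothetical, and in practice this re-layout would be the implementation's or graph optimizer's job.

```python
# Illustrative re-layout between NHWC ("channels last") and NCHW
# ("channels first") tensor layouts, using nested Python lists.
# The function name is hypothetical, not part of any proposed API.

def nhwc_to_nchw(t):
    """t is nested lists indexed [n][h][w][c]; returns [n][c][h][w]."""
    return [[[[t[n][h][w][c] for w in range(len(t[0][0]))]
              for h in range(len(t[0]))]
             for c in range(len(t[0][0][0]))]
            for n in range(len(t))]

# One 2x2 "image" with 3 channels, NHWC:
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
y = nhwc_to_nchw(x)  # now indexed [n][c][h][w]
```

If input frames arrive in one layout and the backend wants the other, this conversion would run per frame, which is the cost ningxin_hu notes above.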
07:12:56 [anssik]
dom: there seems to be no benefit in picking one over the other, it's a matter of optimization, exposing a capability is not useful here
07:13:14 [anssik]
ningxin_hu: align with media element layout by the underlying implementation
07:13:21 [anssik]
dom: one mental model is easier to map
07:14:01 [anssik]
ningxin_hu: earlier discussion, we accept ArrayBuffer, investigate WebGL buffer, texImage2D, video or canvas to be fed as input to this API
07:15:32 [anssik]
ningxin_hu: activation, per our native API experience, fused activation etc. can be done by the underlying graph optimizer; the current direction is a core group with a small number of ops, leaving others to custom ops; need to investigate whether we can optimize more, since optimizers do not work with custom ops(?)
07:16:41 [anssik]
nikhil: we need discussion on the underlying APIs
07:17:24 [anssik]
anssik: currently our charter says: "The APIs in scope of this group will not be tied to any particular platform and will be implementable on top of existing major platform APIs, such as Android Neural Networks API, Windows DirectML, and macOS/iOS Metal Performance Shaders and Basic Neural Network Subroutines."
07:17:32 [anssik]
nikhil: we should look at each of those
07:17:46 [anssik]
anssik: ningxin_hu do you think you can help with this part?
07:18:04 [anssik]
ningxin_hu: yes, we've already done this work in a separate spreadsheet
07:19:34 [anssik]
-> https://github.com/intel/webml-polyfill/blob/master/docs/supported_ops.md Supported ops
07:19:57 [anssik]
ningxin_hu: let's look at the supported ops table we've collected
07:21:48 [anssik]
ningxin_hu: listing different op types and their compatibility across Wasm, WebGL, NNAPI, MPS, BNNS, clDNN, MKLDNN, DirectML
07:22:22 [anssik]
... NN API and MPS have good coverage, DirectML with some compat issues documented in this table
07:23:20 [anssik]
-> https://docs.google.com/spreadsheets/d/1nthZOwgIKsj34EB-SymEwoNTPsxo4X8Pxavm-JaBwME/ Native mapping
07:24:11 [anssik]
ningxin_hu: this is a little bit complex, this table tries to map the native capability, API and parameters, compat issues are marked with notes
07:24:33 [anssik]
... e.g. for MPS padding, we have 4 places that need to be padded, open question how to do the right padding
07:25:15 [anssik]
... for DirectML, we can extract static conv2d information into this table and provide it as input to the compat study under progress
07:25:34 [anssik]
... we want to get data on how each op is defined and which ops are supported by native platforms
07:25:46 [anssik]
... also uplevel compat study, looking at frameworks
07:26:37 [anssik]
-> https://docs.google.com/spreadsheets/d/1nZziT-2uOWeHFeOU3yDZ4_0KDJElkb_4nqXdv6vG5ak/ Performance
07:27:25 [anssik]
ningxin: WebNN POC perf data for DirectML and WebGL backends
07:27:47 [anssik]
... our POC is open source, code available so you can run these benchmarks yourself
07:28:11 [anssik]
... models used are official TFLite and ONNX models
07:29:26 [zkis_]
zkis_ has joined #webmachinelearning
07:29:32 [anssik]
ningxin: summary, very promising performance speedup, opportunity for better perf with further optimization
07:30:02 [anssik]
Present+ Tatsuya_Igarashi
07:31:12 [anssik]
ningxin: across platforms we see similarly good speedups, not just Windows
07:32:23 [anssik]
anssik: how much work was it to produce these op compat study items?
07:32:34 [anssik]
nikhil: it was some work, not trivial
07:33:55 [anssik]
Topic: Standards track next steps
07:34:06 [anssik]
anssik: Wanted to discuss two things: 1) near-term goal to produce an Explainer document that complements the spec that helps solicit early review from W3C TAG; 2) incubation to standards track transition, invited Dom of W3C Staff to talk about this.
07:38:18 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/18 Explainer document #18
07:38:26 [anssik]
anssik: Web specs are expected to be reviewed by W3C's Technical Architecture Group (TAG), and the best practice is to seek such TAG review earlier rather than later in the spec design process.
07:38:48 [anssik]
-> https://github.com/webmachinelearning/webnn/blob/master/explainer.md Web Neural Network API Explained template
07:39:05 [dsinger]
dsinger has joined #webmachinelearning
07:39:08 [anssik]
anssik: This is a collective group action.
07:39:15 [anssik]
-> https://github.com/immersive-web/webxr/blob/master/explainer.md WebXR explainer
07:39:55 [anssik]
anssik: we could copy with pride WebXR explainer's approach, it includes e.g. target hardware section
07:40:08 [anssik]
Alex: supporting explainer-driven spec design
07:40:20 [anssik]
Present+ Alex_Russell
07:40:28 [anssik]
RRSAgent, draft minutes v2
07:40:28 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-webmachinelearning-minutes.html anssik
07:40:50 [anssik]
Sushanth_Rajasankar: also splitting a spec into modules is one possible design approach
07:42:53 [anssik]
ningxin: what if we have alternative designs and don't yet know which to pick?
07:43:18 [anssik]
alex: explainer is the right place for those, put your alternative designs in the explainer
07:43:43 [anssik]
dom: any discussion in the TAG on the architectural decision record?
07:43:54 [anssik]
alex: my understanding is 8 months out of date, not sure at this point
07:44:29 [anssik]
ningxin: what is the process to update explainer?
07:44:36 [anssik]
anssik: PR with review
07:45:03 [anssik]
-> https://www.w3.org/2019/Talks/dhm-ml-workshop/standardization.html Towards W3C Standardization (slides)
07:45:09 [anssik]
anssik: Hand over to Dom
07:45:37 [anssik]
dom: W3C Standardization aka Recommendation Track
07:45:45 [anssik]
... build shared understanding when to advance
07:45:57 [anssik]
... Happens in Working Group
07:46:08 [anssik]
... Under strong Royalty-Free policy
07:46:15 [anssik]
... Following a well-defined process to enable:
07:46:24 [anssik]
... - Fairness and consensus
07:46:24 [anssik]
... - Architectural consistency with the platform
07:46:24 [anssik]
... - Proper review from a security, privacy, internationalization, accessibility perspectives (as it applies)
07:47:13 [anssik]
dom: When?
07:47:24 [anssik]
... Incubation in Community Group
07:47:24 [anssik]
... transition to WG when
07:47:24 [anssik]
-> https://www.w3.org/Guide/standards-track/ W3C Recommendation Track Readiness Best Practices
07:47:24 [anssik]
dom: - Rough agreement on shape & scope of API
07:47:24 [anssik]
... - Some early implementation experience
07:47:25 [anssik]
... - Before it is too late to evolve based on broader input
07:49:25 [anssik]
dom: How?
07:49:35 [anssik]
... Find a W3C Staff to help, e.g. Dom :-)
07:49:35 [anssik]
... Draft a charter reflecting Community Group's view
07:49:35 [anssik]
... Build momentum in W3C broader community (cf workshop)
07:49:35 [anssik]
... Iterate based on reviews (from W3C and others)
07:49:35 [anssik]
... Get formal approval
07:50:58 [anssik]
dom: What about the CG then?
07:51:08 [anssik]
... Various possible approaches:
07:51:08 [anssik]
... - Keep CG to incubate new proposals (e.g. Immersive Web, Web Assembly, Web Audio)
07:51:08 [anssik]
... - Pause the CG while the standardization work happens (may relaunch afterwards)
07:51:08 [anssik]
... - DIY
07:52:43 [anssik]
anssik: thanks Dom, any questions?
07:52:53 [anssik]
nikhil: what is a typical timing?
07:53:05 [anssik]
dom: Immersive Web was 4-5 years in incubation
07:53:12 [anssik]
... Wasm incubated for 2 years
07:53:24 [anssik]
... there's no rules really, depends on where you are in your design process
07:53:37 [anssik]
nikhil: how to evaluate maturity?
07:54:20 [anssik]
dom: interest from target community, key thing is making sure whatever you produce gets adoption
07:54:48 [anssik]
... when you see that implementers are behind the rough shape of the API, it is good time to graduate
07:55:46 [anssik]
Topic: W3C Workshop
07:56:07 [anssik]
anssik: asked Dom to talk to us about Workshops and how they help in getting wider community engaged around a web spec proposal
07:56:14 [anssik]
-> https://www.w3.org/2019/Talks/dhm-ml-workshop/ Towards a Web & Machine Learning Workshop
07:56:20 [jc]
jc has joined #webmachinelearning
07:56:21 [anssik]
dom: What is a W3C workshop?
07:56:30 [anssik]
... Open to anyone with relevant expertise
07:56:30 [anssik]
... Broader perspective than specific CG/WG
07:56:30 [anssik]
... Opportunity to hear from more relevant communities
07:56:30 [anssik]
... Typically, 2-days event
07:57:41 [anssik]
dom: W3C Workshop examples
07:57:55 [anssik]
... Web & Virtual Reality Workshop in 2016
07:57:55 [anssik]
... Web & Games Workshop in June 2019
07:57:55 [anssik]
-> https://www.w3.org/2003/08/Workshops/archive W3C workshop archive
07:58:37 [anssik]
dom: Why a W3C Workshop on Machine Learning?
07:58:48 [anssik]
... Lots of energy in WebML CG
07:58:48 [anssik]
... Lots of interest from many connected but not yet involved communities
07:58:48 [anssik]
... Opportunity to put WebNN in broader context of ML in browser
07:59:27 [anssik]
dom: Possible topics
07:59:47 [anssik]
... WebNN in context
07:59:47 [anssik]
... Integrate ML with browser data sources (e.g. sensors)
07:59:47 [anssik]
... Integration with WebGPU, WASM
07:59:47 [anssik]
... Relation to Speech Recognition, Shape detection
07:59:47 [anssik]
... Relation to /integration with cloud-based inference
07:59:52 [anssik]
-> https://w3c.github.io/machine-learning-workshop/#topics More topic proposals
08:02:41 [anssik]
nikhil: anyone who implements the underlying APIs should be great participants in such a workshop
08:03:09 [anssik]
... also MLIR is an important group of people
08:03:28 [anssik]
... anyone from the hardware side would also be very welcome participants
08:03:37 [anssik]
... what is the format of the workshop?
08:04:28 [anssik]
dom: program committee to decide, can be short talks, discussions, lightning talks
08:05:10 [anssik]
nikhil: lightning talks would work in this context, since this is so much cross-team effort
08:05:54 [anssik]
Dave_Singer: you want to look at opportunities where standards would already help build a market
08:06:10 [anssik]
... Apple would probably be interested in participating
08:06:32 [anssik]
dom: How, where and when
08:06:41 [anssik]
... Define a call for participation
08:06:41 [anssik]
... Establish a small team to do outreach, research and set agenda
08:06:41 [anssik]
... Where? Offer in Berlin - others?
08:06:41 [anssik]
... When? Q1 2020: last 2 weeks of March?
08:09:31 [anssik]
nikhil: TensorFlow Dev Summit is probably in March 2020, want to avoid overlap
08:14:47 [anssik]
anssik: Thank you all for participating, see you at the W3C Workshop on Web & Machine Learning in Q1 2020 maybe? :-)
08:14:52 [anssik]
Topic: Adjourn
08:14:57 [anssik]
RRSAgent, draft minutes v2
08:14:57 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-webmachinelearning-minutes.html anssik
08:29:53 [zkis]
zkis has joined #webmachinelearning
10:39:50 [zkis]
zkis has joined #webmachinelearning
11:45:18 [Zakim]
Zakim has left #webmachinelearning
13:30:28 [Chunming]
Chunming has joined #webmachinelearning
20:53:58 [zkis]
zkis has joined #webmachinelearning
23:52:55 [dsinger]
dsinger has joined #webmachinelearning