IRC log of webmachinelearning on 2019-09-05

Timestamps are in UTC.

14:00:48 [RRSAgent]
RRSAgent has joined #webmachinelearning
14:00:48 [RRSAgent]
logging to https://www.w3.org/2019/09/05-webmachinelearning-irc
14:00:53 [Zakim]
Zakim has joined #webmachinelearning
14:00:58 [anssik]
RRSAgent, make logs public
14:01:02 [anssik]
Meeting: WebML CG Teleconference – 5 September 2019
14:01:06 [anssik]
Chair: Anssi
14:01:11 [anssik]
Agenda: https://github.com/webmachinelearning/meetings/blob/master/telcons/2019-09-05-agenda.md
14:01:21 [anssik]
Scribe: Anssi
14:01:27 [anssik]
scribeNick: anssik
14:01:28 [Rafael]
Rafael has joined #webmachinelearning
14:01:32 [anssik]
Regrets+ Thomas_Steiner
14:01:45 [anssik]
Present+ Anssi_Kostiainen
14:01:57 [ningxinhu]
Present+ Ningxin_Hu
14:02:47 [Rama]
Present+ G_Ramalingam
14:02:57 [anssik]
Present+ Jonathan_Bingham
14:03:04 [anssik]
Present+ Kai_Ninomiya
14:03:08 [anssik]
Present+ Nikhil_Thorat
14:03:12 [anssik]
Present+ Paul_McDaniel
14:03:16 [anssik]
Present+ Rafael_Cintron
14:03:29 [Jonathan_]
Jonathan_ has joined #webmachinelearning
14:03:32 [nsthorat]
nsthorat has joined #webmachinelearning
14:03:47 [nsthorat]
+nsthorat
14:03:59 [anssik]
RRSAgent, draft minutes v2
14:03:59 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/05-webmachinelearning-minutes.html anssik
14:05:07 [anssik]
TOPIC: Ops compatibility study
14:05:15 [anssik]
-> Ops compatibility study https://github.com/webmachinelearning/webnn/issues/17
14:05:19 [anssik]
-> ONNX vs TF Lite op comparison: Conv2D, Matmul / Fully Connected https://docs.google.com/document/d/1RXCkZ9mliWbqSakYvNlWhsRH4yFtnpe1YQQNFAIRZo8/
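[Illustration, not from the call: a minimal TypeScript sketch of one Conv2D compatibility gap a study like this covers, namely padding semantics; ONNX takes explicit per-edge pads while TF Lite takes a SAME/VALID mode. Both helper names are hypothetical.]

    // ONNX-style Conv2D output size: explicit per-edge pads.
    function convOutOnnx(inSize: number, kernel: number, stride: number,
                         dilation: number, padBegin: number, padEnd: number): number {
      const effKernel = dilation * (kernel - 1) + 1; // dilated kernel extent
      return Math.floor((inSize + padBegin + padEnd - effKernel) / stride) + 1;
    }

    // TF Lite-style Conv2D output size: padding is a mode, not numbers.
    function convOutTflite(inSize: number, kernel: number, stride: number,
                           dilation: number, mode: "SAME" | "VALID"): number {
      const effKernel = dilation * (kernel - 1) + 1;
      return mode === "SAME"
        ? Math.ceil(inSize / stride)                    // pads implicitly
        : Math.ceil((inSize - effKernel + 1) / stride); // no padding at all
    }

    // Width 224, 3x3 kernel, stride 2: convOutOnnx(224, 3, 2, 1, 0, 0) === 111,
    // but convOutTflite(224, 3, 2, 1, "SAME") === 112, so a shared op spec
    // must pick one behavior or parameterize it.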
14:06:05 [anssik]
nsthorat: spent time with Googlers discussing this topic, the Google consensus seems to be that standardizing on ops is too early at this point
14:06:39 [anssik]
... TensorFlow in general is not in a position to endorse ONNX in a web spec, would prefer to create a new spec for ops
14:07:07 [anssik]
... we think ops are not the right level of abstraction to stand the test of time
14:07:37 [anssik]
... MLIR might be it, but we're not ready yet
14:07:42 [kainino]
Present+ James_Darpinian
14:07:55 [anssik]
... there's a lot of valuable exploration with e.g. custom ops
14:08:28 [anssik]
q?
14:08:44 [anssik]
q+ to speak up
14:08:46 [anssik]
q?
14:08:47 [anssik]
ack anssik
14:08:47 [Zakim]
anssik, you wanted to speak up
14:08:55 [jdarpinian]
jdarpinian has joined #webmachinelearning
14:09:18 [Rafael]
q+
14:09:23 [anssik]
ack Rafael
14:09:58 [anssik]
Rafael: nikhil I'm curious about the rationale re ONNX, is it political, or do you think we cannot find a middle ground?
14:10:06 [Jonathan_]
Jonathan_ has joined #webmachinelearning
14:10:36 [anssik]
nsthorat: TensorFlow has not publicly endorsed ONNX and does not want to do that for the purpose of the web spec
14:11:12 [anssik]
daniel: we feel ONNX is too big a spec as of now; there's a question of neutrality as well
14:11:24 [anssik]
Present+ Daniel_Smilkov
14:12:09 [Rama]
q+
14:12:17 [anssik]
Rafael: I understand this would be something we start small; all the ISVs are part of ONNX, Amazon included, it is not meant to be a one-company-driven effort
14:12:38 [anssik]
nsthorat: I think the (ONNX) issue is more organizational than technical
14:13:35 [jdarpinian]
q+
14:14:08 [anssik]
paul: we started looking at things that could be hardware accelerated
14:15:11 [anssik]
nsthorat: the feedback we got internally was that with a spec at the ops level there would be a lot of issues for hw vendors(?)
14:15:52 [anssik]
... TF thinks it is too early to standardize an ops set, but we do not want to take momentum away from this CG; we'd rather do explorations that could evolve, maybe with custom ops and shared memory
14:16:03 [anssik]
q?
14:16:22 [anssik]
paul: thinking about what the next steps would be in light of this new information
14:16:50 [anssik]
... we're also working with vendors, working with IRs, we're on a similar journey
14:16:56 [anssik]
q?
14:18:33 [anssik]
anssik: does it make sense to phase the work, e.g. phase 1 explores ops + custom ops, phase 2 looks into MLIR or whatever comes in the future?
14:18:55 [anssik]
nsthorat: we should do the explorations we have ongoing
14:20:06 [Jonathan_]
James is on the queue with an idea
14:20:53 [anssik]
q?
14:23:05 [anssik]
nsthorat: looking at ops and custom ops with shared memory in parallel would be a reasonable exploration
14:23:26 [anssik]
q?
14:23:35 [anssik]
ack jdarpinian
14:24:03 [anssik]
jdarpinian: James from Chrome, thinking about what we can do that's minimal, the simplest thing that could possibly work
14:24:26 [anssik]
... looking at doing a WebGL extension, benefits: WebGL extensions are optional, so if we ship it we can always unship it later
14:24:43 [anssik]
... almost all NN frameworks already make use of WebGL
14:24:55 [anssik]
... it could be as simple as adding a couple of API calls to access vendor-specific kernels
14:25:25 [Rafael]
q+
14:25:27 [anssik]
... seems like the simplest way for the CG to achieve the goal, and it would not need to be supported forever
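[Illustration: a sketch of what the interface of such an op-level WebGL extension could look like; the extension name and all members below are hypothetical, not a proposal made on the call.]

    // Obtained like any other WebGL extension, and null when unsupported:
    //   const ml = gl.getExtension("WEBGL_ml_ops") as WebGLMLOps | null;
    // Hypothetical interface; data stays in textures the framework already
    // uses from WebGL, so no extra copies across API boundaries.
    interface WebGLMLOps {
      conv2d(input: WebGLTexture, filter: WebGLTexture, output: WebGLTexture,
             params: { strides: [number, number],
                       pads: [number, number, number, number] }): void;
      matMul(a: WebGLTexture, b: WebGLTexture, output: WebGLTexture): void;
    }

Because getExtension returns null where the extension is absent, a framework can feature-detect and fall back to its own shaders, which is what makes shipping and later unshipping workable.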
14:25:28 [anssik]
q?
14:25:53 [anssik]
ack Rafael
14:26:15 [anssik]
Rafael: doing a WebGL extension sounds good
14:26:34 [anssik]
... custom ops could use WebGPU compute shaders
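[Illustration: a minimal sketch of a custom op written as a WebGPU compute shader, elementwise ReLU over a storage buffer. It assumes the current WebGPU API shape, which postdates this call, and assumes the buffer was created with STORAGE usage.]

    const reluWgsl = `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        if (id.x < arrayLength(&data)) {
          data[id.x] = max(data[id.x], 0.0);
        }
      }`;

    // Applies ReLU in place to the first n floats of `buffer`.
    function reluInPlace(device: GPUDevice, buffer: GPUBuffer, n: number) {
      const module = device.createShaderModule({ code: reluWgsl });
      const pipeline = device.createComputePipeline({
        layout: "auto",
        compute: { module, entryPoint: "main" },
      });
      const bindGroup = device.createBindGroup({
        layout: pipeline.getBindGroupLayout(0),
        entries: [{ binding: 0, resource: { buffer } }],
      });
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginComputePass();
      pass.setPipeline(pipeline);
      pass.setBindGroup(0, bindGroup);
      pass.dispatchWorkgroups(Math.ceil(n / 64));
      pass.end();
      device.queue.submit([encoder.finish()]);
    }

An op the spec never shipped runs over the same GPU buffers the built-in ops would use, which is the shared-memory side of the custom-ops exploration.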
14:27:00 [anssik]
anssik: WebGL compute extension status?
14:27:09 [anssik]
jdarpinian: not shipping on Mac
14:27:15 [anssik]
ack rama
14:27:19 [anssik]
q?
14:27:58 [anssik]
Rama: about the ops abstraction, does that mean ops are not sufficient as part of this standard?
14:28:48 [anssik]
daniel: because NN/ML is evolving so quickly, new ops are coming into place all the time
14:29:33 [anssik]
... we want all hw vendors to implement them efficiently; otherwise we'll fall back to common low-level abstractions such as Wasm
14:29:59 [anssik]
... the ops sets keep on growing, ONNX and TF Lite keep growing, and the web would be unable to catch up with their ops sets
14:30:40 [anssik]
Rama: this could also be modeled with higher-level ops
14:31:30 [anssik]
... we could identify a collection of higher-level abstractions, would something like that address this with easy extensibility?
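[Illustration: one hypothetical way to read this in TypeScript, where the spec ships a single higher-order elementwise op plus a tiny scalar-expression grammar, so that a new activation function becomes data rather than a new op.]

    // Hypothetical scalar expression tree a backend could compile for its
    // device, e.g. lower to shader source on GPU.
    type ScalarExpr =
      | { kind: "input" }                  // the current element x
      | { kind: "const", value: number }
      | { kind: "max" | "add" | "mul", a: ScalarExpr, b: ScalarExpr };

    // relu(x) = max(x, 0), expressed as data instead of a new spec'd op:
    const relu: ScalarExpr = {
      kind: "max",
      a: { kind: "input" },
      b: { kind: "const", value: 0 },
    };

    // Reference interpreter a CPU backend or a test suite could use.
    function evalExpr(e: ScalarExpr, x: number): number {
      switch (e.kind) {
        case "input": return x;
        case "const": return e.value;
        case "max": return Math.max(evalExpr(e.a, x), evalExpr(e.b, x));
        case "add": return evalExpr(e.a, x) + evalExpr(e.b, x);
        case "mul": return evalExpr(e.a, x) * evalExpr(e.b, x);
      }
    }

    const elementwise = (e: ScalarExpr, t: Float32Array) =>
      t.map((x) => evalExpr(e, x));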
14:31:51 [anssik]
q?
14:31:54 [paulmcdaniel]
paulmcdaniel has joined #webmachinelearning
14:32:15 [anssik]
nsthorat: I hear you, those are good ideas; these explorations are being done under the umbrella of compilers and MLIR
14:32:49 [anssik]
... these explorations are happening also outside this group and will evolve significantly over the next 6 months
14:33:39 [Rama]
rama: can we address the extensibility question using a small collection of higher-order ops, like element-wise-op?
14:33:42 [anssik]
-> "Multi-Level Intermediate Representation" Compiler Infrastructure https://github.com/tensorflow/mlir
14:34:21 [ningxinhu]
q+
14:34:56 [anssik]
q?
14:35:13 [anssik]
anssik: is the compat study exploration still valid?
14:35:27 [anssik]
nsthorat: yes, would prioritize custom ops exploration
14:35:55 [anssik]
q?
14:35:59 [anssik]
ack ningxinhu
14:36:31 [anssik]
ningxinhu: a question to james and rafael re the WebGPU extension, will it be an op-level abstraction or a lower-level abstraction?
14:36:55 [anssik]
jdarpinian: I think it would be op-level, since there's nothing really concrete to propose otherwise at this time
14:38:25 [anssik]
ningxinhu: so the idea is to add an ops-level extension to WebGL/WebGPU?
14:38:48 [anssik]
jdarpinian: yes, we could implement those ops that would give the biggest speedup
14:39:26 [anssik]
ningxinhu: we still need the ops compat study to look into MPS and DirectML compatibility
14:40:14 [anssik]
ningxinhu: maybe we need to look (more) into compat not at the framework level, but at the native API level, MPS etc.
14:41:12 [anssik]
ningxinhu: do you expect this group could do the ops study, and how do you see collaboration with the WebGPU and WebGL groups?
14:41:39 [anssik]
jdarpinian: for WebGL I'm not sure, but WebGPU is probably easier since it is also a W3C group
14:45:29 [anssik]
ack?
14:45:32 [anssik]
q?
14:45:32 [ningxinhu]
q+
14:45:41 [kainino]
q+
14:47:06 [anssik]
ningxinhu: another question: james and rafael propose the WebGL and WebGPU extension route; how do we support other types of accelerators, including CPU-based ones?
14:47:42 [anssik]
... another device class is standalone accelerators; how do we expose those capabilities to the web?
14:47:49 [jdarpinian]
q+
14:48:10 [anssik]
ack?
14:48:15 [anssik]
q?
14:49:40 [anssik]
jdarpinian: I'm very interested in standalone accelerators; it's unclear what type of API on the native side will be used to interface with them long term
14:50:07 [anssik]
... it would be great to have a mechanism to unship
14:50:11 [anssik]
q?
14:50:17 [anssik]
ack jdarpinian
14:50:20 [anssik]
ack kainino
14:51:25 [anssik]
kainino: there has been W3C-Khronos collaboration on the canvas and HTML specs that has worked via shared membership and people; it has been easy in practice
14:52:18 [anssik]
... WebGPU does not meet formally at TPAC 2019, but e.g. Myles and Dean from Apple will be there
14:52:32 [anssik]
q?
14:52:58 [anssik]
ack ningxinhu
14:53:01 [anssik]
q?
14:53:15 [Rafael]
q+
14:54:39 [anssik]
q?
14:54:42 [anssik]
ack Rafael
14:55:10 [anssik]
Rafael: what is the roadmap of TF.js over the next few months?
14:55:49 [anssik]
nsthorat: good question; we're working on a Wasm backend for TF.js and on a WebGPU backend, and trying to ship higher-level models, e.g. PoseNet
14:56:07 [anssik]
... MLIR will evolve and we'll watch that space
14:56:13 [anssik]
q?
14:56:47 [anssik]
TOPIC: F2F agenda building
14:56:53 [anssik]
-> WebML F2F agenda https://github.com/webmachinelearning/meetings/tree/master/2019-09-17-fukuoka
14:56:58 [anssik]
RRSAgent, draft minutes v2
14:56:58 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/05-webmachinelearning-minutes.html anssik
14:58:42 [anssik]
anssik: would nikhil want to give a briefing on MLIR?
14:58:46 [anssik]
nsthorat: can do that
14:58:54 [anssik]
[no objection]
15:00:53 [anssik]
-> ONNX vs TF Lite op comparison: Conv2D, Matmul / Fully Connected https://docs.google.com/document/d/1RXCkZ9mliWbqSakYvNlWhsRH4yFtnpe1YQQNFAIRZo8/
15:01:30 [anssik]
nsthorat: I think we should still do the compat study
15:02:24 [anssik]
anssik: can you share more on the DirectML POC?
15:02:56 [anssik]
ningxinhu: that POC is a contribution to help the ops compat study for DirectML and MPS
15:06:41 [anssik]
nsthorat: TensorFlow Dev Summit 2020 dates not yet decided
15:11:34 [anssik]
TOPIC: Adjourn
15:11:42 [anssik]
RRSAgent, draft minutes v2
15:11:42 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/05-webmachinelearning-minutes.html anssik
17:10:38 [Zakim]
Zakim has left #webmachinelearning