IRC log of webmachinelearning on 2021-09-16

Timestamps are in UTC.

13:58:45 [RRSAgent]
RRSAgent has joined #webmachinelearning
13:58:45 [RRSAgent]
logging to https://www.w3.org/2021/09/16-webmachinelearning-irc
13:58:48 [Zakim]
RRSAgent, make logs Public
13:58:51 [Zakim]
please title this meeting ("meeting: ..."), anssik
13:58:55 [anssik]
Meeting: WebML WG Teleconference – 16 Sep 2021
13:58:55 [anssik]
Chair: Anssi
13:58:58 [anssik]
Agenda: https://github.com/webmachinelearning/meetings/blob/master/telcons/2021-09-16-agenda.md
13:59:04 [anssik]
Scribe: Anssi
13:59:08 [anssik]
scribeNick: anssik
13:59:13 [anssik]
Present+ Anssi_Kostiainen
13:59:28 [anssik]
Present+ Ningxin_Hu
13:59:33 [dom]
Regrets+ Dom
14:00:07 [anssik]
RRSAgent, draft minutes
14:00:07 [RRSAgent]
I have made the request to generate https://www.w3.org/2021/09/16-webmachinelearning-minutes.html anssik
14:00:27 [anssik]
Present+ Sungpil_Shin
14:00:33 [anssik]
Present+ Chai_Chaoweeraprasit
14:01:46 [Chai]
Chai has joined #webmachinelearning
14:02:05 [anssik]
Present+ Ganesan_Ramalingam
14:02:42 [Sungpil_Shin]
Sungpil_Shin has joined #webmachinelearning
14:02:44 [rama]
rama has joined #webmachinelearning
14:03:27 [anssik]
Topic: WebNN API recent new feature requests
14:03:38 [anssik]
Subtopic: Request input layout and resize only on height and width for Resample
14:03:45 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/200 issue #200
14:03:51 [anssik]
-> https://github.com/webmachinelearning/webnn/pull/205 PR #205
14:04:28 [anssik]
anssik: it seems this discussion is still active in PR #205; are we converging on a slightly lower-level design that allows WebNN to act most flexibly as a backend interface to ML frameworks?
14:06:07 [Jonathan]
Jonathan has joined #webmachinelearning
14:06:11 [anssik]
Chai: Ningxin proposed we'd use axis; I'd like to hear from Rama how to use this properly
14:06:20 [anssik]
Present+ Jonathan_Bingham
14:06:47 [anssik]
Rama: (explaining the details of axis)
14:07:03 [anssik]
Chai: does axis work similarly in other ops too?
14:07:46 [anssik]
Rama: in most other ops yes, axis identification is straightforward
14:08:27 [anssik]
Chai: in this case we know we're going to interpolate on consecutive dimensions; you're saying the axis can be zero? I'm not following the axis-zero case
14:08:42 [anssik]
Rama: in ONNX it is zero-based, axis zero is allowed usually
14:09:01 [anssik]
... the NumPy convention allows zero, and negative values count axes backwards from the last axis
14:09:07 [ningxin_hu]
I understand WebNN is also zero-based
14:09:25 [anssik]
... zero-based and one-based are orthogonal, you can have a zero-based index
14:09:46 [anssik]
Chai: when it's zero, how do you describe it? interpolate on the first two dimensions?
14:09:55 [anssik]
Rama: axis and axis+1 are the dims to be scaled
14:10:06 [anssik]
... I guess I'm not sure I understand the question
14:10:24 [anssik]
... the axis in other ops are index 1
14:10:30 [anssik]
... in WebNN
14:10:44 [anssik]
... we need to define axis as zero-based to use the same notion across the API
14:10:57 [anssik]
... want to be consistent within this API and its ops
14:11:18 [anssik]
Chai: I'd look at slice for usage, there are other ops as well
14:11:26 [anssik]
... slice is a good example, the same semantics
14:11:47 [anssik]
Rama: OK, using 1-based should be fine then, I'm a bit surprised still
14:11:57 [ningxin_hu]
for slice, the axes values in the sequence are within the [0, r-1] range, where r is the input tensor rank
14:12:09 [ningxin_hu]
https://webmachinelearning.github.io/webnn/#dom-mlsliceoptions-axes
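[Note: a minimal sketch of the zero-based axes convention in slice referenced above, assuming the slice(input, starts, sizes, options) shape linked from the spec draft; the operand x and its shape are hypothetical:]

  // x is a hypothetical 4-D operand of shape [1, 3, 4, 4].
  // The axes entries are zero-based indices into the input shape, so
  // axes: [2] slices 2 elements starting at index 1 along the third dim only.
  const y = builder.slice(x, /*starts*/ [1], /*sizes*/ [2], { axes: [2] });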
14:12:18 [anssik]
Chai: for example, if we interpolate the last two dims, axis equals 2; we interpolate along the second dimension
14:12:37 [anssik]
... how to define axis when interpolating along the first dim?
14:12:53 [anssik]
Rama: we can continue discussing this in the GH issue
14:14:04 [anssik]
... higher-level question: in choosing this design, considering automation in other frameworks while people also write code manually, is it helpful to have an API that is constrained?
14:14:16 [anssik]
... a more generous API may make this simpler perhaps
14:14:51 [anssik]
Chai: this was a point I was trying to make: when you design the API, normally from the get-go you want to ask whether you optimize for ease of use, so that a web dev can call it directly
14:14:57 [anssik]
... e.g. give enums
14:15:33 [anssik]
... that's not the first goal; we're targeting ML frameworks and this is a backend API sitting underneath
14:15:53 [anssik]
... the interface should be flexible enough
14:16:09 [anssik]
... e.g. ONNX has no layout or axis for resize op
14:16:28 [anssik]
... when converting from ONNX to WebNN, whether from tool or at runtime, it should be possible
14:17:43 [ningxin_hu]
q+
14:17:46 [anssik]
ack ningxin_hu
14:18:03 [anssik]
ningxin_hu: thanks for the discussion, very helpful!
14:18:29 [anssik]
... I agree with Chai that the WebNN API's key customer is an ML JS framework
14:18:40 [anssik]
... and e.g. converting from ONNX should be possible
14:18:54 [anssik]
... the previous sizes and scales design is good, the most flexible
14:19:31 [anssik]
... the only issue raised as implementation feedback was that the previous spec was not clear about which cases the interface supports, e.g. interpolation over more than 2 dims
14:19:39 [anssik]
... also the dims could be scattered
14:19:59 [anssik]
... implementation feedback suggests it is hard to map this flexible interface to the underlying ML APIs, e.g. OpenVINO and NNAPI
14:20:48 [Chai]
q+
14:20:55 [anssik]
... Chai just made it clear this API can be limited, and we can go back and add the limitation that this interface supports the three cases
14:21:06 [anssik]
... I think implementation could handle that
14:21:53 [anssik]
q?
14:21:56 [anssik]
ack Chai
14:22:10 [anssik]
Chai: want to add, agree with Ningxin
14:22:17 [anssik]
... mostly we agree on the details
14:22:29 [anssik]
... this should be a 2D op and the proposed name change makes sense
14:22:45 [anssik]
... even ONNX is very open-ended but does not do volumetrics; DirectML can do that
14:22:58 [anssik]
... interpolation makes a lot of sense to me
14:23:33 [anssik]
... limiting scales and sizes to be 2D makes sense; we just need to distinguish the three separate cases mentioned in the GH issue:
14:23:36 [anssik]
[x, x, 1, 1]
14:23:36 [anssik]
[1, x, x, 1]
14:23:36 [anssik]
[1, 1, x, x]
14:24:13 [anssik]
ningxin_hu: a different solution to Rama's axis proposal is to allow the developer to define two consecutive dims as spatial dims
14:24:25 [anssik]
... if the axis is zero-based
14:24:53 [anssik]
... to extend Rama's proposal, for a simple axis sequence of two, the developer can define two consecutive dims as spatial
14:25:19 [anssik]
... that's a quick thought, not sure if perfect, but came to my mind just listening to this discussion
14:26:01 [ningxin_hu]
sgtm
14:26:33 [anssik]
anssik: proposal for Ningxin to hash out the design in PR #205 and seek review from Chai, Rama -- we seem to be close to consensus
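[Note: a rough sketch of the direction converging in PR #205, assuming a resample option where two consecutive zero-based axes select the spatial dims and scales/sizes stay 2-D; the member names are illustrative only, the exact shape is being settled in the PR:]

  // The two consecutive axes pick which of the three layouts above applies:
  //   axes: [0, 1] -> [x, x, 1, 1]
  //   axes: [1, 2] -> [1, x, x, 1]  (NHWC spatial dims)
  //   axes: [2, 3] -> [1, 1, x, x]  (NCHW spatial dims)
  const y = builder.resample(x, { axes: [2, 3], scales: [2.0, 2.0] });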
14:26:51 [anssik]
Subtopic: Support for configuring rounding type in pooling operations
14:26:56 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/198 issue #198
14:27:00 [anssik]
-> https://github.com/webmachinelearning/webnn/pull/208 PR #208
14:27:17 [anssik]
anssik: Chai suggested adding an optional output sizes option in addition to the rounding mode
14:27:32 [anssik]
... does this address the issue? Ningxin you said SGTM? Does this work for OpenCV?
14:27:44 [ningxin_hu]
q+
14:28:14 [anssik]
Chai: DirectML is an API that relies on the caller to specify the output size
14:28:20 [anssik]
... DML would not calculate the size
14:28:26 [anssik]
... in that case you'd have to do all the work
14:28:36 [anssik]
... that works for DML, since frameworks sit on top
14:28:55 [anssik]
... the question about the DML helper lib, I can dig up an answer to that
14:29:15 [anssik]
... it is not super easy to use DML because that is not a goal; DML does not need to be easy, but explicit
14:29:41 [anssik]
... mapping DML to WebNN would be nice but it does not have to map 1:1; for frameworks like TF or ONNX Runtime, the backend has to do shape inference
14:29:48 [anssik]
... that's probably the difference there
14:29:53 [anssik]
ack ningxin_hu
14:30:28 [anssik]
ningxin_hu: your proposal to add output sizes SGTM; the framework calculates the output size and ignores the rounding mode
14:31:09 [anssik]
... OpenCV relies on the backend to calculate the size, so it is good to use a rounding mode to map that, and the backend can help calculate it with the WebNN implementation
14:31:16 [anssik]
... if we'd have these two it'd be most flexible
14:31:39 [anssik]
... I think this is similar to the Conv2D, we have explicit padding
14:31:54 [anssik]
... I agree with Chai's proposal and will update the PR accordingly
14:32:29 [anssik]
... second question: about the DML design with explicit output size and DirectML X; the WebNN implementation is based on DirectML X, which is easier to use
14:32:55 [anssik]
... a question about the DML X pooling op, whose graph builder API is similar to WebNN's
14:33:27 [anssik]
... the caller sets the output size, but in the DML X pooling API the caller can't provide any output size or rounding mode
14:34:06 [anssik]
... avg pooling calculates the output size, but the interface to DML X does not have the capability to set the rounding mode
14:34:18 [anssik]
... interested in how DML X supports this feature
14:34:41 [anssik]
Chai: DML itself is also a graph API, DML X does not add new features, makes it easier to use
14:35:37 [anssik]
... WebNN can defer all this to runtime, but DML X does not do that; it will help calculate, but you have to lay it out when constructing the graph
14:35:49 [anssik]
ningxin_hu: I'll look at DML X implementation and will follow up
14:36:33 [anssik]
ningxin_hu: will add output size per Chai's suggestion to PR #208
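[Note: a rough sketch of the two ways of driving the pooling output shape discussed above, assuming the roundingType and outputSizes members proposed for PR #208; the member names are illustrative until the PR lands:]

  // Option A: the framework computes the output size itself (explicit,
  // DML-style) and passes it in; any rounding type can then be ignored.
  const y1 = builder.averagePool2d(x, {
    windowDimensions: [3, 3], strides: [2, 2], outputSizes: [112, 112] });

  // Option B: the caller (e.g. OpenCV's backend) lets the implementation
  // compute the size and only states how to round the size formula.
  const y2 = builder.averagePool2d(x, {
    windowDimensions: [3, 3], strides: [2, 2], roundingType: "ceil" });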
14:37:14 [anssik]
Subtopic: reduceLogSum, reduceLogSumExp, reduceSumSquare are not supported on OV/NNAPI
14:37:28 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/209 issue #209
14:37:36 [anssik]
anssik: the issue is brief, states reduceLogSum, reduceLogSumExp, reduceSumSquare are not supported by OpenVINO or NNAPI
14:37:46 [anssik]
... I believe the intent of this issue is probably to discuss whether those three ops should stay in the spec
14:37:54 [anssik]
... would emulating these ops on those platforms be an option?
14:38:18 [anssik]
q?
14:38:21 [Chai]
it looks like all of them can be emulated
14:38:43 [rama]
I think so too. They are just compositions of other ops.
14:39:11 [Chai]
i can take an action item to check it out.
14:39:55 [anssik]
Chai: I'll take a look and confirm
14:40:13 [anssik]
anssik: it looks like we may want to add an informative section on emulation
14:40:18 [anssik]
... for these three ops
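[Note: a sketch of the compositions Chai and Rama refer to, assuming the reduceSum, exp, log and mul ops already in the graph builder; the options pass-through is illustrative:]

  // reduceLogSum(x)    = log(reduceSum(x))
  const logSum    = builder.log(builder.reduceSum(x, options));
  // reduceLogSumExp(x) = log(reduceSum(exp(x)))
  const logSumExp = builder.log(builder.reduceSum(builder.exp(x), options));
  // reduceSumSquare(x) = reduceSum(x * x)
  const sumSquare = builder.reduceSum(builder.mul(x, x), options);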
14:40:30 [anssik]
Subtopic: [webidl] Add method steps to operations
14:40:37 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/210 issue #210
14:40:57 [anssik]
anssik: The WebIDL spec recommends algorithmic method steps for defining operations: https://heycam.github.io/webidl/#method-steps
14:41:24 [anssik]
... This spec currently uses a WebGPU spec inspired convention that differs from this best practice. We should consider adopting the "method steps" convention.
14:41:38 [anssik]
... An example of method steps in context of the WebNN API spec:
14:41:38 [anssik]
https://webmachinelearning.github.io/webnn/#dom-ml-createcontext
14:42:07 [anssik]
... I believe @domenic would be happy to review the PR for this, perhaps also Dom
14:43:38 [anssik]
Subtopic: [webidl] Define algorithms for dictionaries with lists as default values
14:43:43 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/211 issue #211
14:43:50 [anssik]
anssik: quoting domenic:
14:43:54 [anssik]
... You need to have a normative algorithm which checks if input["scales"]
14:43:59 [anssik]
... exists, and if not, use the defaults. I.e. something like
14:44:20 [anssik]
... 1. Let scales be the list « 1.0, 1.0, 1.0, 1.0 ».
14:44:23 [anssik]
... 2. If input["scales"] exists, then set scales to input["scales"].
14:44:41 [anssik]
... (and maybe also validate that input["scales"] is of length 4, or whatever you need?)
14:44:47 [anssik]
... This seems difficult since right now your spec has no algorithms (i.e. no method steps) for its operations, just a non-normative description of the inputs and return value.
14:45:44 [anssik]
anssik: My suggestion was to make this a global update to all affected dictionaries. We probably want to define a reusable algorithm we reference from all these places.
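[Note: the JavaScript-observable behaviour the quoted steps would pin down, shown as a sketch; the scales name and length check come from the quote, everything else is illustrative:]

  // Default the list, then override it only if the member exists,
  // validating the length while doing so.
  let scales = [1.0, 1.0, 1.0, 1.0];
  if (options.scales !== undefined) {
    if (options.scales.length !== 4)
      throw new TypeError("scales must have 4 entries");
    scales = options.scales;
  }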
14:46:31 [anssik]
Subtopic: [usecase] Neural network deployment
14:46:35 [anssik]
-> https://github.com/webmachinelearning/webnn/pull/207 PR #207
14:46:45 [anssik]
anssik: Sungpil Shin submitted this use case for consideration.
14:46:57 [anssik]
... I reviewed this and had a question whether this use case is similar to https://webmachinelearning.github.io/webnn/#usecase-perf-adapt
14:47:07 [anssik]
... we could consider merging the two perhaps?
14:47:34 [anssik]
Sungpil_Shin: thanks Anssi, we suggested this PR with Wonsuk Lee
14:47:51 [anssik]
... we reviewed your comments, and agree the point of this use case is quite similar to the other use case already in the spec
14:48:15 [anssik]
... but our point for this use case is to select the best hardware in devices
14:48:57 [anssik]
... the device selection discussion seems to have touched on this use case?
14:49:32 [anssik]
... we will revise this use case
14:50:00 [anssik]
-> https://github.com/webmachinelearning/webnn/issues/169 issue #169
14:50:31 [Chai]
q+
14:50:53 [anssik]
ack Chai
14:51:12 [anssik]
Chai: this seems different from issue #169
14:52:01 [anssik]
... Sungpil is proposing something new: allow the caller to select on which device to run at run time; this seems to be a new capability, essentially we could consider adding this new use case if this is what we think WebNN should be able to do
14:52:49 [anssik]
... this happens automatically: someone has a model, converted to WebNN, expected to run on any hardware without explicitly specifying which device it'd run on
14:53:43 [anssik]
... the automation that'd be required in the API is way more than selecting which devices are available; it would involve analyzing the model, e.g. there are models that are more effective to run on CPU and in some cases on GPU, e.g. CV models
14:54:03 [anssik]
... this use case basically says WebNN has to choose, beyond the scope we have now
14:55:06 [anssik]
q?
14:55:36 [anssik]
Sungpil: focused on learning rate to find the best hardware in this use case
14:56:15 [anssik]
Topic: TPAC agenda building
14:56:24 [anssik]
anssik: Call to review the draft agenda for the Web Machine Learning WG Virtual Meeting at TPAC 2021
14:56:31 [anssik]
-> https://github.com/webmachinelearning/meetings/issues/18 WebML WG Virtual Meetings at TPAC 2021
14:58:07 [anssik]
anssik: We'd like to get your proposals in as soon as possible so we can split topics across the meeting days matching folks' interests
14:58:27 [anssik]
anssik: the poll was not very well attended; Rafael is not able to join on 26 Oct, I assume for the other participants the dates are fine.
14:58:38 [anssik]
... these proposed dates are now official:
14:58:38 [anssik]
26 October 2021 14:00-15:00 UTC+0
14:58:38 [anssik]
27 October 2021 14:00-15:00 UTC+0
14:58:39 [anssik]
28 October 2021 14:00-15:00 UTC+0
14:59:45 [anssik]
RRSAgent, draft minutes
14:59:45 [RRSAgent]
I have made the request to generate https://www.w3.org/2021/09/16-webmachinelearning-minutes.html anssik
16:55:20 [Zakim]
Zakim has left #webmachinelearning