13:55:47 RRSAgent has joined #webmachinelearning
13:55:47 logging to https://www.w3.org/2021/04/01-webmachinelearning-irc
13:55:49 Meeting: WebML CG Teleconference – 1 April 2021
13:55:50 RRSAgent, make logs Public
13:55:50 please title this meeting ("meeting: ..."), anssik
13:56:10 Meeting: WebML CG Teleconference – 1 April 2021
13:56:17 Chair: Anssi
13:56:21 Agenda: https://github.com/webmachinelearning/meetings/blob/master/telcons/2021-04-01-agenda.md
13:56:28 Scribe: Anssi
13:56:41 scribeNick: anssik
13:56:51 rbeaumont has joined #webmachinelearning
13:56:58 agilotte has joined #webmachinelearning
13:58:17 Present+ Alexandre_Gilotte
13:58:31 Present+ Anssi_Kostiainen
13:58:43 Present+ Zoltan_Kis
13:58:50 present+
14:00:58 Present+ Romain_Beaumont
14:04:18 RafaelCintron has joined #webmachinelearning
14:04:18 Present+ Rafael_Cintron
14:12:04 ningxin_hu has joined #webmachinelearning
14:12:10 Present+ Ningxin_Hu
14:13:31 Topic: Operation-specific APIs proposal
14:13:57 anssik: all are probably familiar with the background already, discussed in an issue by Jonathan and Ping:
14:14:02 -> https://github.com/webmachinelearning/proposals/issues/2 operation-specific APIs proposal
14:14:12 anssik: the issue also discusses key use cases
14:14:33 ... today I want to continue discussing and reviewing work-in-progress features that satisfy the requirements of this operation-specific APIs proposal
14:14:46 ... I listed 3 topics in the agenda, but please chime in if I missed any requirements not satisfied by these
14:14:59 ... 1) WebAssembly scenario of the op-level execution use case (Ningxin)
14:15:06 ... 2) Clarify at which point weights are used in the compilation (Rafael)
14:15:23 ... 3) Discuss how the caller using an op-specific API can do resource upload and download (Chai)
14:15:54 anssik: let's first discuss Wasm op-level execution; Ningxin opened an issue with the use case and requirements
14:16:00 -> https://github.com/webmachinelearning/webnn/issues/156 Support CPU - WebAssembly scenario of the op level execution use case (issue #156)
14:16:20 anssik: Ningxin, please introduce this use case and the two requirements identified
14:17:00 Ningxin: the use case is one scenario of op-level execution, per the proposal from Jonathan et al.
14:17:19 ... the use case is basically a JS ML framework executing ops on the CPU device with WebAssembly
14:17:49 ... e.g. conv2d and matmul, executing an op with ML-specific instructions available on the CPU device, e.g. VNNI
14:18:08 ... two requirements:
14:18:15 ... 1) WebNN API should allow JS ML frameworks to create an MLContext for the CPU device
14:18:34 ... to avoid cross-device copies, e.g. CPU to GPU
14:19:25 ... currently only a power preference is supported
14:19:34 zkis2 has joined #webmachinelearning
14:20:50 ... 2) WebNN API should allow JS ML frameworks to control when the output data is available for access
14:21:42 ... motivation: WebNN interacts with Wasm and WebGPU/GL shaders, and tensor layout conversion between these is expensive
14:23:18 ... for the op-level execution scenario, a JS framework wants to execute ops one by one, so it needs to create multiple single-op ML graphs and execute them sequentially, using the output of the previous one as input to the next
14:23:48 ... for this scenario it means the WebNN implementation needs to convert memory layouts, and that would hurt performance
14:24:04 ... the framework would not use the intermediate results between these ops
14:24:57 ... this means the WebNN API needs to allow developers to control when the memory layout conversion happens
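[A minimal sketch of the single-op-graph pattern discussed above, against the spec draft of this period. The `devicePreference: 'cpu'` option is the proposed addition from issue #156 and is hypothetical here; the `MLGraphBuilder`, `build()` and `compute()` names and signatures may differ across spec revisions. This also illustrates the named-inputs/outputs mechanism touched on later under PR #147.]

```js
// Hypothetical: 'devicePreference' is the proposed CPU-context option
// from issue #156, not a shipped feature.
const context = navigator.ml.createContext({ devicePreference: 'cpu' });
const builder = new MLGraphBuilder(context);

// Single-op graph: one matmul, as in the op-level execution scenario.
const a = builder.input('a', { type: 'float32', dimensions: [2, 4] });
const b = builder.input('b', { type: 'float32', dimensions: [4, 2] });
const graph = await builder.build({ c: builder.matmul(a, b) });

// Wasm interop: a Wasm heap is an ArrayBuffer, so views over
// WebAssembly.Memory can be passed directly as input/output buffers,
// avoiding extra copies.
const memory = new WebAssembly.Memory({ initial: 1 });
const inputA = new Float32Array(memory.buffer, 0, 8);   // 2x4
const inputB = new Float32Array(memory.buffer, 32, 8);  // 4x2
const outputC = new Float32Array(memory.buffer, 64, 4); // 2x2
await graph.compute({ a: inputA, b: inputB }, { c: outputC });
```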
14:25:04 q+
14:25:09 ack RafaelCintron
14:25:44 RafaelCintron: about frameworks, why can't they just create a graph when they don't need an intermediate representation?
14:26:14 ningxin_hu: good question, if you look at PR #148 there's some discussion about this
14:26:36 ... it seems frameworks provide a similar op-level API
14:26:58 ... and they need to execute the ops, but they don't know the graph beforehand
14:27:36 RafaelCintron: could there be just a single-op graph in such a case?
14:28:30 ningxin_hu: do you suggest this is a framework-level optimization?
14:29:05 RafaelCintron: if the user of the framework gives the framework one op at a time, could that work out with the current API?
14:29:44 ningxin_hu: WebNN API is a graph API today; the output data of that graph may not be needed by the user code
14:30:29 RafaelCintron: if we just ship a graph API, would it satisfy this scenario?
14:30:58 ningxin_hu: I'm not sure I can answer this, we should ask Ping and Jonathan to give the framework perspective
14:32:17 anssik: Chai had a comment on the WebAssembly.Memory object that allows efficient interop with JS ArrayBuffer
14:32:23 -> https://github.com/webmachinelearning/webnn/pull/149#discussion_r604304611 WebAssembly.Memory interop with ArrayBuffer
14:32:31 -> https://rob-blackbourn.github.io/blog/webassembly/wasm/array/arrays/javascript/c/2020/06/07/wasm-arrays.html Wasm-ArrayBuffer interop examples
14:32:53 q+
14:32:59 ack ningxin_hu
14:33:16 ningxin_hu: I think this is a great pointer for Wasm interop
14:33:49 ... I think we're fine, given WebNN accepts ArrayBuffer for input and output
14:33:56 ... and that addresses the Wasm interop requirement
14:34:14 ... the open question is allowing Wasm to ask WebNN to create a CPU context
14:34:21 ... to avoid data copies as much as possible
14:35:07 anssik: it seems Chai proposed to address (1) MLContext for CPU device with a preference in MLContextOptions
14:35:11 -> https://webmachinelearning.github.io/webnn/#dictdef-mlcontextoptions MLContextOptions
14:35:37 q+
14:35:40 ack RafaelCintron
14:36:30 RafaelCintron: want to point out, if weights are copied before compilation, you're going to have a copy
14:36:40 q+
14:36:54 ack ningxin_hu
14:37:39 ningxin_hu: I agree with RafaelCintron about the data copy; I would like to add that the memory layout conversion issue is important to address, given there may be platform-specific layouts at play
14:37:42 ack RafaelCintron
14:37:45 q?
14:38:16 sounds good
14:38:27 anssik: 2) Clarify at which point weights are used in the compilation (Rafael)
14:38:51 -> https://github.com/webmachinelearning/webnn/issues/157 Clarify at which point weights are used in the compilation (issue #157)
14:39:22 RafaelCintron: my point here is, say you're using the model builder API and have a bunch of ops and weights, and you say "WebNN, compile this for me"
14:39:39 ... then you change one weight, is the change considered in the compilation?
14:39:50 ... is the change ignored, hopefully not?
14:40:07 ... we need to make it well-defined when the ArrayBuffers are used in the compilation
14:40:22 ... I think it is good to be able to change weights up until you say compile, after which they're frozen
14:40:38 ... we also need to consider platform-specific conversion of layout
14:41:44 ... if they make a copy they can do whatever they want
14:42:46 q+
14:42:50 ack ningxin_hu
14:43:15 ningxin_hu: I'd like to get the required behavior clarified a bit
14:43:31 ... when calling compile(), before the promise resolves, the data is copied by the implementation
14:43:48 ... so when the weights are changed afterwards, that would not affect the compilation?
14:44:31 RafaelCintron: right before compile() returns control back to the developer you can change the weights, does that make sense?
14:45:24 ningxin_hu: I think that makes sense, that's how webnn-polyfill implements compile()
14:45:34 ... if there are no objections, we should specify this in the spec
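[A sketch of the behavior being agreed above, per issue #157: constant data is read and copied no later than the compile()/build() promise resolving, so mutating the source buffer afterwards must not affect the compiled graph. Names follow the draft of the time and are illustrative only.]

```js
// Sketch of the proposed issue #157 semantics: build() copies constant
// data before its promise resolves; the source buffer is then free to
// be reused without affecting the compiled graph.
const weights = new Float32Array([1, 2, 3, 4]);
const builder = new MLGraphBuilder(context); // context as created earlier
const x = builder.input('x', { type: 'float32', dimensions: [1, 4] });
const w = builder.constant({ type: 'float32', dimensions: [4, 1] }, weights);
const y = builder.matmul(x, w);

weights[3] = 9;  // fine: changes before build() are picked up
const graph = await builder.build({ y });
weights[0] = 42; // too late: must NOT change what 'graph' computes
```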
14:47:11 anssik: do you have a rough idea what the PR to address #157 should look like?
14:47:24 ningxin_hu: compile() is now build(), I think, in the latest spec update
14:48:00 Zoltan: it is standard move semantics in an API
14:48:40 I'll make a PR and invite Rafael to review
14:48:51 anssik: 3) Discuss how the caller using an op-specific API can do resource upload and download (Chai)
14:49:37 [defer when Chai available]
14:49:53 Topic: TAG review feedback - open PRs
14:50:01 anssik: let's take a look at open PRs addressing TAG review feedback
14:50:05 -> https://github.com/webmachinelearning/webnn/issues/140 [tag-tracker] NamedOutput mechanism clarification (issue #140)
14:50:11 -> https://github.com/webmachinelearning/webnn/pull/147 MLGraph.compute (PR #147)
14:50:29 anssik: reviews completed; after resolving the merge conflict we're ready. Ningxin, feel free to resolve and merge.
14:50:41 i'll fix the conflicts
14:50:47 anssik: thanks!
14:51:02 Topic: TAG review feedback - open issues without associated PRs
14:51:17 anssik: there are a few open issues without PRs, let's check them out together to see if there are blockers
14:51:22 -> https://github.com/webmachinelearning/webnn/issues/150 [tag-tracker] Define a common term for logical tensor changes?
14:51:36 anssik: "Looking at this PR, wouldn't it make sense to define a common term for logical tensor changes (e.g. views?) somewhere early in the document so that concept can be re-used?"
14:51:54 anssik: any reactions on that?
14:52:03 q?
14:52:07 -> https://github.com/webmachinelearning/webnn/issues/142 [tag-tracker] Isomorphic JS story, worker scope exposure?
14:52:46 I have nothing new to add.
14:53:27 -> https://github.com/webmachinelearning/webnn/issues/139 [tag-tracker] Ergonomics of the JS examples
14:54:47 anssik: any new information to add for this? Seems we could close this by stating that the primary API consumer is a framework
14:54:56 -> https://github.com/webmachinelearning/webnn/issues/138 [tag-tracker] String enum for activations
14:55:12 anssik: "If there are layers that will be taking activations as string enums, there should simply be a string enum for activations rather than have it just in RecurrentNetworkActivation. (One may argue that hyperbolic tangent is RNN specific, but..."
14:55:44 +1 to Chai's comment
14:55:47 anssik: any further feedback beyond Chai's comment?
14:56:15 anssik: we could address this along the lines of Chai's comments
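[To make the string-enum point concrete: in the draft of this period only the recurrent ops take activations, as strings from the RNN-specific MLRecurrentNetworkActivation enum; Chai's comment suggests generalizing that into one shared activation enum. The gru() call below follows the then-current draft, with operand declarations elided, and is illustrative only.]

```js
// Today: activation strings come from an RNN-specific enum and are only
// usable in gru() options (operands 'input', 'weight', etc. elided).
const outputs = builder.gru(input, weight, recurrentWeight, steps, hiddenSize, {
  activations: ['sigmoid', 'tanh'], // MLRecurrentNetworkActivation values
});
// Suggested direction: a single generic string enum for activations,
// reusable by any future op that takes an activation parameter.
```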
14:56:40 Topic: Privacy review feedback - normative change proposals
14:57:01 anssik: let's review the proposal to make the WebNN API a policy-controlled feature, per PING review feedback
14:57:06 -> https://github.com/webmachinelearning/webnn/issues/145 [privacy-tracker] Make WebNN API a policy-controlled feature with default allowlist 'self' (issue #145)
14:57:54 anssik: Proposal: "The feature is allowed in documents in top-level browsing contexts by default, and when allowed, is allowed by default to same-origin domain documents in child browsing contexts, but is disallowed by default in cross-origin documents in child browsing contexts"
14:58:36 anssik: Criteo folks may be interested in this
14:59:11 Alexandre: I don't have anything to comment at this time
14:59:16 q+
14:59:21 q?
15:00:18 ack zkis2
15:00:43 Zoltan: one question: shouldn't it mention what threat it addresses?
15:00:56 anssik: this is from https://github.com/webmachinelearning/webnn/issues/119#issuecomment-772565085
15:01:30 Zoltan: behind the scenes you may want to run some model in the cloud
15:02:24 ... I see the normal same-origin policy is used; we should explain why this mitigation is needed
15:02:42 anssik: any other comments on the privacy suggestion?
15:03:22 ... please provide comments in the issue
15:03:23 q+
15:03:29 ack RafaelCintron
15:04:02 RafaelCintron: I'm OK with this feature policy, the goal is to not allow web developers to use powerful features in iframes unless the main page allows that explicitly
15:04:37 ... we shouldn't ask the user, but feature-policy-controlled is OK
15:04:44 ack zkis
15:05:05 RRSAgent, draft minutes v2
15:05:05 I have made the request to generate https://www.w3.org/2021/04/01-webmachinelearning-minutes.html anssik
15:05:15 TOPIC: Adjourn
15:08:56 RRSAgent, draft minutes v2
15:08:56 I have made the request to generate https://www.w3.org/2021/04/01-webmachinelearning-minutes.html anssik
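[For reference on the issue #145 proposal above, a sketch of how a policy-controlled feature with default allowlist 'self' behaves. The feature token ('webnn' below) is hypothetical, as none was defined at the time of this meeting.]

```js
// With default allowlist 'self': top-level documents and same-origin
// iframes get the feature; cross-origin iframes are denied unless the
// embedding page opts in explicitly, e.g.
//   <iframe src="https://other.example/app.html" allow="webnn"></iframe>
//
// In browsers exposing the Feature Policy JS API, a document can check:
if (document.featurePolicy && !document.featurePolicy.allowsFeature('webnn')) {
  console.log('WebNN is disallowed in this browsing context');
}
```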