13:54:52 RRSAgent has joined #webmachinelearning
13:54:52 logging to https://www.w3.org/2022/05/05-webmachinelearning-irc
13:54:55 RRSAgent, make logs Public
13:54:55 please title this meeting ("meeting: ..."), anssik
13:55:01 Meeting: WebML WG Teleconference – 5 May 2022
13:55:08 Chair: Anssi
13:55:19 Agenda: https://github.com/webmachinelearning/meetings/blob/main/telcons/2022-05-05-wg-agenda.md
13:55:25 Scribe: Anssi
13:55:29 scribeNick: anssik
13:55:35 scribe+ dom
13:55:43 Present+ Anssi_Kostiainen
13:55:49 RRSAgent, draft minutes
13:55:49 I have made the request to generate https://www.w3.org/2022/05/05-webmachinelearning-minutes.html anssik
14:00:11 Present+ Rafael_Cintron
14:00:14 ningxin_hu has joined #webmachinelearning
14:00:27 Present+ Ningxin_Hu
14:00:30 Present+ Dominique_Hazael-Massieux
14:00:46 Present+ Ganesan_Ramalingam
14:01:19 rama has joined #webmachinelearning
14:03:47 jonathan has joined #webmachinelearning
14:03:50 Topic: Context-based graph execution methods for different threading models
14:04:00 Present+ Jonathan_Bingham
14:04:06 -> Context-based graph execution methods for different threading models https://github.com/webmachinelearning/webnn/pull/257
14:04:27 anssi: good news - the substantial pull request to address our discussions on threading models has been merged
14:04:56 ... huge thanks to Chai, Ningxin, Rafael and everyone involved in landing this, surely one of our more complex PRs
14:04:56 -> Should WebNN support async APIs? https://github.com/webmachinelearning/webnn/issues/230
14:05:04 ... can we now close #230?
14:05:11 ghurlbot has joined #webmachinelearning
14:05:18 ghurlbot, this is webmachinelearning/webnn
14:05:18 dom, OK
14:05:24 q+
14:05:36 ack ningxin_hu
14:05:52 RafaelCintron has joined #webmachinelearning
14:06:06 ningxin_hu: #257 introduces the async compute API - well done!
14:06:06 -> Pull Request 257 [closed] Context-based graph execution methods for different threading models. (wchao1115) https://github.com/webmachinelearning/webnn/issues/257
14:06:50 ... there is a remaining discussion on graph compilation, #263
14:06:50 -> Issue 263 Support asynchronous graph compilation (wchao1115) https://github.com/webmachinelearning/webnn/issues/263
14:07:06 Subtopic: Support asynchronous graph compilation
14:07:14 -> Support asynchronous graph compilation https://github.com/webmachinelearning/webnn/issues/263
14:07:28 anssik: Jiewei Qian was asking:
14:07:33 ... "Should we change build() to return Promise? Having the API default to async gives us lots of flexibility in the future."
14:07:38 ... "Also, it's easy to write async calls in synchronous style with JS async-await. Converting sync / blocking calls to async / non-blocking calls is much harder."
14:08:11 -> MLGraphBuilder.build() https://www.w3.org/TR/webnn/#dom-mlgraphbuilder-build
14:08:29 q?
14:08:57 RRSAgent, draft minutes
14:08:57 I have made the request to generate https://www.w3.org/2022/05/05-webmachinelearning-minutes.html anssik
14:09:27 ningxin_hu: the build method implies non-trivial work for graph compilation
14:09:44 ... there is a possibility that this would block the main thread if it can be called sync on the main thread
14:10:03 ... thus #257 restricts the sync build method to the worker
14:10:30 ... the same way sync compute is limited to the worker
14:10:57 ... Jiewei's comment is about changing the build method to an async one instead of having both sync & async
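[For illustration, a rough sketch of the two build() shapes under discussion. This is not the draft's normative API; the buildAsync() name and the exact dictionary shape are assumptions made only for this sketch:
  // on the main thread: hypothetical async variant, usable with async-await
  const graph = await builder.buildAsync({'output': outputOperand});
  // in a dedicated worker: hypothetical sync variant, convenient for Wasm-ported code
  const graph = builder.build({'output': outputOperand});
]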
14:11:47 ... my feedback to that is that while it's possible, having sync methods is needed for transpiled C++ code with Wasm
14:12:06 ... these existing codebases typically expect sync results
14:12:21 ... and they would be hard to change to async paradigms
14:13:16 ... Jiewei points to existing Emscripten tools like Asyncify to deal with this - we had investigated this, but using it hurts performance
14:13:35 ... at least that was the case for compute, which can be called at high frequency
14:13:49 ... that's why I think we need both sync & async for the build method as well
14:14:15 ... this also aligns with the pattern used e.g. in WebGPU
14:14:42 ... I'll bring this up in the issue discussion as well
14:14:43 q+
14:14:57 ack RafaelCintron
14:15:32 RafaelCintron: is it possible for WebNN to validate the graph at build time?
14:15:42 ... if so, I don't see an issue with making it sync
14:15:57 ... I can see the reasoning for Model Loader
14:16:51 ... In terms of the WebGPU example - the feedback we've gotten from game engine developers is that mapAsync wouldn't work for them when porting native code
14:17:18 ... async poses problems and Asyncify doesn't address them
14:17:24 q+
14:18:04 ack ningxin_hu
14:18:28 ningxin_hu: with regard to validating the graph, the build method has all the information to validate the graph
14:18:35 ... that's what we have in the WebNN native implementation
14:18:47 ... all the operators are in place
14:19:11 RRSAgent, draft minutes
14:19:11 I have made the request to generate https://www.w3.org/2022/05/05-webmachinelearning-minutes.html anssik
14:19:34 ... WRT the WebGPU discussions, this also reflects what we observed when running Wasm ports of native code
14:19:45 ... this is why the current version of build is sync
14:20:23 ... The problem is that the build method includes graph compilation and weight uploading and initialization on the GPU (critical for performance)
14:21:09 ... because that includes moving data, and implementations may need time to initialize weights, this risks making the method block the main thread
14:21:37 ... so we would limit the sync method to the worker, and provide an async equivalent for the main thread (typically to be used in regular JS code)
14:22:15 q?
14:23:01 RafaelCintron: for the sync version of build, we wouldn't have to wait for the results to come back if we're sure it can't fail
14:23:22 ... we could return the model right away without waiting for the GPU to finish its work
14:23:42 ... but that means we have to be sure it can't fail, e.g. for memory reasons
14:23:46 ningxin_hu: good point
14:24:33 ... we probably need to investigate if native APIs can ensure that type of validation - I don't have an answer to that
14:24:46 ... but I suspect this may vary across native implementations & drivers
14:26:09 ... the build method returns an MLGraph on which compute runs; if that MLGraph is not yet optimized for executing the compute method, that may create an unexpected delay to that execution
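[For context, a rough sketch of the high-frequency use ningxin_hu describes: a dedicated worker running the sync compute once per incoming video frame, where any compilation deferred into the first call would stall the first frame. The compute() name and signature here are assumptions made only for this sketch:
  // inside a dedicated worker; graph was built (and ideally fully compiled) up front
  function processFrame(inputBuffer, outputBuffer) {
    // hypothetical sync call, expected to run at a predictable per-frame cost
    context.compute(graph, {'input': inputBuffer}, {'output': outputBuffer});
    return outputBuffer;
  }
]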
14:27:36 dom: that's because in case of real-time processing, you don't want the first frame to be delayed, you want compute to run as efficiently as possible?
14:27:57 ningxin: indeed - we should keep expectations of delay as clear as possible for developers
14:28:12 RafaelCintron: the delay needs to happen, whether at build or compute time
14:28:52 ningxin_hu: chai had mentioned the idea of pushing the compilation delay into the first compute, but that breaks the expectation that when compute is called, it runs as efficiently as possible
14:29:31 q?
14:29:56 Subtopic: Should MLCommandBuffer be MLExternalCommandBuffer?
14:30:05 anssik: #264
14:30:06 -> Issue 264 Should MLCommandBuffer be MLExternalCommandBuffer? (bbernhar) https://github.com/webmachinelearning/webnn/issues/264
14:30:25 anssik: Bryan commented:
14:30:30 ... "If the intent of WebNN is to produce an immutable MLCommandBuffer that can be read-only by WebGPU, then I would suggest we consider 1) renaming it to MLExternalCommandBuffer and 2) avoid overloading WebGPU Interop with requirements WebGPU does not follow: a command buffer with GPU commands being equal to a command buffer with non-GPU commands - by moving MLExternalCommandBuffer into a WebNN Interop section."
14:30:36 ... "Alternatively, we could keep MLCommandBuffer (internal usage) but allow WebNN a means to submit (ex computeAsync). This would avoid breaking WebGPU and allow WebNN to have consistent GPU async support experience on both native (standalone) and web."
14:31:04 q?
14:32:42 RafaelCintron: would be fine with the current name; doesn't think we need to label it as external
14:32:49 ningxin_hu: +1
14:33:03 ... unless we get strong pushback from the WebGPU WG
14:33:09 q?
14:33:29 Topic: Accessibility and Internationalization responses, ethics feedback
14:33:40 Subtopic: Accessibility Checklist
14:33:53 anssik: #261
14:33:53 -> Issue 261 Accessibility Checklist (anssiko), cr https://github.com/webmachinelearning/webnn/issues/261
14:34:08 -> Accessibility Checklist https://w3c.github.io/apa/fast/checklist.html
14:35:19 anssik: my assessment is that only one section of the checklist applies: "If technology defines an API"
14:35:21 "If the API can be used for structured content, it provides features to represent all aspects of the content including hidden accessibility features."
14:35:41 "Application programming interfaces allow programmatic manipulation and interchange of content, and are being used to create a more imperative Web. While typically APIs exchange data rather than user-focused content, this data ultimately is exposed to the user in some way. Some of the content richness can disappear if the API does not support features like content alternatives, control association, etc. Technologies that define APIs should ensure the API is rich enough to exchange all relevant accessibility information."
14:36:01 anssik: proposed response:
14:36:03 ... "WebNN API is not used for structured content (data organized and structured in a particular way on a webpage in HTML)."
14:36:41 dom: +1
14:36:46 anssik: another checkpoint is the following:
14:36:52 "If the API relies on user agents to generate a user interface, the specification provides guidance about accessibility requirements needed to enable full interaction with the API."
14:37:10 "Content manipulated by an API is generally generated into a user interface. Technologies should provide guidance to ensure that user agents or dynamic content applications expose the full set of accessibility information available in the API."
14:37:21 anssik: proposed response is simply:
14:37:26 "WebNN API does not rely on user agents to generate a user interface."
14:37:42 anssik: proposed summary:
14:37:43 Geun-Hyung has joined #webmachinelearning
14:37:49 "Accessibility Checklist items don't apply to the Web Neural Network API."
14:37:53 present+
14:38:03 present+
14:38:35 q?
14:38:40 anssi: please check if you agree or disagree with my assessment - I plan to submit this to the Accessibility review before our next call
14:38:41 Subtopic: Internationalization Checklist
14:38:52 -> Internationalization Checklist https://w3c.github.io/i18n-drafts/techniques/shortchecklist
14:38:53 anssik: #262
14:38:54 -> Issue 262 Internationalization Checklist (anssiko), cr https://github.com/webmachinelearning/webnn/issues/262
14:39:27 anssik: based on my assessment, only the following checklist item applies to the WebNN API:
14:39:32 "If the spec (or its implementation) contains any natural language text that will be read by a human (this includes error messages or other UI text, JSON strings, etc, etc),"
14:39:54 anssik: my proposed response is the following:
14:40:01 ... "WebNN API contains DOMStrings that are developer-defined and meant purely to improve web developer ergonomics, and not surfaced to users:"
14:40:15 https://www.w3.org/TR/webnn/#typedefdef-mlnamedoperands
14:40:15 https://www.w3.org/TR/webnn/#dom-mlgraphbuilder-input
14:40:15 https://www.w3.org/TR/webnn/#typedefdef-mlnamedinputs
14:40:15 https://www.w3.org/TR/webnn/#typedefdef-mlnamedoutputs
14:40:31 anssik: I gave an example to illustrate usage:
14:40:40 ... For example, a web developer can create an operand for a graph input and assign it a name 'A':
14:40:45 ... const inputs = { 'A': bufferA, 'B': bufferB };
14:40:49 ... And later refer to this input using the name 'A':
14:40:59 ... console.log(inputs.A);
14:41:05 <`join_subline> `join_subline has joined #webmachinelearning
14:41:33 ... proposed summary of the i18n checklist exercise:
14:41:39 ... "The only consideration that applies is: 'If the spec (or its implementation) contains any natural language text that will be read by a human (this includes error messages or other UI text, JSON strings, etc, etc)'."
14:42:46 dom: WebNN is a low-level API, so that's why we do not tick many checklist boxes
14:43:20 q?
14:43:27 Subtopic: Ethics workshop feedback
14:44:04 -> https://github.com/webmachinelearning/ethical-webmachinelearning/pull/20 Review Ethical WebML workshop feedback
14:44:08 -> PR: Incorporate feedback from the Ethical ML workshops https://github.com/webmachinelearning/ethical-webmachinelearning/pull/20
14:44:26 anssi: this is based on the 2 ethical workshops we ran last month
14:44:39 ... anyone interested, please take a look at the PR
14:44:57 q?
14:45:34 anssik: when do you think we should make a W3C Note publication?
14:46:09 dom: chair to propose when to do that
14:46:41 Topic: WebNN integration with WebRTC APIs and WebGPU interop
14:46:46 Subtopic: WebNN integration with WebRTC APIs
14:47:17 -> WebRTC WG April 2022 meeting slides https://docs.google.com/presentation/d/15iAIhzpaA6reKJBL-ecgYtic6ZKHEpKL5OK_sExTllc/edit#slide=id.g12073675a7a_0_0
14:47:22 -> WebRTC WG April 2022 meeting minutes https://www.w3.org/2022/04/26-webrtc-minutes.html#t01
14:47:49 anssik: ningxin shared the following next steps for WebNN/mediacapture-transform integration:
14:47:53 ... 1. enable the WebGPU backend
14:48:00 ... 2. new APIs that allow importing frames as GPU textures, to see whether that will improve efficiency
14:48:17 ... 3. Improve VideoFrame GC PR: we will try it out when it is merged into Chrome
14:48:52 q+
14:48:56 ack ningxin_hu
14:49:52 RRSAgent, draft minutes
14:49:52 I have made the request to generate https://www.w3.org/2022/05/05-webmachinelearning-minutes.html anssik
14:50:18 Present+ Geun_Hyung_Kim
14:50:19 ningxin_hu: the 1st point is about a WebGPU-only pipeline - the current sample, which has two main tasks (segmentation & image blending), has two backends: WebGL (doing both) and WebGPU (blending) + WebNN (ML segmentation)
14:52:25 q?
14:53:00 anssik: thanks - I think this was appreciated as a good joint discussion
14:53:47 dom: we raised the need for the WebRTC WG to get clarity from the Media WG on WebCodecs and WebGPU interaction
14:53:58 Subtopic: WebGPU interop
14:54:40 -> Investigation: how WebNN / WebGPU interop could be happening https://github.com/gpuweb/gpuweb/issues/2500
14:57:16 q?
14:57:28 +1 to give a quick update
14:57:43 q?
14:57:55 Topic: Double-precision baseline implementation of WebNN operations for testing
14:58:16 -> WebML WG Teleconference – 10 February 2022 minutes https://www.w3.org/2022/02/10-webmachinelearning-minutes.html#t03
14:58:28 -> PR #1 webnn-baseline initial implementation https://github.com/webmachinelearning/webnn-baseline/pull/1
14:58:28 -> Issue 1 Look into pre-canned models (anssiko) https://github.com/webmachinelearning/webnn/issues/1
14:59:02 anssik: you may recall webnn-baseline is a CR requirement https://github.com/webmachinelearning/webnn/issues/240
14:59:28 q?
15:00:19 ningxin_hu: I propose to merge this PR to unblock development on ULP tolerances
15:00:37 ... the baseline implementation is a tool to help define ULP tolerances, which will be part of the test cases
15:00:57 ... Bruce opened a related issue
15:01:01 -> Define ULP (unit of least precision) tolerances https://github.com/webmachinelearning/webnn/issues/265
15:01:14 q?
15:01:29 anssik: any concerns with merging the initial PR?
15:02:42 ... please provide your feedback within the next 7 days
15:02:52 RRSAgent, draft minutes
15:02:52 I have made the request to generate https://www.w3.org/2022/05/05-webmachinelearning-minutes.html anssik
15:03:06 RRSAgent, draft minutes
15:03:06 I have made the request to generate https://www.w3.org/2022/05/05-webmachinelearning-minutes.html anssik
15:03:07 q?
15:03:19 the Chromium CL to reduce GC: https://chromium-review.googlesource.com/c/chromium/src/+/3586505
15:03:26 RRSAgent, draft minutes
15:03:28 I have made the request to generate https://www.w3.org/2022/05/05-webmachinelearning-minutes.html anssik
17:16:52 Zakim has left #webmachinelearning
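[For background on the ULP tolerance discussion above, a rough sketch of the kind of comparison a double-precision baseline enables: measuring how many float32 ULPs an implementation's result is from the baseline value. This is illustrative only, not the webnn-baseline code, and the helper names are hypothetical:
  const buf = new ArrayBuffer(4);
  const f32 = new Float32Array(buf);
  const i32 = new Int32Array(buf);
  // map a float32 bit pattern onto a monotonically increasing integer scale
  function toOrderedBits(x) {
    f32[0] = x;               // round to float32 and reinterpret its bits
    const bits = i32[0];
    return bits < 0 ? -0x80000000 - bits : bits;
  }
  // ULP distance between an implementation result and the double-precision baseline
  // (finite values only; NaN and infinity need separate handling)
  function ulpDistance32(actual, expected) {
    return Math.abs(toOrderedBits(actual) - toOrderedBits(expected));
  }
  // a test could then assert ulpDistance32(result, baseline) <= tolerance for each op
]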