IRC log of webmachinelearning on 2023-03-02

Timestamps are in UTC.

14:46:42 [RRSAgent]
RRSAgent has joined #webmachinelearning
14:46:46 [RRSAgent]
logging to https://www.w3.org/2023/03/02-webmachinelearning-irc
14:46:46 [Zakim]
RRSAgent, make logs Public
14:46:47 [Zakim]
please title this meeting ("meeting: ..."), anssik
14:46:53 [anssik]
Meeting: WebML WG Teleconference – 2 March 2023
14:46:58 [anssik]
Chair: Anssi
14:47:12 [anssik]
Agenda: https://github.com/webmachinelearning/meetings/blob/main/telcons/2023-03-02-wg-agenda.md
14:47:16 [anssik]
Scribe: Anssi
14:47:21 [anssik]
scribeNick: anssik
14:47:42 [anssik]
Present+ Anssi_Kostiainen
14:47:46 [anssik]
Regrets+ Dominique_Hazael-Massieux
14:47:50 [anssik]
RRSAgent, draft minutes
14:47:51 [RRSAgent]
I have made the request to generate https://www.w3.org/2023/03/02-webmachinelearning-minutes.html anssik
14:48:26 [anssik]
ghurlbot, this is webmachinelearning/webnn
14:48:26 [ghurlbot]
anssik, OK.
15:00:48 [ningxin_hu]
ningxin_hu has joined #webmachinelearning
15:00:57 [anssik]
Present+ Ningxin_Hu
15:01:09 [anssik]
Present+ Chai_Chaoweeraprasit
15:02:26 [zkis]
zkis has joined #webmachinelearning
15:02:36 [anssik]
Present+ Zoltan_Kis
15:03:15 [RafaelCintron]
RafaelCintron has joined #webmachinelearning
15:03:16 [anssik]
Present+ Rafael_Cintron
15:03:43 [anssik]
Topic: Call for Consensus: WebNN API Candidate Recommendation
15:04:08 [anssik]
anssik: Call for Consensus to publish the Web Neural Network API as a Candidate Recommendation (CR) was issued 23 Feb 2023
15:04:17 [anssik]
... this is a W3C process step to check the WG is fine with the plan.
15:04:21 [anssik]
-> CfC to publish WebNN API Candidate Recommendation - review by 2 Mar 2023 https://lists.w3.org/Archives/Public/public-webmachinelearning-wg/2023Feb/0005.html
15:04:31 [chai]
chai has joined #webmachinelearning
15:04:41 [anssik]
anssik: no concerns were raised for the proposal, meaning we can move forward with this publication. We will integrate #340 to add the Status of this Document note for Candidate Recommendation prior to publication.
15:04:42 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/340 -> Pull Request 340 Add Status of this document note for Candidate Recommendation (anssiko)
15:05:16 [anssik]
... I'm aware we have many enhancements in flight, thus I will work with Dom to ensure our spec CI/CD system is ready to allow us to continue making timely publications post CR.
15:05:36 [anssik]
... As a W3C process detail, post initial CR we can do the following:
15:05:44 [anssik]
... (Return to) Working Draft
15:05:49 [anssik]
... (A revised) Candidate Recommendation Snapshot
15:05:54 [anssik]
... (A revised) Candidate Recommendation Draft
15:05:57 [anssik]
... (Advance to) Proposed Recommendation
15:06:04 [anssik]
... as WG participants you don't need to worry too much about these process details
15:06:16 [anssik]
... I'll take care of them with Dom and keep you informed so you can focus on the important technical work on our plate
15:06:27 [anssik]
... the key difference between a CR Snapshot and a CR Draft is that the Snapshot adds patent protection.
15:06:42 [anssik]
... these CR Snapshots should not be published more often than once every 6 months
15:07:03 [anssik]
... so I expect we'll publish multiple WDs for smaller changes (e.g. editorial enhancements) post initial CR
15:07:11 [anssik]
... and when we make significant changes (e.g. add new features) we publish one or more Candidate Recommendation Drafts.
15:07:30 [anssik]
... And when we gather additional implementation experience we ultimately advance to Proposed Recommendation. That milestone is, however, further out.
15:07:45 [anssik]
... The requirement for Proposed Rec is to "have at least two independent implementations of every feature defined in the specification".
15:08:04 [anssik]
... with that, I'm happy to resolve the CfC to publish WebNN API Candidate Recommendation.
15:08:06 [RafaelCintron]
q+
15:08:14 [anssik]
ack RafaelCintron
15:09:03 [anssik]
RafaelCintron: what is the process for changing the spec after the CR lands?
15:09:03 [ningxin_hu]
q+
15:09:32 [chai]
q+
15:09:54 [zkis]
q+ to ask about spec versioning, like 1.0, 2.0 etc. See also https://www.w3.org/2005/05/tr-versions
15:10:27 [anssik]
anssik: we can do WD, revised CR
15:10:53 [anssik]
RafaelCintron: if we later decide that WebGPU interop needs to change I hope we can change it
15:11:21 [anssik]
anssik: we can change the WebGPU interop parts post CR
15:12:04 [anssik]
q?
15:12:35 [anssik]
ack ningxin_hu
15:13:08 [zkis]
For what CR means, see https://www.w3.org/2004/02/Process-20040205/tr.html
15:13:17 [anssik]
ningxin_hu: how can the CR and the implementation feedback and experience play with each other? Some open issues depend on implementation experience.
15:13:30 [anssik]
... how do this implementation feedback and the CR link together?
15:14:52 [anssik]
anssik: CR is "call for implementations"
15:15:25 [anssik]
zkis: CR may mean use cases have been stabilized
15:15:28 [anssik]
q?
15:15:28 [ningxin_hu]
your explanation of CR is very helpful, thanks Anssi!
15:15:32 [anssik]
ack chai
15:16:02 [anssik]
chai: thanks for clarifying CR status, "call for implementations" is a very clear explanation
15:16:58 [anssik]
... in the SOTD PR we augment the note; once we enter the CR state we should focus on resolving the issues described in that PR
15:17:12 [anssik]
... we may add a few more ops per implementation feedback; it sounds like those are fully allowed
15:17:14 [anssik]
q?
15:17:45 [anssik]
anssik: adding new ops is allowed after the initial CR
15:20:50 [anssik]
https://www.w3.org/TR/webnn/
15:21:04 [anssik]
https://webmachinelearning.github.io/webnn/
15:21:56 [chai]
+1
15:22:23 [anssik]
q?
15:22:31 [anssik]
ack zkis
15:22:31 [Zakim]
zkis, you wanted to ask about spec versioning, like 1.0, 2.0 etc. See also https://www.w3.org/2005/05/tr-versions
15:22:32 [zkis]
For versioning specs, see https://www.w3.org/2005/05/tr-versions
15:23:05 [anssik]
zkis: I pasted the link to the versioning discussion; Anssi mentioned we'll make CR drafts with breaking changes
15:23:56 [anssik]
... do we want to stick with a TR version to be referenced by implementations?
15:24:34 [anssik]
anssik: let's try to avoid versioning if possible
15:24:42 [anssik]
... "Living Standards"
15:24:50 [anssik]
https://www.w3.org/TR/2023/WD-webnn-20230124/
15:25:11 [anssik]
https://www.w3.org/standards/history/webnn
15:25:30 [anssik]
q?
15:26:15 [chai]
+1
15:26:21 [ningxin_hu]
+1
15:26:50 [anssik]
RESOLUTION: CfC to publish WebNN API Candidate Recommendation passes.
15:28:05 [anssik]
Topic: WebNN API feedback at the GPU Web F2F
15:28:16 [anssik]
anssik: WebNN API was on the agenda at the GPU Web F2F (2023-02-16/17).
15:28:23 [anssik]
... thanks to Rafael, Chai, Ningxin for presenting WebNN API to the WebGPU audience.
15:28:36 [anssik]
... there's some feedback recorded in the Member-only minutes. It'd be great if the minutes could be made public.
15:28:41 [anssik]
-> GPU Web F2F minutes (2023-02-16/17) [Member-only] https://lists.w3.org/Archives/Member/internal-gpu/2023Feb/0009.html
15:28:59 [anssik]
... I think what I can say now in the public space is that the feedback is encouraging considering multi-implementer interest signals.
15:29:34 [anssik]
... On behalf of the WebML WG I can say we're committed to working with the WebGPU folks, and I look forward to closer collaboration when they have available bandwidth.
15:29:41 [anssik]
... Rafael, Chai, Ningxin, do you want to share your high level takeaways?
15:30:18 [anssik]
q?
15:30:23 [chai]
q+
15:30:27 [anssik]
ack chai
15:31:19 [anssik]
chai: Ningxin and I joined for 20 mins at the end of the day; I think the feedback for the WebNN API was positive
15:31:53 [anssik]
... I actually put some of that feedback in issue #350
15:32:24 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/350 -> #350
15:32:25 [anssik]
... the belief is the WebNN API as specced is implementable on Core ML
15:33:19 [anssik]
... they like the fact we didn't insist on fighting the format wars; they also like that the API is focused on what matters the most, connecting apps to the underlying platform, with the WebNN API as a backend interface
15:34:29 [anssik]
... in terms of implementation technicalities, much of the focus was on the Neural Engine; the nature of their platform is such that they could partition the network graph to run on different processors, a single graph with part of it run on the CPU or GPU and part on the NPU
15:34:58 [anssik]
... the current spec, which has an explicit device type, is in a way not how they think it should work
15:35:42 [anssik]
... they'd like to be able to choose which part of the graph runs where; in the pending PR the core premise is to remove the GPU option from the device type and let all GPU processing be done through the WebGPU device path
15:36:02 [anssik]
... scoping this to one code path leaves the default context to whatever the implementer wants to implement
15:36:29 [anssik]
... on Windows, it is important that the implementation is clear on whether it runs on the CPU or on something the system needs to take care of
15:37:28 [anssik]
... if we define the default to be CPU we cannot swing back and forth on Windows; on Apple platforms this may be different in that they change the default to actually use some other processing unit
15:38:21 [anssik]
... the core feedback is the explicit device type may not work well with Apple's architecture; secondly, Core ML works on textures, WebNN on textures and buffers
15:38:41 [anssik]
... DML only works on buffers, and you need to tensorize it
15:38:49 [anssik]
... this is the difference between Core ML and DML
15:39:02 [anssik]
... this does not prevent the implementation from working, but the difference in implementation is interesting
15:40:30 [anssik]
q?
15:41:05 [anssik]
RafaelCintron: I think Chai covered the high points
15:41:27 [anssik]
... the textures and buffers thing is on the interop side of things; what is interopped between WebNN and WebGPU needs to be a texture
15:41:47 [anssik]
... good to see validation of the choice to not do ML model formats
15:42:51 [anssik]
... when you import a texture to WebGPU you need to make sure any writes to the texture are seen by WebGPU when it reads it; that means it is difficult to have an API that freely passes textures and buffers between WebNN and WebGPU
15:42:59 [anssik]
... that's pretty much it
15:43:22 [anssik]
... the Mozilla person seemed to be happy with the WebNN API
15:44:16 [anssik]
q?
15:44:55 [anssik]
ningxin_hu: two cents from me: from a use case perspective, I shared the real-time video processing GPU pipeline from the WebRTC group with the WebGPU folks there
15:45:32 [anssik]
... this was well received; there was also interest in Neural Engine co-work with ML and in texture sharing
15:46:00 [anssik]
... questions on how WebNN can handle multiple device timelines, captured in issue #350 I believe
15:46:00 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/350 -> Issue 350 Need to understand how WebNN supports implementation that involves multiple devices and timelines (wchao1115) question
15:46:40 [anssik]
... we have an open issue to investigate GPU and NN accelerator co-work, we can work on this together; this use case was interesting, as was the AI and CPU co-work scenario
15:47:27 [anssik]
... Apple discussed a chopping concept for running a graph across multiple devices
15:47:52 [anssik]
... excluding a GPU use case was mentioned by Apple; Chai, did you consider that in your open PR?
15:48:32 [anssik]
Chai: that open PR just consolidates the GPU code path; it does not tackle that specific feedback explicitly
15:49:28 [anssik]
... in Core ML they allow running the whole graph on the GPU; in older systems there was no Neural Engine
15:50:14 [anssik]
Chai: a successful hybrid execution on Large Language Models is yet to be seen
15:51:07 [anssik]
... they can move around quite a bit when running the model on Core ML; they can check which macOS version is running and apply heuristics per that information
15:52:01 [anssik]
... comments were made from the interface point of view; their implementation would allow them to run a WebNN graph on the CPU or GPU entirely
15:52:29 [anssik]
... in PR #322 we still allow a full GPU implementation and also allow mix and match in the case of the default context
15:53:04 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/322 -> Pull Request 322 Simplify MLContext creation (wchao1115)
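To make the two context code paths discussed above concrete, here is a minimal, non-normative JavaScript sketch. It assumes the createContext() overloads as drafted at the time, accepting either MLContextOptions or a GPUDevice; it illustrates the direction discussed, not the final shape of PR #322.
  // Default context: device selection is left to the implementation
  // (per the discussion, possibly mixing CPU, GPU and NPU under the hood).
  const defaultContext = await navigator.ml.createContext();
  // Explicit WebGPU path: all GPU processing goes through a GPUDevice,
  // which is the premise for dropping the GPU option from the device type.
  const adapter = await navigator.gpu.requestAdapter();
  const gpuDevice = await adapter.requestDevice();
  const gpuContext = await navigator.ml.createContext(gpuDevice);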
15:53:29 [anssik]
RRSAgent, draft minutes
15:53:31 [RRSAgent]
I have made the request to generate https://www.w3.org/2023/03/02-webmachinelearning-minutes.html anssik
15:53:42 [anssik]
q?
15:54:21 [ningxin_hu]
q+
15:54:25 [anssik]
ack ningxin_hu
15:54:46 [anssik]
Topic: WebNN API open PRs and issues
15:54:54 [anssik]
anssik: As usual, we'll do a review of open PRs and discuss issues. Identify and fast track any priority changes that should get into the initial CR release train.
15:54:57 [anssik]
Subtopic: Make axis definition of concat and split consistent
15:55:02 [anssik]
anssik: issue #345
15:55:02 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/345 -> Issue 345 Use unsigned long for axis of concat operation (miaobin)
15:55:06 [anssik]
... PR #352
15:55:07 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/352 -> Pull Request 352 Make axis definition of concat and split consistent (huningxin)
15:55:10 [anssik]
... issue summary: The current axis definitions of concat and split are inconsistent.
15:55:14 [anssik]
... PR summary: fixes this inconsistency by aligning the valid value range [-N, N) and negative value interpretation.
15:55:29 [anssik]
ningxin_hu: Jiawei would like to propose an unsigned int axis type
15:55:39 [anssik]
... that makes sense I think, would like to hear the WG's view on that
15:55:56 [anssik]
... also raised a point that all axis definitions should be made consistent across the spec
15:56:06 [chai]
+1 on axis consistency
15:56:10 [anssik]
... I put together a table to help review axis usage across the spec
15:57:07 [anssik]
... please review the PR knowing there are some open questions in the issue; check the issue too
15:57:13 [anssik]
... I'll update the PR per your feedback
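As a non-normative sketch of the alignment proposed in PR #352, the JavaScript below assumes a signed axis with a valid range of [-N, N) for an N-dimensional input, where a negative axis is interpreted as axis + N; whether the type ends up signed or unsigned is still open in issue #345.
  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  // Two 2-D operands of shape [2, 3]; N = 2, so axis must be in [-2, 2).
  const desc = {type: 'float32', dimensions: [2, 3]};
  const a = builder.input('a', desc);
  const b = builder.input('b', desc);
  // axis -1 is interpreted as -1 + 2 = 1, i.e. the last dimension.
  const byColumns = builder.concat([a, b], -1);  // shape [2, 6]
  const byRows = builder.concat([a, b], 0);      // shape [4, 3]
  // Under the proposed alignment, split uses the same axis convention.
  const [left, right] = builder.split(byColumns, 2, {axis: -1});  // two [2, 3] outputs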
15:57:36 [anssik]
Subtopic: Use static padding values for pad operation
15:57:41 [anssik]
anssik: issue #354
15:57:42 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/354 -> Issue 354 Use static padding values for `pad` operation (huningxin)
15:57:45 [anssik]
... PR #355
15:57:46 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/355 -> Pull Request 355 Use static padding values for `pad` operation (huningxin)
15:58:05 [anssik]
... issue summary: The current WebNN pad operation declares the padding parameter as an MLOperand whose values may be dynamic at runtime
15:58:10 [anssik]
... this is not widely available in native ML APIs and may lead to complex implementations if the native ML APIs only support static padding values
15:58:13 [anssik]
... PR summary: Use static padding values for pad operation
15:58:36 [anssik]
ningxin_hu: I incorporated Zoltan's naming suggestion into this PR; the PR is ready for review
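A rough, non-normative sketch of the direction in PR #355: the padding counts become static arrays known at graph-build time instead of a runtime MLOperand. Parameter names below follow the PR's direction but are illustrative; see the PR for the exact signature.
  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  const input = builder.input('x', {type: 'float32', dimensions: [2, 3]});
  // Pad one element at the beginning and end of each dimension with zeros.
  // Because the values are static, the output shape ([4, 5]) is known at build
  // time, mapping directly onto native ML APIs that only take constant padding.
  const padded = builder.pad(input, [1, 1], [1, 1], {mode: 'constant', value: 0});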
15:58:55 [anssik]
Subtopic: Propose to add Parametric ReLU into the operation list
15:58:59 [anssik]
anssik: issue #356
15:58:59 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/356 -> Issue 356 Propose to add Parametric ReLU into the operation list (huningxin)
15:59:03 [anssik]
... PR #357
15:59:03 [ghurlbot]
https://github.com/webmachinelearning/webnn/issues/357 -> Pull Request 357 Add prelu (Parametric ReLU) into the operation list (huningxin)
15:59:16 [anssik]
... issue summary: PReLU is a widely used activation function with mainstream use cases in facial landmark detection, and it has an efficient implementation path with native ML APIs
15:59:21 [anssik]
... PR summary: adds a new prelu() method
15:59:48 [anssik]
ningxin_hu: all good, no further comments on that PR
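For reference, a minimal non-normative sketch of the prelu() method proposed in PR #357, assuming the usual definition max(0, x) + slope * min(0, x) with a slope operand broadcastable to the input shape:
  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  const x = builder.input('x', {type: 'float32', dimensions: [1, 3]});
  // Constant slope applied to the negative part of the input only.
  const slope = builder.constant({type: 'float32', dimensions: [1, 3]},
                                 new Float32Array([0.1, 0.1, 0.1]));
  const y = builder.prelu(x, slope);  // y = max(0, x) + slope * min(0, x)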
16:00:50 [anssik]
RRSAgent, draft minutes
16:00:51 [RRSAgent]
I have made the request to generate https://www.w3.org/2023/03/02-webmachinelearning-minutes.html anssik
16:01:37 [chai]
🎉
16:01:46 [anssik]
zkis: I'd like to advance the WebNN editorial enhancements into the spec
16:04:54 [ningxin_hu]
thanks zoltan!
16:05:30 [anssik]
RRSAgent, draft minutes
16:05:31 [RRSAgent]
I have made the request to generate https://www.w3.org/2023/03/02-webmachinelearning-minutes.html anssik
18:00:10 [Zakim]
Zakim has left #webmachinelearning
19:40:58 [zkis]
zkis has joined #webmachinelearning
20:38:54 [zkis]
zkis has joined #webmachinelearning