14:04:49 RRSAgent has joined #webmachinelearning
14:04:53 logging to https://www.w3.org/2024/05/16-webmachinelearning-irc
14:04:53 RRSAgent, make logs Public
14:04:54 please title this meeting ("meeting: ..."), anssik
14:04:54 Meeting: WebML WG Teleconference – 16 May 2024
14:04:55 Chair: Anssi
14:04:59 Agenda: https://github.com/webmachinelearning/meetings/blob/main/telcons/2024-05-16-wg-agenda.md
14:05:04 Scribe: Anssi
14:05:08 scribeNick: anssik
14:05:18 Present+ Anssi_Kostiainen
14:05:29 zkis has joined #webmachinelearning
14:05:35 Present+ Zoltan_Kis
14:05:39 Present+ Austin_Sullivan
14:05:44 Present+ Bryan_Bernhart
14:05:54 Present+ Dwayne_Robinson
14:06:00 MikeApple has joined #webmachinelearning
14:06:02 Present+ Joshua_Bell
14:06:10 Present+ Joshua_Lochner
14:06:16 Present+ Michael_McCool
14:06:23 Present+ Mike_Wyrzykowski
14:06:31 Regrets+ Ningxin_Hu
14:06:34 Regrets+ Rafael_Cintron
14:06:49 Regrets- Ningxin_Hu
14:06:52 Present+ Ningxin_Hu
14:06:57 Regrets- Rafael_Cintron
14:07:02 Present+ Rafael_Cintron
14:07:15 Present+ Sungpil_Shin
14:07:31 RRSAgent, draft minutes
14:07:33 I have made the request to generate https://www.w3.org/2024/05/16-webmachinelearning-minutes.html anssik
14:08:08 anssik: Our group continues to grow, please welcome to our most recent new participants: Enrico Galli, Muthaiah Venkatachalam and Rahul Unnikrishnan Nair from Intel
14:08:20 Topic: Announcements
14:08:27 gb, this is webmachinelearning/meetings
14:08:27 anssik, OK.
14:08:34 Subtopic: TPAC 2024
14:09:06 anssik: W3C's annual conference TPAC 2024 gathers all working groups to transcend group borders and coordinate solutions to technical issues, with public breakouts
14:09:18 ... This year TPAC 2024 takes place 23-27 September 2024 in Anaheim, CA, USA
14:09:45 ... this is an opportunity for the WG to finally meet in the context of TPAC after many years of working together
14:10:02 ... as a non-binding poll, would participants like to get together during the TPAC week?
14:10:16 #23
14:10:17 https://github.com/webmachinelearning/meetings/issues/23 -> Issue 23 WebML WG Hybrid Meeting at TPAC 2024 (by reillyeon)
14:10:27 ... and thanks to Reilly for already signaling interest by opening an issue for discussion
14:10:37 ... please give a (non-binding) thumbs up in the meetings issue #23 if you'd like to see this meeting happen
14:10:48 ... any other feedback is welcome there as well, you can also reach out to me privately via email on this matter
14:11:01 Clicked the thumbs up but: definitely planning to attend
14:11:26 q?
14:11:37 Subtopic: WebNN Implementation Status Update
14:11:43 -> Implementation Status of WebNN Operations https://webmachinelearning.github.io/webnn-status/
14:11:48 -> What's new https://github.com/webmachinelearning/webmachinelearning.github.io/pull/71
14:11:48 https://github.com/webmachinelearning/webmachinelearning.github.io/pull/71 -> MERGED Pull Request 71 Update the DirectML and MLService implementation status (by ibelem)
14:12:04 anssik: Implementation Status updated, thanks to Belem for this update and to the various folks working on the implementations first of all!
14:12:05 ... changes since the previous update:
14:12:12 ... - 78/78 (100%) ops implemented on the DirectML backend
14:12:22 ... - also updated MLService status
14:12:29 anssik: work in progress is to add CoreML implementation status
14:12:42 q+
14:12:47 ack jsbell
14:12:59 q+
14:14:08 jsbell: we're in the process of removing XNNPACK from Chromium; the MLService that ChromeOS exposes is basically TFLite, and this will be used as the backend for Linux, Android and ChromeOS, plumbing may differ per OS but the backend is the same
14:14:14 ... so we'll have DML, TFLite and CoreML backends
14:15:23 q?
14:15:30 ack ningxin
14:16:08 ningxin: want to comment that we discussed this development with Belem, TFLite replacing XNNPACK; currently MLService is TFLite-based, so the table reflects TFLite op coverage
14:16:40 ... the latest update predates the most recent change, so we'll update the table to expand MLService to cover more than ChromeOS, to align with the new architecture
14:17:15 ... XNNPACK is a subset of TFLite; we plan to do more investigation with Austin and Phillis to also add CoreML backend data, will send a new PR soonish
14:17:15 q?
14:18:03 q?
14:18:11 Topic: NPU support
14:18:15 gb, this is webmachinelearning/webnn
14:18:15 anssik, OK.
14:18:18 anssik: issue #623
14:18:19 https://github.com/webmachinelearning/webnn/issues/623 -> Issue 623 WebNN should support NPU and QDQ operations (by wchao1115) [v2] [opset] [feature request]
14:18:22 -> Chromium implementation https://chromium-review.googlesource.com/c/chromium/src/+/5330647
14:18:36 anssik: We agreed to start formalizing NPU support with the simplest design (option 1: deviceType: "npu") informed by implementation experience.
14:19:06 ... Dwayne indicated in the issue he is planning to start a PR for option 1 soonish; this allows the WG to reserve the option to potentially expand from this base later, informed by further implementation experience on more backends, for example
14:19:20 anssik: is there any new information or considerations that should be brought to the group's attention?
14:19:35 ... looking at the issue I see just thumbs up for starting with option 1, and this is also consistent with our resolution from the last meeting:
14:19:40 -> https://www.w3.org/2024/05/02-webmachinelearning-minutes.html#t02
14:20:16 q+
14:20:19 Dwayne: I could start with a PR for option 1 today
14:20:24 ack ningxin
14:21:19 ningxin: update on the prototyping and testing, we recently merged a PR into the webnn-samples repo that adds classification models in fp16 and a UI to select the "npu" device type; this allows developers to test the prototype implementation on Intel NPU and CoreML
14:21:28 ... I hope that will help people test and inform the spec development
14:21:31 https://github.com/webmachinelearning/webnn-samples/pull/226
14:21:32 https://github.com/webmachinelearning/webnn-samples/pull/226 -> MERGED Pull Request 226 Add NPU device type and three fp16 models for image classification (by mingmingtasd)
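For context, a minimal sketch of what "option 1" (deviceType: "npu") would look like from script, assuming the existing MLContextOptions shape; the fallback shown here is illustrative only, not behavior the spec prescribes:

    // Sketch only: request an NPU-backed context; if the device type cannot be
    // satisfied the promise rejects with an "OperationError" DOMException, so a
    // caller could fall back to the default device.
    let context;
    try {
      context = await navigator.ml.createContext({ deviceType: 'npu' });
    } catch (e) {
      context = await navigator.ml.createContext(); // default device
    }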
14:22:02 anssik: I'd like to bring up for discussion one related consideration:
14:22:34 ... Privacy and fingerprinting considerations of the expanded three bits of entropy
14:22:40 ... we currently have the following privacy considerations in place
14:22:43 -> https://www.w3.org/TR/webnn/#privacy
14:22:49 "An MLDeviceType normatively indicates the kind of device and is either "cpu" or "gpu". If this type cannot be satisfied, an "OperationError" DOMException is thrown, thus this type can in some cases add two bits of entropy to the fingerprint."
14:22:58 anssik: with the addition of "npu" we'd add one additional bit of entropy
14:23:15 ... we already anticipated this in the privacy considerations when we put in place this text:
14:23:19 "If a future version of this specification introduces support for a new MLDeviceType that can only support a subset of MLOperandDataTypes, that may introduce a new fingerprint."
14:23:32 ... we could still keep this "future version" text in place to future-proof
14:23:54 ... we will consult the Privacy IG on this design in our next wide review cycle and will update the privacy considerations based on their suggestions
14:24:23 q?
14:24:43 Topic: Open issues and PRs
14:25:03 anssik: issues addressed ahead of the meeting were removed from the agenda, thanks again for moving at a high velocity!
14:25:07 -> Agenda diff https://github.com/webmachinelearning/meetings/commit/7566cbf9641c0457395b3c593ca514c48f3df05e
14:25:14 Subtopic: Debrief on PRs merged recently
14:25:32 anssik: as usual, JoshuaB has been on a roll again and submitted a bunch of PRs, thanks! Also thanks to Ningxin, Dwayne, Austin, Zoltan and others for your PRs and PR reviews.
14:25:45 ... issue (n/a) fixed by PR #672
14:25:45 https://github.com/webmachinelearning/webnn/pull/672 -> MERGED Pull Request 672 Handful of algorithm and convention fixes (by inexorabletash) [editorial]
14:25:49 ... issue #572 fixed by PR #674
14:25:49 https://github.com/webmachinelearning/webnn/pull/674 -> MERGED Pull Request 674 Use consistent phrasing for operator creation (by inexorabletash) [editorial]
14:25:49 https://github.com/webmachinelearning/webnn/issues/572 -> CLOSED Issue 572 Synchronously validate input operands/activations (by inexorabletash) [bug] [question]
14:25:52 ... issue (n/a) fixed by PR #679
14:25:53 https://github.com/webmachinelearning/webnn/pull/679 -> MERGED Pull Request 679 Build fix: Correct link for "transferred" term (by inexorabletash) [editorial]
14:25:56 ... issue (n/a) fixed by PR #680
14:25:57 https://github.com/webmachinelearning/webnn/pull/680 -> MERGED Pull Request 680 Add missing definitions of inputShape to conv2d algorithms (by inexorabletash) [editorial]
14:26:00 ... issue #673 fixed by PR #682
14:26:01 https://github.com/webmachinelearning/webnn/issues/673 -> CLOSED Issue 673 Meta: Introduce "Interop" label? (by inexorabletash) [process]
14:26:01 https://github.com/webmachinelearning/webnn/pull/682 -> MERGED Pull Request 682 Process: Add "interop" label (by anssiko) [process]
14:26:05 ... issue #681 fixed by PR #683
14:26:05 https://github.com/webmachinelearning/webnn/pull/683 -> MERGED Pull Request 683 Validate no duplicate axes for reduction ops (by inexorabletash)
14:26:05 https://github.com/webmachinelearning/webnn/issues/681 -> CLOSED Issue 681 Shall we update the spec to add constraints on the axes and the input rank for the reduction operator? (by mei1127) [question] [operator specific]
14:26:08 ... issue #396 fixed by PR #684
14:26:08 https://github.com/webmachinelearning/webnn/issues/396 -> CLOSED Issue 396 Clarify the restriction for `minValue` and `maxValue` of `MLClampOptions` (by huningxin) [operator specific]
14:26:12 https://github.com/webmachinelearning/webnn/pull/684 -> MERGED Pull Request 684 Remove note about interop issues with clamp()'s minValue == maxValue (by inexorabletash) [editorial]
14:26:12 ... issue #675 fixed by PR #685
14:26:15 ... issue #686 fixed by PR #687
14:26:17 https://github.com/webmachinelearning/webnn/pull/685 -> MERGED Pull Request 685 Support int32 data type for 'indices' operand of 'gather' operator (by huningxin)
14:26:19 https://github.com/webmachinelearning/webnn/issues/675 -> CLOSED Issue 675 why `gather` indices only accept uint32 or int64? (by philloooo) [question] [interop]
14:26:23 https://github.com/webmachinelearning/webnn/pull/687 -> MERGED Pull Request 687 Validate layerNormalization options.axes (by inexorabletash)
14:26:26 https://github.com/webmachinelearning/webnn/issues/686 -> CLOSED Issue 686 `layerNormalization()` method steps should validate the values of `options.axes` (by huningxin)
14:26:30 ... non-editorial PRs include #683, #685 and #687 from Josh and Ningxin, which may benefit from a debrief?
14:27:11 checking...
14:27:14 q+
14:27:20 ack ningxin
14:27:50 ningxin: PR #685 adds int32 data type support for gather; thanks to Phillis for providing the platform support data to inform this PR
14:27:57 (yeah, my two were very straightforward validation additions)
14:28:31 ... int32 is a widely supported data type; this unblocks wpt tests for gather on CoreML, and we also see issues where ONNX RT supports int32 and models get unblocked
14:29:13 jsbell: #683 and #687 reflect validation additions informed by implementation experience
14:29:35 q?
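To make the change concrete, a minimal sketch (not from the minutes) of building gather with int32 indices, which PR #685 allows alongside uint32 and int64; the shapes and names below are illustrative:

    // Sketch only: gather with int32 indices after PR #685.
    const builder = new MLGraphBuilder(context); // context: an MLContext
    const input = builder.input('input', { dataType: 'float32', dimensions: [4, 3] });
    const indices = builder.input('indices', { dataType: 'int32', dimensions: [2] });
    const output = builder.gather(input, indices, { axis: 0 }); // shape [2, 3]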
14:29:45 Subtopic: [process] Introduce "interop" label
14:30:02 anssik: we have a new "interop" workstream, which comes with a shiny new "interop" label that we try to put on issues arising from differences between backends
14:30:07 -> https://github.com/webmachinelearning/webnn/blob/main/docs/IssueTriage.md#workstream
14:30:13 -> current "interop" issues https://github.com/webmachinelearning/webnn/labels/interop
14:30:24 anssik: thanks to my triage team pal Josh for identifying the first batch of "interop" issues
14:30:31 ... Josh, anything else to share with the group about this?
14:31:00 jsbell: I was interested in adding the "interop" label because these are the issues where there's no "easy solution", while for others we can make a call between e.g. A or B
14:31:43 q?
14:32:00 Subtopic: [process] TypeScript Types Declarations for WebNN
14:32:07 anssik: issue #677
14:32:08 https://github.com/webmachinelearning/webnn/issues/677 -> Issue 677 Missing TypeScript Type Declaration (by egalli) [process]
14:32:31 ... a proposal for TypeScript type definitions for WebNN similar to the respective definitions for WebGPU
14:32:41 -> PROPOSED TypeScript type definitions for WebNN (repo) https://github.com/egalli/webnn-types-example/
14:32:45 -> TypeScript type definitions for WebGPU (repo) https://github.com/gpuweb/types/
14:32:49 -> TypeScript type definitions for WebGPU (index) https://gpuweb.github.io/types/
14:33:07 anssik: this seems like a useful project and could fit into the WebML CG, which is chartered to develop, among other things, "Other Software"
14:33:11 -> WebML CG Charter Test Suites and Other Software https://webmachinelearning.github.io/charter/#test-suites
14:33:36 anssik: The CG is already using the standard W3C 3-clause BSD License for its Test Suites contributions. The same license is a good fit for this types proposal, considered "Other Software", because the WebGPU group appears to be using the same BSD 3-Clause License for its corresponding types project
14:33:40 -> W3C 3-clause BSD License https://www.w3.org/Consortium/Legal/2008/03-bsd-license.html
14:33:57 anssik: Enrico just joined the WebML CG (welcome!) and is prepared to make the initial contribution and help keep this project maintained
14:34:04 q+
14:34:06 ... I propose we set up a repo for this effort unless there are concerns
14:34:08 ack jsbell
14:34:35 jsbell: having TS definitions sounds great, want to make sure we're transparent that anything in the spec can change
14:35:03 ... any breaking changes in the spec will have an effect on other projects, such as this one
14:35:31 q?
14:35:35 ... bikeshedding the name, WebGPU uses https://github.com/gpuweb/types/ -- do we call this https://github.com/webmachinelearning/types or something less generic like "webnn-types"
14:36:29 anssik: if no concerns are raised in the issue in the coming week, we'll create a repo for this project
14:36:38 "webnn-types" SGTM
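For a flavor of what such a package could contain, a hypothetical excerpt of a "webnn-types" style declaration file; the names mirror the WebNN IDL, but the packaging and exact file layout are illustrative only:

    // webnn.d.ts (hypothetical excerpt)
    interface MLContext {}

    interface MLContextOptions {
      deviceType?: 'cpu' | 'gpu'; // 'npu' is under discussion in issue #623
      powerPreference?: 'default' | 'high-performance' | 'low-power';
    }

    interface ML {
      createContext(options?: MLContextOptions): Promise<MLContext>;
    }

    interface Navigator {
      readonly ml: ML;
    }

As noted in the discussion above, any such declarations would need to track breaking changes in the spec.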
14:36:59 Subtopic: Broad device coverage and maintainability
14:37:05 anssik: issue #453
14:37:05 https://github.com/webmachinelearning/webnn/issues/453 -> Issue 453 Google Chrome Feedback on WebNN: aiming for broad device coverage and maintainability (by vsekhar) [process] [opset] [use case]
14:37:15 q+
14:37:17 ... I want us to revisit this high-level issue discussing broad device coverage and maintainability
14:37:25 ... I would like to discuss what has been done, what remains to be done, and disseminate any new information
14:37:41 ... a subset of the recommendations in this issue have been or are being discussed in topic-specific issues; I see these open issues mentioned: #456 and #573
14:37:41 https://github.com/webmachinelearning/webnn/issues/456 -> Issue 456 Define the maximum number of operand dimensions (maximum rank) (by huningxin) [interop]
14:37:42 https://github.com/webmachinelearning/webnn/issues/573 -> Issue 573 Core operator set (by philloooo) [question] [opset]
14:37:53 ... recently the group has focused on the models and hardware targets mentioned in this high-level summary, in both specification and prototyping efforts
14:38:08 ... so, checking how to make progress with this issue, would a revision to this high-level summary be appropriate?
14:38:09 q?
14:38:13 ack jsbell
14:38:31 jsbell: quick update, I'm a big fan of closing issues
14:38:47 ... for the most part the issue should remain open; the recommendations are valid and progress is being made on a lot of those
14:38:50 ... four things:
14:39:03 ... - 1) public positions from browser vendors
14:39:23 ... would love feedback from Apple in particular
14:39:45 ... we at the Chrome team do consider standards positions from Apple and Mozilla
14:40:02 ... - 2) streamlining the API surface, core op set
14:40:28 ... Austin opened an issue to revisit complex GRU and LSTM, e.g. CISC ops that can be difficult to implement on some backends
14:40:31 https://github.com/webmachinelearning/webnn/issues/689
14:40:32 https://github.com/webmachinelearning/webnn/issues/689 -> Issue 689 Consider removing `lstm` and `gru` operators (by a-sully) [question] [operator specific]
14:40:44 ... - 3) Performance for CPU and GPU across OSes and backends
14:41:01 ... worked on Windows, macOS; Intel has shared early PnP numbers on NPU
14:41:21 ... work is happening, need fully interoperable prototypes across all the OSes
14:41:57 - 3) Performance using ML accelerator hardware
14:42:10 s/- 3) Per/... - 4) Per
14:42:19 q?
14:42:44 Subtopic: [bug] ArgMax/Min selectLastIndex is not supported on CoreML
14:42:49 anssik: issue #652
14:42:50 https://github.com/webmachinelearning/webnn/issues/652 -> Issue 652 ArgMax/Min `selectLastIndex` is not supported on CoreML (by philloooo) [bug] [operator specific] [interop]
14:43:13 ... to recap, this is a proposal from Phillis to consider removing the selectLastIndex parameter due to CoreML compatibility
14:43:17 ... discussed on our previous call
14:43:21 -> https://www.w3.org/2024/05/02-webmachinelearning-minutes.html#t06
14:43:41 ... Mike provided helpful details on the previous call re BNNS and MPS, mentioning they don't run on an NPU, and wanted to talk to some engineers and get back
14:43:46 ... I wonder if there's new information to share?
14:44:57 Mike: the reason for the difference is the sorting algorithm; which index you get when two values are equal depends on the underlying sorting algorithm
14:46:33 Dwayne: you can get different results depending on whether you go through BNNS or MPS
14:46:40 Mike: right
14:47:11 Dwayne: this matters for some models that are explicit in this regard; I cannot advocate without knowing any specific models, and don't anticipate a lot of harm in removing it
14:47:20 ... we have an option to add it back later
14:47:29 ... we can deterministically compute answers on all the platforms
14:47:35 q?
14:48:10 anssik: any concerns in removing selectLastIndex?
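For readers less familiar with the option: selectLastIndex only matters when the maximum (or minimum) value occurs more than once. A minimal sketch, assuming the current MLArgMinMaxOptions shape; the example values are illustrative:

    // Sketch only: builder is an MLGraphBuilder created from an MLContext.
    // For input [3, 7, 7, 1] the maximum 7 occurs at indices 1 and 2:
    //   selectLastIndex: false (default) -> 1
    //   selectLastIndex: true            -> 2
    const input = builder.constant(
      { dataType: 'float32', dimensions: [4] },
      new Float32Array([3, 7, 7, 1]));
    const firstIndex = builder.argMax(input, { axes: [0] });
    const lastIndex = builder.argMax(input, { axes: [0], selectLastIndex: true });

Removing the option would presumably leave one of these behaviors as the single defined result on all backends.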
14:48:57 Subtopic: [bug] Consider changing output type of ArgMax/Argmin to int32, or allow passing output_type
14:49:09 anssik: issue #653
14:49:09 https://github.com/webmachinelearning/webnn/issues/653 -> Issue 653 Consider changing output type of ArgMax/Argmin to int32, or allow passing output_type (by philloooo) [bug] [operator specific] [interop]
14:49:23 ... also discussed on our previous call, wanted to check in to see if we have further research or explorations to report
14:49:28 -> https://www.w3.org/2024/05/02-webmachinelearning-minutes.html#t07
14:49:41 ... I think both Dwayne and Phillis indicated interest in looking at this, not sure if you've had time for that yet?
14:50:00 Dwayne: been too busy the last two weeks, have not looked at this yet, will follow up
14:50:19 ... it looks like Phillis is awaiting Dwayne's example models where int64 is useful
14:50:27 ... there's also a separate question from Phillis to Dwayne in the issue, quoting: "I didn't fully understand your gather example, because I actually don't understand why indices allow both int64 and uint32. The indices should point to valid indices that's within MLOperand's dimensions right?"
14:50:37 q?
14:51:03 Re: "Type casting all the things" + "MLNumber" - for #442, #678, #325 - please look at PR #647 which tries to tackle all of these - early feedback welcome. No live discussion needed...
14:51:03 #489 is about the cast() op so it's a different beast
14:51:04 https://github.com/webmachinelearning/webnn/pull/647 -> Pull Request 647 Introduce MLNumber for specifying numeric inputs of any type (by inexorabletash)
14:51:04 https://github.com/webmachinelearning/webnn/issues/325 -> Issue 325 Clarify the usage of 32 bit floating point type and consider using double (by huningxin) [feature request]
14:51:04 https://github.com/webmachinelearning/webnn/issues/678 -> Issue 678 Specifies scalar values casted to match input type. (by philloooo) [feature request]
14:51:05 https://github.com/webmachinelearning/webnn/issues/442 -> Issue 442 Type of some parameters should match the input data type (by Honry) [feature request] [operator specific]
14:51:09 https://github.com/webmachinelearning/webnn/issues/489 -> Issue 489 Clarify the casting behavior from floating-point / signed integers <-> unsigned integers (by huningxin) [operator specific] [interop]
14:52:35 q+
14:52:39 ack jsbell
14:52:55 Subtopic: [feature request] Allow checking whether operators/types are supported for a backend before creating a graph
14:52:59 anssik: issue #463
14:52:59 https://github.com/webmachinelearning/webnn/issues/463 -> Issue 463 Allow checking whether operators/types are supported for a backend before creating a graph (by huningxin) [feature request]
14:53:07 ... this is the original issue regarding checking for op/data type support
14:53:14 anssik: this feature would help with issues such as the uint64/int64 data type #654 and MLConstantOperand #668, and interop issues e.g. #653, #675, #283
14:53:15 https://github.com/webmachinelearning/webnn/issues/654 -> Issue 654 Consider dropping the support of uint64/int64 data type for some operators (by lisa0314) [bug] [operator specific] [interop]
14:53:15 https://github.com/webmachinelearning/webnn/issues/653 -> Issue 653 Consider changing output type of ArgMax/Argmin to int32, or allow passing output_type (by philloooo) [bug] [operator specific] [interop]
14:53:15 https://github.com/webmachinelearning/webnn/issues/283 -> CLOSED Issue 283 Specify the operand data type constraints of operation (by huningxin) [question]
14:53:17 https://github.com/webmachinelearning/webnn/issues/668 -> Issue 668 Do we need an `MLConstantOperand`? (by a-sully) [question] [interop]
14:53:20 https://github.com/webmachinelearning/webnn/issues/675 -> CLOSED Issue 675 why `gather` indices only accept uint32 or int64? (by philloooo) [question] [interop]
14:53:26 ... Phillis shared a context.opSupportLimits() proposal for exposing the data type support level:
14:53:32 -> https://github.com/webmachinelearning/webnn/issues/463#issuecomment-2113130485
14:53:32 https://github.com/webmachinelearning/webnn/issues/463 -> Issue 463 Allow checking whether operators/types are supported for a backend before creating a graph (by huningxin) [feature request]
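As a rough illustration of how a framework might consume such a query; this is a hypothetical sketch only, since the exact shape of opSupportLimits() is still being worked out in issue #463, and the property names below are placeholders:

    // Hypothetical sketch: probe data type support before building the graph.
    const context = await navigator.ml.createContext();
    const limits = context.opSupportLimits();
    const indicesTypes = limits?.gather?.indices?.dataTypes ?? [];
    // A framework could register gather for WebNN offload only when the data
    // type it needs is supported, and otherwise fall back to its own kernels.
    const canOffloadGather = indicesTypes.includes('int32');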
14:54:05 jsbell: we think this would be useful, even just for maintaining op coverage reports
14:54:32 ... we'd love feedback from people working on frameworks sitting on top, e.g. the ONNX RT EP, that could benefit from this
14:54:44 Dwayne: would be useful for registration of ops, and to use a fallback as appropriate
14:54:45 q?
14:54:48 +1, this would be useful for frameworks
14:54:56 +1 agreed
14:55:03 q+
14:55:11 jsbell: as soon as we confirmation the shape is right and prototyping is appropriate, Phillis was interested in advancing with a POC
14:55:34 Dwayne: the proposal reminds me of what we did in our team: list data types and rank ranges
14:55:43 q?
14:55:59 ack RafaelCintron
14:56:32 RafaelCintron: haven't looked at the proposal in detail, but in general it comes down to: we need to know whether some op works or not and what data types work
14:57:03 q+
14:57:11 ... even inputs support different data types; I wonder if it is possible to group support into sets of things, so that web developers can pick which model they run without needing to consider every op separately
14:57:36 ... in 3D space we have a similar grouping concept
14:57:37 q?
14:57:40 ack jsbell
14:58:05 jsbell: totally agree with Rafael, we will land on an interoperable set of backends eventually
14:58:17 q?
14:58:42 q+
14:58:51 jsbell: we
14:59:21 s/we/we'll take a look at this with Phillis and come back to this issue
14:59:21 q?
14:59:22 ack ningxin
15:00:20 ningxin: I can ensure Wanming, who is working on the ONNX RT EP, can investigate from that angle; the EP did check for data type on an op level, we had an allowlist in the code
15:01:00 ... it's per op level; Phillis's data structure is useful for a framework to do this op-level examination, I will let Wanming investigate and comment
15:01:01 q?
15:01:18 q?
15:01:35 RRSAgent, draft minutes
15:01:37 I have made the request to generate https://www.w3.org/2024/05/16-webmachinelearning-minutes.html anssik
15:03:05 s/welcome to/welcome
15:06:42 s/… it looks/anssik: … it looks
15:07:49 s/we confirmation/we get confirmation
15:09:02 s/can investigate/will investigate
15:09:26 RRSAgent, draft minutes
15:09:27 I have made the request to generate https://www.w3.org/2024/05/16-webmachinelearning-minutes.html anssik
15:09:48 s/it looks/anssik: it looks
15:09:50 RRSAgent, draft minutes
15:09:51 I have made the request to generate https://www.w3.org/2024/05/16-webmachinelearning-minutes.html anssik
15:13:06 s/Topic: Broad device coverage and maintainability/Topic: [process] Broad device coverage and maintainability
15:13:07 RRSAgent, draft minutes
15:13:09 I have made the request to generate https://www.w3.org/2024/05/16-webmachinelearning-minutes.html anssik
15:13:42 s/Broad device coverage and maintainability/[process] Broad device coverage and maintainability
15:13:43 RRSAgent, draft minutes
15:13:44 I have made the request to generate https://www.w3.org/2024/05/16-webmachinelearning-minutes.html anssik
17:25:26 Zakim has left #webmachinelearning