W3C

– DRAFT –
Machine Learning for the Web CG F2F at TPAC 2018

26 October 2018

Meeting minutes

<ningxinhu> the webml examples URL: https://huningxin.github.io/webml-examples/

<gregwhitworth> Greg Whitworth, Microsoft

<ningxinhu> Ningxin Hu, Intel

Myles C. Maxfield, Apple

anssik: it has been a long week. This is the last meeting possible at TPAC. It's an exciting topic!

anssik: let's start in a couple of minutes.

anssik: who was at the breakout session on Wednesday?

anssik: who was not there? 2 people

Welcome, Introductions

anssik: on Wednesday we had a breakout session. Most of you saw the demo. We saw a couple of slides about the findings based on the Chromium implementation. To recap: the performance is pretty close to native, so we can implement use cases on the web that require low latency. You saw object recognition, human pose estimation, and image classification as examples of use cases that require low latency.

anssik: we briefly discussed the scope of the work. The idea was to use more of this meeting to go into more detail.

anssik: Ningxin has prepared an API proof-of-concept proposal based on his implementation work.

anssik: Ningxin has reviewed the existing platforms that provide native APIs for doing inference: Android, macOS, Windows, and we were going to review the API mapping table.

anssik: I would like to connect you guys with each other so we can continue work outside of this meeting. It's unreasonable to expect us to be at our best on Friday afternoon and make progress during these two hours.

anssik: Let's get started.

anssik: let's start with the review of the charter.

<anssik> Charter

anssik: this is the charter. We reviewed it with many of you and with the public over the course of the last month.

anssik: the goals section is there. We want to keep it simple. The goal is to define a low-level API for machine learning, specifically for inference.

anssik: the constraint: we are not going to define an API that doesn't work across major platforms. So we are fairly tightly scoped. We had some questions during the breakout session about that. We don't want to discriminate against any platform.

anssik: Details: The Web API allows constructing a neural network computational graph

anssik: Can compile a neural network to native form

anssik: and can accept input data from somewhere

anssik: from various sources (Array buffers, media streams, etc.)

anssik: We list the platform constraints that we have.

anssik: Android Neural Networks API, Windows DirectML, macOS/iOS Metal Performance Shaders and Basic Neural Network Subroutines

anssik: there are privacy implications.

anssik: we take that seriously, and we document them here.

anssik: in scope is inference; out of scope is training. This is because of practicalities: platforms don't expose training facilities. Also, we don't expose any hardware facilities. We are not interested in doing overlapping work, so we don't reinvent the wheel.

anssik: we coordinate with WebGPU, WebGL, and WebAssembly

anssik: Some people here are from those communities, that's great

anssik: out of scope: We don't attempt to mandate a model schema or format.

anssik: There are other groups that will do that.

anssik: any questions?

anssik: Deliverable: Web Neural Network API

anssik: We also work with WebRTC on MediaStream for providing input for inference; also audio, and devices and sensors.

anssik: Google proposed coordination with the immersive web working group and community group.

anssik: they want to use our work. e.g. object recognition

<anssik> Charter

Michael_McCool: Please add the Web of Things group. We are looking at virtual services. There might be a service that accepts an image and produces JSON.

anssik: Thanks. This charter is on GitHub, please open an issue. We need review to add a normative change. This doesn't sound normative.

anssik: we will integrate your proposal

Barbara: Is this only working groups, or are you also working with the video interest groups? Video is important for machine learning. They may want collaboration.

anssik: definitely.

anssik: this is informative.

anssik: the rest of the charter is less exciting.

anssik: We work on GitHub.

anssik: Consensus-based decisions. I'm the initial chair, but I'm happy to share the workload. Please get in touch if you want to help me chair

anssik: We start with the tight scope, but we have a plan to expand the scope; it requires a 30-day vote with 2/3 support.

anssik: If someone proposes expanding the scope, we want to make sure the community agrees with that. Any questions?

Michael_McCool: Are you deciding on deliverables?

anssik: yes.

Michael_McCool: It's a browser-based API?

anssik: yes.

Michael_McCool: W3C doesn't usually do server-side things. I've been thinking about edge computing, where machine learning is applicable. It's a reasonable thing to do there.

dsr: If you could reach out to the node.js community, that would be good.

dsr: Please coordinate with them.

<anssik> CG home

Michael_McCool: Also test cases too

Michael_McCool: If we can move this in the server-side stuff, that's good

gregwhitworth: When you say server-side, you mean JS right? If so, then it's in-scope

dsr: It duplicates browser APIs

gregwhitworth: it doesn't actually say browser.

Michael_McCool: It shouldn't

ningxinhu: Regarding the current POC API, it's JS, so it's pretty straightforward in node.js. We also defined the inputs and outputs. Today, array buffers are standard, but in the future we will support media streams.

Michael_McCool: There are security concerns

anssik: It's good if this was implementable in node. But the browser is the primary target, and node is secondary.

ningxinhu: For node, because it has a different security model and can access native code, some vendors already provide a solution to expose their native API to node.

Michael_McCool: We're already seeing it for native APIs

<gregwhitworth> Greg Whitworth, Microsoft

<anssik> W3C Graph Data Workshop

anssik: thank you.

anssik: we know who we are

Target demographic, use cases and requirements

anssik: From a discussion with gregwhitworth: what is the target audience?

anssik: We want to have a good design, so we need to understand the user. Use cases and requirements. Let's discuss it.

anssik: Expectation: It's a low-level API, users would be machine learning framework library authors. We want to understand their needs.

<anssik> WebML use cases by Tomoyuki/KDDI

anssik: Tomoyuki contributed a document as a starting point for the discussion, which describes other APIs and use cases for the framework.

Tomoyuki: These are high-level use cases. They could clarify what kinds of applications can be built on top of WebML. These are application examples; the first and second are strongly related to the demo. First: person detection. Recent image recognition can recognize what kinds of objects are in a picture: human or otherwise. We can detect where the person is in the image.

Tomoyuki: Second: skeleton detection

anssik: One observation: on the client, you can do this depersonalized, without sending the data. Consider being at home with expensive objects: you don't want to tell the world you have these expensive objects.

ningxinhu: You also need per-pixel segmentation to know which pixels are okay and which aren't

anssik: this is a good example

ningxinhu: this is like photo uploading, but it doesn't require real-time processing.

Tomoyuki: it requires depersonalization. This depersonalization needs to be done before uploading.

ningxinhu: So, you can get a preview of what will be uploaded.

ningxinhu: Removing the background is useful for live streaming

Michael_McCool: If we want to do it for live streaming, the performance requirements become higher

dsr: Also blurring background

Barbara: High-level comment: These are application capabilities or features. I don't see them as usage models. Is there any industry or type of application that would utilize that?

Barbara: Would commerce applications use this?

anssik: It would cut across everything. It's like a JS library. There are libraries like TensorFlow.js that provide higher-level abstractions for web developers to enable implementing this use case easily.

ningxinhu: One is a social network.

gregwhitworth: ???

ningxinhu: Social network website.

gregwhitworth: is there a specific social network?

anssik: it's a real problem that people upload photos to social networks without the consent of others

gregwhitworth: I don't want to tangent too much, but I would love to know who would actually use this. Who is ready to consume this API? Which businesses are waiting for it?

aritamk: Google Photos does image recognition: you upload your photos to Google Drive and it does face detection across all of them. Also, Amazon sold image recognition capabilities for monitoring people.

Michael_McCool: Doing it on the client has advantage.

Which companies' business models would this actually support?

Barbara: Who would utilize this?

Barbara: Otherwise it's a science project

gregwhitworth: I'm working with our teams on this. They aren't chomping at the bit for this. I want to hear Facebook saying that they really want this.

Michael_McCool: If the government says you can't share photos of people without their consent, suddenly everyone will want this

gregwhitworth: I don't want to spend my engineering resources if that isn't a reality

anssik: We need to have outreach to these companies so they can help influence the design

Michael_McCool: Instead of person detection, let's do video conferencing. Gesture detection: raising your hand in a meeting. Do you need to run multiple workloads? Do you need to have a queue, single accelerator, or multiple workloads?

Michael_McCool: If you have a single accelerator, this becomes an issue, or you may need a queue manager. But here you need social media and video conferencing, rather than technical things.

anssik: This is Tomoyuki's initial contribution, which is great.

anssik: let's move this to GitHub.

anssik: maybe we can move this into the spec as "these are the problems we want to solve"

do we have any existing client-side applications that are using native?

Antialiasing is now shipping with ML-trained models. Hardware vendors would love to see the deployment.

Michael_McCool: Texture generation - running it backwards

gregwhitworth: who wants antialiasing? gaming?

yes

anssik: Re-emerging interest to investigate web-based gaming. There's a workshop.

anssik: We discussed skeleton detection. Video conferencing is an easy to understand valuable use case. Background removal, raise your hand, etc.

Michael_McCool: There are multiple workloads per application

ningxinhu: Communication with AI to do sign language recognition.

ningxinhu: Currently their solution is based on [inaudible], so we would like to see if they could use this.

Skeleton detection is also useful to detect the form of surfaces, e.g. in the fashion industry. When someone wears a scarf, it takes a particular position, which helps them place ads around the fashion item.

Michael_McCool: Image generation is a technical area, not a use case. We need to say something like "gaming" instead.

anssik: Maybe we can put this in the wiki and massage it.

anssik: low-level use cases. These are by definition more like capabilities.

<BarbaraH_> Additional use case options are retail/commerce, healthcare, marketing/advertising, fraud detection, online search, natural language processing, and audio.

Tomoyuki: Three examples. Sometimes neural network developers use extensions to their frameworks, like TensorFlow. One example: custom layers. We sometimes need a kind of layer that is not provided by the framework, so we want a way to add such a layer to an existing framework.

anssik: comments?

<BarbaraH_> Review the use cases on which ones have a web value.

ningxinhu: This is important for the fast development of the machine learning community; new operators come year by year. The idea here is that we propose some operators that are in scope, so those can be well optimized for existing hardware or platforms. That covers the existing set of operators. For new operators, we can coordinate with other web APIs like WebAssembly or WebGPU compute shaders to allow developers to implement a custom layer in a programmable way and connect the graph built by WebML to WebGPU.

ningxinhu: Cross-hardware-boundary issues exist. In our implementation we saw this kind of combination in native code, e.g. Apple's Metal Performance Shaders combine with Metal on the same GPU.

ningxinhu: We saw some of it there. But on the web we know we need to expose this capability as well as expose performance cliffs.

Michael_McCool: This drives toward an architecture decision. Is there a compilation tool? Is there a machine-independent compiled model? Or do you specify it on the fly in the API? If you want performance, you probably want a precompiled model. There should probably be a statement saying you want to maintain performance

Precompiled will be tricky to maintain performance

Michael_McCool: You want custom code to not blow up performance.

Michael_McCool: If you're going to lose performance a lot on some platforms, we need to know early

ningxinhu: simd.js is an example. We want to let the developer test whether or not there is native support.

Feature detection? Should we do it?
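A minimal sketch of what such feature detection with a fallback might look like. Everything here is hypothetical: supportsOperation() and runCustomWasmKernel() are invented names for illustration, not part of the POC or any platform API.

    // Hypothetical sketch only: supportsOperation() and
    // runCustomWasmKernel() are assumed names, not part of the POC.
    const nn = navigator.ml.getNeuralNetworkContext();

    function addNormalization(model, inputIndex, outputIndex) {
      if (nn.supportsOperation && nn.supportsOperation('L2_NORMALIZATION')) {
        // Native path: the platform runs the op inside the graph.
        model.addOperation(nn.L2_NORMALIZATION, [inputIndex], [outputIndex]);
      } else {
        // Fallback path: split the graph here and run a custom
        // WebAssembly (or WebGPU compute) kernel on the intermediate output.
        runCustomWasmKernel(model, inputIndex, outputIndex);
      }
    }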

Michael_McCool: Compiling a language is a good strategy.

<BarbaraH_> Which use cases and API are the target for short term versus long term based on developers needs. That will help drive the MVP with an enhancement roadmap.

Sean: Out of the devices, which are programmable?

ningxinhu: API should be hardware agnostic.

VPU, FPGA?

Michael_McCool: There are two ways: a full compile from scratch, or use an FPGA

Michael_McCool: ASICs

If you have discovery in a low-level API, how do you push that down to the device?

Sean: You could throw in a capability "is this programmable or not"

ningxinhu: for the GPU case, WebGPU has a shader language.

anssik: Network concatenation

Tomoyuki: Some recent deep neural network models use module concatenation, with network modules like MobileNet, ResNet, etc. Most neural network developers have insufficient time for training; we can reduce the training time by importing pre-trained models.

Tomoyuki: That gives us image feature extraction.

Tomoyuki: The following layer can be trained according to our use cases.

ningxinhu: is it really for training or for inference?

Tomoyuki: The development process involves training, but the result is the trained model. Trained modules and custom modules [inaudible]

<BarbaraH_> Media use case potential - how do you find the right content to serve your audience quickly? The Media & Entertainment IG would have more insights.

ningxinhu: After training, when you want to deploy, is there a concatenation layer? Or is it already solved when training?

ningxinhu: Network concatenation is during training or inference?

Tomoyuki: Both.

Tomoyuki: In the inference phase, we can prepare either the concatenated model or the two pretrained models separately, so the developer can select either of them

ningxinhu: So it's possible in inference

Tomoyuki: We can also change the models: a large pretrained model, or a small pretrained model for low-bitrate networks.
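A minimal sketch of serving a large or small model based on network conditions, assuming hypothetical model URLs. The Network Information API (navigator.connection) has limited browser support, so it is feature-detected.

    // Pick a model variant based on network conditions. The URLs are
    // hypothetical; navigator.connection may be undefined in some browsers.
    function pickModelUrl() {
      const conn = navigator.connection;
      const slow = conn && ['slow-2g', '2g'].includes(conn.effectiveType);
      return slow ? '/models/small.json' : '/models/large.json';
    }

    async function loadModelTopology() {
      const response = await fetch(pickModelUrl());
      return response.json(); // JSON topology; weight files fetched separately
    }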

<wseltzer> myles_: will there be built-in models?

anssik: The Shape Detection API can use built-in models. They're not in scope, but it's an interesting idea. We haven't explored it.

gregwhitworth: Usually use cases revolve around 7 or 8. I've never heard of anyone asking for antialiasing. This is one of the major hurdles in Edge. We talked to Office; their models are hundreds of megs big. What is the benefit to the end user? I'd like to narrow down to specific models and specific use cases for those models. Faces, bar codes, etc. Let's have a small set of pre-canned models.

gregwhitworth: i don't want long load times.

anssik: There's no conflict there.

Resolved: We should look into pre-canned models.

Sean: Google proposed "layered APIs" with known names, so that could work

sangwhan: "std::facedetection"

Who owns the models? Would loading your own model be secure?

sangwhan: W3C, presumably

Michael_McCool: Pretrained models allow hardware accelerators. Caching is useful. These names won't change, that helps caching. We will have cross-site problems.

gregwhitworth: Caching is by-origin anyways, even if they have the same name.

What's the type of the file?

sangwhan: undefined.

gregwhitworth: We are purposely putting that off until after this.

ningxinhu: Our experience in the POC focuses on mobile and small models. We try to cache the model with a service worker. TensorFlow.js introduces a "web-friendly" model format, which is a JSON-based topology description with small-sized weight files, to try to fit into the cache
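A minimal sketch of the caching approach just described, using a service worker and the Cache API; the file names are hypothetical.

    // sw.js: serve model files cache-first so repeat visits skip the network.
    const MODEL_CACHE = 'webml-models-v1';
    const MODEL_FILES = ['/models/mobilenet.json', '/models/mobilenet-weights.bin'];

    self.addEventListener('install', (event) => {
      event.waitUntil(caches.open(MODEL_CACHE).then((c) => c.addAll(MODEL_FILES)));
    });

    self.addEventListener('fetch', (event) => {
      if (MODEL_FILES.some((f) => event.request.url.endsWith(f))) {
        event.respondWith(
          caches.match(event.request).then((hit) => hit || fetch(event.request))
        );
      }
    });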

gregwhitworth: The ones I'm worried about are things like spell checking

gregwhitworth: Because of blind training: are you referring to not doing full training, but pivoting slightly? Because not having access to [inaudible] makes it difficult to do. Am I not able to say "I want the pre-canned one, but I will provide 400k with different weights"?

ningxinhu: Like loading a delta on top of a model?

gregwhitworth: yes

ningxinhu: This is transfer learning. You might need the on-device model to do that.

gregwhitworth: We want to do it.

sangwhan: For fine-tuning, you will need to expose gradient propagation, because that's also a propagation.

Michael_McCool: In some cases you might want floating point; in other places you might want 8-bit. It's a tradeoff between size and quality. So each model should have multiple versions.

ningxinhu: Regarding these capabilities. In the POC, some native APIs support it, but not all.

gregwhitworth: I don't want to over-index on what native APIs do and don't support. I'd rather say "here are the use cases, and V0 is just some demos" and then native implementations can go further.

Michael_McCool: Some devices can only support certain quantizations. 8-bit or 32-bit. There may be a constraint on what is supported on each device.

sangwhan: This is a discussion about formats, not about weights

Michael_McCool: It directly affects the size of the file.

gregwhitworth: It would be valuable to have that even if it's a V2 thing.

Tomoyuki: Some devices can do GPU acceleration; others only run on the CPU. Some web developers care about battery consumption, so they offer image recognition only on accelerated devices.

Michael_McCool: There's a whole other category: what do applications that use multiple models do? On a computer, you can use the CPU and GPU at the same time. You can support two use cases at the same time. There's another use case of multi-model.

gregwhitworth: Performance is super important

CPU and GPU may already be super busy. Developers need control over low-level device usage

Michael_McCool: Support for multiple workflows is important.

sangwhan: Most applications use CUDA, which locks - so multi-task is probably hard

Michael_McCool: There's a CUDA vs. graphics issue. APIs have already been extended recently to support multiple applications, where more than one device can handle the workloads. We can argue whether that is necessary or not.

ningxinhu: There are multiple hardware use cases. We have to select the hardware: CPU, GPU, MPU. We allow the developer to specify a preference for where they want it to run.

ningxinhu: GPU and CPU both have pros and cons, and the site can serve different models depending on where the model will be run on.
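A sketch of how that preference might be expressed, following the POC's NNAPI-style integer constants; createCompilation(), setPreference(), finish(), and the PREFER_* names are assumptions based on the POC (sangwhan's notes below mention "fast single answer"), not a defined API.

    // Hint where the compiled graph should run. Names assumed from the POC.
    const compilation = await model.createCompilation();
    compilation.setPreference(nn.PREFER_LOW_POWER);
    // alternatives: nn.PREFER_FAST_SINGLE_ANSWER (e.g. latency on a GPU),
    //               nn.PREFER_SUSTAINED_SPEED (e.g. steady streaming)
    await compilation.finish();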

anssik: Does the current scope capture the needs of the target audience?

anssik: Is this the tightest possible scope that we can start with?

Michael_McCool: Is it possible to precompile the model?

gregwhitworth: POC is a POC, and the answer is not yet defined. It's desirable for web developers, but you want both.

gregwhitworth: There will be divergent opinions.

Barbara: If you look at the use cases, which ones fit into the MVP vs an extension? Don't try to boil the ocean. MVP please.

Michael_McCool: We could decide this now. We could choose either way

gregwhitworth: It shouldn't be in the charter.

anssik: The group decides what is MVP out of this scope.

anssik: We can add more bits

anssik: So it sounds like we can start with this scope.

Review mapping to platform APIs

anssik: Ningxin has published a mapping table.

<anssik> Mapping to platform APIs

ningxinhu: Before we go through the table, this is an overview: convolution, depthwise convolution, concatenation, ADD and MUL, RESHAPE, SOFTMAX. Some we will support by fusing into convolution or other ops. With these 9 operators, we have support for some models in the POC.

ningxinhu: Examples: TFLite models exist. Our example grabs these models, loads and parses them, and uses the web API to construct the graph and run inference. SqueezeNet, because it's small-sized. We also tried a large model, Inception V3: its size is big, but we can get a good speedup for the computation. For object detection, we have SSD MobileNet. We also have TensorFlow.js models like MobileNet and PoseNet, and ONNX models.

ningxinhu: MobileNet V2 and SqueezeNet.

ningxinhu: We have only implemented 9 operators so far, so we tried these models; underneath, the implementation maps these operations onto native APIs: MPS and BNNS on macOS, NNAPI on Android, and clDNN on Linux and Windows.

ningxinhu: clDNN is from Intel.

ningxinhu: We would like a DirectML POC very soon.

ningxinhu: that's the overview.

ningxinhu: It's driven by the models. There are models from different ecosystems. We start with some small models optimized for mobile, then add the necessary ops to support these models, and implement them across different APIs to get performance data.

ningxinhu: We have data! For data types, Float32 and Float16 are what we have looked at so far. NNAPI doesn't support Float16; everybody else supports both.

ningxinhu: For convolution we have how the operator can be mapped to different API. Input, filter (aka "weights"), and bias. There are some differences between the APIs, the notes are in this chart in red.

ningxinhu: Also for stride and fused activation, dilation rate, and output

ningxinhu: This is just one case, for convolution. For the other ops, we have the same kind of data in the table.

anssik: We made this to satisfy this requirement that the API is implementable on top of platform APIs. We did the work so you don't have to.

anssik: if you spot issues, please let us know.

anssik: We don't have a formal starting-point spec. Instead, we have POCs and sketch APIs.

Review & discuss Web Neural Network API spec proposals

anssik: ningxinhu has an API sketch proposal.

anssik: We have some ergonomic issues in this sketch proposal that we already know about

<anssik> WebML API proof-of-concept

ningxinhu: Here's the example: construct (tensor0 + tensor1) * (tensor2 + tensor3). Tensors 0 and 2 are constants; tensors 1 and 3 are inputs.

ningxinhu: ::describes the chart in the doc::

ningxinhu: So first you need a neural network context. It's inside navigator.ml.getNeuralNetworkContext()

ningxinhu: to build the model, you can see the details on the site. Promises are involved.

ningxinhu: Then you specify the tensor type

ningxinhu: The API is typed, so you have to annotate types along with the data

ningxinhu: You can set operands like the upper value in the graph to describe the shape of the tensor. Later you can upload the data to the graph.

ningxinhu: We use array buffer view to do the upload

ningxinhu: So it's just a scalar in this example.

ningxinhu: Tensor 1 is an input, so there's no data to upload. Then same for tensors 2 and 3

ningxinhu: Then you need to connect the operators together to define the graph.

ningxinhu: Then you add the operations, you specify the flow of computation graph. ::works through the example::
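A condensed sketch of the flow just described, building (tensor0 + tensor1) * (tensor2 + tensor3). Only navigator.ml.getNeuralNetworkContext() is quoted in these minutes; the other method names (createModel, addOperand, setOperandValue, addOperation, identifyInputsAndOutputs, finish) are assumptions based on the NNAPI-style POC.

    // Assumed POC method names; a sketch, not a spec.
    const nn = navigator.ml.getNeuralNetworkContext();
    const model = await nn.createModel();

    const float32Tensor = { type: nn.TENSOR_FLOAT32, dimensions: [2, 2, 2, 2] };
    // Operands 0..3 are tensor0..tensor3; 4..6 hold the two sums
    // and the final product.
    for (let i = 0; i < 7; ++i) model.addOperand(float32Tensor);

    // tensor0 and tensor2 are constants: upload their data now as
    // ArrayBufferViews. tensor1 and tensor3 are runtime inputs.
    model.setOperandValue(0, new Float32Array(16).fill(1));
    model.setOperandValue(2, new Float32Array(16).fill(2));

    // Connect the operators: 4 = 0 + 1, 5 = 2 + 3, 6 = 4 * 5.
    model.addOperation(nn.ADD, [0, 1], [4]);
    model.addOperation(nn.ADD, [2, 3], [5]);
    model.addOperation(nn.MUL, [4, 5], [6]);

    model.identifyInputsAndOutputs([1, 3], [6]);
    await model.finish();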

myles_: Why isn't this a programming language?

ningxinhu: Someone has invented a language to do it, in Python. This is supposed to define a common model for hardware dispatch. In this model we just define the computation graph (a DAG), then you compile it for different hardware. The API allows you to define the workload

Michael_McCool: Another option is to use a string instead of the graph.

Michael_McCool: Dynamic programming allows you to glue graphs together. Having a whole language is an option, but it has its own problems. OpenCL uses a string for the program, and therefore views the program as data, so you can send it around. The graphs can be serialized to a blob

Michael_McCool: The fact that it's close to TensorFlow is valuable.

ningxinhu: It's close to native APIs

Michael_McCool: If you have an object that represents the graph, you can export and import operations

sangwhan: Why do we want an export API if we don't have a filesystem API?

sangwhan: each model will have a URL anyway

Michael_McCool: The serialized blob would be opaque.

Michael_McCool: You wouldn't have to define what's in the blob as long as there is interoperability

Michael_McCool: Okay, maybe that's not true

ningxinhu: You can also specify a device selection preference

ningxinhu: Execution model: it's like a tight loop. After you compile your code, you can upload your input data using an ArrayBufferView. Lines like "execution.setInput(0, thingy); execution.setInput(1, thingy)"

ningxinhu: Your output can be an input for post-processing for WebAssembly code

Michael_McCool: If you wanted to support multiple devices, wouldn't the compilation need to know which device it will be run on?

ningxinhu: yes

ningxinhu: The last step: Do the work, returns a promise

ningxinhu: "execution.startCompute()"

ningxinhu: so that's it

Michael_McCool: You could do multiple computations in parallel using Promise.all()

ningxinhu: yes
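A minimal sketch of that execution loop, with two executions running concurrently via Promise.all(). setInput() and startCompute() are quoted in these minutes; createExecution() and setOutput() are assumed POC names.

    // Two independent executions of one compiled model, run in parallel.
    const e1 = await compilation.createExecution(); // assumed name
    e1.setInput(0, inputA1); // ArrayBufferViews for the two model inputs
    e1.setInput(1, inputB1);
    e1.setOutput(0, output1); // assumed name

    const e2 = await compilation.createExecution();
    e2.setInput(0, inputA2);
    e2.setInput(1, inputB2);
    e2.setOutput(0, output2);

    await Promise.all([e1.startCompute(), e2.startCompute()]);
    // output1 and output2 now hold results (see tidoust's naming note below).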

<tidoust> [not willing to bikeshed on the method names for now, but I would expect a Promise to a method called "startCompute" to resolve when the computation has started, not when it's over. In other words, I'd simply call the method "compute" if the results of the computation are available when the promise resolves]

ningxinhu: Here, the output is an ArrayBufferView, so you have to be careful with validation. You have to map your input data to the output data. Ideally the promise resolution would incorporate this, but that would create a new object, which is an issue. For the POC, we followed this simple design, but we're flexible.

<sangwhan> 1. Doesn't the current model suggest double memory allocation?

<sangwhan> 2. What is "fast single answer"? Is this going to be defined in the specification later on?

<sangwhan> 3. Choice of integer constants over string enums. Sad panda face.

<sangwhan> 4. Choice of procedural programming practices for model definition seems inconvenient, especially given that this is pretty much for inference only.

<sangwhan> 5. Output layer being re-used isn't nice. Neither does it follow idiomatic javascript practices.

<sangwhan> 6. Plans to define/make consistent poorly defined ops? e.g. "EMBEDDING_LOOKUP"

<sangwhan> 7. Constructors for cases where it makes sense? (for initializations that probably won't reject)

sangwhan: When you allocate float32 arrays, they get copied to the GPU, right? Double allocation! Shouldn't those be other types?

ningxinhu: it's not an allocation here; you're just describing the graph. When you compile the model, it will then allocate device memory.

sangwhan: Embedding lookup is poorly defined. Will we define this?

ningxinhu: Our POC just calls into the Android API. We didn't define anything.

myles_: so it isn't interoperable?

gregwhitworth: we will define it in an interoperable way.

sangwhan: It's a lot of code. If we're just loading from a URL, shouldn't have to write that much code.

ningxinhu: We want to follow the extensible web manifesto

<gregwhitworth> to extend on what I meant - this is a rough API shape, it's not a spec - it's a POC so they want to show what they did - but in no way is this actually defined.

myles_: Is this the basis for a future API, or is this just something to show that it's possible on the web?

anssik: just something to show that it's possible on the web

ningxinhu: There are two levels of ML APIs: a Model API like Core ML, WinML, or TensorFlow Lite, versus an Accelerate API like BNNS, MPS, DirectML, or Android NN. Our POC is the "Accelerate API" type

sangwhan: I'd like this to be interoperable with other things that come along.

sangwhan: There are some "island" specs in the W3C which is unfortunate.

Barbara: I don't understand.

sangwhan: Tensors are tricky because we already have DOMMatrix. This is something you probably want to talk with TC39 about too. We want interop and consistency across different platforms. Building a primitive type specifically for this is possible, but it's risky.

ningxinhu: If we want to introduce a new data type, it's important to make sure that it's interoperable with other systems. Just like in simd.js

sangwhan: You can either make them totally opaque, or you can build primitives. Primitives will take a lot of time.

ningxinhu: We thought about that. We could define a tensor type in JavaScript. We tried to find how to map it to a native API. We want to use that as a model.

ningxinhu: I don't know how to map it to a native API.

myles_: you could do something like BigInt

sangwhan: yes

Dave: Do you need anything else from the W3C staff?

anssik: I would love to have someone from W3C staff monitor this work. Staying in the loop is good.

anssik: feel free to join.

anssik: We want this to be on the standards track.

Michael_McCool: Procedural issues?

anssik: We addressed them all.

gregwhitworth: Do we want to have face-to-faces?

anssik: It depends on the energy level. If high energy, yes.

anssik: It's a great idea. Can someone host?

ningxinhu: Should we have regular updates with the group to track the progress?

gregwhitworth: I've spun up threads during the meeting with notebooks and Azure. I'll report back to help refine our interest. Anyone else should do it too.

gregwhitworth: We should have at least one.

anssik: unfortunately people are from all over the world

Next steps

Barbara: Developer outreach. If the W3C is doing developer outreach, this would be an interesting one we would want to communicate out to developers. What can this group provide to help with that conversation?

anssik: Before developers can try this out, they need native frameworks installed. But application-level use cases would be helpful at this point.

anssik: Teleconferences are an opportunity, but we can't accommodate everyone's sleep schedule.

anssik: F2F is a good idea.

Resolved: Try to figure out if we can meet in 6 months

<anssik> Community Group page (click Join!)

anssik: Please, everyone join the group please please please

anssik: We are interested in use cases and API sketches and anything else. I'll set up a new github repo for this kind of incubation.

anssik: We have a GitHub organization "Web Machine Learning"

anssik: For W3C support, can CGs use your telecon infrastructure?

officially, no, but of course you can.

gregwhitworth: Can you write a specification?

anssik: yes.

<sangwhan> This fun issue is for later: https://github.com/webmachinelearning/meetings/issues/2

anssik: We don't want to start moving before we do our homework and gather use cases. We need to do the groundwork

gregwhitworth: Please use bikeshed

anssik: We can set up the boilerplate

gregwhitworth: The issues can go in the spec

anssik: Use cases should go in the spec

gregwhitworth: yes

anssik: RESOLVED: Create a draft spec, with boilerplate and use cases

anssik: Thank you everyone, and myles, he is suuuuuuuuuper awesome the best incredible fantastic

Summary of resolutions

  1. We should look into pre-canned models.
  2. Try to figure out if we can meet in 6 months
Minutes manually created (not a transcript), formatted by Bert Bos's scribe.perl version 2.49 (2018/09/19 15:29:32), a reimplementation of David Booth's scribe.perl. See CVS log.
