IRC log of webmachinelearning on 2019-09-17

Timestamps are in UTC.

04:26:06 [RRSAgent]
RRSAgent has joined #webmachinelearning
04:26:06 [RRSAgent]
logging to https://www.w3.org/2019/09/17-webmachinelearning-irc
04:26:11 [Zakim]
Zakim has joined #webmachinelearning
04:26:17 [anssik]
RRSAgent, make logs public
04:26:28 [anssik]
Meeting: WebML CG F2F Day 1 – 17 September 2019
04:26:32 [anssik]
Chair: Anssi
04:26:36 [anssik]
Agenda: https://github.com/webmachinelearning/meetings/blob/master/2019-09-17-fukuoka/
04:26:45 [niwamoto]
niwamoto has joined #webmachinelearning
04:26:48 [anssik]
Scribe: Anssi
04:26:50 [anssik]
scribeNick: anssik
04:27:10 [anssik]
Present+ Anssi_Kostiainen
04:27:26 [ningxin_hu]
Present+ Ningxin_Hu
04:27:37 [takio]
takio has joined #webmachinelearning
04:27:43 [niwamoto]
Present+ Narifumi_Iwamoto
04:27:55 [Youngsun_Ryu]
Present+ Youngsun_Ryu
04:27:58 [David_Marsh]
Present+ David_Marsh
04:28:10 [riju]
riju has joined #webmachinelearning
04:28:20 [HelloFillip]
Present+ Phil_Laszkowicz
04:28:24 [Kangchan]
present+ Kangchan_Lee
04:28:25 [riju]
Present+ Rijubrata_Bhaumik
04:28:26 [takio]
Present+ Takio_Yamaoka
04:32:15 [anssik]
RRSAgent, draft minutes v2
04:32:15 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
04:33:04 [anssik]
TOPIC: Welcome and intros
04:33:24 [anssik]
anssik: welcome to the WebML CG's 2nd F2F, happy to see both new and old faces around
04:33:58 [Bruce]
Bruce has joined #webmachinelearning
04:34:37 [jdarpinian]
jdarpinian has joined #webmachinelearning
04:34:37 [anssik]
... on the agenda today on Day 1: intros, custom operations, MLIR (Multi-Level Intermediate Representation) exploration, Operation set
04:34:48 [anssik]
... on Friday, Day 2: exploratory topics, standards track next steps, W3C workshop planning
04:35:00 [anssik]
anssik: Let's do a roundtable 30-sec intros: your affiliation & interests toward the group
04:35:37 [dino]
dino has joined #webmachinelearning
04:35:45 [anssik]
anssik: I'm the chair working for Intel
04:36:06 [anssik]
nikhil: working for Google, Deeplearn.js co-author, want to bring the ecosystem forward, not familiar with W3C
04:36:23 [anssik]
ningxin_hu: Intel, CV and ML interest, OpenCV.js background
04:36:42 [anssik]
kenneth: Intel architect, W3C TAG rep, overseeing the architecture of the Web
04:37:02 [jdarpinian]
jdarpinian has joined #webmachinelearning
04:37:22 [Cortiz]
Cortiz has joined #webmachinelearning
04:37:33 [anssik]
Youngsun: Samsung, interested in ML in general
04:38:13 [whsieh]
whsieh has joined #webmachinelearning
04:38:13 [kimwooglae_]
kimwooglae_ has joined #webmachinelearning
04:38:13 [anssik]
Dave: Payments network with many members, just interested in ML
04:38:43 [anssik]
Chunming: university affiliated
04:38:48 [Chunming]
Chunming has joined #webmachinelearning
04:39:31 [anssik]
Dean: Apple, interested in everything the group does; not an ML specialist but I'll do my best connecting Apple experts; I work on the WebKit project and Safari
04:40:11 [anssik]
Philip: Omnijar, working with DL for 13 years, with large companies, automotive, NVIDIA, ARM; interested in continuing to move commercial projects to the Web
04:40:44 [anssik]
Riju: Intel, Chromium developer, sensors, NFC, media capture, OpenCV, not using ML currently
04:41:15 [anssik]
Kangchan: ETRI Korea, working on standards in ITU, ML as a Service
04:42:04 [anssik]
Wenson: Apple, WebKit, interest in ML
04:42:26 [anssik]
Diogo: Brazil W3C office, NLP background and interest
04:42:38 [Bruce]
Bruce has joined #webmachinelearning
04:42:58 [anssik]
Takio: Yahoo Japan, sensor processing, transcoding, interest in CV w/ ML
04:43:34 [anssik]
Sangwhan: TAG member, used to work for Opera, now at a CV startup not affiliated with the Web; I also do NLP
04:43:53 [anssik]
Frank: Inria France, curious about the group
04:43:59 [tung]
tung has joined #webmachinelearning
04:44:40 [anssik]
Belem: Intel, responsible for the WebML polyfill
04:45:16 [anssik]
James: Google, working on Chrome, WebGL/GPU, interested in ML in Chrome
04:46:08 [anssik]
TOPIC: Custom operations
04:49:13 [hyojin]
hyojin has joined #webmachinelearning
04:49:15 [Big_Screen]
https://docs.google.com/presentation/d/1KGRc1RnnYt_1JK2Pk6r2xRkD60v4F8jc4beHMv0crng/edit#slide=id.p
04:50:44 [anssik]
q+ to say something
04:50:53 [anssik]
ack anssik
04:50:53 [Zakim]
anssik, you wanted to say something
04:51:17 [anssik]
[ningxin presents the slides]
04:53:17 [anssik]
ningxin_hu: the ML field is fast moving. Model architectures and ops are evolving quickly. This leads JS ML frameworks to have big op sets (e.g. TF.js has over 200 ops)
04:53:25 [anssik]
... Today, frameworks' ops are implemented in WebGL, WASM, and WebGPU
04:53:32 [anssik]
... WebNN’s built-in op set that focuses on hardware acceleration will be small and grow slowly
04:53:52 [anssik]
... Problem: this demands a way for library authors to write ops that can interop with built-in ops.
04:54:08 [anssik]
... Option 1: WebNN built-in ops interop with framework ops in WASM and WebGL/WebGPU (focus of this investigation)
04:54:49 [anssik]
Kenneth: can you mix Wasm and WebNN ops?
04:55:14 [anssik]
sangwhan: there's a GPU-CPU transfer with a performance cost
04:55:38 [anssik]
ningxin_hu: Option 2: WebNN provides a way to write custom ops via a domain-specific language (e.g. Kai's proposal) (future exploration)
04:55:57 [anssik]
ningxin_hu: next subtopic, WebNN-WebGPU Interop
04:56:18 [anssik]
[showing example code of Conv + Add + Relu by TF.js WebGPU]
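[scribe note: the slide's code is not captured in the log; below is a minimal TF.js sketch of the Conv + Add + Relu chain on the WebGPU backend, using the dims from the demo output later in the log; illustrative only, not the slide's actual code]
  import * as tf from '@tensorflow/tfjs';
  // assumes the experimental TF.js WebGPU backend has been loaded and registered
  await tf.setBackend('webgpu');
  const input = tf.randomNormal([1, 100, 100, 100]);  // NHWC input
  const filter = tf.randomNormal([3, 3, 100, 100]);   // HWIO filter
  const bias = tf.randomNormal([100]);
  // conv2d -> add -> relu, each op dispatched as a WebGPU compute shader
  const result = tf.relu(tf.add(tf.conv2d(input, filter, 1, 'same'), bias));
  await result.data();  // readback forces the GPU work to complete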
04:56:42 [anssik]
RRSAgent, draft minutes v2
04:56:42 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
04:57:41 [anssik]
[showing example of compile WebNN op for WebGPU device]
04:59:48 [anssik]
[scribe sees ~30 participants, not all names recorded in minutes]
05:00:10 [HelloFillip]
HelloFillip has joined #webmachinelearning
05:00:25 [anssik]
[showing example of execute WebNN's op with WebGPU op]
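[scribe note: the compile/execute code is likewise not captured in the log; here is a hypothetical sketch of the pattern being demonstrated: compile a WebNN conv2d against the same WebGPU device, then share a WebGPUBuffer with the framework's own add/relu shaders so the tensor never leaves the GPU; the WebGPU calls are real, but the WebNN names (getNeuralNetworkContext, createCompilation, createExecution, setInput/setOutput, startCompute) and the convModel/inputBytes/outputBytes variables are POC-era assumptions, not a settled API]
  // Hypothetical POC-era WebNN API shapes; illustrative only.
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();
  const nn = navigator.ml.getNeuralNetworkContext();  // assumed entry point
  // compile the conv2d model for the WebGPU device (assumed option shape)
  const compilation = await nn.createCompilation(convModel, { device });
  const inputBuf = device.createBuffer({ size: inputBytes, usage: GPUBufferUsage.STORAGE });
  const outputBuf = device.createBuffer({ size: outputBytes, usage: GPUBufferUsage.STORAGE });
  const execution = await compilation.createExecution();  // assumed
  execution.setInput(0, inputBuf);    // share the WebGPUBuffer directly, no CPU copy
  execution.setOutput(0, outputBuf);  // conv2d result stays on the GPU
  await execution.startCompute();
  // the framework then encodes its own add/relu compute passes that read outputBuf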
05:00:29 [junwei]
junwei has joined #webmachinelearning
05:02:16 [anssik]
-> https://docs.google.com/presentation/d/1KGRc1RnnYt_1JK2Pk6r2xRkD60v4F8jc4beHMv0crng/ WebNN Interop Investigation slides
05:03:16 [junwei]
junwei has joined #webmachinelearning
05:03:31 [anssik]
[ningxin showing a demo on his laptop]
05:03:52 [anssik]
ningxin_hu: custom build of Chromium on macOS
05:04:03 [lisha]
lisha has joined #webmachinelearning
05:05:08 [kimwooglae]
kimwooglae has joined #webmachinelearning
05:05:20 [ningxin_hu]
https://docs.google.com/presentation/d/1KGRc1RnnYt_1JK2Pk6r2xRkD60v4F8jc4beHMv0crng/edit?usp=sharing
05:05:40 [Franck]
Franck has joined #webmachinelearning
05:05:44 [ningxin_hu]
conv input dims: [1,100,100,100] and filter dims: [3,3,100,100]
WebGPU conv2d/add/relu elapsed time: 60.81 ms
WebNN conv2d interops with WebGPU add/relu via ArrayBuffer elapsed time: 39.67 ms
WebNN conv2d interops with WebGPU add/relu via WebGPUBuffer elapsed time: 22.11 ms
WebNN conv2d with fused add/relu elapsed time: 21.11 ms
05:06:32 [anssik]
[the pasted text above is output from a TF.js test case with the backend set to WebGPU]
05:07:50 [anssik]
sangwhan: is the Chromium source available?
05:07:57 [anssik]
ningxin_hu: that's available
05:08:58 [anssik]
nikhil: how fast is the readback?
05:09:08 [anssik]
ningxin_hu: not yet tested that
05:09:24 [anssik]
dino: you can't use MPS, why is that?
05:09:40 [anssik]
ningxin_hu: different memory layout internally
05:10:15 [anssik]
dino: can you show the conv operations, what they are doing?
05:10:24 [yoshiaki_]
yoshiaki_ has joined #webmachinelearning
05:10:33 [anssik]
... I expected to see a custom op, i.e. shader code
05:10:45 [anssik]
ningxin_hu: shader code is inside TF.js
05:11:47 [anssik]
ningxin_hu: subtopic, POC Implementation on MPS
05:11:57 [anssik]
... Reuse the same MTLDevice associated with WebGPUDevice.
05:12:07 [anssik]
... Get the MTLBuffer associated with input and output WebGPUBuffer.
05:12:14 [anssik]
... Allocate MPSImage for inputs with MTLDevice.
05:12:21 [anssik]
... Create MTLCommandBuffer from MTLQueue associated with WebGPUDevice.
05:12:28 [anssik]
... Encode a compute shader that copies and reorders data from MTLBuffer to MPSImage (MPSImage layout).
05:12:48 [anssik]
dino: is this a custom WebGPU implementation? Where do you decide to use MPS?
05:13:15 [anssik]
ningxin_hu: TF.js is running on top of WebGPU
05:13:25 [anssik]
... this is an implementation of WebNN, not TF, on a fork of Chromium
05:13:44 [anssik]
... using WebGPU infra; underneath, it has a platform implementation, e.g. MPS
05:13:55 [anssik]
ningxin_hu: Encode MPSNNGraph/MPSCNNKernel to MTLCommandBuffer
05:14:02 [anssik]
... Encode a compute shader that copies and reorders data from output MPSImage to output MTLBuffer.
05:14:10 [anssik]
... Commit MTLCommandBuffer.
05:14:37 [anssik]
ningxin_hu: Performance Summary
05:15:24 [anssik]
... Inference time (ms)
05:16:00 [anssik]
... WebGPU conv/add/relu 61.31
05:16:14 [anssik]
... WebNN conv interops with WebGPU add/relu via ArrayBuffer 43.42
05:16:28 [anssik]
... WebNN conv interops with WebGPU add/relu via WebGPUBuffer 23.06
05:16:34 [anssik]
... WebNN conv with fused add/relu 21.25
05:16:52 [anssik]
ningxin_hu: Copying/Reordering Optimization
05:17:01 [anssik]
... Inference time (ms)
05:17:16 [anssik]
... WebGPU conv x2 112.96
05:17:22 [anssik]
... WebNN conv + WebGPU conv 67.33
05:17:38 [anssik]
... WebNN conv x2 with reordering 24.53
05:18:00 [yoshiaki]
yoshiaki has joined #webmachinelearning
05:18:16 [yoshiaki_]
yoshiaki_ has joined #webmachinelearning
05:18:45 [anssik]
sangwhan: with this design, vendors target a single type of accelerator; what are the implications?
05:19:12 [anssik]
... if you were to implement this in a general browser, not OS bound, you'd have multiple accelerators, what's the story?
05:19:41 [anssik]
... you'd need to have compilers for every accelerator
05:19:51 [anssik]
... implementability question
05:20:20 [anssik]
... if you'd use the platform APIs, it'd be fine, but they can be limited in terms of support
05:20:48 [anssik]
dino: Apple's perspective is we want to offload to the hardware as much as possible
05:21:26 [anssik]
sangwhan: when testing the POC, did the inference affect the ref(?)
05:21:47 [anssik]
dino: same issue with WebGL/GPU
05:22:20 [anssik]
... issue if the background task freezes the computer
05:22:44 [anssik]
... battery and perf benefits from going to ML hardware
05:23:01 [anssik]
sangwhan: would be nice if everyone had these purpose-built accelerators
05:23:11 [anssik]
... curious about the implications of that
05:23:24 [anssik]
dino: not sure which Android devices have AI accelerators
05:23:48 [anssik]
sangwhan: based on testing, could be NEON accelerated, or GPU, whatever the vendor had time to do
05:24:20 [anssik]
nikhil: also good to benchmark readback times from those accelerators
05:24:22 [yoshiaki]
yoshiaki has joined #webmachinelearning
05:25:17 [anssik]
[skipping slides to Summary of WebNN-WASM interop slide]
05:25:28 [anssik]
ningxin_hu: WebNN ops allow access to vendor-specific CPU acceleration
05:25:36 [anssik]
... Interop between WASM ops and WebNN ops has overhead
05:25:42 [anssik]
... Memory copying between WASM heap and WebNN backend
05:25:49 [anssik]
... Memory reordering, e.g. MKL-DNN blocked layout
05:25:58 [anssik]
... Executing a WebNN op chain with opaque operands can avoid unnecessary overhead
05:26:24 [anssik]
ningxin_hu: Proposal
05:26:43 [anssik]
... Support key ops that access hardware acceleration (#17), e.g. conv2d and matmul
05:26:57 [anssik]
... Support compiling and executing ops for devices (new issue?), CPU or GPU
05:27:09 [anssik]
... Support interop with WebAssembly and WebGPU compute shader
05:27:18 [anssik]
... Sharing ArrayBuffer with WASM op
05:27:27 [anssik]
... Sharing WebGPUBuffer with WebGPU op (new issue?)
05:28:25 [anssik]
... Support executing ops chain with opaque operands (#11)
05:28:33 [anssik]
... - Leverage device optimized memory layout and avoid unnecessary memory reordering
05:28:41 [anssik]
... Explore custom op support by DSL (new issue?)
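[scribe note: a hypothetical sketch contrasting the two paths summarized above: per-op interop that crosses the WASM heap boundary for every op vs. an op chain with opaque operands that keeps intermediates in the backend's optimized layout; every name here (wasmPreprocess, webnnConv2d, wasmAddBias, buildChain) is an assumption for illustration]
  // (a) per-op interop: each boundary crossing copies and reorders memory
  let t = wasmPreprocess(input);  // Float32Array view into the WASM heap
  t = await webnnConv2d(t);       // copy in, reorder to backend layout, copy out
  t = wasmAddBias(t, bias);       // back in the WASM heap
  // (b) opaque operands: chain WebNN ops so intermediates never leave the
  // backend layout (e.g. MKL-DNN blocked); one copy in, one copy out
  const chain = buildChain([conv2dOp, addOp, reluOp]);  // assumed helper
  const out = await chain.compute(input);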
05:30:03 [anssik]
dino: how do these numbers compare with true native frameworks, CoreML, TensorFlow native?
05:31:29 [anssik]
ningxin_hu: 10% WebNN overhead over native
05:31:48 [anssik]
nikhil: comparing TensorFlow/WebGL vs. CUDA, CUDA is 10x faster
05:32:06 [anssik]
???: what kind of model do you use?
05:32:46 [anssik]
ningxin_hu: we have multiple models for this experiment, we use conv kernels, MobileNet, Inception, ResNet50
05:33:12 [anssik]
... on our website we have bigger models; the model size constrains us
05:34:09 [anssik]
nikhil: CPU vs. non-CPU accelerators are an issue; how to consider them in the context of custom ops, and how to understand readbacks
05:34:40 [anssik]
???: what is the focus in terms of hardware targets of this group?
05:34:58 [anssik]
ningxin_hu: we have experience on an Android phone with an AI accelerator, close to native perf
05:35:18 [yoshiaki]
yoshiaki has joined #webmachinelearning
05:35:59 [anssik]
???: what is the scope of this work? Recommendation: define a higher-level abstraction to be flexible
05:36:25 [anssik]
[hearing no concerns for the proposed tasks to investigate further]
05:36:55 [anssik]
ningxin_hu: I'm willing to take "Support compiling and executing ops for devices (new issue?)" task
05:37:28 [anssik]
... maybe Kai could help with "Explore custom op support by DSL (new issue?)"
05:38:34 [anssik]
dino: Apple could look at "Support key ops that access hardware acceleration (#17)" and provide feedback for that
05:39:15 [anssik]
nikhil: just filed issues for conv2d and matmul
05:39:28 [anssik]
https://github.com/webmachinelearning/webnn/issues/27
05:39:34 [anssik]
https://github.com/webmachinelearning/webnn/issues/28
05:39:57 [anssik]
... will move forward with issues #27 and #28
05:40:45 [anssik]
Topic: MLIR
05:41:07 [anssik]
nikhil: disclaimer, I'm not a compiler person, but I've talked with Google experts in that field
05:41:22 [anssik]
RRSAgent, draft minutes v2
05:41:22 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
05:42:07 [anssik]
nikhil: we're not proposing MLIR, just exploring this area
05:42:14 [jdarpinian]
do you have a link to the slides?
05:43:00 [anssik]
-> https://docs.google.com/presentation/d/1vv-pFsTqAVITtx3RwmEs-g7YRK1PD9APSIuice88aSI/ MLIR slides by Nikhil
05:43:20 [anssik]
[nikhil presenting MLIR slides]
05:44:49 [anssik]
???: XLA compiler spits out LLVM IR already?
05:44:54 [anssik]
nikhil: correct
05:45:04 [yoshiaki]
yoshiaki has joined #webmachinelearning
05:46:11 [anssik]
... Domain specific optimizations, progressive lowering
05:46:34 [anssik]
... The TensorFlow compiler ecosystem has many “Graph” IRs, each with challenges
05:47:47 [anssik]
... Domain Specific IRs, Great! High-level domain-specific optimizations; Progressive lowering encourages reuse between levels
05:48:24 [anssik]
... Not great!
05:48:29 [anssik]
... Huge expense to build this infrastructure
05:48:34 [anssik]
... Reimplementation of all the same stuff:
05:48:40 [anssik]
... pass managers, location tracking, use-def chains, inlining,
05:48:47 [anssik]
... constant folding, CSE, testing tools, ….
05:48:51 [anssik]
... Innovations in one community don’t benefit the others
05:49:19 [anssik]
nikhil: let's talk about what MLIR is
05:50:21 [anssik]
... TensorFlow
05:50:21 [anssik]
... "An open source machine learning framework for everyone"
05:50:21 [anssik]
... Multi-Level Intermediate Representation
05:50:21 [anssik]
... "An open source program optimization framework for ... everyone"
05:50:21 [anssik]
... Abstraction Building Toolkit
05:50:22 [anssik]
... Reusable set of compiler passes for higher abstractions
05:50:22 [anssik]
... Targeting analysis/program optimization/code generation
05:50:22 [anssik]
... Open governance and part of LLVM
05:50:48 [anssik]
nikhil: MLIR has wide support across industry
05:51:09 [yoshiaki_]
yoshiaki_ has joined #webmachinelearning
05:51:19 [anssik]
nikhil: Extensible Operations Allow Multi-Level IR
05:52:37 [jc]
jc has joined #webmachinelearning
05:52:43 [anssik]
... MLIR “Dialects”: Families of defined operations
05:53:16 [anssik]
... Example Dialects:
05:53:16 [anssik]
... TensorFlow, LLVM IR, XLA HLO, TF Lite, Swift SIL…
05:53:16 [anssik]
... Dialects can define:
05:53:16 [anssik]
... Sets of defined operations
05:53:16 [anssik]
... Entirely custom type system
05:53:16 [anssik]
... Customization hooks
05:53:16 [anssik]
... Constant folding, decoding
05:53:18 [anssik]
... Operation can define:
05:53:18 [anssik]
... Invariants on # operands, results, attributes, etc
05:53:18 [anssik]
... Custom parser, printer, verifier, …
05:53:37 [anssik]
nikhil: MLIR Type System - some examples
05:53:39 [yoshiaki]
yoshiaki has joined #webmachinelearning
05:53:58 [anssik]
... Scalars:
05:53:58 [anssik]
... f16, bf16, f32, … i1, i8, i16, i32, … i3, i4, i7, i57, …
05:53:58 [anssik]
... Vectors:
05:53:58 [anssik]
... vector<4 x f32> vector<4x4 x f16> etc.
05:53:59 [anssik]
... Tensors, including dynamic shape and rank:
05:53:59 [anssik]
... tensor<4x4 x f32> tensor<4x?x?x17x? x f32> tensor<* x f32>
05:53:59 [anssik]
... Others: functions, memory buffers, quantized integers, other ... TensorFlow stuff, ...
05:53:59 [anssik]
... Extensible!!
05:55:58 [anssik]
nikhil: Applications of MLIR
05:56:05 [anssik]
... TensorFlow Lite Converter
05:56:30 [anssik]
... One of the focuses: Usability
05:56:45 [anssik]
... Usability of TOCO was the top complaint among TFLite users
05:56:46 [anssik]
... Debugging
05:56:53 [anssik]
... Report why a model failed to convert
05:57:01 [anssik]
... Dialect types enable more checking & better reporting
05:57:03 [yoshiaki]
yoshiaki has joined #webmachinelearning
05:58:51 [anssik]
... [MLIR] for the Web?
05:59:08 [anssik]
... Some facts from MLIR investigations
05:59:14 [anssik]
... Operator expansion is about 25% YoY for TensorFlow
05:59:20 [anssik]
... Hardware vendors will implement dialects
05:59:50 [anssik]
... Open governance
06:00:29 [anssik]
riju: regarding operator expansion, is there a fallback mechanism, even if with a performance penalty?
06:00:37 [anssik]
nikhil: we'd need to use e.g. a Wasm polyfill
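[scribe note: a minimal sketch of that fallback idea: feature-detect a built-in op and fall back to a Wasm kernel when it is missing; 'navigator.ml', hasBuiltinConv2d, builtinConv2d, and wasmConv2d are assumed names, not spec'd]
  // Assumed names throughout; illustrates the polyfill/fallback pattern only.
  async function conv2d(input, filter, options) {
    if ('ml' in navigator && hasBuiltinConv2d()) {
      return builtinConv2d(input, filter, options);  // hardware-accelerated path
    }
    return wasmConv2d(input, filter, options);       // Wasm polyfill: portable, slower
  }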
06:01:36 [anssik]
... MLIR dialect on the web -- thoughts
06:02:13 [anssik]
... No backwards compatible guarantees today from MLIR
06:02:13 [anssik]
... A dialect could be invented that is backwards compatible
06:02:13 [anssik]
... What does maintaining this look like?
06:02:13 [anssik]
... Web sourcemaps => Python code
06:02:13 [anssik]
... Immediately tells you whether Python code will execute in the browser
06:02:28 [anssik]
kenneth: the Web needs backwards compat, and we do not really do versioning on the Web
06:02:46 [anssik]
nikhil: how could maintaining backwards compatibility happen?
06:03:00 [jc]
jc has joined #webmachinelearning
06:03:42 [anssik]
dino: LLVM IR is not well-suited as a web transport format
06:04:20 [yoshiaki_]
yoshiaki_ has joined #webmachinelearning
06:04:31 [anssik]
... a lot of lowering, what is the improvement?
06:05:18 [anssik]
RRSAgent, draft minutes v2
06:05:18 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
06:07:05 [anssik]
dino: what is the scope of the group: all models interoperating with all devices?
06:07:22 [sushrajaMSFT]
sushrajaMSFT has joined #webmachinelearning
06:08:39 [anssik]
... we could start with a set of ops everyone supports
06:09:05 [anssik]
nikhil: initially we wanted to support all ops
06:09:24 [anssik]
... then understood better growing the set slowly is a better approach
06:10:14 [anssik]
dino: our fear is, and I could be wrong, that the ecosystem becomes skewed toward TF models, so that those get hardware acceleration while other models might not
06:10:31 [anssik]
nikhil: as a group we can grow that set so that it does not happen
06:10:57 [anssik]
dino: TF is growing fast; how is hardware adding ops?
06:11:20 [anssik]
nikhil: I think hardware vendors add new ops more slowly
06:11:34 [anssik]
kenneth: do any ops go away with time?
06:12:02 [anssik]
riju: is there any kind of ranking within these ops, i.e. which are used the most?
06:12:07 [jc]
jc has joined #webmachinelearning
06:12:15 [anssik]
nikhil: TF has it, not sure if we can make that data public
06:13:56 [anssik]
Philip: Swift for TF was a good experience from a usability perspective
06:14:13 [anssik]
... ML is no longer a domain only for data scientists; we need good dev ergonomics
06:14:30 [Franck]
Franck has joined #webmachinelearning
06:14:43 [anssik]
ningxin_hu: which level of abstraction would the Web dialect of MLIR sit on?
06:15:35 [HelloFillip]
HelloFillip has joined #webmachinelearning
06:15:39 [anssik]
nikhil: lower-level things would evolve more slowly, but I'm not sure at this point which level the web dialect should be at
06:16:01 [anssik]
dino: generally Apple's position is that a high-level abstraction works well on the Web since it allows implementations to optimize
06:16:18 [anssik]
... we don't have a huge dataset, but JS is a good example
06:16:34 [anssik]
... not enough data yet on how Wasm is going
06:16:53 [anssik]
... if we did a Web dialect, it would be something like that, but we'd make it a bit higher-level than LLVM IR
06:17:37 [anssik]
nikhil: I'm wondering whether there's a level of abstraction between ops and LLVM IR we should target
06:19:44 [zkis]
zkis has joined #webmachinelearning
06:20:30 [anssik]
anssik: what would be good next steps for the group re MLIR tasks?
06:20:48 [anssik]
nikhil: talking to MLIR people, it seems a bit too early still, since MLIR is a moving target
06:21:46 [anssik]
... concretely, I can try to figure out which ops are used, how many times an op is called
06:22:30 [yuta]
yuta has joined #webmachinelearning
06:22:38 [anssik]
RRSAgent, draft minutes v2
06:22:38 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
06:24:01 [anssik]
RRSAgent, draft minutes v2
06:24:01 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
06:24:24 [HelloFillip]
HelloFillip has joined #webmachinelearning
06:24:30 [HelloFillip]
The link to Chris's talk on Swift for TensorFlow can be found here (as an example for other languages): https://www.youtube.com/watch?v=s65BigoMV_I
06:25:56 [anssik]
we'll defer Day 1's 3rd topic, "operation set", to Day 2 on Friday
06:26:09 [anssik]
thanks for attending, we'll see you again on Friday!
06:26:15 [anssik]
Topic: Adjourn
06:26:17 [anssik]
RRSAgent, draft minutes v2
06:26:17 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
06:26:22 [belem]
Thanks Anssi!
06:29:10 [anssik]
Present+ Nikhil_Thorat
06:29:12 [anssik]
RRSAgent, draft minutes v2
06:29:12 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
06:31:03 [jc]
jc has joined #webmachinelearning
06:32:09 [anssik]
Present+ Heejin_Chung
06:33:11 [anssik]
Present+ Diogo_Cortiz
06:33:36 [anssik]
Present+ Dean_Jackson
06:34:05 [anssik]
Present+ Wooglae_Kim
06:34:25 [anssik]
RRSAgent, draft minutes v2
06:34:25 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
06:36:25 [anssik]
Present+ Kenneth_Christiansen
06:37:15 [anssik]
Present+ Wenson_Hsieh
06:37:42 [dino]
dino has joined #webmachinelearning
06:38:25 [anssik]
Present+ Sangwhan_Moon
06:39:36 [anssik]
Present+ Belem_Zhang_(remote)
06:39:54 [anssik]
Present+ James_Darpinian_(remote)
06:39:59 [anssik]
RRSAgent, draft minutes v2
06:39:59 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik
06:41:02 [anssik]
Present+ Frank_?
06:41:13 [anssik]
RRSAgent, draft minutes v2
06:41:13 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/17-webmachinelearning-minutes.html anssik