00:07:33 logging to https://www.w3.org/2019/09/20-apa-irc
00:07:39 scribenick: ada
00:09:43 present+
00:10:30 present+
00:10:30 present+
00:10:35 NellWaliczek: One of the things I had noticed is that we don't have a shared understanding of each other's technologies, and it was enlightening to see the issues we are each investigating.
00:11:30 ... but it was hard to come across solutions. This is to give some background about 3D graphics; if we would like to follow up on a call after TPAC, ask ada and she can add it to the agenda.
00:12:18 ... To give background I will tie it to something we have experience with today: the DOM. In HTML you don't say "draw this pixel at this point"; you ask to draw by component: draw an input box, a div, etc.
00:13:09 ... It is declarative; it isn't imperatively asking the GPU to draw pixels. We describe the elements and style, and it is up to the UA to issue the GPU commands.
00:13:21 ... (aside from canvas)
00:14:06 present+
00:14:16 present+
00:14:17 ... Imperative rendering is the opposite: we take some buffers, send them to the GPU, and give it commands which draw to the screen pixel by pixel.
00:15:07 ... So for those who are unfamiliar with canvas: you cannot place content with style inside it; it is just an opaque block whose contents cannot interact with the page.
00:15:36 Meeting: APA WG TPAC 2019
00:16:04 ... It is necessary for 3D graphics, but without understanding that necessity we will spin our wheels when it comes to adding a11y to this.
00:16:51 ... I'm going to break down how we think about 3D rendering into its constituent parts: what data you need for 3D rendering, and what you send to the GPU for 3D rendering.
00:16:53 present+
00:17:26 Matt_King_: Are the drawing APIs standardised?
00:18:03 NellWaliczek: Yes, but not through the W3C; through Khronos (WebGL).
00:18:15 Ada: WebGL is a low-level drawing primitive.
00:18:25 It will draw triangles fast, that's all.
00:18:38 The layer between that and what devs write is the wild west.
00:19:42 NellWaliczek: You were asking about the graphics standards. Whilst WebGL is widely adopted, it is based on a fairly old native API called OpenGL, and there have been numerous developments in 3D graphics since then which WebGL cannot match.
00:20:02 chair: Janina
00:20:47 ... There are new standards that try to access this new functionality, such as WebGPU, which exposes functionality unavailable to WebGL. There is also WebGL 2, which is an improvement on WebGL.
00:20:49 Topic: XR
00:21:15 ... Vulkan and Apple's Metal are the native APIs WebGPU is aimed at targeting.
00:21:37 WebGPU is still a work in progress, so for all intents and purposes WebGL is the only thing available to us.
00:23:14 present+
00:23:42 ... I could talk to you for hours about the interesting side tracks; this is just to give a sense of the acronyms in play. Before I dig into the graphics primitives I would like to talk about the relationship between WebGL and WebXR. I get asked about this a lot; it is not unique to this group.
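[Illustrative sketch, not from the meeting: the imperative model just described, where the page pulls a low-level context out of a canvas and issues the drawing commands itself. The canvas lookup and colours are assumptions.]

    // Declarative: the UA decides how to paint <button>Buy now</button>.
    // Imperative: with a canvas, the page issues the GPU commands itself.
    const canvas = document.querySelector('canvas');
    const gl = canvas.getContext('webgl');   // Khronos WebGL context
    gl.clearColor(0, 0, 0, 1);               // choose a background colour
    gl.clear(gl.COLOR_BUFFER_BIT);           // wipe the pixel buffer
    // Drawing even one triangle also needs shaders, vertex buffers and draw
    // calls, which is why "more than a page of code" comes up later.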
00:23:45 ... The best way to think about this is that WebGL is a graphics language: it takes imperative data and turns it into pixels. WebXR does not do that, but WebXR could not function without WebGL. WebGL is for drawing the pixels; WebXR provides the information on where and how to draw those pixels.
00:24:09 ... and describing the view frustum, like a cropped-off pyramid with its top at your eyes.
00:24:30 ... WebXR describes the shape of this view frustum.
00:24:40 Matt_King_: Like a cone on my face.
00:24:43 NellWaliczek: Yes.
00:25:00 Joshue108: How does this relate to field of vision (FOV)?
00:25:16 NellWaliczek: It is roughly the same concept, but it also includes the near plane and far plane.
00:26:24 ... The point there is that you need all that information to know where to draw: you need to know where to draw and how far the user has moved from the origin of the space. When drawing for a headset you may have to draw at least stereoscopically, sometimes more, as some headsets have multiple screens per eye.
00:26:53 ... The API describes the view frustum for each panel and the location of each.
00:28:15 ... I just talked about a simplification of what the API provides. Once you have drawn those pixels they don't go on the monitor; they go on the displays in the headset (which run at a different frame rate to your monitor). The API then describes how to send the images to those screens.
00:28:47 ... The hardware will slightly move the images to account for some minor head motion; this is known as reprojection and stops people being ill.
00:29:46 Matt_King_: If you are a developer, does the developer make WebGL calls to WebXR or just take information from it?
00:29:58 NellWaliczek: They also submit the rendered pixels to the screen.
00:31:24 ... It is part of the RAF loop. On a monitor at 60fps the screen is wiped and redrawn; you can hook into this loop with requestAnimationFrame to move objects before the next draw.
00:32:10 ... New monitors can run at 144fps, which can be problematic for developers who have assumed 60fps, because their animations run extra fast.
00:33:33 ... For a VR headset plugged into a computer, the headset has to draw faster than 60fps to reduce the effects of VR sickness; it is a different frame rate. So it needs to provide its own requestAnimationFrame at its own refresh rate.
00:34:12 Matt_King_: Is RAF a WebGL API?
00:34:32 NellWaliczek: There is one on window and one in the WebXR Device API.
00:34:53 NellWaliczek: Once you have initialised the XR session, the RAF is on the XR session object.
00:35:45 ... The web developer will first ask if there is XR hardware to use ('isSessionSupported') so they know whether to add a button to the screen.
00:37:05 In the button handler you will call navigator.xr.requestSession; that is where the session begins, and it will set up a new session for you, ending any other. It is async, returning a promise which resolves to let you start setting up all the things you need in order to start rendering.
00:37:25 ... You will create an XRWebGLLayer.
00:38:34 ... It creates the buffers you will draw your content into; they are bound to the displays in the headset. It is important to render directly into those buffers, because copying pixels between buffers slows things down, which makes people sick.
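[Illustrative sketch of the session flow just walked through, using the WebXR Device API; the button helper, canvas variable, and async wrapper are assumptions.]

    // Feature-detect first, so the page knows whether to offer an "Enter VR" button.
    const supported = await navigator.xr.isSessionSupported('immersive-vr');
    // In the button's click handler (user activation), inside an async function:
    const session = await navigator.xr.requestSession('immersive-vr');
    // Make the WebGL context XR-compatible and hand its buffers to the headset displays.
    const gl = canvas.getContext('webgl', { xrCompatible: true });
    session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });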
00:39:17 sushrajaMSFT: At a higher level you can think of it as: any WebGL commands against that context will render directly to the headset.
00:40:09 NellWaliczek: The context is what you have to call the commands on to render into. You get it from the canvas; it may be a WebGL, WebGL 2 or WebGPU context.
00:40:37 ... If you think about it separately from the WebGL APIs, there is a 1-1 mapping between the canvas and the context.
00:40:56 ... In this case a canvas may have multiple contexts.
00:41:18 ... You pass in a canvas and it will pull out the contexts it needs to render the content to the headset.
00:42:22 ... Data can't be shared between WebGL contexts for security reasons.
00:43:20 Ada: If you send a canvas it generates one per panel.
00:43:28 Nell: One per additional one.
00:43:30 One per headset.
00:44:11 This may change, but right now to support WebGL it is just one.
00:44:24 q+ to ask if other content such as related semantics can be rendered to support generated XR session contexts that are not canvas
00:44:52 NellWaliczek: For the WebGL context associated with the XR device, the final buffer you draw into goes directly to the display; it doesn't get copied anywhere.
00:45:25 Ada: To clarify, it goes onto the pixel but shifted for reprojection.
00:45:59 NellWaliczek: Slightly shifted for many purposes; you draw a pixel and it goes to the identical spot on the display.
00:48:12 Matt_King__: In stereoscopic there is typically one panel per eye. The information for those panels is associated with the context from the canvas. When I get these XR session RAF callbacks, for each one I populate information into those contexts, which are attached to the canvas and the panels on the display, but I cannot share information between the canvas context and the headset context?
00:48:36 NellWaliczek: [confirms] Yes you can; information can be shared within one canvas.
00:50:13 NellWaliczek: We didn't talk about what you do in the RAF loop; that is the last piece of this puzzle. The first thing you do is ask: "hey session, where is the headset and each panel in 3D space?" You get a frustum for each panel.
00:50:45 ... You can create combined frustums, which we won't get into right now, for perf reasons.
00:51:12 ... You'll ask where the motion controllers are, so you can draw them in the correct place.
00:51:51 ... This is where we get to the render loop, which is graphics specific but not XR specific. Once it completes you then have pixels which can be displayed by the UA.
00:52:00 Matt_King__: How does this apply to audio?
00:52:09 NellWaliczek: It is part of the same black box.
00:52:47 kip: We also have the central position of the head, which can be used for positioning 3D audio.
00:53:30 NellWaliczek: We essentially have a ray which points out from between the eyes, which is used for spatialising the audio.
00:54:38 present+ Joanmarie_Diggs
00:54:41 Joshue108: Can content-related semantics be generated within those loops?
00:54:52 NellWaliczek: Yes, I'll talk about it in the context of rendering.
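[Illustrative sketch of the RAF loop just described; refSpace is assumed to come from session.requestReferenceSpace('local'), gl is the context from earlier, and drawScene stands in for the engine's render call.]

    function onXRFrame(time, frame) {
      const session = frame.session;
      session.requestAnimationFrame(onXRFrame);       // headset refresh rate, not the monitor's
      const pose = frame.getViewerPose(refSpace);      // "hey session, where is the headset?"
      if (!pose) return;
      const layer = session.renderState.baseLayer;
      gl.bindFramebuffer(gl.FRAMEBUFFER, layer.framebuffer);
      for (const view of pose.views) {                 // one view (frustum) per panel
        const vp = layer.getViewport(view);
        gl.viewport(vp.x, vp.y, vp.width, vp.height);
        drawScene(view);                               // engine draws with view.projectionMatrix etc.
      }
    }
    session.requestAnimationFrame(onXRFrame);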
00:55:01 present+ ada
00:55:44 NellWaliczek: I was talking before about the data that gets sent to the GPU; yesterday we were talking about the scene graph.
00:56:14 The scene graph is kind of like a DOM tree, but it is a totally made-up idea that is not standardised.
00:56:33 Different engines have their own different ways of describing it.
00:56:51 On native, these engines/middleware are things like Unity 3D.
00:56:55 Or Unreal.
00:57:09 It is a combination of an editor and a renderer.
00:57:43 On the Web the most well known is THREE.js, a JS library which has its own concept of a scene graph; also Babylon, Sumerian (and others).
00:58:35 Babylon and THREE.js are programmatic; Sumerian is a visual editor where the scene graph is visualised, where you get almost a WYSIWYG experience.
00:58:55 ... This is all middleware; it has nothing to do with the web.
00:59:26 ... When we talk about the scene graph, it is a made-up concept that describes the commands that should be sent to WebGL.
00:59:57 Matt_King__: A developer won't make WebGL calls; they will use a library?
01:00:56 NellWaliczek: Yes. WebGL is extremely verbose; it takes more than a page of code just to render a triangle. You use a 3D engine. And because you use an engine, you probably won't be using WebXR directly either.
01:01:33 Matt_King__: So any a11y standard would have to be supported by these middlewares, these 3D engines.
01:02:12 NellWaliczek: Almost, because that is where we are today. When looking to the future: file formats for 3D models.
01:02:18 Matt_King__: 3D hello worlds?
01:03:22 NellWaliczek: Correct. These 3D formats include geometry and texture but don't tend to include things like physics or scripting. Any animations they have will be on rails; they are static.
01:05:08 ... The history of 3D file formats is long and contentious. The most well known one, FBX, is only made available through Autodesk; it is proprietary and only they provide the encoders. There are others like OBJ, which is very simple and cannot have labels, and Collada, which never got traction.
01:06:29 ... The current darling is glTF ("GL Transmission Format"), along with USDZ; they are very similar, but USDZ is proprietary. Blender will convert models as needed.
01:07:27 ... There are three kinds of formats: editor formats, like Photoshop files; interchange formats, which can be shared between editors uncompressed; and glTF, which is the first open-source runtime format.
01:08:39 scribe: Joshue108
01:09:20 N: With the advent of glTF, where we want to codify scene graphs, we open the door for a future vision, soon to happen..
01:09:44 It is likely there will be a new HTML element, similar to a canvas, but which will take a file like glTF,
01:09:52 deferring pixel drawing to the browser.
01:10:21 So you will have geometry, textures etc. that will be communicated, so we may pack accessibility into glTF for example.
01:10:40 1) We can foresee UAs exposing a model element to draw with..
01:11:06 A UA can add a button that allows you to view in AR or XR etc., but it is declarative.
01:11:34 An author is saying "here is a scene in glTF" and asking the browser to draw it..
01:11:56 This would not be interactive, so we would need to prototype and hit test etc.
01:12:24 To make more advanced things you need to be able to script against the scene graph.
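[Illustrative sketch of the middleware route available today: three.js, one of the engines named above, parses a glTF file into its own scene graph and issues the WebGL calls. The file name and scene variable are assumptions.]

    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    new GLTFLoader().load('room.gltf', (gltf) => {
      scene.add(gltf.scene);   // gltf.scene is a three.js scene-graph subtree
    });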
01:12:34 This is new stuff; glTF was only finalised recently.
01:12:52 The format has an extension system; it has interoperability features.
01:13:15 Extension formats can be written, so a11y extensions could be added, also for scripting.
01:13:30 This would mean other 3D engines would have the ability to expose that info.
01:13:47 Rendering engines etc. could then parse this info to create more accessible stuff.
01:13:57 MK: Question about structures..
01:14:29 N: Yes, you can. The scene graph can contain tons of objects that won't get drawn as they are hidden, but as the user moves, items can be drawn and hidden as needed.
01:14:52 MK: Sounds like glTF is like combining HTML/CSS..
01:14:59 N: Yes, but not the JS.
01:15:22 MC: Accessibility could be a use case for things that have been thought about and where other use cases exist.
01:15:34 So we want to document a11y use cases and push this along?
01:15:49 N: In the short term I would not focus on the rendering engines.
01:15:55 But on the format, extensions etc.
01:16:02 present+ ada
01:16:09 W3C and Khronos do have arrangements and agreements.
01:16:19 q+ to talk about standardising semantic scene graphs
01:16:32 MC: So this is being done by Khronos, so we should talk with them.
01:16:58 MC: We need to talk with Dom HM, as we may want to delegate that within W3C or stimulate discussion with Khronos.
01:17:09 N: W3C hosted a games workshop..
01:17:14 JS: We have someone there...
01:17:37 N: Neil Trevett was keen and happy to work with the W3C.
01:17:55 JS: Matt Atkinson is working there and was supportive of glTF.
01:18:36 N: For this to work it cannot be drawn imperatively.
01:19:01 It needs to be declarative; we will see investment into this space in platforms and UAs.
01:19:17 N: This is simplistic now, but over time it will be exposed.
01:19:25 Investing now is smart.
01:19:41 MC: Accessibility likes declarative things; sounds like we need to do this.
01:19:57 N: We have to focus on the audio next, thanks.
01:20:57 Joshue108: I have a question, related to something Ada brought up yesterday, about semantic scene graphs and thinking about how a DOM tree can be used to annotate a semantic scene graph.
01:21:06 scribe: ada
01:22:14 NellWaliczek: My goals here are to give you the information you need to think about this so we can talk about it later.
01:23:19 Matt_King_: What does it mean to put semantics on a scene graph? Are they similar to element names and class names?
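[To make the extension idea concrete, a hypothetical sketch of accessibility metadata on a glTF node; the extension name and fields are invented for illustration, not an existing spec.]

    // One node from a glTF asset's "nodes" array, shown here as a JS object.
    const node = {
      name: "door_handle",
      mesh: 12,
      extensions: {
        EXT_a11y_label: {               // hypothetical accessibility extension
          label: "Front door handle",
          role: "button"
        }
      }
    };
    // A real extension would also be listed in the asset's extensionsUsed array.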
01:24:00 kip: The glTF file is like a snapshot of a dynamic system which at run time may get mangled to display the content.
01:24:26 Ada: Can one think of the glTF as a scene graph?
01:25:00 N: We talked about RAF callbacks etc.
01:25:28 NellWaliczek: The glTF part of glTF is a scene graph, which references external assets such as geometry and textures.
01:25:29 When you try to track the user's head etc., there exist audio APIs that you can use to generate spatialised sounds.
01:25:52 The data generated is handled by the OS, to make sure audio is fed to the correct device etc.
01:26:11 Handled by the OS, so this means in a render loop audio gets spatialised using this data.
01:26:16 That is what is outputted.
01:26:51 JS: No reason we can't support a rich audio environment like Dolby Atmos.
01:27:03 N: Another Khronos standard comes into play..
01:28:25 OpenXR. They have the power to implement what are effectively drivers for Dolby Atmos.
01:28:47 To get audio exposed through WebXR, the audio implementation has to talk to ?
01:29:12 Alex: There is support for listeners and emitters etc., and how should that be done.
01:29:28 Virtual listeners and emitters etc..
01:29:44 JS: That is not enough.
01:29:51 present+
01:30:01 Some sound sources are going to need a lot of channels.
01:30:15 Kip: I've implemented audio engines..
01:30:30 The hardware for playback is going to be binaural; it will be tracked.
01:30:50 HRTF is an impulse response, a computed model..
01:31:01 How does that get to your inner ear to give you cues?
01:31:05 "head-related transfer function"
01:31:26 As we can track your head in space.. for stereo there can be multiple, finding the angle relative to your head.
01:31:52 Then it works it out and simulates the binaural effect, doing it virtually.
01:31:56 KEMAR head..
01:32:22 JS: For regular headsets this may suffice.
01:32:49 N: Referring to Atmos: for every piece of hardware, the browser needs to be able to talk to it via web APIs.
01:33:04 JS: So what syncs these things?
01:33:37 Kip: The APIs give you enough to generate the simulation, but there is middleware, Web Audio etc., that helps to realise the sonic environment.
01:33:49 N: As a dev these things are not called directly.
01:34:00 You set up an environment that handles these things.
01:34:12 Alex: Described how things are moved around and processed.
01:34:28 N: You mean the dev's view?
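[Illustrative sketch of the spatialisation step Kip describes above, using the Web Audio API's HRTF panner and the head pose from WebXR; source and viewerPose are assumed to exist.]

    const audioCtx = new AudioContext();
    const panner = new PannerNode(audioCtx, { panningModel: 'HRTF' }); // head-related transfer function
    source.connect(panner).connect(audioCtx.destination);
    // Each frame, move the virtual listener to the head pose from getViewerPose():
    const p = viewerPose.transform.position;
    audioCtx.listener.positionX.value = p.x;
    audioCtx.listener.positionY.value = p.y;
    audioCtx.listener.positionZ.value = p.z;
    // The listener orientation (forwardX/Y/Z, upX/Y/Z) would be set from the pose as well.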
01:34:58 Alex: The engine handles things here.
01:35:02 JOC: The logic.
01:35:12 N: Calls are set up on your behalf.
01:35:46 Sushan: Operating system primitives, sound streams, and tying them to OS primitives etc.
01:36:33 N: There is an audio-only spatialised headset; I got that mixed up with Atmos.
01:36:45 Kip: So the Atmos format describes multiple sound sources.
01:36:53 JS: Yes, up to 128 channels.
01:37:26 Kip: We can use this technique to replace many sources virtually, replacing the binaural stuff to spatialise sound sources.
01:37:49 MK: The Magic Leap had multiple transducers around my head; it has a lot more.
01:38:04 CabR: Don't think so.
01:38:51 Alex: We can play tricks relative to head positioning.
01:39:33 JOC: So the correct usage of sound is vastly important for accessibility and the quality of the user experience.
01:39:44 Alex: There are effective things we can do.
01:39:57 JS: You can turn your head to locate things.
01:40:10 Alex: And we can do clever things.
01:40:23 q+ to comment at the end of the session
01:40:36 Sushan: Handling the amount of audio and channels is down to the OS and hardware drivers, to support systems with multiple outputs and hardware.
01:41:13 N: We've talked about semantics, declarative structure etc., how middleware plays a role, audio..
01:41:40 So this middleware stuff..
01:41:56 JOC: glTF plugs into WebGL, with the scene info.
01:42:15 N: We will have a composited experience in the future.
01:42:24 Let's talk about interaction..
01:42:53 That is a challenge; think of the complexity of mouse and touchscreen syncing.
01:43:08 Now we have a bunch of input mechanisms..
01:43:24 We are getting more.
01:43:36 The Web is deficient for speech input etc.
01:44:16 N: You can use input to move around your space..
01:44:39 In VR the space is nearly always larger than the physical space; you can teleport in these spaces.
01:44:54 N: How this is done is via different input sources.
01:45:21 N: There are platforms that map hand-held motion controllers to grab objects.
01:45:45 There is also selection at a distance: something you can aim at, select, pick up, move etc.
01:45:59 There is also the painting option.
01:46:17 MK: There can also be chat-type things.
01:46:26 N: Yes.
01:46:56 N: These input devices are not inherently accessible.
01:47:16 N: If you have limited motion, these controllers can be problematic.
01:47:52 N: There are native platform layers where AT can be plugged in.
01:48:07 N: Mentions the Microsoft one.
01:48:25 MC: For WoT we need to use APIs that can provide these functions.
01:48:40 N: We have been forced to generalise how this is done.
01:49:33 We have XRInputSource, which is the object type for this; it is accessed from the session.
01:49:54 N: Target ray and grip location methods..
01:50:14 N: Input sources can be parts of your body.
01:50:36 You can create input sources that you can call these methods on for particular objects.
01:50:54 q+ to ask about the benefits of generalisation
01:51:21 N: These are opportunities that are worth discussing.
01:51:41 MC: Do authors have to do anything special here?
01:51:53 MH: Any discussion on haptics?
01:52:06 N: Great question.
01:52:16 q+ to ask about input source software vs. hardware.
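[Illustrative sketch of the generalised XRInputSource model described above, including the haptics hook discussed next; refSpace and handleSelect are assumptions.]

    // Inside the frame loop: where is the user pointing, and where is the controller?
    for (const inputSource of session.inputSources) {
      const rayPose = frame.getPose(inputSource.targetRaySpace, refSpace);   // target ray (aiming)
      const gripPose = inputSource.gripSpace &&
                       frame.getPose(inputSource.gripSpace, refSpace);       // where to draw the controller
      // Haptics, where available, come via the Gamepad extensions on the input source.
      const pad = inputSource.gamepad;
      if (pad && pad.hapticActuators && pad.hapticActuators.length) {
        // pad.hapticActuators[0].pulse(0.8, 100);   // e.g. on a hit: intensity 0..1, duration in ms
      }
    }
    // A generic "select" fires whether the source is a trigger, a hand pinch, voice, etc.
    session.addEventListener('select', (event) => handleSelect(event.inputSource));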
01:52:23 N: Prior to WebXR we had WebVR.
01:53:22 Haptics have not been stripped out of the proposals..
01:53:51 When haptics are available they can be used in WebXR, such as the rumble pack on an Oculus controller.
01:53:59 Kind of generic use with current controllers.
01:54:10 MH: Can you capture the texture of objects?
01:54:46 N: You can simulate some things here; there are full-body suits etc.
01:55:15 Would be surprised if there was not work going on here.
01:55:44 N: There are challenges; where it is on the Gamepad API, we have that.
01:56:08 MH: Having more than just the rumble for the controller is important.
01:56:32 SL: Regarding input sources and mapping, how easy is it to have a software-based input?
01:57:29 N: Input source objects can be mapped to the grip button; however, right now there is only one button, if it is a Gamepad.
01:57:51 We fire generic select events; the thumbstick won't fire things like user-initiated actions etc.
01:58:18 Fake events can be fired, but not the user activation thing, as that is done by the browser.
01:58:32 SL: What about other interactions?
01:58:38 N: You can polyfill those.
01:59:09 MK: I have a high-level question about Web vs Native..
01:59:36 In the accessibility world we are trying to go across multiple ways of delivering experiences etc.
02:00:00 I'm wondering how much content that is browser based or ??
02:00:13 q+ to comment on web and beyond web
02:00:33 Is used today; how much are you living on the web with Oculus etc.?
02:00:50 N: Some of it. Regarding the glTF format, that is consumed by Unity and Unreal etc.
02:01:13 To get accessibility inside them, you have the benefit of it being a common file format.
02:01:19 q+ to ask about performance.
02:01:47 N: Extensions can be written; browsers can support secondary runtimes etc.
02:02:26 N: Lots of current browser-based APIs; there is a commonality between how native and web apps are built.
02:02:33 You need to solve the same problems.
02:02:55 So how much is web based vs native? We haven't hit CR yet!
02:03:22 At a turning point, not there yet.
02:05:18 People will use generic tools and not install bespoke random apps to do stuff.
02:06:05 JB: I've seen 360 hotel views.
02:06:34 And regarding Matt's question, W3C as a whole is coming across the question of whether we should be looking at the Web only or beyond that.
02:06:50 In WAI we are aware that some of what we need to look at is beyond the web proper.
02:07:24 Regarding the inclusive immersive workshop..
02:07:49 It is filling up, and in WAI, as we look at emerging web tech, we want to grow a community of experts.
02:07:55 This session was really good.
02:08:02 This content could be distilled and shared.
02:08:40 It will be a good primer; some may feel unprepared, and we want to make people feel welcome.
02:09:31 https://www.w3.org/2019/08/inclusive-xr-workshop/
02:14:20 Scribe: Joshue108
04:10:27 ScribeNick: jcraig
04:10:54 Topic: Web RTC joint meeting with APA
04:11:26 Scribe: jcraig
04:11:39 present+
04:11:41 present+
04:11:49 Dom: introduce yourself
04:11:58 present+
04:12:00 Bernard Aboba
04:12:22 present+ Judy
04:12:37 Harald Alvestrand
04:12:46 James Craig, Apple
04:12:56 Armando Miraglia
04:13:05 Jared Cheshier
04:13:13 Josh O Connor, W3C
04:13:32 Youenn Fablet, Apple
04:13:41 Judy Brewer, W3C WAI
04:13:44 Joanie Diggs, Igalia
04:13:52 Janina Sajka, APA/WAI
04:13:56 Introduction: Bernard Aboba, Co-Chair of the WebRTC WG, and formerly a member of the FCC EAAC and TFOPA groups.
04:14:05 Henrik, Google
04:14:11 Jared Cheshier, new to W3C, in the WebRTC working group and Immersive Web working group.
04:14:12 Jan-Ivar Bruaroey
04:14:16 Henrik Boström, Google.
04:14:23 Daiki, NTT, on RTC
04:14:48 Hiroko Akishimoto, NTT
04:15:13 and colleague?
04:15:44 Topic: Real Time Text
04:16:18 important on behalf of those with speech disabilities or who are deaf or hard of hearing
04:17:03 "Topic 1" is Real Time Text
04:17:43 "Topic 2" is use cases for WebRTC 2.0
04:18:02 Joshue108 has created a document of example use cases
04:18:12 Here they are: https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases
04:19:05 Bernard: The MMUSIC WG is standardizing transport over an RTT channel
04:19:12 3GPP is citing that effort
04:19:29 almost certainly will result in a final spec
04:20:19 dom__: vocab: RTT also means round-trip time in other contexts.. RTT for this discussion is Real Time Text
04:20:45 Bernard: the goal is to enable WebRTC as a transport protocol for RTT, and Gunnar Hellström is currently reviewing that
04:21:12 RTT is a codec in the architecture, but somewhat like a data channel too
04:21:27 wouldn't make sense to send music over RTT, for example
04:22:03 their plan to use the data channel to send the text I think makes sense
04:22:18 RTT is timed, but not synchronized time
04:22:32 q+ to ask about time
04:22:36 Is time sync necessary?
04:22:42 janina: I think not
04:22:54 jcraig: why not?
04:23:17 Joshue108: what about a synced sign language track?
04:23:41 Judy: I share Josh's concern
04:24:05 hta: ??? and the other one is that the system records send time
04:24:18 I think the first thing is the only one required
04:24:33 "Any service or device that enables the initiation, transmission, reception, and display of RTT communications must be interoperable over IP-based wireless networks, which can be met by adherence to RFC 4103 or its successor protocol. RFC 4103 can be replaced by an updated standard as long as it supports end-to-end RTT communications and performance requirements." https://www.fcc.gov/document/transition-tty-real-time-text-technology
04:24:39 Bernard: Would affect how time is sent over the channel...
04:24:40 As long as we are sure that issues with time don't impact synchronisation of the various alternate media content
04:24:54 because 3GPP is involved... likely to be implemented
04:25:28 janina: the idea (with telecom RTT) is to see characters immediately
04:25:31 [jb partly wondering if timing is relevant for rtt communication records in emergency communications]
04:25:40 q+ to mention the 911 context for immediate chars
04:26:00 Challenges with TTS timing for blind users https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases#Challenges_with_TTS_timing
04:26:23 JC: VoiceOver handles this well
04:26:33 q+ to speak to requirement for non-buffering including for deaf-blind use cases in emergency communications
04:26:42 Bernard: the draft has a reliable mode and an unreliable (lossy?) mode..
04:27:10 The WebRTC 1.0 API supports both reliable and unreliable modes.
04:27:17 q+ to suggest reliable is needed for RTT (completeness is probably more important than latency for text)
04:27:18 Judy: glad practical details are being discussed, e.g. the emergency situation
04:27:33 The Deaf community also wants immediacy
04:27:58 Draft is here: https://tools.ietf.org/html/draft-holmberg-mmusic-t140-usage-data-channel
04:27:59 the deafblind community may share a need for non-buffered comms
04:28:28 Judy: I'm interested in hearing the background. We jumped straight into discussion
04:28:55 is there an opportunity to add an informative para that explains the relevance and allows polyfill implementations? the Deaf community thinks so
04:30:02 HTA: If there is nothing required in the RTT protocol, you can have a perfect polyfill; but if not, you may need extensions.
04:30:25 Judy: Sometimes JS polyfills can count as one of two required implementations
04:30:52 dom__: ???
04:31:12 dom: there may be room in the spec to add RTT support in WebRTC today
04:32:06 dom__: I see value in exposing it
04:32:56 dom: if a gateway from RTT to WebRTC is already possible (Bernard to confirm), it would be useful to add a note to the WebRTC document to point to that usage of the data channel
04:33:10 Bernard: questions from the use case.. it does not recommend whether to use reliable or unreliable mode.. no recommendation on whether to send char by char or as a blob
04:33:20 ... for a normative change to the API surface, it's hard to consider without understanding the underlying protocol and what it would need to expose
04:33:28 suggest APA review the document and provide feedback
04:34:16 Latest version is https://tools.ietf.org/html/draft-ietf-mmusic-t140-usage-data-channel
04:34:47 Bernard: reliable, in-order is preferred
04:35:11 Judy: colleagues at Gallaudet would be interested in sharing polyfill implementations...
04:35:16 q+ to confirm that this doc contains the technical requirements for RTT implementations in WebRTC and that APA should review it
04:35:36 I'm concerned about missing the timeline window since you are nearing completion
04:36:06 I'd like final WebRTC to include acknowledgement that RTT is on the roadmap
04:36:46 Bernard: WebRTC has evolved since the RTT proof; we should review that it works with the current draft
04:37:24 Field trial specification: https://tap.gallaudet.edu/IPTransition/TTYTrial/Real-Time%20Text%20Interoperability%20report%2017-December-2015.pdf
04:37:33 henrik: what is the requirement on WebRTC for RTT... sounds like you can do this today?
04:38:04 dom: there is a dedicated RTT spec required by the FCC.. the question is how you expose this in the RTC stack
04:38:23 and provide interop with existing services like TTML
04:39:02 Henrik Boström here
04:39:05 Bernard: I've entered the spec I'd like APA to review
04:39:19 also added the Gallaudet prototype from 2015
04:40:32 janina: I would like the use cases to clearly distinguish the nuanced differences... e.g. emergency services, etc.
04:41:05 JC: Create implementations that can be brailled immediately..
04:41:20 We are presenting characters as fast as possible, with minor adjustments, in VoiceOver.
04:41:32 So you can get the existing string asap
04:41:51 dom: req that the character buffer be sent as fast as possible
04:42:39 from a webrtc perspective, what you will get is a stream of characters, and it will be up to the app to determine how to transport those characters
04:42:39 The current holmberg draft specifies reliable transport. Are there use cases where partial reliability might be desired?
04:43:10 For example, where a maximum latency might be desired. The WebRTC 1.0 API supports maxPacketLifeTime or maxRetransmissions for partial reliability.
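[For orientation only, a sketch of character-by-character text over an RTCDataChannel. This is not the T.140 usage the mmusic draft defines; textInput is an assumed element.]

    const pc = new RTCPeerConnection();
    const rtt = pc.createDataChannel('rtt', { ordered: true });   // reliable, in-order by default
    // A latency-bounded (partially reliable) variant could instead pass
    // { ordered: true, maxPacketLifeTime: 500 }.
    textInput.addEventListener('input', (e) => {
      if (e.data) rtt.send(e.data);   // ship each character as it is typed, no buffering
    });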
04:43:21 janina: it's wonderful that we'll be able to use this side-by-side with existing RTC
04:44:09 Level 1, emergency services; Level 2, disabilities; Level 3, personal preference
04:44:21 (priorities ^)
04:44:39 Bernard: I'd like to discuss emergency services a bit more
04:45:02 max-transport time might be affected
04:45:04 q+ to talk about finding details of emergency use cases
04:45:52 immediacy and accuracy are sometimes in conflict. Each use case could result in different implementations... max-packet-lifetime for example
04:46:47 Judy: The FCC did research. There are some examples of death resulting from lack of immediacy in RTT communication.
04:46:52 Here is an outline of some of the RTT emergency use cases
04:46:53 https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases#Support_for_Real_Time_Text_.28RTT.29
04:47:02 max-partial-packet may have helped in these cases
04:48:29 [it is 118 apparently]
04:48:36 [110 sorry]
04:48:52 (after checking: the maxPacketLifeTime attribute seems to be wired all the way down the stack, so it probably works.)
04:49:05 Topic: document of use cases for Web RTC 2.0
04:49:12 https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases
04:49:46 q+ to give overview
04:50:13 Bernard: I think the doc is useful... especially the mention of other tech outside RTC
04:50:41 hopefully by completion this can be tested to reveal any shortcomings
04:51:08 The FCC has funded an open source project that can be run
04:51:28 q+ to say 1) realize probably need more nuance in our use cases; 2) want to make sure we have a better (non-hack) path for RTT integration in RTC 2.0; 3) to talk about leveraging accessible RTC in multi-channel virtual conferencing
04:51:32 useful to have a standalone doc that can be used against multiple sources, RTC etc.
04:52:00 Judy: I think we need more nuance in the use cases for the fine-detail questions you're asking
04:52:02 IETF RUM WG: https://datatracker.ietf.org/wg/rum/about/
04:52:34 Judy: if we do the right things, we may end up with a hacked ??? in RTC 1.0
04:52:59 my guess is you need to add something in the spec that indicates "here's how to support RTT for now"
04:53:07 +1 to Judy
04:53:19 and for 2.0, we need a plan to make sure RTT is supported sans hacks
04:53:45 RUM document on the VRS profile: https://tools.ietf.org/html/draft-rosen-rue
04:54:00 as the tech companies move towards more carbon-neutral teleconferencing, can RTC become that open standard for fully accessible virtual carbon-neutral conferencing?
04:54:31 dontcallmeDOM: milestone: end of March will require a recharter for RTC
04:54:46 might be a good time to reflect RTT use cases in the charter
04:56:17 dontcallmeDOM: response to Judy: I don't think the 1.0 RTT implementation is a hack...
happy to work with you on a clarifying note.
04:56:46 saying there is no direct support in the spec today, but there is ongoing work that can be referenced in the doc
04:57:22 https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases#Data_table_mapping_User_Needs_with_related_specifications
04:57:59 Joshue108: part of the doc has a data table which provides mapping to use cases
04:59:19 Joshue108: could publish as a Note from APA
05:00:01 the WebRTC group could contribute back to that Note
05:00:37 Bernard: good reason to have the use cases as a separate doc
05:01:17 we've learned getting access to raw media opens a host of accessibility opportunities... e.g. live captioning on a bitstream, for example
05:01:49 janina: thank you all for coming
05:01:54 dontcallmeDOM: thanks
05:02:05 judy: thanks all
05:02:14 [adjourned]
05:04:42 Topic: Pronunciation Explainer
05:05:03 Scribe: Irfan
05:05:22 https://github.com/w3c/pronunciation/blob/master/docs/explainer.md
05:07:03 mhakkinen: took the recommendations and put together a document,
05:07:18 with all the goals and non-goals, and open questions
05:07:50 need feedback from Michael or Roy on the format
05:07:58 Janina: good point..
05:08:20 roy: the personalization task force has a document
05:09:17 mhakkinen: I am looking to hear from the group whether I have covered everything in this document.
05:09:31 How can we bring SSML into HTML: one approach
05:09:47 inline SSML... just drop it right in
05:09:51 or bring it in as an attribute model
05:10:27 one of the concerns that we have: AT products may have a more challenging time extracting SSML from the document.
05:10:55 based on our survey, one of the big AT vendors came out in support of the attribute model
05:11:23 the issue is broader.. with spoken interfaces
05:11:51 present+ burn
05:12:12 burn: I can imagine what inline means.. but I have no idea about the attribute model
05:12:50 mhakkinen: what they did was create an attribute.. ssml:alphabet
05:12:59 you can drop the two attributes in
05:13:34 we come from educational testing, which is the assessment world, and are trying to solve the problem of pronunciation. aria-label doesn't work
05:13:53 have seen data-ssml and some funny interpretations of SSML..
05:14:16 we have a JSON structure which is relatively clean, and a prototype.. that's one model
05:14:32 with the explainer we are trying to explain the problems and propose solutions..
05:14:47 seeking input from stakeholders
05:15:41 burn: when we created SSML.. we expected that XHTML was next, which would have made it easy.
05:16:26 if you are going to do the JSON model, how do you maintain context?
05:16:41 you are going to lose scoping? how is it going to work?
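[For reference while reading the next exchange, a sketch of the attribute model under discussion; the data-ssml name and JSON shape are illustrative, not an agreed design.]

    // Markup (attribute model):
    // <span data-ssml='{"say-as":{"interpret-as":"characters"}}'>W3C</span>
    const el = document.querySelector('[data-ssml]');
    const props = JSON.parse(el.dataset.ssml);   // { "say-as": { "interpret-as": "characters" } }
    // A consumer could translate this back into inline SSML, e.g.
    // <speak><say-as interpret-as="characters">W3C</say-as></speak>, before synthesis.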
05:16:52 mhakkinen: that's the question, and we are looking for more feedback
05:17:15 in general, in assessment we are looking for very specific features... such as say-as, pausing.. sub
05:17:41 burn: you are going to deal with the namespacing issue
05:17:55 I don't see any reason that you can't do that
05:18:27 mhakkinen: when we talked about bringing it inline with the HTML.. the comment from an SR vendor was that it is going to be hard to implement..
05:18:38 some browser vendors also shared the same concerns
05:19:09 burn: It's possible to rename the element.. like the p element
05:20:17 their internal models have to deal with video rendering.. you could leave the original text there and ignore the element if you are adding any SSML
05:20:34 mhakkinen: there is a problem with braille displays as well
05:20:53 the pronunciation string goes to both braille and the SR
05:21:17 some discussion.. like an aria-braille-label
05:21:39 could this be controlled purely by ARIA.. but that doesn't solve the broad problem
05:22:09 joanie: what voice assistants support this?
05:22:20 mhakkinen: Google and Alexa both allow it
05:22:41 burn: we did some work in the past.. good to know that it has some life now
05:23:15 mhakkinen: I tried to hack a demo with Alexa.. it looks like, when pulling in some HTML content, if it contains SSML it can be rendered directly
05:23:30 I can't believe that the Amazon team is not looking for a solution
05:23:36 It is a great way to extend
05:23:42 have contacted Amazon as well
05:24:04 we want to make sure it can render on the web and on voice assistants
05:24:08 burn: is inline dead?
05:24:37 mhakkinen: we have two ways, with advantages and disadvantages
05:24:46 this is just a draft and I would like to explain more in detail
05:25:02 janina: we have time to discuss it
05:25:22 mhakkinen: GG from JAWS has been pretty clear that he likes the attribute approach
05:25:44 haven't heard from other orgs so far
05:26:02 janina: how about Narrator?
05:26:20 mhakkinen: talked to them and they seem to be working on this
05:26:44 ?? is trying to work to get pass-through from the browser to AT
05:26:55 for the voice assistants.. we can live with either approach
05:27:14 AT are less of the challenge here
05:27:50 Joanie: implementation on the browser side.
05:29:10 mhakkinen: talked to the chief architect at Pearson.. they use some hack to handle pronunciation. They like the JSON model.. it's easier for them because they don't have to change much
05:29:36 Joanie: thinking.. maybe what we want is a combination of version 1 and version 2
05:29:48 how do you expose it to the a11y API?
05:30:03 the speak tag is not exposed to the a11y API
05:30:56 mhakkinen: *showing example... inline is simple
05:31:50 <speak> is not exposed to the a11y API
05:32:34 joanie: we could use span instead
05:32:47 it is still going to be included in the AT tree
05:33:33 for any AT.. we need an object attribute.. which can be exposed to the API
05:33:59 that will make me super happy
05:34:26 we would want HTML to bless this
05:35:07 burn: if you want to filter out the speak content, you have to pay attention to the text which is there
05:36:17 joanie: good point but you are wrong... because it's an object which can be exposed to the a11y tree
05:36:42 we have the DOM tree and the render tree; the a11y tree is a combination
05:37:41 if we have an element like <div>foo</div>, we are going to have an AtkObject, which is an accessible element with some property like ATK_ROLE_SECTION
05:37:48 state: enabled
05:37:59 the AtkText interface will expose all the stuff about the text
05:38:18 all of these AtkObject attributes are going to be exposed along with the text
05:39:14 if we do something like <div bar="baz">foo</div>,
05:39:25 I am also going to get bar:baz
05:39:42 AtkText: "foo"
05:40:27 also going to have an attribute like ssml:.. that means the text doesn't go away, because it is an object attribute
05:40:38 mhakkinen: is there any char limit?
05:40:48 joanie: probably there is
05:40:55 mhakkinen: we need to think about it
05:41:14 joanie: if we have a limit then we need to break it into multiple attributes
05:42:14 burn: you might have multiple properties.. one say-as attribute is not going to work..
05:42:20 joanie: then we have to go to SSML
05:42:40 burn: we can break it up into an array of literal strings
05:45:24 if you need some of the hierarchical properties then there could be a problem, otherwise it is okay
05:45:36 joanie: agrees
05:46:34 mhakkinen: during the IMS discussion we talked about these challenges... people want to exercise all the capabilities
05:47:14 burn: one of the issues for us was... there would be a couple of different TTS voices loaded.. maybe an English voice.. a female German voice...
05:47:19 it's been a long time
05:47:28 the idea was that you have an HTML page or VoiceXML
05:48:00 you have text there and someone adds a lang tag.. it would switch the TTS voice to German and make it female.. which is more disruptive
05:48:55 we had to change that, which would affect a larger set of users, but here we don't have that challenge
05:49:28 joanie: ATs are going to have to deal with inheritance, which is un-fun.
05:50:26 not asking to change your explainer, but inheritance or multiple levels of properties are going to be applied
05:50:50 I was happy about this solution and we started talking about child or other voices
06:00:27 mhakkinen: we are stepping back from full SSML inline.. to a subset, either an element or a new attribute... a span-based model.. that is simple and clean
06:00:58 burn: where is this work going to happen?
06:01:15 GitHub handle @burnburn
06:01:32 it's a good way to start
06:01:51 mhakkinen: SSML is going to have a much broader impact on the web.
06:02:08 Amazon is already doing extensions to SSML
06:02:16 there is so much potential here
06:06:19 joanie: the JSON has to be parsed, and in the case of at least some SRs turned back into the original SSML
06:06:49 joanie: I am going to parse it into option one...
06:07:58 all the SSML properties should be used as a literal string in a single object attribute
06:08:51 it is going to be a very lengthy literal object attribute
06:09:11 mhakkinen: if the user wants to navigate by character.. what is affected?
06:09:35 if the user wants it in a female voice.. are you going to retain that as active SSML?
06:09:48 joanie: that has nothing to do with option 1 or option 2
06:10:31 I don't want to learn SSML, but I want to use it without learning it
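[A hedged sketch of the nesting and inheritance concern raised above: a property set on an ancestor (voice) and one on a descendant (say-as) both apply to the inner text. The attribute name and JSON keys remain illustrative.]

    const markup = `
      <p data-ssml='{"voice": {"name": "child-female"}}'>
        The code is
        <span data-ssml='{"say-as": {"interpret-as": "characters"}}'>W3C</span>.
      </p>`;
    // An AT walking this tree has to merge ancestor and descendant properties
    // before handing the text to the synthesiser, which is the "un-fun" part.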
06:12:28 joanie: we need to talk to vendors and say that no matter what, it is going to be an attribute.. could you deal with actual SSML markup in an attribute?