IRC log of apa on 2019-09-20

Timestamps are in UTC.

00:07:33 [RRSAgent]
RRSAgent has joined #apa
00:07:33 [RRSAgent]
logging to https://www.w3.org/2019/09/20-apa-irc
00:07:39 [ada]
scribenick: ada
00:07:46 [alexturn]
alexturn has joined #apa
00:09:43 [NellWaliczek]
present+
00:10:30 [kip]
present+
00:10:30 [Lauriat]
Present+
00:10:35 [ada]
NellWaliczek: One of the things I had noticed is that we don't have a shared understanding of each other's technologies, and it was enlightening to see the issues each of us is investigating.
00:11:30 [ada]
... but it was hard to come across solutions. This is to give some background about 3D graphics; if we would like to follow up on a call after TPAC, ask ada and she can add it to the agenda.
00:12:03 [Roy]
RRSAgent, make logs public
00:12:12 [Roy]
RRSAgent, make minutes
00:12:12 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Roy
00:12:18 [ada]
... to give background I will tie it to something we have experience with today, the DOM: in HTML you don't say "draw this pixel at this point". You ask to draw by component: draw an input box, a div, etc.
00:13:09 [ada]
... it is declarative, it isn't imperatively asking the GPU to draw pixels. We describe the elements and style and it is up to the UA to issue the GPU commands.
00:13:21 [ada]
... (aside from canvas)
00:13:56 [Matt_King_]
Matt_King_ has joined #apa
00:14:06 [Roy]
present+
00:14:16 [Matt_King_]
present+
00:14:17 [ada]
... imperative rendering is the opposite: we take some buffers, send them to the GPU, and give it commands which draw to the screen pixel by pixel.
00:15:07 [ada]
... so for those who are unfamiliar with canvas, you cannot place content with style and it is just an opaque block which cannot interact with the page.
00:15:36 [Roy]
Meeting: APA WG TPAC 2019
00:16:04 [ada]
... It is necessary for 3D graphics, but without understanding the necessity we will spin our wheels when it comes to adding a11y into this.
00:16:46 [Joshue108]
Joshue108 has joined #apa
00:16:51 [ada]
... I'm going to break down how we think about 3D rendering into its constituent parts: what data you need for 3D rendering and what you send to the GPU.
00:16:53 [Joshue108]
present+
00:17:26 [ada]
Matt_King_: Are the drawing APIs standardised?
00:17:29 [zcorpan]
zcorpan has joined #apa
00:18:03 [ada]
NellWaliczek: Yes, but not through the W3C; WebGL is standardised through Khronos.
00:18:15 [Joshue108]
Ada: WebGL is a low level drawing primitive.
00:18:25 [Joshue108]
It will draw triangles fast, that's all.
00:18:38 [Joshue108]
The layer between that and what devs write is the wild west.
00:18:56 [Roy]
RRSAgent, make minutes
00:18:56 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Roy
00:19:42 [ada]
NellWaliczek: You were asking about the graphics standards. Whilst WebGL is widely adopted, it is based on a fairly old native API called OpenGL; there have been numerous developments in 3D graphics since, which cannot be matched by WebGL.
00:20:02 [Roy]
chair: Janina
00:20:47 [ada]
... There are new standards to try to access this new functionality, such as WebGPU, which exposes functionality unavailable to WebGL. There is also WebGL2, which is an improvement on WebGL.
00:20:49 [Roy]
Topic: XR
00:21:04 [zaur]
zaur has joined #apa
00:21:15 [ada]
... Vulkan and Apple's Metal; WebGPU is aimed at targeting those.
00:21:37 [ada]
WebGPU is still a WIP, so for all intents and purposes WebGL is the only option available to us.
00:23:12 [Irfan]
Irfan has joined #apa
00:23:14 [Irfan]
present+
00:23:42 [ada]
... I could talk to you for hours about the interesting side tracks; that was just to give a sense of the acronyms in play. Before I dig into the graphics primitives I would like to talk about the relationship between WebGL and WebXR; I get asked this a lot, it is not unique to this group. The best way to think about this is that WebGL is a graphics language: it takes imperative data and
00:23:45 [ada]
turns it into pixels. WebXR does not do that, but WebXR could not function without WebGL. WebGL is for drawing the pixels; WebXR provides the information on where and how to draw those pixels.
00:24:09 [ada]
... and describing the view frustum. Like a cropped pyramid with the top at your eyes.
00:24:30 [ada]
... The WebXR API describes the shape of this view frustum.
00:24:40 [ada]
Matt_King_: Like a cone on my face.
00:24:43 [ada]
NellWaliczek:Yes
00:25:00 [ada]
Joshue108: How does this relate to field of view (FOV)?
00:25:16 [ada]
NellWaliczek: it is roughly the same concept but also includes the near plane and far plane.
00:26:24 [ada]
... the point there is that you need all that information to know where to draw, and how far the user has moved from the origin of the space. When drawing for a headset you may have to draw at least stereoscopically, sometimes more, as some headsets have multiple screens per eye.
00:26:53 [ada]
... The API describes the view frustum of each panel and the location of each.
00:28:15 [ada]
... I just talked about a simplification of what the API provides. Once you have drawn those pixels they don't go on the monitor, they go on the displays in the headset (which runs at a different framerate to your monitor); the API then describes how to send the images to the screens.
00:28:47 [ada]
... the hardware will slightly move the images to account for some minor head motion; this is known as reprojection and stops people getting ill.
00:28:49 [artem]
artem has joined #apa
00:29:46 [ada]
Matt_King_: If you are a developer, do you make WebGL calls to WebXR or just take information from it?
00:29:58 [ada]
NellWaliczek: They also submit the rendered pixels to the screen.
00:31:24 [ada]
... It is part of the RAF loop: on a monitor at 60fps the screen is wiped and redrawn, and you can hook into this loop with requestAnimationFrame to move objects before the next draw.
00:31:37 [Irfan_]
Irfan_ has joined #apa
00:32:10 [ada]
... New monitors can run at 144fps, which can be problematic for developers who have assumed 60fps, because their animations run extra fast.
00:33:33 [ada]
... A VR headset which is plugged into a computer has to draw faster than 60fps to reduce the effects of VR sickness; it is a different frame rate. So it needs to provide its own requestAnimationFrame at its own refresh rate.
00:34:12 [ada]
Matt_King_: Is RAF a WebGL API?
00:34:32 [ada]
NellWaliczek: There is one on window and one in the WebXR Device API.
00:34:53 [ada]
NellWaliczek: Once you have initialised the XR session, the RAF is on the XR session object.
00:35:45 [ada]
... The web developer will first ask if there is AR hardware to use ('isSessionSupported') so they know whether to add a button to the screen.
00:36:04 [Matt_King__]
Matt_King__ has joined #apa
00:36:06 [Matt_King___]
Matt_King___ has joined #apa
00:37:05 [ada]
In the button handler you will call navigator.xr.requestSession; that is where the session begins, and it will set up a new session for you, ending any other. It is async: it returns a promise which resolves to let you start setting up all the things you need to start rendering.
00:37:25 [ada]
... You will start an XRWebGLLayer.
00:38:08 [aboxhall_]
aboxhall_ has joined #apa
00:38:34 [ada]
... It creates 2D buffers you will draw your content into; they are bound to the displays in the headset. It is important to render directly into the buffers, because copying pixels between buffers slows things down, which makes people sick.
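[For illustration: a minimal sketch of the setup just described, using the WebXR draft API as discussed; names and error handling are simplified.]

    async function enterVR() {
      // Feature-detect first, so the page knows whether to show a button.
      if (!(await navigator.xr.isSessionSupported('immersive-vr'))) return;

      // In the button handler: begin the session (this ends any other session).
      const session = await navigator.xr.requestSession('immersive-vr');

      // A WebGL context flagged as compatible with the XR device.
      const gl = document.createElement('canvas')
                         .getContext('webgl', { xrCompatible: true });

      // The XRWebGLLayer's buffers are bound to the headset's displays,
      // so rendering into them avoids the copies that make people sick.
      await session.updateRenderState({
        baseLayer: new XRWebGLLayer(session, gl)
      });
    }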
00:39:17 [ada]
sushrajaMSFT: At a higher level you can think of it as: any WebGL commands against that context will render directly to the headset.
00:40:09 [ada]
NellWaliczek: The context is what you have to call the commands from to render into. You get it from the canvas; it may be a WebGL, WebGL2 or WebGPU context.
00:40:37 [ada]
... if you think about it separately from the WebGL APIs, there is a 1-1 mapping between the canvas and the context.
00:40:56 [ada]
... in this case a canvas may have multiple contexts.
00:41:18 [ada]
... You pass in a canvas and it will pull out the contexts it needs to render the content to the headset.
00:42:22 [ada]
... Data can't be shared between WebGL contexts for security reasons.
00:42:58 [Joshue108]
q+
00:43:20 [Joshue108]
Ada: If you send a canvas it generates one per panel.
00:43:28 [Joshue108]
Nell: One per additional one.
00:43:30 [Joshue108]
One per headset.
00:43:35 [Joshue108]
q-
00:44:11 [ada]
This may change but right now to support WebGL it is just one.
00:44:24 [Joshue108]
q+ to ask if other content such as related semantics can be rendered to support generated XR session contexts that are not canvas
00:44:52 [ada]
NellWaliczek: The WebGL context is associated with the XR device; the final buffer you draw into goes directly to the display, it doesn't get copied anywhere.
00:45:25 [Joshue108]
Ada: To clarify, it goes onto the pixel but shifted for reprojection.
00:45:48 [Joshue108]
q?
00:45:59 [ada]
NellWaliczek: Slightly shifted for many purposes; you draw a pixel and it goes to the identical spot on the display.
00:48:12 [ada]
Matt_King__: In stereoscopic there is typically one panel per eye. The information for those panels is associated with the context from the canvas. When I get these XR session RAF callbacks, for each one I populate information into those contexts, which are attached to the canvas and the panels on the display, where I cannot share information between the canvas context and the headset context?
00:48:36 [ada]
NellWaliczek: [confirms] yes you can, information can be shared within one canvas.
00:50:13 [ada]
NellWaliczek: We didn't talk about what you do in the RAF loop; the last piece of this puzzle is what to do. The first thing you do is ask: hey session, where is the headset and each panel in 3D space? You get a frustum for each panel.
00:50:45 [ada]
... you can create combined frustums. Which we won't get into right now for perf reasons.
00:51:12 [ada]
... You'll ask where are the motion controllers so I can draw them in the correct place.
00:51:51 [ada]
... This is where we talk about the render loop, which is graphics specific but not XR specific. Once it completes you then have pixels which can be displayed by the UA.
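[Continuing the sketch above, a rough per-frame loop as described: the session, not window, supplies requestAnimationFrame at the headset's own refresh rate.]

    const refSpace = await session.requestReferenceSpace('local');
    session.requestAnimationFrame(function onFrame(time, frame) {
      session.requestAnimationFrame(onFrame);      // schedule the next frame
      const pose = frame.getViewerPose(refSpace);  // "hey session, where is the headset?"
      if (!pose) return;
      const layer = session.renderState.baseLayer;
      gl.bindFramebuffer(gl.FRAMEBUFFER, layer.framebuffer);
      for (const view of pose.views) {             // one view per panel
        const vp = layer.getViewport(view);
        gl.viewport(vp.x, vp.y, vp.width, vp.height);
        // Draw the scene here using view.projectionMatrix (the frustum)
        // and view.transform (where that panel is in 3D space).
      }
    });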
00:51:55 [Irfan]
Irfan has joined #apa
00:52:00 [ada]
Matt_King__: How does this apply to audio?
00:52:09 [ada]
NellWaliczek: it is part of the same black box.
00:52:16 [Irfan]
rrsagent, make minutes
00:52:16 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Irfan
00:52:47 [ada]
kip: we also have the central position of the head which can be used for positioning 3D audio.
00:53:30 [Irfan]
q?
00:53:30 [ada]
NellWaliczek: We essentially have a ray which points out from between the eyes, which is used for spatialising the audio.
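[A sketch of that idea with the Web Audio API rather than WebXR itself: an HRTF panner spatialises a source, and the listener is fed the head pose each frame. 'source' stands for any audio node, e.g. from createMediaElementSource; 'pose' is as in the render-loop sketch above.]

    const audioCtx = new AudioContext();
    const panner = new PannerNode(audioCtx, {
      panningModel: 'HRTF',                        // binaural head-related transfer function
      positionX: 1, positionY: 0, positionZ: -2    // where this sound lives in the scene
    });
    source.connect(panner).connect(audioCtx.destination);

    // Each frame: position the listener at the head; forward/up vectors
    // would be derived from pose.transform.orientation.
    const p = pose.transform.position;
    audioCtx.listener.positionX.value = p.x;
    audioCtx.listener.positionY.value = p.y;
    audioCtx.listener.positionZ.value = p.z;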
00:53:35 [Joshue108]
q?
00:53:42 [Joshue108]
ack Lisa
00:53:49 [Joshue108]
ack ach
00:53:55 [Joshue108]
ack me
00:53:55 [Zakim]
Joshue, you wanted to ask if other content such as related semantics can be rendered to support generated XR session contexts that are not canvas
00:54:12 [Irfan]
q?
00:54:38 [joanie]
present+ Joanmarie_Diggs
00:54:41 [ada]
Joshue108: Can content related semantics be generated within those loops?
00:54:52 [ada]
NellWaliczek: Yes, I'll talk about it in the context of rendering.
00:55:01 [ada]
present+ ada
00:55:44 [ada]
NellWaliczek: I was talking before about data that gets sent to the GPU; yesterday we were talking about the scenegraph.
00:56:14 [ada]
The scenegraph is kind of like a DOM tree, but it is a totally made-up idea that is not standardised.
00:56:33 [ada]
Different engines have their own different ways of describing it.
00:56:51 [ada]
On native these engines/middleware are like Unity 3D.
00:56:55 [ada]
Or Unreal
00:57:09 [ada]
It is a combination of an editor and renderer.
00:57:43 [ada]
On the Web the most well known is THREE.js, a JS library which has its own concept of a scenegraph; there are also Babylon, Sumerian (and others).
00:58:35 [ada]
Babylon and THREE.js are programmatic; Sumerian is a visual editor where the scenegraph is visualised, where you get almost a WYSIWYG experience.
00:58:55 [ada]
... this is all middleware it has nothing to do with the web.
00:59:26 [ada]
... When we talk about the scenegraph, it is a made-up concept that describes the commands that should be sent to WebGL.
00:59:57 [ada]
Matt_King__: A developer won't make WebGL calls; they will use a library?
01:00:56 [ada]
NellWaliczek: Yes, WebGL is extremely verbose; it takes more than a page of code just to render a triangle. You use a 3D engine. Because you use an engine, you probably won't be using WebXR directly either.
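[To illustrate the middleware layer: a few lines of THREE.js, where the scenegraph is plain JavaScript objects and the engine issues the WebGL (and WebXR) calls for you. A sketch, assuming the three.js library is loaded.]

    const scene = new THREE.Scene();                        // the scenegraph root
    const camera = new THREE.PerspectiveCamera(
      70, window.innerWidth / window.innerHeight, 0.1, 100  // fov, aspect, near, far
    );
    const renderer = new THREE.WebGLRenderer();
    renderer.xr.enabled = true;                             // let an XR session drive the loop

    const cube = new THREE.Mesh(new THREE.BoxGeometry(),
                                new THREE.MeshStandardMaterial());
    scene.add(cube);                                        // a node in the graph

    // setAnimationLoop uses the XR session's RAF while presenting.
    renderer.setAnimationLoop(() => renderer.render(scene, camera));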
01:01:33 [ada]
Matt_King__: So any a11y standard would have to be supported by these middlewares, these 3D engines.
01:02:12 [ada]
NellWaliczek: Almost, because that is where we are today. When looking to the future, file formats for 3D models
01:02:18 [ada]
Matt_King__: 3D hello worlds?
01:03:22 [ada]
NellWaliczek: Correct. These 3D formats include geometry and texture but don't tend to include things like physics or scripting. Any animations they have will be on rails. They are static.
01:05:08 [ada]
... the history of the 3D file formats is long and contentious. The most well known one, FBX, is only made available through Autodesk; it is proprietary and only they provide the encoders. There are others like OBJ (very simple, cannot have labels) and COLLADA (which never got traction).
01:06:03 [Matt_King]
Matt_King has joined #apa
01:06:05 [Matt_King_]
Matt_King_ has joined #apa
01:06:29 [ada]
... the current darlings are glTF ("GL Transmission Format") and USDZ; they are very similar, but USDZ is proprietary. Blender will convert models as needed.
01:07:27 [ada]
... There are 3 kinds of formats: editor formats, like Photoshop files; interchange formats, which can be shared between editors uncompressed; and runtime formats, of which glTF is the first open-source one.
01:08:39 [Joshue108]
scribe: Joshue108
01:09:20 [Joshue108]
N: With the advent of glTF, where we want to codify scene graphs, we open the door for a future vision, soon to happen..
01:09:44 [Joshue108]
It is likely there will be a new HTML element, similar to a canvas but will take a file like glTF..
01:09:52 [Joshue108]
deferring pixel drawing to the browser.
01:10:21 [Joshue108]
So you will have geometry, textures etc. that will be communicated, so we may pack accessibility into glTF, for example.
01:10:40 [Joshue108]
1) We can foresee UAs exposing a model element to draw with..
01:11:06 [Joshue108]
A UA can add a button that allows you to view in AR or XR etc, but it is declarative.
01:11:34 [Joshue108]
An author is saying here is a scene in glTF and asking the browser to draw it..
01:11:56 [Joshue108]
This would not be interactive, so we would need to prototype and hit test etc.
01:12:24 [Joshue108]
To make more advanced things you need to be able to script against the scene graph.
01:12:31 [Irfan]
rrsagent, make minutes
01:12:31 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Irfan
01:12:34 [Joshue108]
This is new stuff, glTF was only finalised recently.
01:12:52 [Joshue108]
The format has an extension system, it has interoperability features.
01:13:15 [Joshue108]
Extensions can be written in; a11y extensions could be added, also for scripting.
01:13:30 [Joshue108]
This would mean other 3D engines would have the ability to expose that info.
01:13:47 [Joshue108]
Rendering engines etc could then parse this info to create more accessible stuff.
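[To make that concrete: a hypothetical glTF node carrying an accessibility label via the format's extension mechanism. The extension name EXT_a11y_label is invented for illustration; no such extension exists.]

    {
      "nodes": [{
        "name": "door_01",
        "mesh": 0,
        "extensions": {
          "EXT_a11y_label": { "label": "Exit door", "role": "button" }
        }
      }],
      "extensionsUsed": ["EXT_a11y_label"]
    }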
01:13:57 [Joshue108]
Mk: Question about structures..
01:14:29 [Joshue108]
N: Yes, you can - the scene graph can contain tons of objects that won't get drawn as they are hidden, but as the user moves, items can be drawn and hidden as needed.
01:14:32 [mhakkinen]
mhakkinen has joined #apa
01:14:52 [Joshue108]
MK: Sounds like glTF is like combining HTML/CSS..
01:14:59 [Joshue108]
N: Yes, but not the JS.
01:15:22 [Joshue108]
MC: Accessibility could be a use case for things that have been thought about and where other use cases exist.
01:15:34 [Joshue108]
So we want to document a11y use cases and push this along?
01:15:49 [Joshue108]
N: In the short term I would not focus on the rendering engines.
01:15:50 [Irfan]
zakim, who is here?
01:15:50 [Zakim]
Present: janina, Joanmarie_Diggs, Matthew_Atkinson, MichaelC, interaccess, IanPouncey, Irfan, CharlesL, Roy, Joshue108_, Manishearth, kip, cabanier, Matt_King, NellWaliczek,
01:15:54 [Zakim]
... ZoeBijl, Léonie, (tink), zcorpan, Avneesh, romain, marisa, LisaSeemanKest_, Joshue, achraf, addison, stevelee, Lauriat, Matt_King_, ada
01:15:54 [Zakim]
On IRC I see mhakkinen, Matt_King_, Matt_King, Irfan, aboxhall_, artem, zaur, zcorpan, Joshue108, alexturn, RRSAgent, sushrajaMSFT, Lauriat, Roy, achraf, jcraig, chrishall,
01:15:54 [Zakim]
... NellWaliczek, cabanier, kip, ada, Manishearth, jamesn, janina, Zakim, Joshue_108_, tink, jasonjgw, ZoeBijl, joanie, slightlyoff, trackbot
01:15:55 [Joshue108]
But in the format, extension etc.
01:16:02 [ada]
present+ ada
01:16:09 [Joshue108]
W3C and Khronos do have arrangement and agreements.
01:16:19 [Joshue108]
q+ to talk about standardising semantic scene graphs
01:16:26 [Irfan]
q?
01:16:32 [Joshue108]
MC: So this is being done by Khronos so we should talk with them.
01:16:58 [Joshue108]
MC: We need to talk with Dom HM as we may want to delegate W3C to that or stimulate discussion with Khronos.
01:17:09 [Joshue108]
N: W3C hosted a games workshop..
01:17:14 [Joshue108]
JS: We have someone there...
01:17:37 [Joshue108]
N: Neil Trevett was keen and happy to work with the W3C.
01:17:55 [Joshue108]
JS: Matt Atkinson is working there and was supportive of glTF.
01:18:21 [Joshue108]
<give example of how this would work for the user>
01:18:36 [Joshue108]
N: For this to work it cannot be drawn imperatively.
01:19:01 [Joshue108]
It needs to be declarative; we will see investment into this space in platforms and UAs.
01:19:17 [Joshue108]
N: This is simplistic now, but over time will be exposed.
01:19:25 [Joshue108]
Investing now is smart.
01:19:41 [Joshue108]
MC: Accessibility likes declarative things; sounds like we need to do this.
01:19:54 [Irfan]
ack Joshue
01:19:54 [Zakim]
Joshue, you wanted to talk about standardising semantic scene graphs
01:19:57 [Joshue108]
N: We have to focus on the audio, thanks.
01:20:01 [Joshue108]
ack me
01:20:34 [Irfan]
q?
01:20:57 [ada]
Joshue108: I have a q related to something Ada brought up yesterday, about semantic scenegraphs and thinking about how a DOM tree can be used to annotate a semantic scenegraph.
01:21:06 [Irfan]
scribe: ada
01:22:14 [ada]
NellWaliczek: My goals here are to give you the information you need to think about this so we can talk about this later.
01:23:19 [ada]
Matt_King_: What does it mean to put semantics on a scenegraph? They are similar to element names and class names.
01:24:00 [ada]
kip: The glTF file is like a snapshot of a dynamic system which at run time may get mangled to display the content.
01:24:11 [Irfan]
q?
01:24:26 [Joshue108]
Ada: Can one thing call the glTF as a scene graph?
01:25:00 [Joshue108]
N: We talked about RAF callbacks etc
01:25:28 [ada]
NellWaliczek: The glTF part of glTF is a scenegraph, which references external assets such as geometry and textures.
01:25:29 [Joshue108]
When you try to track the user's head etc, there exist audio APIs that you can use to generate spatialised sounds etc.
01:25:52 [Joshue108]
The data generated is handled by the OS.. to make sure audio is fed to the correct device etc.
01:26:11 [Joshue108]
Handled by the OS, so this means in a render loop audio gets spatialised using this data.
01:26:16 [Joshue108]
That is what is output.
01:26:51 [Joshue108]
JS: No reason that we can't support a rich audio environment like Dolby Atmos
01:27:03 [Joshue108]
N: Another Khronos standard comes into play..
01:28:16 [Joshue108]
<comments on the sound etc>
01:28:25 [ada]
OpenXR. They have the power to implement what is effectively drivers for Dolby Atmos.
01:28:28 [ada]
q+
01:28:39 [Irfan]
q?
01:28:47 [Joshue108]
To get audio exposed thru WebXR, the audio implementation has to talk to ?
01:28:50 [ada]
q-
01:28:53 [Judy]
Judy has joined #apa
01:29:12 [Joshue108]
Alex: There is support for listeners and emitters etc, and how should that be done.
01:29:28 [Joshue108]
Virtual listeners and emitters etc..
01:29:32 [Joshue108]
<discussion on same>
01:29:44 [Joshue108]
JS: That is not enough.
01:29:47 [Joshue108]
q?
01:29:51 [Judy]
present+
01:30:01 [Joshue108]
Some sound sources are going to need a lot of channels.
01:30:15 [Joshue108]
KIP: I've implemented audio engines..
01:30:30 [Joshue108]
The hardware for playback is going to be binaural - it will be tracked.
01:30:50 [Joshue108]
HRTF is an impulse response, a computed model..
01:31:01 [Joshue108]
How does that get to your inner ear to give you cues.
01:31:05 [ada]
"head related transfer function"
01:31:26 [Joshue108]
As we can track your head in space.. for stereo there can be multiple, finding the angle relative to your head.
01:31:39 [atai]
atai has joined #apa
01:31:52 [Joshue108]
Then it works out and simulates the binaural effect, doing it virtually.
01:31:54 [Irfan]
zakim, who is here
01:31:54 [Zakim]
Irfan, you need to end that query with '?'
01:31:56 [Joshue108]
KEMAR head..
01:32:00 [Irfan]
zakim, who is here?
01:32:00 [Zakim]
Present: janina, Joanmarie_Diggs, Matthew_Atkinson, MichaelC, interaccess, IanPouncey, Irfan, CharlesL, Roy, Joshue108_, Manishearth, kip, cabanier, Matt_King, NellWaliczek,
01:32:04 [Zakim]
... ZoeBijl, Léonie, (tink), zcorpan, Avneesh, romain, marisa, LisaSeemanKest_, Joshue, achraf, addison, stevelee, Lauriat, Matt_King_, ada, Judy
01:32:04 [Zakim]
On IRC I see atai, Judy, mhakkinen, Matt_King_, Matt_King, Irfan, aboxhall_, artem, zaur, Joshue108, alexturn, RRSAgent, sushrajaMSFT, Lauriat, Roy, achraf, jcraig, chrishall,
01:32:04 [Zakim]
... NellWaliczek, cabanier, kip, ada, Manishearth, jamesn, janina, Zakim, Joshue_108_, tink, jasonjgw, ZoeBijl, joanie, slightlyoff, trackbot
01:32:09 [Joshue108]
q?
01:32:22 [Irfan]
rrsagent, make minutes
01:32:22 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Irfan
01:32:22 [Joshue108]
JS: For regular headsets this may suffice.
01:32:49 [Joshue108]
N: Referring to Atmos: for every piece of hardware, the browser needs to be able to talk to it via web APIs.
01:32:56 [zcorpan]
zcorpan has joined #apa
01:33:04 [Joshue108]
JS: So what syncs these things?
01:33:37 [Joshue108]
Kip: The APIs give you enough to generate the simulation, but there is middleware, Web Audio etc, that helps to realise the sonic environment.
01:33:49 [Joshue108]
N: As a dev these things are not called directly.
01:34:00 [Joshue108]
You set up an environment that handles these things.
01:34:12 [Joshue108]
Alex: Described how things are moved around and processed.
01:34:28 [Joshue108]
N: You mean the devs view?
01:34:43 [Joshue108]
<description of node geometry etc scripting etc>
01:34:58 [Joshue108]
Alex: The engine handles things here.
01:35:02 [Joshue108]
JOC: The logic.
01:35:12 [Joshue108]
N: Calls are set up on your behalf.
01:35:46 [Joshue108]
Sushan: Operating system primitives, sound streams, and tying to OS primitives etc.
01:36:33 [Joshue108]
N: There is an audio-only spatialised headset; got that mixed up with Atmos.
01:36:45 [Joshue108]
Kip: So the Atmos format describes multiple sounds.
01:36:53 [Joshue108]
JS: Yes, up to 128 Channels.
01:37:26 [Joshue108]
Kip: We can use this technique to replace many sources virtually, replacing the binaural stuff to spatialise sound sources.
01:37:49 [Joshue108]
MK: The Magic Leap had multiple transducers around my head; it has a lot more.
01:38:04 [Joshue108]
CabR: Don't think so.
01:38:51 [Joshue108]
Alex: We can play tricks relative to head positioning.
01:39:33 [Joshue108]
JOC: So the correct usage of sound is vastly important for accessibility and the quality of the user experience.
01:39:44 [Joshue108]
Alex: There are effective things we can do.
01:39:57 [Joshue108]
JS: You can turn your head to locate things.
01:40:10 [Joshue108]
Alex: And we can do clever things.
01:40:23 [Judy]
q+ to comment at the end of the session
01:40:36 [Joshue108]
Sushan: Handling the amount of audio and channels is up to the OS and hardware drivers, to support systems with multiple outputs and hardware.
01:41:06 [Irfan]
q?
01:41:13 [Joshue108]
N: We've talked about semantics, declarative structure etc, how middleware plays a role, audio..
01:41:40 [Joshue108]
So these middleware stuff..
01:41:43 [zcorpan]
zcorpan has joined #apa
01:41:56 [Joshue108]
JOC: glTF plugs into WebGL, with the scene info.
01:42:15 [Joshue108]
N: We will have a composited experience in the future.
01:42:24 [Joshue108]
Lets talk about interaction..
01:42:53 [Joshue108]
That is a challenge; think of the complexity of mouse and touchscreen syncing.
01:43:08 [Joshue108]
Now we have a bunch of input mechanisms..
01:43:24 [Joshue108]
We are getting more.
01:43:36 [Joshue108]
Web is deficient for speech input etc.
01:43:41 [Joshue108]
q?
01:44:16 [Joshue108]
N: You can use input to move around your space..
01:44:39 [Joshue108]
In VR the space is nearly always larger than the physical space; you can teleport in these spaces.
01:44:54 [Joshue108]
N: How this is done is via different input sources.
01:45:21 [Joshue108]
N: there are platforms that map hand-held motion controllers to grab objects.
01:45:23 [mhakkinen]
+q
01:45:45 [Joshue108]
There is also selection at a distance, that you can aim at, select it and pick it up, move it etc.
01:45:59 [Joshue108]
There is also the painting option.
01:46:11 [Judy]
q- later
01:46:17 [Joshue108]
MK: There can also be chat type things.
01:46:26 [Joshue108]
N: yes, <discusses types again>
01:46:56 [Joshue108]
N: These input devices are not inherently accessible.
01:47:16 [Joshue108]
N: If you have limited motion, these controllers can be problematic.
01:47:28 [Joshue108]
<discusses some of problems with these inputs>
01:47:35 [zcorpan]
zcorpan has joined #apa
01:47:52 [Joshue108]
N: There are native platform layers where AT can be plugged in.
01:48:07 [Joshue108]
N: Mentions the Microsoft One.
01:48:25 [Joshue108]
MC: For WoT we need to use APIs that can provide these functions.
01:48:27 [Irfan]
q?
01:48:40 [Joshue108]
N: We have been forced to generalise how this is done.
01:49:33 [Joshue108]
We have XRInputSource, which is the object type for this; it is accessed on a session.
01:49:54 [Joshue108]
N: Target Rays Grip location methods..
01:50:14 [Joshue108]
N: Input sources can be parts of your body.
01:50:36 [Joshue108]
You can create input sources that you can call these methods on particular objects.
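[A sketch of how those generalised input sources surface per frame, reusing session/frame/refSpace from the earlier sketches.]

    for (const source of session.inputSources) {
      // Target ray: where the user is pointing, for selection at a distance.
      const rayPose = frame.getPose(source.targetRaySpace, refSpace);

      // Grip: where to draw a held motion controller, if there is one.
      if (source.gripSpace) {
        const gripPose = frame.getPose(source.gripSpace, refSpace);
      }
    }

    // Generic, device-independent events rather than per-button handlers.
    session.addEventListener('select', (event) => {
      // event.inputSource identifies which hand/controller fired it.
    });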
01:50:54 [Joshue108]
q+ to ask about the benefits of generalisation
01:51:18 [Judy]
q- later
01:51:21 [Joshue108]
N: These are opportunities that are worth discussing.
01:51:25 [Irfan]
ack mhakkinen
01:51:41 [Joshue108]
MC: Do authors have to do anything special here?
01:51:53 [Joshue108]
MH: Any discussion on haptics?
01:52:06 [Joshue108]
N: Great question.
01:52:16 [Lauriat]
q+ to ask about input source software vs. hardware.
01:52:23 [Joshue108]
N: Prior to WebXR we had WebVR.
01:52:24 [Judy]
q- later
01:52:31 [Joshue108]
<history>
01:52:49 [Joshue108]
q-
01:53:22 [Joshue108]
Haptics has not been stripped out of proposals..
01:53:51 [Joshue108]
When haptics are available they can be used in WebXR, such as the rumble pack on an Oculus.
01:53:59 [Joshue108]
Kind of generic use with current controllers.
01:54:10 [Joshue108]
MH: Can you capture textural objects?
01:54:46 [Joshue108]
N: You can simulate some things here, there are full body suits etc.
01:55:15 [Joshue108]
Would be surprised if there was not work going on here.
01:55:44 [Joshue108]
N: There are challenges; if it is on the Gamepad API we have that.
01:56:08 [Joshue108]
MH: Having more than just the rumble for the controller is important.
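[For reference, the generic rumble reachable today: WebXR exposes a Gamepad object on an input source, and the non-standardised hapticActuators extension drives it; support varies by browser and device.]

    const gamepad = inputSource.gamepad;   // Gamepad attached to the XR input source
    const actuator = gamepad && gamepad.hapticActuators && gamepad.hapticActuators[0];
    if (actuator) {
      actuator.pulse(0.8, 100);            // intensity 0..1, duration in ms
    }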
01:56:12 [Irfan]
ack Lauriat
01:56:12 [Zakim]
Lauriat, you wanted to ask about input source software vs. hardware.
01:56:32 [Joshue108]
SL: Regarding input sources, mapping, how easy is it to have a software based input?
01:57:03 [Joshue108]
<describes issue, automatic grabbing etc>
01:57:29 [Joshue108]
N: Input source objects can be mapped to the grip button; however, right now there is only one button - if a Gamepad.
01:57:51 [Joshue108]
We call generic select events; the thumb stick won't fire things like user-initiated actions etc.
01:58:18 [Joshue108]
Fake events can be fired, but not the user activation thing, as that is done by the browser.
01:58:32 [Joshue108]
SL: What about other interactions?
01:58:38 [Joshue108]
N: You can polyfill those.
01:59:09 [Joshue108]
MK: I've a high level question about Web vs Native..
01:59:36 [Joshue108]
In the accessibility world we are trying to go across multiple ways of delivering experiences etc.
02:00:00 [Joshue108]
I'm wondering how much content that is browser based or ??
02:00:13 [Judy]
q+ to comment on web and beyond web
02:00:33 [Joshue108]
Is used today; how much are you living on the web with Oculus etc.
02:00:50 [Joshue108]
N: Some of it, regarding the glTF format, that is consumed by Unity and Unreal etc.
02:01:13 [Joshue108]
To get accessibility inside them - you have the benefit of it being a common file format.
02:01:19 [Joshue108]
q+ to ask about performance.
02:01:47 [Joshue108]
N: Extensions can be written, browsers can support secondary run times etc.
02:01:55 [zcorpan]
zcorpan has joined #apa
02:02:26 [Joshue108]
N: Lots of current browser based APIs, there is a commonality between how native and web apps are built.
02:02:33 [Joshue108]
You need to solve the same problems.
02:02:37 [Judy]
q?
02:02:55 [Joshue108]
So how much is web based vs native - we haven't hit CR yet!
02:03:22 [Joshue108]
At turning point, not there yet.
02:04:02 [Joshue108]
<Gives example of hotel room VR exploration>
02:05:18 [Joshue108]
People will use generic tools and not install bespoke random apps to do stuff.
02:05:49 [Joshue108]
ack Judy
02:05:49 [Zakim]
Judy, you wanted to comment at the end of the session and to comment on web and beyond web
02:06:05 [Judy_alt]
Judy_alt has joined #apa
02:06:05 [Joshue108]
JB: I've seen 360 hotel views.
02:06:14 [Matt_King__]
Matt_King__ has joined #apa
02:06:17 [Matt_King___]
Matt_King___ has joined #apa
02:06:34 [Joshue108]
And regarding Matt's question, W3C as a whole is coming across the question of whether we should be looking at Web only or beyond that.
02:06:50 [Joshue108]
In WAI we are aware that some of what we need to look at is beyond the web proper.
02:07:24 [Joshue108]
Regarding the inclusive immersive workshop..
02:07:49 [Joshue108]
It is filling up, and in WAI, as we look at emerging web tech, we want to grow a community of experts.
02:07:55 [Joshue108]
This session was really good.
02:08:02 [Joshue108]
This content could be distilled and shared.
02:08:40 [Joshue108]
Will be a good primer - but some may feel unprepared to make people feel welcome.
02:09:31 [Judy_alt]
https://www.w3.org/2019/08/inclusive-xr-workshop/
02:11:32 [MichaelC]
MichaelC has joined #apa
02:12:03 [Joshue108]
q-
02:12:58 [Joshue108]
rrsagent, draft minutes
02:12:58 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Joshue108
02:14:20 [Roy]
Scribe: Joshue108
02:14:30 [Roy]
rrsagent, draft minutes
02:14:30 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Roy
02:36:17 [Matt_King_]
Matt_King_ has joined #apa
02:36:19 [Matt_King__]
Matt_King__ has joined #apa
02:51:52 [zcorpan]
zcorpan has joined #apa
03:04:58 [Roy_]
Roy_ has joined #apa
03:15:08 [zcorpan]
zcorpan has joined #apa
03:29:39 [stevelee]
stevelee has joined #apa
04:04:33 [Joshue108]
zakim, who is on the phone?
04:04:33 [Zakim]
Present: janina, Joanmarie_Diggs, Matthew_Atkinson, MichaelC, interaccess, IanPouncey, Irfan, CharlesL, Roy, Joshue108_, Manishearth, kip, cabanier, Matt_King, NellWaliczek,
04:04:37 [Zakim]
... ZoeBijl, Léonie, (tink), zcorpan, Avneesh, romain, marisa, LisaSeemanKest_, Joshue, achraf, addison, stevelee, Lauriat, Matt_King_, ada, Judy
04:05:19 [Irfan]
Irfan has joined #apa
04:07:22 [Irfan]
present+
04:08:22 [Judy]
Judy has joined #apa
04:10:27 [jcraig]
ScribeNick: jcraig
04:10:54 [jcraig]
Topic: Web RTC joint meeting with APA
04:11:26 [jcraig]
Scribe: jcraig
04:11:36 [hta1]
hta1 has joined #apa
04:11:39 [ZoeBijl]
present+
04:11:41 [dom__]
dom__ has joined #apa
04:11:41 [Joshue108]
present+
04:11:49 [jcraig]
Dom: introduce yourself
04:11:58 [dom__]
Present+
04:12:00 [youenn]
youenn has joined #apa
04:12:00 [jcraig]
Bernard Aboba
04:12:22 [Jared]
Jared has joined #apa
04:12:22 [Judy]
present+ Judy
04:12:31 [jib]
jib has joined #apa
04:12:37 [hta1]
Harald Alvestrand
04:12:46 [jcraig]
James Craig, Apple
04:12:56 [jcraig]
Armando Miraglia
04:13:05 [jcraig]
Jared Cheshier
04:13:13 [jcraig]
Josh O Connor, W3C
04:13:20 [Bernard]
Bernard has joined #APA
04:13:28 [stevelee]
stevelee has joined #apa
04:13:32 [jcraig]
Youenn Fablet, Apple
04:13:41 [Judy]
Judy Brewer, W3C WAI
04:13:44 [jcraig]
Joanie Diggs, Igalia
04:13:52 [jcraig]
Janina Sajka, APA/WAI
04:13:56 [Bernard]
Introduction: Bernard Aboba, Co-Chair of the WEBRTC WG, and formerly a member of the FCC EAAC and TFOPA groups.
04:14:05 [jcraig]
Henrik ???, Google
04:14:11 [Jared]
Jared Cheshier, new to W3C, in the WebRTC working group and Immersive Web working group.
04:14:12 [jib]
Jan-Ivar Bruaroey
04:14:16 [Bernard]
Henrik Bostrom, Google.
04:14:23 [jcraig]
Daiki, NTT on RTC
04:14:48 [jcraig]
Hiroko Akishimoto, NTT
04:15:13 [jcraig]
and colleague?
04:15:44 [jcraig]
Topic: Real Time Text
04:16:18 [jcraig]
important on behalf of those with speech disabilities or who are deaf or hard-of-hearing
04:17:03 [jcraig]
"Topic 1" is Real Time Text
04:17:43 [jcraig]
"Topic 2" is use case for Web RTC 2.0
04:18:02 [jcraig]
Joshue108 has created a document of example use cases
04:18:12 [Joshue108]
Here they are https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases
04:18:30 [jcraig]
s/is use case/is use cases/
04:19:05 [jcraig]
Bernard: The MMUSIC WG is standardizing transports over the RTT channel
04:19:12 [jcraig]
3GPP is citing that effort
04:19:17 [artem_]
artem_ has joined #apa
04:19:29 [jcraig]
almost certainly will result in a final spec
04:20:19 [jcraig]
dom__: vocab: RTT also means round-trip time in other contexts.. RTT for this discussion is Real Time Text
04:20:45 [jcraig]
Bernard: goal is to enable WebRTC as a transport protocol for RTT, and Gunnar Hellström is currently reviewing that
04:21:12 [jcraig]
RTT is a codec in the architecture, but somewhat like a data channel too
04:21:27 [jcraig]
wouldn't make sense to send music over RTT for example
04:22:03 [jcraig]
their plan to use the data channel to send music I think makes sense
04:22:18 [jcraig]
RTT is timed, but not synchronized time
04:22:32 [Joshue108]
q+ to ask about time
04:22:36 [jcraig]
Is time sync necessary?
04:22:42 [jcraig]
janina: I think not
04:22:49 [Judy]
q+
04:22:54 [jcraig]
jcraig: why not?
04:23:17 [jcraig]
Joshue108: what about synced sign language track
04:23:19 [Joshue108]
ack me
04:23:19 [Zakim]
Joshue, you wanted to ask about time
04:23:25 [jcraig]
ack Judy
04:23:41 [jcraig]
Judy: I share Josh's concern
04:24:05 [jcraig]
hta: ??? and the other one is that the system records send time
04:24:18 [jcraig]
I think the first thing is the only one required
04:24:33 [dom__]
"Any service or device that enables the initiation, transmission, reception, and display of RTT communications must be interoperable over IP-based wireless networks, which can be met by adherence to RFC 4103 or its successor protocol. 26 RFC 4103 can be replaced by an updated standard as long as it supports end-to-end RTT communications and performance requirements." https://www.fcc.gov/document/transition-tty-real-time-text-technology
04:24:39 [jcraig]
Bernard: Would affect how time is sent over the channel...
04:24:40 [Joshue108]
As long as we are sure that issues with time don't impact on synchronisation of various alternate media content
04:24:54 [jcraig]
because 3GPP is involved... likely to be implemented
04:25:28 [jcraig]
janina: idea (with telecom RTT) is to see characters immediately
04:25:31 [Judy]
[jb partly wondering if timing is relevant for rtt communication records in emergency communications]
04:25:40 [jcraig]
q+ to mention the 911 context for immediate chars
04:25:44 [Judy]
q+
04:26:00 [Joshue108]
Challenges with TTS timing for blind users https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases#Challenges_with_TTS_timing
04:26:17 [jcraig]
ack me
04:26:17 [Zakim]
jcraig, you wanted to mention the 911 context for immediate chars
04:26:19 [mhakkinen]
mhakkinen has joined #apa
04:26:23 [Joshue108]
JC: VoiceOver handles this well
04:26:33 [Judy]
q+ to speak to requirement for non-buffering including for deaf-blind use cases in emergency communications
04:26:42 [jcraig]
Bernard: draft has reliable mode and unreliable (lossy?) mode..
04:26:50 [jcraig]
ack Judy
04:26:50 [Zakim]
Judy, you wanted to speak to requirement for non-buffering including for deaf-blind use cases in emergency communications
04:27:10 [Bernard]
The WebRTC 1.0 API supports both reliable and unreliable modes.
04:27:17 [dom__]
q+ to suggest reliable is needed for RTT (completeness is probably more important than latency for text)
04:27:18 [jcraig]
Judy: glad practical details are being discussed .. eg. emergency situation
04:27:33 [jcraig]
Deaf community also wants immediacy
04:27:58 [Bernard]
Draft is here: https://tools.ietf.org/html/draft-holmberg-mmusic-t140-usage-data-channel
04:27:59 [jcraig]
deafblind community may share a need for non-buffered comm
04:28:28 [jcraig]
Judy: I'm interested in hearing the background. We jumped straight into discussion
04:28:55 [jcraig]
is there an opportunity to add an informative para that explains the relevance and allows polyfill implementations? the Deaf community thinks so
04:29:35 [Judy]
q+
04:30:02 [jcraig]
HTA: If there is nothing required in RTT protocol, you can have a perfect polyfill? but if not, you may need extensions.
04:30:13 [Judy]
ack j
04:30:25 [jcraig]
Judy: Sometimes JS polyfills can count as one of two required implementations
04:30:52 [jcraig]
dom__: ???
04:31:12 [jcraig]
dom: may have room in spec to add RTT support in WebRTC today
04:32:06 [jcraig]
dom__: I see value in exposing <scribe lost context>
04:32:37 [atai]
atai has joined #apa
04:32:56 [dom__]
dom: if a gateway from RTT to webrtc is already possible (Bernard to confirm), it would be useful to add a note to the WebRTC document to point to that usage of datachannel
04:33:08 [Judy]
q+
04:33:10 [jcraig]
Bernard: questions from the use case.. it does not recommend whether to use reliable or unreliable mode.. no rec on whether to send char by char or as a blob
04:33:13 [Judy]
q+ ack d
04:33:20 [dom__]
... for a normative change to the API surface, it's hard to consider without understanding the underlying protocol and what it would need to expose
04:33:28 [jcraig]
suggest APA review the document and provide feedback
04:33:31 [Judy]
q- ack
04:33:34 [Judy]
q- d
04:33:43 [dom__]
ack me
04:33:44 [Zakim]
dom__, you wanted to suggest reliable is needed for RTT (completeness is probably more important than latency for text)
04:34:16 [Bernard]
Latest version is https://tools.ietf.org/html/draft-ietf-mmusic-t140-usage-data-channel
04:34:23 [Lauriat]
Lauriat has left #apa
04:34:23 [dom__]
q+ henrik
04:34:37 [Joshue108]
q?
04:34:42 [dom__]
q+ Joshue108
04:34:47 [jcraig]
Bernard: reliable, in-order preferred
04:35:11 [jcraig]
Judy: colleagues at Gallaudet would be interested in sharing polyfill implementations...
04:35:16 [Joshue108]
q+ to ask confirm that this doc contains the technnical requirements for RTT implementiations in WebRT and APA should review
04:35:36 [jcraig]
I'm concerned about missing the timeline window since you are nearing completion
04:35:45 [Joshue108]
ack Judy
04:36:06 [jcraig]
I'd like final WebRTC to include ack that RTT is on the roadmap?
04:36:46 [jcraig]
Bernard: WebRTC has evolved since the RTT proof, we should review that it works with the current draft
04:36:52 [dom__]
ack henrik
04:37:24 [Bernard]
Field trial specification: https://tap.gallaudet.edu/IPTransition/TTYTrial/Real-Time%20Text%20Interoperability%20report%2017-December-2015.pdf
04:37:33 [jcraig]
henrik: what is the requirement on WebRTC for RTT... sounds like you can do this today?
04:38:04 [jcraig]
dom: there is a dedicated RTT spec required by the FCC .. question is how do you expose this in the RTC stack
04:38:23 [jcraig]
and provide interop with existing services like TTML
04:38:25 [Joshue108]
ack me
04:38:26 [Zakim]
Joshue, you wanted to ask confirm that this doc contains the technnical requirements for RTT implementiations in WebRT and APA should review
04:38:49 [henbos]
henbos has joined #apa
04:39:02 [henbos]
Henrik Boström here
04:39:03 [dontcallmeDOM]
dontcallmeDOM has joined #apa
04:39:05 [jcraig]
Bernard: I've entered the spec I'd like APA to review
04:39:19 [jcraig]
also added the Gallaudet prototype from 2015
04:40:05 [dontcallmeDOM]
q+
04:40:24 [Irfan]
q?
04:40:32 [jcraig]
janina: I would like the use cases to clearly distinguish the nuanced differences... e.g. emergency services, etc.
04:41:05 [Joshue108]
JC: Create implementations that can be brailled immediately..
04:41:20 [Joshue108]
We are presenting characters as fast as possible with minor adjustments in VoiceOver.
04:41:32 [Joshue108]
So you can get the existing string asap
04:41:51 [jcraig]
dom: req that the character buffer be sent as fast as possible
04:42:39 [jcraig]
from a webrtc perspective, what you will get is a stream of characters, and it will be up to the app to determine how to transport those characters
04:42:39 [Bernard]
The current Holmberg draft specifies reliable transport. Are there use cases where partial reliability might be desired?
04:43:06 [Joshue108]
q?
04:43:08 [Joshue108]
ack me
04:43:10 [Bernard]
For example, where a maximum latency might be desired. The WebRTC 1.0 API supports maxPacketLifeTime or maxRetransmits for partial reliability.
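[A sketch of what that choice looks like at the WebRTC 1.0 API surface. The 't140' label and the send-per-keystroke policy are illustrative; 'input' and 'lastTypedCharacter' are hypothetical application code.]

    const pc = new RTCPeerConnection();

    // Reliable, in-order: the current draft's choice for RTT.
    const rtt = pc.createDataChannel('t140', { ordered: true });

    // Partially reliable alternative: cap latency instead of guaranteeing
    // delivery (maxPacketLifeTime and maxRetransmits are mutually exclusive).
    const lossyRtt = pc.createDataChannel('t140-lossy', {
      ordered: true,
      maxPacketLifeTime: 500   // milliseconds
    });

    // Real-time text means sending as the user types, not on Enter.
    input.addEventListener('input', () => rtt.send(lastTypedCharacter));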
04:43:10 [dontcallmeDOM]
q-
04:43:21 [jcraig]
janina: it's wonderful that we'll be able to use this side-by-side with existing rtc
04:43:27 [Bernard]
q+
04:44:06 [Judy]
q+
04:44:09 [jcraig]
Level 1, emergency services, Level 2 disabilities, Level 3 personal pref
04:44:21 [jcraig]
(priorities ^)
04:44:39 [jcraig]
Bernard: I'd like to discuss emergency services a bit more
04:45:02 [jcraig]
max-transport time might be affected
04:45:04 [Judy]
q+ to talk about finding details of emergency use cases
04:45:16 [zcorpan]
zcorpan has joined #apa
04:45:46 [Judy]
q+ Janina
04:45:52 [jcraig]
immediacy and accuracy are sometimes in conflict. Each use case could result in different implementations... max-packet-lifetime for example
04:46:47 [jcraig]
Judy: FCC did research. Some examples of death resulting from lack of immediacy in RTT communication.
04:46:52 [Joshue108]
Here is an outline of some of the RTT emergency use cases
04:46:53 [Joshue108]
https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases#Support_for_Real_Time_Text_.28RTT.29
04:47:02 [jcraig]
max-partial-packet may have helped in these cases
04:48:29 [dontcallmeDOM]
[it is 118 apparently]
04:48:36 [dontcallmeDOM]
[110 sorry]
04:48:52 [hta]
(after checking: maxPacketLifetime attribute seems to be wired all the way down the stack, so it probably works.)
04:49:05 [jcraig]
Topic: document of use cases for Web RTC 2.0
04:49:12 [Joshue108]
https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases
04:49:20 [Joshue108]
q?
04:49:21 [Judy]
q+
04:49:34 [Bernard]
+q
04:49:34 [jcraig]
ack Bernard
04:49:39 [dontcallmeDOM]
q+
04:49:41 [Judy]
ack jan
04:49:46 [Joshue108]
q+ to give overview
04:50:13 [jcraig]
Bernard: I think the doc is useful... especially the mention of other tech outside RTC
04:50:41 [jcraig]
hopefully by completion, this can be tested to reveal any shortcomings
04:51:08 [jcraig]
FCC has funded open source project that can be run
04:51:28 [Judy]
q+ to say 1) realize probably need more nuance in our use cases; 2) want to make sure we have a better (non-hack) path for RTT integration in RTC 2.0; 3) to talk about leveraging accessible RTC in multi-channel virtual conferencing
04:51:32 [jcraig]
useful to have a standalone doc that can be used against multiple sources. RTC etc.
04:51:36 [Joshue108]
ack Ju
04:51:36 [Zakim]
Judy, you wanted to talk about finding details of emergency use cases and to and to say 1) realize probably need more nuance in our use cases; 2) want to make sure we have a
04:51:39 [Zakim]
... better (non-hack) path for RTT integration in RTC 2.0; 3) to talk about leveraging accessible RTC in multi-channel virtual conferencing
04:52:00 [jcraig]
Judy: I think we need more nuance in use cases for the fine detail questions you're asking
04:52:02 [Bernard]
IETF RUM WG: https://datatracker.ietf.org/wg/rum/about/
04:52:34 [jcraig]
Judy: if we do the right things, we may end up with a hacked ??? in RTC 1.0
04:52:59 [jcraig]
my guess is you need to add something in the spec that indicates "here's how to support RTT for now"
04:53:07 [Joshue108]
+1 to Judy
04:53:19 [jcraig]
and for 2.0, we need a plan to make sure RTT is supported sans hacks
04:53:45 [Bernard]
RUM document on the VRS profile: https://tools.ietf.org/html/draft-rosen-rue
04:53:57 [Bernard]
+q
04:54:00 [jcraig]
as the tech companies move towards more carbon-neutral teleconferencing, can RTC become that open standard for fully accessible virtual carbon-neutral conferencing
04:54:04 [Joshue108]
ack dom
04:54:09 [Joshue108]
ack don
04:54:31 [jcraig]
dontcallmeDOM: milestone: end of march will require recharter for RTC
04:54:46 [jcraig]
might be a good time to reflect RTT use cases in charter
04:56:17 [jcraig]
dontcallmeDOM: response to Judy: I don't think 1.0 RTT implementation is a hack... happy to work with you on a clarifying note.
04:56:46 [jcraig]
saying no direct support in spec today, but there is ongoing work that can be referenced in the doc
04:57:17 [dontcallmeDOM]
ack me
04:57:19 [Joshue108]
ack me
04:57:19 [Zakim]
Joshue, you wanted to give overview
04:57:22 [Joshue108]
https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases#Data_table_mapping_User_Needs_with_related_specifications
04:57:59 [jcraig]
Joshue108: part of the doc has a data table which provides mapping to use cases
04:59:19 [jcraig]
Joshue108: could publish as a note from APA
05:00:01 [jcraig]
Web RTC group could contribute back to that Note
05:00:07 [dontcallmeDOM]
ack Joshue108
05:00:08 [Joshue108]
q?
05:00:10 [dontcallmeDOM]
ack Bernard
05:00:37 [jcraig]
Bernard: good reason to have use cases as separate doc
05:01:11 [Joshue108]
q+
05:01:17 [jcraig]
we've learned getting access to raw media opens a host of Accessibility opportunities... e.g. live captioning on a bitstream for example
05:01:17 [Joshue108]
ack me
05:01:49 [jcraig]
janina: thank you all for coming
05:01:54 [jcraig]
dontcallmeDOM: thanks
05:01:57 [jcraig]
judy
05:02:05 [Irfan]
rrsagent, make minutes
05:02:05 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Irfan
05:02:05 [jcraig]
judy: thanks all
05:02:14 [jcraig]
[adjourned]
05:02:19 [Joshue108]
q?
05:02:29 [jcraig]
scribe:
05:02:35 [jcraig]
scribenick:
05:02:42 [jcraig]
rrsagent make minutes
05:02:54 [jcraig]
rrsagent: make minutes
05:02:54 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html jcraig
05:03:55 [Manishearth]
Manishearth has left #apa
05:04:42 [Irfan]
topic: Pronunciation Explainer
05:05:03 [Irfan]
Scribe: Irfan
05:05:22 [Irfan]
https://github.com/w3c/pronunciation/blob/master/docs/explainer.md
05:05:50 [Jared]
Jared has joined #apa
05:05:54 [dontcallmeDOM]
dontcallmeDOM has joined #apa
05:06:01 [hta1]
hta1 has joined #apa
05:07:03 [Irfan]
mhakkinen: took the recommendation and put together a document with recommendations.
05:07:11 [dontcallmeDOM]
dontcallmeDOM has left #apa
05:07:18 [Irfan]
with all goals and non-goals with open questions
05:07:50 [Irfan]
need feedback from Michael or Roy with the format
05:07:58 [Irfan]
Janina: good point..
05:08:03 [hta1]
hta1 has left #apa
05:08:19 [jib]
jib has joined #apa
05:08:20 [Irfan]
roy: personalization task force has a document
05:09:17 [Irfan]
mhakkinen: I am looking from the group to hear if I have covered everything in this document.
05:09:31 [Irfan]
How can we bring SSML into HTML - one approach
05:09:47 [Irfan]
inline-ssml... just drop right in
05:09:51 [Irfan]
or bring it as attr model
05:10:27 [Irfan]
one of the concerns that we have: AT products may have a more challenging time extracting SSML from the document.
05:10:55 [Irfan]
based on our survey, one of the big AT vendors came out in support of the attribute model
05:11:23 [Irfan]
the issue is broader.. with spoken interfaces
05:11:51 [Irfan]
present+ burn
05:12:12 [Irfan]
burn: I can imagine what inline means.. but have no idea about attr model
05:12:27 [Irfan]
mhakkinen: <ePub Example>
05:12:50 [Irfan]
what they did was create an attribute.. ssml:alphabet
05:12:59 [Irfan]
you can drop two attributes in a <span>
05:13:34 [Irfan]
we come from education testing, which is the assessment world, and are trying to solve the problem for pronunciation. aria-label doesn't work
05:13:53 [Irfan]
have seen data-ssml and some funny interpretation of ssml..
05:14:16 [Irfan]
we got a JSON structure which is relatively clean, and a prototype.. that's one model
05:14:32 [Irfan]
with explainer we are trying to explain the problems and proposing solutions..
05:14:47 [Irfan]
seeking inputs from stakeholders
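[For reference, the three shapes under discussion side by side. The span attributes follow the EPUB 3 SSML attributes; the data-ssml JSON payload follows the general shape of the task force's draft, so treat the details as illustrative.]

    <!-- 1. Inline SSML dropped into the markup -->
    <speak>The code is <say-as interpret-as="characters">WAI</say-as>.</speak>

    <!-- 2. Attribute model, as EPUB 3 did with ssml:alphabet / ssml:ph -->
    <span ssml:alphabet="ipa" ssml:ph="təˈmeɪtoʊ">tomato</span>

    <!-- 3. data-* attribute carrying a JSON structure -->
    <span data-ssml='{"say-as": {"interpret-as": "characters"}}'>WAI</span>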
05:15:41 [Irfan]
burn: when we created SSML.. we expected that XHTML was next, which would have made it easy.
05:16:26 [Irfan]
if you are going to do the JSON model, how do you maintain context?
05:16:41 [Irfan]
you are going to lose scoping? how is it going to work?
05:16:52 [Irfan]
mhakkinen: that's the question and we are looking for more feedback
05:17:15 [Irfan]
in-general in assessment we are looking for very specific features... such as say-as, pausing.. sub
05:17:41 [Irfan]
burn: you are going to deal with name-spacing issue
05:17:55 [Irfan]
I dont see any reason that you can't do that
05:18:27 [Irfan]
mhakkinen: when we talked about bringing it inline with the HTML.. the comment from an SR vendor was that it is going to be hard to implement..
05:18:38 [Irfan]
some browser vendor also shared the same concerns
05:18:41 [zcorpan]
zcorpan has joined #apa
05:19:09 [Irfan]
burn: It's possible to rename the element .. like the p element
05:20:17 [Irfan]
their internal models have to deal with video rendering.. you could leave the original text there and ignore the element if you are adding any SSML
05:20:34 [Irfan]
mhakkinen: problem with braille display as well
05:20:53 [Irfan]
pronunciation string goes to both, braille and SR
05:21:17 [Irfan]
some discussion.. like ariabraille-label
05:21:39 [Irfan]
could this be controlled purely by ARIA.. but that doesn't solve the broad problem
05:22:09 [Irfan]
joanie: which voice assistants support this?
05:22:20 [Irfan]
mhakkinen: Google and Alexa both allow it
05:22:41 [Irfan]
burn: we did some work in the past.. good to know that it has some life now
05:23:15 [Irfan]
mhakkinen: I tried to hack a demo with Alexa.. it looks like, pulling some HTML content, if it contains SSML, it can be rendered directly
05:23:30 [Irfan]
I can't believe that the Amazon team is not looking for a solution
05:23:36 [Irfan]
It is great way to extend
05:23:42 [Irfan]
have contacted amazon as well
05:24:04 [Irfan]
we want to make sure to make it render on web and voice assistance
05:24:08 [Irfan]
burn: is inline dead?
05:24:37 [Irfan]
mhakkinen: we have two ways, with advantages and disadvantages
05:24:46 [Irfan]
this is just a draft and I would like to explain more in detail
05:25:02 [Irfan]
janina: we have time to discuss it
05:25:22 [Irfan]
mhakkinen: GG from JAWS has been pretty clear that he likes the attribute approach
05:25:44 [Irfan]
haven't heard from other orgs so far
05:26:02 [Irfan]
janina: how about Narrator?
05:26:20 [Irfan]
mhakkinen: talked to them and they seem to be working on this
05:26:44 [Irfan]
?? is trying to work to get pass through from the browser to AT
05:26:55 [Irfan]
for the voice assistants .. we can live with either approach
05:27:14 [Irfan]
AT are less of the challenge here
05:27:50 [Irfan]
Joanie: implementation on browser side.
05:28:14 [Irfan]
<some approach to write JSON>
05:29:10 [Irfan]
mhakkinen: talked to the chief architect at Pearson.. they use some hack to handle pronunciation. They like the JSON model.. it's easier for them because they don't have to change much
05:29:36 [Irfan]
Joanie: thinking.. maybe what we want is a combination of version 1 and version 2
05:29:48 [Irfan]
how do you expose it to a11y API?
05:30:03 [Irfan]
speak tag is not exposed to A11Y API
05:30:56 [Irfan]
mhakkinen: *showing example... inline is simple
05:31:13 [Irfan]
<showing two different approach>
05:31:36 [Irfan]
q+
05:31:50 [Irfan]
<speak> is not exposed to A11Y API
05:32:34 [Irfan]
joanie: we could use span instead
05:32:47 [Irfan]
it is still going to be included in AT tree
05:33:33 [Irfan]
for any AT.. we need an object attr.. which can be exposed to the API
05:33:59 [Irfan]
that will make me super happy
05:34:26 [Irfan]
we would want HTML to bless this
05:34:32 [Irfan]
q-
05:35:02 [zcorpan]
zcorpan has joined #apa
05:35:07 [Irfan]
burn: if you want to filter out the speak content, you have to pay attention to the text which is there
05:36:17 [Irfan]
joanie: good point but you are wrong... because it's an object which can be exposed to the a11y tree
05:36:42 [Irfan]
we have the DOM tree and the render tree; the a11y tree is a combination
05:37:41 [Irfan]
if we have a <div>foo</div> element.. we are going to have an AtkObject, which is an accessible element with some property like ATK_ROLE_SECTION
05:37:48 [Irfan]
state-enabled
05:37:58 [zcorpan]
zcorpan has joined #apa
05:37:59 [Irfan]
The AtkText interface will bring all the stuff about text
05:38:18 [Irfan]
all these ATK object attributes going to include the text
05:39:14 [Irfan]
if we do like <div aria-bar="baz">foo</div>
05:39:25 [Irfan]
I am also going to get bar:baz
05:39:42 [Irfan]
ATKText- "foo"
05:40:27 [Irfan]
also going to have attribute like ssml : <say-as>.. that means text doesn't go away because it is an object attribute
05:40:38 [Irfan]
mhakkinen: is there any char limit?
05:40:48 [Irfan]
joanie: probably there is
05:40:55 [Irfan]
mhakkinen: we need to think about it
05:41:14 [Irfan]
joanie: if we have a limit then we need to break it into multiple attributes
05:42:14 [Irfan]
burn: you might have multiple properties.. one say-as attr is not going to work..
05:42:20 [Irfan]
joanie: then we have to go to SSML
05:42:40 [Irfan]
burn: we can break it up which is an array of literal string
05:45:24 [Irfan]
if you need some of the hierarchical properties then there could be a problem, otherwise it is okay
05:45:36 [Irfan]
joanie: agrees
05:46:34 [Irfan]
mhakkinen: during the IMS discussion we talked about these challenges... people want to exercise all the capabilities
05:47:14 [Irfan]
burn: one of the issues for us was... there would be a couple of different TTS voices loaded.. a male English voice.. a female German voice...
05:47:19 [Irfan]
its been long time
05:47:28 [Irfan]
idea was that you have an html page or voice xml
05:48:00 [Irfan]
you have text there and someone adds a lang tag.. it would switch the TTS voice to German and make it female.. which is more disruptive
05:48:22 [zcorpan]
zcorpan has joined #apa
05:48:55 [Irfan]
we had to change it, which would affect larger numbers of users, but here we don't have that challenge
05:49:28 [Irfan]
joanie: AT are going to have to deal with inheritance which is un-fun.
05:50:26 [Irfan]
not asking to change your explainer but inheritance or multiple level properties are going to be applied
05:50:50 [Irfan]
I was happy about this solution and we started talking about child or other voice
05:52:00 [Irfan]
<free conversation>
05:52:51 [Irfan]
rrsagent, make minutes
05:52:51 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Irfan
05:55:25 [zcorpan]
zcorpan has joined #apa
06:00:27 [Irfan]
mhakkinen: we are stepping back from full SSML inline.. to a subset, either an element or a new attr... a span-based model.. that is simple and clean
06:00:58 [Irfan]
burn: where is this work going to happen?
06:01:15 [Irfan]
github handle @burnburn
06:01:32 [Irfan]
its a good way to start
06:01:51 [Irfan]
mhakkinen: SSML is going to have much broader impact on the web.
06:02:08 [Irfan]
amazon is already doing extension to SSML
06:02:16 [Irfan]
there is so much potential here
06:06:19 [Irfan]
joanie: JSON has to be parsed, and in the case of at least some SR turned back into the original SSML
06:06:49 [Irfan]
joanie: I am going to parse it into option one...
06:07:58 [Irfan]
all the SSML properties should be used as a literal string in a single object attribute
06:08:51 [Irfan]
it is going to be a very lengthy literal object attribute
06:09:11 [Irfan]
mhakkinen: if the user wants to navigate char by char.. what is affected?
06:09:35 [Irfan]
if the user wants it in a female voice.. are you going to retain the active SSML?
06:09:48 [Irfan]
joanie: that has nothing to do with option 1 or option 2
06:10:31 [Irfan]
I don't want to learn SSML but I want to use it without learning it
06:10:44 [Irfan]
<open discussion>
06:12:28 [Irfan]
joanie: need to talk to vendors and say that no matter what.. it is going to be an attribute.. could you deal with actual SSML markup in an attribute?
06:32:39 [Judy]
Judy has joined #apa
06:35:56 [Roy_]
RRSAgent, make minutes
06:35:56 [RRSAgent]
I have made the request to generate https://www.w3.org/2019/09/20-apa-minutes.html Roy_
06:47:13 [atai]
atai has left #apa
07:02:06 [jib]
jib has joined #apa
07:03:07 [zcorpan]
zcorpan has joined #apa
08:14:52 [Roy]
Roy has joined #apa
11:39:46 [zcorpan]
zcorpan has joined #apa
11:42:19 [MichaelC]
MichaelC has joined #apa
13:38:50 [Judy]
Judy has joined #apa
14:20:24 [stevelee]
stevelee has joined #apa
14:58:25 [zcorpan]
zcorpan has joined #apa
19:24:42 [Judy]
Judy has joined #apa