16:22:51 RRSAgent has joined #immersive-web
16:22:56 logging to https://www.w3.org/2023/04/24-immersive-web-irc
16:23:05 zakim, clear agenda
16:23:05 agenda cleared
16:28:02 lgombos has joined #immersive-web
16:30:30 present+
16:33:01 cabanier has joined #immersive-web
16:33:12 present+
16:35:48 dom has joined #immersive-web
16:50:46 meeting: Immersive-Web WG/CG face-to-face day 1
16:50:51 chair: Ada
16:50:56 rrsagent, make log public
16:56:56 adarose has joined #immersive-web
16:56:59 Brandel has joined #immersive-web
16:57:12 etienne has joined #immersive-web
16:58:00 Manishearth_ has joined #immersive-web
16:58:26 agenda: https://github.com/immersive-web/administrivia/blob/main/F2F-April-2023/schedule.md
16:58:33 Yonet has joined #immersive-web
16:58:43 marcosc has joined #immersive-web
16:59:31 Dylan_XR_Access has joined #immersive-web
16:59:41 marcosc has changed the topic to: Immersive Web F2F - Cupertino
17:00:13 present+
17:00:16 present+
17:00:16 present+
17:00:16 present+
17:00:19 present+
17:00:24 present+
17:00:30 bialpio has joined #immersive-web
17:00:37 present+
17:00:40 present+
17:00:45 Marisha has joined #immersive-web
17:00:49 bajones has joined #Immersive-Web
17:00:49 gmz has joined #immersive-web
17:00:51 Nick-Niantic has joined #immersive-web
17:00:56 present+
17:01:03 present+
17:01:09 present+
17:01:11 DatChu has joined #immersive-web
17:01:15 felix_Meta_ has joined #immersive-web
17:01:20 kdashg has joined #Immersive-Web
17:01:22 present+
17:01:25 present+
17:01:25 mkeblx has joined #immersive-web
17:01:39 rigel has joined #immersive-web
17:01:40 present+
17:01:45 mjordan has joined #immersive-web
17:01:56 present+
17:02:07 present+
17:02:08 rrsagent, publish minutes
17:02:09 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html atsushi
17:02:28 vicki has joined #immersive-web
17:02:30 present+
17:02:49 agenda+ webxr-gamepads-module#58 Add support for a PCM buffer to the gamepad actuator
17:02:59 present+
17:03:08 agenda+ webxr#1320 Discuss Accessibility Standards Process
17:03:17 present+
17:03:18 agenda+ semantic-labels
17:03:27 dulce has joined #immersive-web
17:03:31 agenda+ webxr#1317 Some WebXR Implementations pause the 2D browser page in XR, make this optional?
17:03:36 present+
17:03:48 agenda+ navigation#13 Let's have a chat about Navigation at the facetoface
17:03:54 scribe: Dylan_XR_Access
17:04:09 Mats_Lundgren has joined #immersive-web
17:04:11 agenda+ webxr#1273 Next steps for raw camera access
17:04:15 Introductions
17:04:26 agenda+ webxr#892 Evaluate how/if WebXR should interact with audio-only devices
17:04:30 zakim, list agenda
17:04:30 I see 7 items remaining on the agenda:
17:04:31 1. webxr-gamepads-module#58 Add support for a PCM buffer to the gamepad actuator [from atsushi]
17:04:31 2. webxr#1320 Discuss Accessibility Standards Process [from atsushi]
17:04:31 3. semantic-labels [from atsushi]
17:04:32 4. webxr#1317 Some WebXR Implementations pause the 2D browser page in XR, make this optional? [from atsushi]
17:04:32 5. navigation#13 Let's have a chat about Navigation at the facetoface [from atsushi]
17:04:33 6. webxr#1273 Next steps for raw camera access [from atsushi]
17:04:33 7. webxr#892 Evaluate how/if WebXR should interact with audio-only devices [from atsushi]
topic: intro
17:04:35 Ada Rose Cannon, Apple; into declarative stuff
17:04:49 Nick, Sr Director at Niantic; AR web and geodata platform for devs
17:09:35 present+
17:13:16 https://hackmd.io/@jgilbert/imm-web-unconf
17:13:16 additional introductions available on request
17:14:04 zakim, take up agendum 1
17:14:04 agendum 1 -- webxr-gamepads-module#58 Add support for a PCM buffer to the gamepad actuator -- taken up [from atsushi]
17:14:18 https://github.com/immersive-web/administrivia/blob/main/F2F-April-2023/schedule.md
17:15:08 Ada: First item on the agenda is "Add support for a PCM buffer to the gamepad actuator"
17:15:13 https://github.com/WebKit/standards-positions/issues/1
17:15:18 https://github.com/w3c/gamepad/issues/186
17:15:43 Rik: had support for intensity and vibration of rumble/haptics on the controller, but done through a nonstandard API
17:16:01 ...Google wanted to extend the API, Apple objected; should use a .WAV file and leave implementation up to the developer
17:16:03 Yih has joined #immersive-web
17:16:04 q+
17:16:05 alcooper has joined #immersive-web
17:16:28 ...Can send multiple frequencies through the motor; want to add an API to pass an audio buffer to the controller
17:16:49 ...Haptic actuator is nonstandard, people want to get rid of it; but alternate proposals haven't been developed in two years
17:16:54 present+
17:16:58 ...Also based on touch events, focused on mouse more than controller
17:17:29 ...Want an API for it in WebXR, so instead of going through input profile, gamepad, haptic actuator, etc., just go straight through WebXR
17:17:42 ...Complication is a constant source of problems
17:18:05 ...Really just need a method to play an audio file
17:18:07 q+
17:18:38 Marcos: putting on my Web Apps Working Group chair hat, I work on the Gamepad API with colleagues at Google
17:18:58 ack marcosc
17:19:02 ...objection was shared, and based on the idea that Xbox Live folks were using dual rumble
17:19:22 ...dual rumble was supported in Chrome; objected that this is a terrible design, all you do is pass an enum and get things to rumble
17:19:25 Emmanuel has joined #immersive-web
17:19:47 ...no fine-grained haptics there. Implemented in WebKit, Safari, but we all found it abhorrent as a working group
17:20:22 ...Putting my Apple hat on instead, would object to it moving because compared to Core Haptics, using only audio to represent haptics is not good enough; you can't get the fidelity you need
17:20:49 ...must synchronize audio and haptics together; not sure what a WAV file would lead to on a gamepad
17:20:49 q+
17:21:13 ...for more complicated haptic devices, there are different regions, multiple actuators; the proposal from Microsoft is more region-based, e.g. a glove with an actuator for each finger
17:22:03 q?
17:22:05 ...In Web Apps, we claimed the actuator part because of gamepad; want to figure out in this space whether it's the right time to do generalization
17:22:25 ...Minefield with regards to IPR as well. It's a new area for the web, fraught with potential issues
17:22:51 ...e.g. the Vibration API is not in WebKit because you can do vibrations that feel like system vibrations, alerts; could be scary
17:23:22 ...Together, many issues that increase complexity; just sending an audio stream isn't good enough
17:23:40 ack bajones
17:23:40 +1 to include region based tactile as well
17:23:41 q+
17:23:46 ...But acknowledge that a lot of devices do take audio input. Need to find a happy medium
17:24:35 Brandon: In addition to being concerned about whether this would map to the devices we're trying to support, I feel strongly that putting an API on object A when it likely belongs on object B, because we aren't getting what we want from group B, is not the right direction
17:25:09 ...Could be applicable to any gamepad; we should be improving this for all gamepad-like objects
17:25:23 ...Would want to see evidence that what we're doing only applies to WebXR devices
17:25:52 ...PSVR2 has rumble in the headset. Could see an argument for "let's give the session itself, as a proxy for the device, the ability to rumble" (though an edge case right now)
17:26:19 q?
17:26:21 ...Don't just try to leapfrog bureaucracy using the spec - we shouldn't take exclusive ownership of this capability
17:26:23 ack cabanier
17:26:24 marcosc has joined #immersive-web
17:26:32 q+
17:26:40 Rik: Some frustration because the haptic actuator has been festering for years. Shipped it nonstandard, leaving us in a bad situation
17:27:24 q?
17:27:27 ...Some frustration over lack of progress. OpenXR supports ___ CCM, with plenty of experiences that use the API without problems. Not sure if there's something missing by playing an audio file
17:27:59 Ada: From a politics standpoint, is there anything we can do as a group to encourage discussion? "Festering" is an unfortunately accurate verb
17:28:11 q+
17:28:22 q+
17:28:44 ??: For those of us with ownership over gamepad, we meet once a month, Thursday at 4pm; could be a good time to grab the Microsoft folks and push the discussion
17:28:56 ack adarose
17:29:10 ack marcosc
17:29:57 ...On the scope questions that came up: targeting gamepads, writing instruments, etc. could be overly generic. How much of this is an XR issue?
17:30:34 ...Folks at Apple are adamant that audio isn't going to cut it. Need better synchronization
17:30:49 q+ to ask about a web audio node
17:30:50 etienne has joined #immersive-web
17:30:53 ...Must synchronize haptics to the audio itself. Renderers need to sync with each other, which is challenging
17:31:05 ack Manishearth_
17:31:28 Manish: Heard a bunch of political/technical reasons for trickiness; sounds like there might also be a lack of people to do the work
17:31:47 q+ to point out compatibility challenges if only targeting the highest end haptics
17:32:16 q+
17:32:17 ...Quite a bit of interest here, in this group. Worth wondering if there's a way for people in this group to submit a proposal to help
17:33:05 ??: Yes, that would be great. Have been wanting to rage-rewrite it for a while, it's a mess. But it's a matter of resource allocation - need a testing framework, etc.
17:33:31 ...Would be great to apply resources from multiple companies, have a nice base to apply future WebXR work as well
17:33:36 ack CharlesL
17:34:10 q+
17:34:14 Charles: From an accessibility POV, having only an audio API would be an issue. Having multiple ways to target different regions could be very beneficial if e.g. audio is only coming from your right or left
17:34:20 dino7 has joined #immersive-web
17:34:24 ack CharlesL
17:34:31 ack adarose
17:34:31 adarose, you wanted to ask about a web audio node
17:34:51 Ada: This is probably the wrong group for this, but it could be cool if it was a web audio node
17:35:01 Manish: as hacky as that sounds, it might be the best way to do this
17:35:06 q?
17:35:09 q-
17:35:10 ack bajones
17:35:10 bajones, you wanted to point out compatibility challenges if only targeting the highest end haptics
17:36:15 Brandon: Want to caution against the perfect being the enemy of the good. In some cases, you've just got a little motor that buzzes
17:36:17 q+
17:36:39 ...Would be a shame if we ignored the pressing need for haptics in devices available today because people want to be architectural astronauts
17:36:56 ...Balance to be made between quick and dirty vs. planning for the future
17:36:56 q?
17:37:09 ack Brandel
17:37:18 q+
17:37:40 Brandel: on the topic of devices that exist today, the Xbox One controller has 4 actuators, with the intention of spatializing a haptic moment. The accessibility controller also has haptics
17:38:06 ...need a higher level signal to make judgments on what spatialization entails
17:38:14 q?
17:38:20 ack marcosc
17:38:45 Marcos: Sony are editors of the gamepad spec, have asked them to take a look at uploading audio from the web
17:38:57 ...Concern that comes up is that controllers were never designed to take random files from the web
17:39:10 ...From a security perspective, not sure whether harm can be done, e.g. overloading the motor
17:39:42 ...iPhone considered a gamepad as well
17:39:42 q?
17:39:45 ack cabanier
17:40:19 Rik: for reference, the Quest Pro controller has 4 haptic actuators, including a fancy one; all take audio, the system downsamples to do something reasonable
17:40:28 Brandel: does it expose relative position?
17:40:39 Rik: no, the gamepad is just supposed to know which one is which
17:41:11 Marcos: have a demo that could show how it does work with audio. How it synchronizes, etc.
17:41:16 bajones has joined #Immersive-web
17:41:28 Rik: Everything is synchronized to display time. Pass it a time, it plays at that time.
17:41:35 Marcos: Send it like a URL?
17:41:49 Rik: No, it's a web audio buffer. Already in memory
17:42:02 Ada: We should set up a cross-group meeting
17:42:32 Marcos: next meeting is on Thursday, May 11th
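For illustration only, a minimal sketch of the idea discussed above: driving a controller's haptics from a Web Audio AudioBuffer, the way Rik describes OpenXR runtimes accepting PCM-style data and downsampling per actuator. The playHapticsFromAudioBuffer name is invented here; nothing like it is standardized, and the pulse() fallback is the nonstandard actuator the discussion wants to replace.

    // Hypothetical sketch, not a shipped API.
    async function buzzOnSelect(session: XRSession, audioCtx: AudioContext) {
      // Render a 100 ms, 160 Hz sine burst into an AudioBuffer as the haptic waveform.
      const sampleRate = audioCtx.sampleRate;
      const buffer = audioCtx.createBuffer(1, Math.floor(sampleRate * 0.1), sampleRate);
      const data = buffer.getChannelData(0);
      for (let i = 0; i < data.length; i++) {
        data[i] = Math.sin((2 * Math.PI * 160 * i) / sampleRate);
      }
      session.addEventListener("select", (event) => {
        const source = (event as XRInputSourceEvent).inputSource;
        const anySource = source as any;
        if (anySource.playHapticsFromAudioBuffer) {
          // Invented method name: play PCM haptic data, synchronized to display time.
          anySource.playHapticsFromAudioBuffer(buffer);
        } else {
          // Fall back to the nonstandard intensity+duration rumble exposed today.
          (source.gamepad as any)?.hapticActuators?.[0]?.pulse?.(1.0, 100);
        }
      });
    }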
17:43:25 zakim, take up agendum 2
17:43:25 agendum 2 -- webxr#1320 Discuss Accessibility Standards Process -- taken up [from atsushi]
17:44:00 scribenick: Manishearth_
17:44:16 mjordan has joined #immersive-web
17:44:34 Dylan: prior a11y discussions: webxr has *some* control over this but it's fundamentally a low level system
17:44:44 rrsagent, this meeting spans midnight
17:44:53 ...we should figure out what of this is under our scope, and what falls under other groups
17:45:29 ...case study: Charles & I are part of an NSF team, making nonverbal communication in XR accessible to low-vision people: taking gestures, physical proximity, and 3d/2d content and turning that into sound and haptics
17:46:04 Yih_ has joined #immersive-web
17:46:12 ...some things here we can help handle; some things like gestures or emoji are beyond the webxr level
17:46:50 Resource: How Do You Add Alternative Text and Metadata to glTF Objects? https://equalentry.com/accessibility-gltf-objects/
17:47:19 q?
17:47:28 q+
17:47:36 ...could create a task force for XR a11y, separate from the Tuesday meetings
17:48:03 ...can bring recs to this group as a whole, and bring in the APA/etc.
17:48:19 Charles:
17:48:32 q+ to ask about standardisation work we can do in this group
17:48:49 ack Jared
17:48:50 Dylan: A lot of the current screenreaders in VR are about pointing at what you want and OCRing what you see, as opposed to looking at "everything in the space"
17:48:53 https://www.w3.org/2019/08/inclusive-xr-workshop/
17:49:25 Jared: back in 2019 there was a workshop. there have been quite a few shifts around responsibilities in the w3c
17:49:43 ...could be interesting for this group to have one resource for what the responsibilities currently are
17:50:07 ...I've had a hard time discovering what that is. would be good to come up with consensus on the current state of things
17:50:11 q?
17:50:28 XR Access github: https://bit.ly/xraccess-github
17:51:06 q+
17:51:15 ack adarose
17:51:15 adarose, you wanted to ask about standardisation work we can do in this group
17:51:29 Dylan: one thing is that we don't have things like a list of legal responsibilities for XR, and that's one of the problems
17:51:39 ...good to have minimum guidelines around this
17:51:56 ada: 100%, if we had such minimal guidelines we could start building the things we need so people can satisfy them
17:52:23 ...also this is a good group to do this work, either in the group or as a separate task force formed from this group
17:53:09 ...something mentioned last week: might be a good idea to do something like the a11y object model; the visual part of that model is quite tied in to the DOM, but there's nothing like that for WebGL. Giving people the option to generate that themselves would be useful
17:53:25 q+
17:53:25 q?
17:53:29 ack Nick-Niantic
17:53:43 ...and then if there are minimum-viable standards later, we can say "hey, we made this easy for you" (and if you don't do it, there's the stick)
17:53:56 Nick: when we talk about a11y we talk about alt text, ARIA tags, ...markup
17:54:13 ...as ada said, we now have webgl/webgpu which don't know anything about what they're rendering
17:54:30 q+
17:54:36 ...but you also have frameworks like a-frame/etc. that integrate with the DOM
17:55:04 ...and they can perhaps do more semantic a11y stuff
17:55:11 ...otoh there's pushback against them for being heavy on the DOM
17:55:19 ack CharlesL
17:55:42 Nick: in other words, do you think we should make a standard like a-frame, or something else?
17:55:46 q+
17:56:18 adarose: would like an imperative API, where you build some kind of tree
17:56:35 ...probably has access to hit boxes / etc.
17:56:56 Nick: could it be declarative? like a json file?
17:57:03 ada: i guess you could, and then parse it into a tree
17:57:29 ada: part of my instinct is to keep the DOM for stuff that is rendered in DOM
17:57:37 ...especially as we get more DOM integration
17:58:17 q+
17:58:17 ...a-frame has shown you can have a nice matchup between the DOM tree and the scenegraph
17:58:17 scribenick: adarose
17:59:07 q+
17:59:36 present+
17:59:50 Manishearth_: I would prefer an imperative API; I would not want to standardise A-Frame for a11y. For a DOM-based API, more ARIA tags which declarative APIs like A-Frame could use would be nice. But an imperative API would work for everyone. Whilst I think the DOM-based API is fine, I wouldn't want to force everyone through it.
18:00:40 ...there are lots of tools for those scenarios; I don't want people doing a11y in XR to be stuck with that approach. Imperative APIs can be integrated into the DOM-based APIs without it being an additional cost
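To make the imperative a11y-tree idea from adarose and Manishearth_ above concrete, a minimal sketch; every name here (XRAccessibilityNode, accessibilityRoot) is hypothetical, since no such interface is specced:

    // Hypothetical imperative accessibility tree; illustrates shape only.
    interface XRAccessibilityNode {
      role: string;                 // e.g. "button", "region", "avatar"
      label: string;                // human-readable description for screenreaders
      bounds?: DOMPointReadOnly[];  // hit-box corners in some XRSpace, for navigation
      children: XRAccessibilityNode[];
    }

    // The app (or a library like three.js / A-Frame) would build the tree from
    // its own scenegraph, since WebGL/WebGPU know nothing about semantics.
    function describeScene(): XRAccessibilityNode {
      return {
        role: "region",
        label: "Chess board on a wooden table",
        children: [
          { role: "button", label: "White pawn, e2", children: [] },
          { role: "button", label: "Black knight, g8", children: [] },
        ],
      };
    }

    function attachA11yTree(session: XRSession) {
      // Hypothetical attachment point: the UA would surface this tree to
      // platform assistive technology, updating as objects move over time.
      (session as any).accessibilityRoot = describeScene();
    }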
18:00:40 q?
18:00:41 q+
18:00:45 ack Manish
18:00:48 ack adarose
18:00:51 ack Manishearth_
18:00:59 scribe+
18:01:00 ack Dylan_XR_Access
18:01:15 q+
18:01:19 Dylan: another player that we should keep in mind here is screenreaders
18:01:31 ...there's gonna be a big question of: when they get this, how do they interpret it
18:01:43 ...what are they going to do with it
18:02:15 ...would be very curious to see what the differences are when it comes to how they acquire their content, and how different screenreaders fare when fed these things
18:02:47 q?
18:02:47 ...if there's a way we can make these experiences at least navigable from a user experience standpoint, relatively similar, so people aren't coming to this completely confused as to the way it was built
18:02:50 q?
18:03:09 q+
18:03:13 ada: i think things like Unity, when they're targeting the web...
18:03:27 ...things in the session or on the document itself, they should be able to use it
18:04:09 q?
18:04:11 ack cabanier
18:04:16 q+
18:04:18 ...because it's a new rendering mode, existing screenreaders would have to write additional APIs to hook into it. needs to be easily accessible, not deeply ingrained in a way that you wouldn't get from the DOM tree + executed JS
18:05:11 cabanier: not sure if we ever wrote down the results of a TPAC session about a lot of this
18:05:37 ada: hopefully minuted. wasn't our meeting, might be an a11y group
18:05:54 cabanier: at the time we thought we had something that covers most of what is needed by webxr
18:06:01 There was a workshop
18:06:13 q?
18:06:17 ada: going to make a repo to start work here. it's going to have to be implemented in the browser
18:06:30 cwilso: Do you have the link to the Webex?
18:06:38 https://www.w3.org/2019/08/inclusive-xr-workshop/papers/XR_User_Requirements_Position_Paper_draft.html
18:06:46 q+ To point out https://github.com/WICG/aom
18:07:12 cwilso:
18:07:35 fetchez la vache
18:07:45 q?
18:08:17 q?
18:08:21 ack Jared
18:08:57 q?
18:09:10 q+
18:09:18 Marisha has joined #immersive-web
18:09:40 Jared: what kind of process exists to ensure we follow the success criteria - that each spec has to have an a11y section
18:10:12 ada: we generally ensure that the webxr APIs are more accessible than what they are building on
18:10:36 ack CharlesL
18:10:38 ...big problem is that devs aren't really using the stuff we have at the per-spec level; doing something like this might work, but nobody else is doing that kind of work
18:10:48 Yih has joined #immersive-web
18:10:58 Charles: The concept i was thinking of was the w3c registry
18:11:11 ...screenreaders already know how to navigate the DOM, that might make sense
18:11:33 ...as long as the new portions of the DOM get updated as you move around
18:12:07 ...parallel with the publishing group in the w3c, which created a separate group
18:12:19 ada: regarding the last point, pretty much all of our specs are done in parallel
18:12:25 ack Nick-Niantic
18:12:27 ...so a module would fit in very well
18:12:54 Nick: conversation earlier, may want to consider a spec that's not only for web devs but also useful for unity/etc. people
18:13:23 ...on one hand a thorny problem to solve at an API level. thinking of glTF as a format; maybe a way to do a11y tags is as part of the glTF spec
18:13:37 ...and then you have browser libs/etc. that read source information in that scenegraph
18:14:07 ...not perfect; if you're not using glTF and doing runtime stuff, there's no real recourse
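One way Nick's suggestion above could work without a new glTF extension is the format's standard "extras" escape hatch, which permits application-specific JSON on any glTF object; the "accessibility" payload below is invented for illustration and is not part of any published extension:

    // A glTF node carrying hypothetical a11y metadata in its "extras" field.
    // "extras" is real glTF; the "accessibility" shape is made up for this sketch.
    const gltfNode = {
      name: "ChessKnight",
      mesh: 3,
      extras: {
        accessibility: {
          label: "Black knight chess piece",
          role: "button",
          description: "Selectable; moves in an L shape",
        },
      },
    };

    // A viewer or library could walk the parsed glTF and surface these labels
    // to whatever imperative a11y tree the browser eventually exposes.
    function collectLabels(nodes: Array<typeof gltfNode>): string[] {
      return nodes
        .map((n) => n.extras?.accessibility?.label)
        .filter((l): l is string => typeof l === "string");
    }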
18:14:17 ada: for the model tag discussion we're going to need this kind of thing
18:14:19 q?
18:14:21 ack Dylan_XR_Access
18:14:57 Dylan: we can connect with the devs working with screenreaders
18:15:10 ...other thing is we can work with unity/etc. people who need to integrate it
18:15:22 ...also need to figure out where we expose it at each level
18:15:39 ack bajones
18:15:39 bajones, you wanted to point out https://github.com/WICG/aom
18:15:59 q?
18:16:19 bajones: at TPAC 2022 we had a meeting with the a11y object model group, part of WICG
18:16:36 ...part of the programmatic extension of ARIA
18:16:48 ...can we make imperative canvas/webgl stuff more accessible
18:17:25 ...mostly just "everyone defines this through js". the problem becomes "how do we motivate that". should continue to interface with them, they were quite interested in working with us
18:18:19 ...second point: idk how well this would apply here. One of the things we did in the webgpu api was an abundance of labels. This is just for development purposes, so you can have good error messages
18:18:44 ...appealing to devs' selfish nature here... works!
18:19:00 q+
18:19:09 ...anything we can do to make object-picking, debugging, etc. easier; whatever carrot we can dangle, that would prob be good
18:19:35 q?
18:19:38 +1 to the a11y labels carrot for devs!
18:19:39 ada: reminded of Google driving SEO with this
18:19:40 ack Manishearth_
18:20:09 Yih has joined #immersive-web
18:21:19 ack Dylan_XR_Access
18:21:25 manish: note on process; we do have
18:21:57 ...a11y at CR review. if we want to do more than what the review requires we can also do that, tricky. generally in favor of having an a11y-focused model that other specs build on
18:22:10 Dylan: making things accessible makes them more readable to machines too
18:22:26 ...one thing we do is to get universities to teach a11y but also get people to work on these kinds of challenges
18:22:52 q+
18:22:56 ...if there is oss code from this group, that's something we're interested in making easier to access
18:23:27 ...encourage people to reach out!!!
18:23:28 q?
18:23:31 ack CharlesL
18:23:55 Prototype for the People project - would be happy to add anything from this conversation that needs additional development muscle: https://xraccess.org/workstreams/prototype-for-the-people/
18:23:56 I'm interested in that. Working on OSS WebXR samples now with lots of people in the community.
18:24:12 q?
18:24:14 Charles: what about building a11y checker tools
18:24:30 ada: currently not much existing; a lot of tooling is around rendering
18:24:47 jfernandez has joined #immersive-web
18:25:32 Charles: might end up becoming a legal requirement, even.
18:25:38 ada: really pro there being a11y standards for XR
18:25:51 q+
18:26:01 felix_meta__ has joined #immersive-web
18:26:04 ...lots of places won't do a11y unless legally mandated to do so
18:26:09 q?
18:26:27 ...unsure if it's us or a different group
18:26:47 q?
18:26:49 RRSAgent, please draft the minutes
18:26:50 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
18:26:56 ack Dylan_XR_Access
18:26:56 q+
18:26:57 Dylan: def agree
18:27:24 ...we need to surface text, even, at the AOM model/etc. level
18:27:40 ...do we have info for the XAUR group
18:27:53 ...e.g. if you're in a social VR setting you should be able to tell where people's avatars are
18:28:17 ack Nick-Niantic
18:28:24 ...ensure that the right concerns get directed to the right group
18:28:45 Nick: question for the Googlers: relatively recently Google transitioned Docs from being DOM-based to canvas-based
18:29:06 ...improves compat and smoothness, but now you have to reinvent a11y
18:29:33 bajones: idk about what efforts went into making it accessible
18:30:20 ...i was under the impression that it had happened, but recently i went spelunking and there was still some DOM there
18:30:40 ...so the transition may not be as complete
18:30:49 Nick: hm. find-in-page at least doesn't work
18:31:08 bajones: do not expect it was done in a way that was necessarily easy to replicate outside of google
18:31:29 q?
18:32:04 q+
18:32:12 ack Dylan_XR_Access
18:32:22 A relevant link for the Docs canvas transition: https://workspaceupdates.googleblog.com/2021/05/Google-Docs-Canvas-Based-Rendering-Update.html
18:32:22 Nick: interesting that Docs is kinda in the opposite situation, where they're moving from a structured model to 2d rendering
18:32:33 Yih has joined #immersive-web
18:32:51 q+
18:33:53 Dylan: path forward: do we work with folks like unity/8thwall/etc. to come up with the solution? can we require users to use something
18:34:03 Nick: yeah, even figuring out the level at which to do this is hard
18:34:22 ack adarose
18:34:25 q+
18:34:27 ack adarose
18:34:30 Nick: At least for the 2d web the browser knows everything about what's going on; we're nowhere near that here
18:34:52 ada: one approach i'd like to take is have a go at speccing out an API to let libraries add the info needed
18:35:35 ..."this is a thing we're proposing: a11y, SEO, etc.", showing it to the various people who it's relevant to
18:35:42 ..."does this fit with what you're building"
18:36:04 ...then we can approach the model people with "these libraries have ways to add things to rendering, but these models are opaque blobs"
18:36:30 q+
18:36:33 q+
18:36:48 ada: even if we do something like that, it won't be useful in all situations
18:37:52 ..."there is a fox person in front of you with red ears and ..." is not necessarily as useful as "there is person A in front of you, they are walking away, slightly frowning" in many contexts
18:38:10 Dylan: our NSF grant is helping figure that out
18:38:34 q?
18:38:37 ack CharlesL
18:38:38 ada: have opinions about avatars on the web, think we need to drive standardization before we get into a problem
18:38:55 Charles: reach out to the various a11y groups at TPAC?
18:39:06 q?
18:39:08 ada: good call, haven't started assembling an agenda but can
18:39:11 q+
18:40:04 q+
18:40:26 q?
18:40:29 ack Manishearth_
18:41:09 Brandel has joined #immersive-web
18:43:52 ack Yonet
18:43:59 ack Yonet
18:45:25 Manish: a big difference between the 2d web and XR is that the 2d web can be represented as a roughly 1-dimensional thing (a traversable tree) with some jumping around, whereas for XR that's very... not true; what is and isn't important, and how that changes over *time*, leads to trickiness, and different applications will want to highlight different things. we do need something low level
18:45:41 Yonet:
18:45:44 q?
18:45:44 ack Dylan_XR_Access
18:46:27 Dylan: to give a sneak preview of the stuff we're doing, we did an AR thing using e.g. the hololens for real spaces, to e.g. help blind people navigate to the right bus stop
18:46:52 ...when you try to make everything audible at once, everything is irrelevant
18:47:01 I am interested in participating in the accessibility initiative too.
18:47:10 Great
18:47:49 Dylan: would like help setting the group up
18:48:43 zakim, take up agendum 3
18:48:43 agendum 3 -- semantic-labels -- taken up [from atsushi]
18:49:18 etienne has joined #immersive-web
18:49:44 scribe+
18:50:00 agenda?
18:50:01 https://github.com/immersive-web/semantic-labels/issues/4
18:50:12 zakim, take up item 3
18:50:12 agendum 3 -- semantic-labels -- taken up [from atsushi]
18:50:59 cabanier: planes give you the different surfaces, hit testing lets you point rays at things and see the intersections
18:51:09 Is there a link for this issue?
18:51:21 Rik: quest browser gives back planes/etc., where they are in the real world. not sure what you are hitting / which planes you are hitting etc. user has to manually set up the room and what those objects are, table chair etc.
18:51:33 ...but you don't know what you're actually hitting. in quest the user tells us what their things are when they set stuff up (manually). we want to expose that to webxr
18:51:48 ...so you know if something is a door or window or something
18:51:56 Jared https://github.com/immersive-web/semantic-labels/issues/4
18:52:27 ...update two existing specs. in the array of attributes, a single DOMString attribute.
18:52:31 q+
18:52:40 ...set up a repo that defines all that.
18:52:52 RRSAgent, please draft the minutes
18:52:54 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
18:53:00 q?
18:53:02 ack bajones
18:53:32 bajones: topic came up before. only expose metadata on hits, correct? - Yes
18:54:03 ...hit tests could get a rough idea; as you point you can have items call out
18:54:37 ...curious about expected use cases, if I can only get back the real item you are pointing at in the real world
18:54:59 q+
18:55:07 Rik: Planes API, Meshes API. you can query all the planes in a scene
18:55:31 ...quest browser, this website, gives a link to the privacy policy on what data you are giving up.
18:56:00 ...if you are putting furniture in a room you put it on the floor. and likewise a painting should be put on a wall and not a window.
18:56:15 ack bialpio
18:57:20 bialpio: know there are some products that exist that label context: a "mask" of what the user sees would annotate every pixel. if devices do operate like this, how do we expose this info through the API. where is the sky etc.
18:57:41 q+
18:58:04 felix_meta_ has joined #immersive-web
18:58:30 ...wonders about an annotated buffer; we may not know where all the pixels are. table top board games: where is the table? how do we integrate with a buffered approach; limited APIs limit to a bitmask, sky vs. wall vs. window.
18:58:55 Rik: going outside is still unsolved for VR. tied to the room, even walking between rooms.
18:59:20 ...not really implemented correctly. Semantic labelling comes from OpenXR.
18:59:59 ...would be an optional label.
19:00:20 ack bajones
19:00:26 q+
19:00:45 bajones: assume real world meshing paired with semantic labels; would this help with the tagged buffer?
19:01:19 ???: will there be a viewport like the sky?
19:02:04 bajones: if I am in my living room I can label couch/chair, but when I go outside I won't know there is a mountain vs. sky.
19:02:45 ???: No. Confidence level could be useful; a tagged buffer could expose a confidence level. I am looking for problems here, not sure if they are real.
19:02:59 ...we need to make sure the API is flexible
19:03:18 bajones: masking out the sky, star chart apps.
19:03:32 ...anyone know what they are using for masking out the sky in those applications?
19:05:09 ack Nick-Niantic
19:05:16 Nick: we employ two scenes, and one with a mask over it.
19:05:21 https://github.com/immersive-web/semantic-labels
19:05:31 ...do you have a list of semantic labels?
19:05:36 https://github.com/immersive-web/semantic-labels#list-of-semantic-labels-for-webxr
19:05:43 q+
19:05:47 Rik: you can add more.
19:05:50 Desk, couch, floor, ceiling, wall, door, window, other
19:06:19 ack lgombos
19:06:34 ..."other" is undefined right now, empty
19:06:58 ...if you manually draw a table it won't have the semantics. one label per object.
19:07:41 Ada: is it an array of 1 item? table and round table, brown table.
19:08:01 Rik: we should not invent it now.
19:08:28 ...confidence level: I don't like that, it pushes the decision to the developer; avoiding a confidence level would be good.
19:08:35 q+
19:10:03 Nick: confidence level: content fades out along the edges, so having a confidence level is helpful. per pixel confidence level.
19:10:26 Rik: Depth.
19:11:09 ??: ARCore could give you depth information. one issue about that: consider how to expose this. one buffer with both confidence level and data, but they changed that.
19:11:11 q?
19:11:24 ack bialpio
19:11:29 q+
19:11:45 bialpio: the OpenXR extension is coming from Meta.
19:11:58 ...who implements the extension?
19:12:11 ack Dylan_XR_Access
19:12:13 q+
19:12:47 Dylan: a11y impact: being able to label what's in the user's environment is very important, and where the edges are; edge enhancement around the borders is very important.
19:12:59 ack bialpio
19:13:06 Rik: on quest we do this so you don't trip over the edge of an object.
19:13:49 q+
19:14:04 bialpio: Computer vision: if we do expose confidence levels, a table with 30% confidence may be ignored and not rendered; leaving that up to the AI is probably not a good idea.
19:14:17 "The difference between something that might go wrong and something that can't possibly go wrong is that when the thing that can't possibly go wrong goes wrong, it's generally much harder to get at and fix." -Douglas Adams
19:14:34 q+
19:14:38 ...making sure we don't paint ourselves into a corner; make sure sky detection with blending sky/not-sky works for this.
19:14:59 ack Nick-Niantic
19:15:32 Nick: hit test for meshes/planes as headsets go outdoors. building meshes outside may be challenging. may be useful to have labels per vertex, not per mesh. this region of a scene is a bush or a tree.
19:15:58 Rik: could have 1000s of vertices
19:16:13 Nick: could be used to place content in a smart way.
19:16:55 ...having labels per plane could be useful, but outdoors you could have multiple meshes for multiple objects.
19:16:58 Brandel has joined #immersive-web
19:17:06 Rik: Is there hardware?
19:17:06 q?
19:17:21 Nick: there are classifiers, mesh generation from scanning.
19:18:38 bajones: which methods expose which data? Are these semantic labels / classifications, with the ability to add to them later: planes, pixels, meshes. seems to make sense. propose: we should have a registry of semantic labels.
19:19:41 ...image-based: masks could have different pixels labelled, confidence levels, etc. give each of these values an enum; we should have concrete values, an integer value for each label.
19:20:00 ...Sounds like semantic labels is a Yes.
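A minimal sketch of consuming the proposal as described above - a single string attribute on detected planes, per the semantic-labels repo. The semanticLabel name follows that proposal but is still provisional; detectedPlanes is the plane-detection module's per-frame set:

    // Sketch: place content using per-plane semantic labels.
    function placeFurniture(frame: XRFrame) {
      const planes: Set<any> | undefined = (frame as any).detectedPlanes;
      if (!planes) return; // plane detection not supported or not granted
      for (const plane of planes) {
        const label: string | undefined = plane.semanticLabel; // provisional attribute
        switch (label) {
          case "floor":
            // e.g. anchor a rug or furniture here
            break;
          case "wall":
            // e.g. hang a virtual painting (but skip "window" and "door")
            break;
          default:
            // "other" or unlabeled: user drew it manually, no semantics attached
            break;
        }
      }
    }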
19:20:44 ack bajones
19:20:50 rrsagent, draft minutes
19:20:51 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html CharlesL
19:21:26 bajones_ has joined #Immersive-Web
19:21:39 Dylan_XR_Access_ has joined #Immersive-web
19:23:21 Jared_ has joined #immersive-web
19:26:45 CharlesL has joined #immersive-web
19:34:18 dino7 has joined #immersive-web
20:24:03 lgombos has joined #immersive-web
20:24:55 CharlesL has joined #immersive-web
20:25:23 Jared has joined #immersive-web
20:26:35 Marisha has joined #immersive-web
20:27:02 Brandel has joined #immersive-web
20:27:28 bajones has joined #Immersive-web
20:28:33 atsushi has joined #immersive-web
20:30:11 adarose has joined #immersive-web
20:30:28 present+
20:30:29 zakim, choose a victim
20:30:29 Not knowing who is chairing or who scribed recently, I propose Brandel
20:30:35 scribenick: cabanier
20:30:40 zakim, choose a victim
20:30:40 Not knowing who is chairing or who scribed recently, I propose lajava
20:30:44 Leonard has joined #immersive-web
20:30:46 ...email: Dylan [AT] xraccess [DOT] org
20:30:58 present+
20:31:01 present+
20:31:02 bialpio_ has joined #immersive-web
20:31:36 present+
20:31:36 marcosc has joined #immersive-web
20:31:36 present+
20:31:37 present+
20:31:42 present+
20:31:42 present+ Laszlo_Gombos
20:31:43 present+
20:31:43 present+
20:31:45 Dat_Chu has joined #immersive-web
20:31:49 topic: Model Element
20:31:51 kdashg has joined #immersive-web
20:31:53 present+
20:32:03 marcosc: not much has been done since the last meeting
20:32:13 ...mostly because the needed stuff is easy
20:32:27 ...the issue is that we need to agree that model is a good idea
20:32:41 q+
20:32:43 ...I was waiting for Mozilla's standards position
20:32:54 ...I think cwilso has an opinion
20:32:54 scribe: cabanier
20:33:05 ...there are more questions that I've been grappling with
20:33:13 ...like, is this a media element?
20:33:29 ...is it like a 3d video? What about a11y?
20:33:44 ...how do we describe the accessibility of it?
20:34:04 ...we have a bunch of web content that moves and that has a11y content
20:34:14 ...one of the elephants in the room is the format
20:34:25 ...I'm not pushing, but we have glTF and USDZ
20:34:34 ...and we designed it format agnostic
20:34:46 Okay, is this related to https://modelviewer.dev/ ?
20:34:48 ...there's going to be an industry push for a standard format
20:35:06 ...how are we going to work out the format issues? We're going to have a conversation about that
20:35:09 q?
20:35:15 ...this is roughly where we're at
20:35:16 q+
20:35:28 ...in WebKit we landed width/height attributes
20:35:30 q+
20:35:39 ack Leonard
20:35:40 ...(ccormack worked on that for half a day)
20:35:43 dulce has joined #immersive-web
20:35:45 ...feedback please
20:35:57 q+
20:36:00 Leonard: fundamentally this is a good idea, to display a 3d format
20:36:09 ...but there are a whole bunch of issues
20:36:09 Yih has joined #immersive-web
20:36:24 ...like how to ensure render quality, animation, interactivity
20:36:34 ...how do you get the camera in the scene
20:36:45 ...it's really hard to solve these issues
20:36:50 q+
20:37:02 ...the formats are going to be an issue, but the concept should be worked out first
20:37:19 marcosc: I didn't really mean that easy :-)
20:37:26 Dylan_XR_Access has joined #immersive-web
20:37:30 ...you are right that rendering is quite hard
20:37:33 Nick-Niantic has joined #immersive-web
20:37:39 ...and those are things that I need help with defining
20:37:39 q+
20:37:39 ack Marisha
20:37:47 ...I'm unsure if they will be easy
20:37:50 q-
20:37:53 q+
20:38:23 Leonard: in the issues, the specs are out of sync (??)
20:38:30 marcosc: we don't have versions in HTML
20:38:54 ...it's not trivial. there are test suites that may have fidelity
20:39:05 ...there is not the idea that we have versions
20:39:14 ...specs can move faster than implementations
20:39:27 ...there is nothing process-wise keeping us from making progress quickly
20:39:46 ...I don't want to merge things into the spec without other implementor feedback
20:39:58 ack Marisha
20:39:58 ...I don't want to add prototype-y stuff
20:40:19 Marisha: it came up earlier that webxr is a black box
20:40:38 ...there is a huge number of developers that can't participate because it's so complicated
20:40:48 q?
20:40:59 ack bajones
20:40:59 ...the web is inherently semantic, so the model element would be very helpful
20:41:21 bajones: I think that the desire is understandable
20:41:32 ...especially in light of the a11y discussion
20:41:43 ...for developers that don't want to do the imperative thing
20:41:55 ...but my issue is that this feels like an unbounded space
20:42:26 ...working on webgpu and webxr, when talking about the model tag: what can the web already do that does that?
20:42:46 ...three.js, babylon can add on new modules and grow in complexity forever
20:42:53 ...which is ok for a user space library
20:43:13 ...but I'm not comfortable with that in a web spec
20:43:22 +1
20:43:25 ...I don't want to become Unreal Engine in a tag
20:43:32 q+
20:43:35 ...is there a reasonable way to cap that complexity?
20:43:49 ...is there something that we're willing to limit it to?
20:44:13 ...I don't know what the escape valves are looking like
20:44:31 ...getting gpu buffers from the model tag is likely not a solution
20:44:49 ...we'd feel much better if there was a clear scope of work
20:44:52 q+
20:44:56 marcosc: I couldn't agree more
20:45:19 ...I thought you were going to mention the video element
20:45:29 ...which is what I'm envisioning
20:45:42 ...given a file, render something on the screen
20:46:01 bajones: I've had conversations about glb/usdz
20:46:17 ...because people think that you can just extend it
20:46:34 ...we really don't want to add things like hair rendering
20:46:46 USD connectors are the extension mechanism
20:46:53 ...even glTF has a bunch of extensions
20:46:54 q+
20:47:06 ...things like refraction index, thickness of glass
20:47:15 ...there should be a line
20:47:30 ...and there's a temptation to keep pushing the line
20:48:02 dulce: physics is a big problem in XR and you will always be pushing that
20:48:35 bajones: for context, Babylon worked with the Havok team so now we have high quality physics for the web
20:48:46 ...do physics need to be part of the web?
20:49:07 ...will this reduce the complexity? People will want to push the line
20:49:13 q+
20:49:19 Marisha: do you see a cap?
20:49:28 bajones: I don't know what that is
20:49:39 ...but it shouldn't be infinity
20:50:02 vicki has joined #immersive-web
20:50:10 ...I think we can find something that doesn't require us to build a new silo
20:50:24 marcosc: how did video cope with that?
20:50:48 bajones: the mpeg spec has a lot of extensions that nobody implements
20:51:03 ...if we look at how video is actually used, they are very complex
20:51:15 ...but in terms of behavior they are well bounded
20:51:35 ...nobody expects all the pixels to have physics attributes
20:51:43 ...which could be reasonable for the model tag
20:52:01 marcosc: why don't I take that back to the USD team?
20:52:13 ...how do we limit it so it doesn't get out of hand
20:52:16 glTF team does not want to limit the capabilities
20:52:46 mats_lundgren_ has joined #immersive-web
20:52:50 bajones: in the case of glTF, we'd likely support the base spec and a limited set of extensions
20:53:02 ...I'm sure there's a similar set for USD
20:53:12 q?
20:53:12 marcosc: that is important to take back
20:53:35 kdashg: the format is the primary concern
20:53:50 Manishearth_ has joined #immersive-web
20:54:01 RRSAgent, please draft the minutes
20:54:02 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
20:54:08 ...it would be bad if authors had to provide 2 formats so it works everywhere
20:54:08 ...we need 1 path forward
20:54:28 ...whatever we do, we're still going to be subsetting it
20:54:42 ...this is what happened with WebM and Matroska
20:54:44 scribenick: cabanier
20:54:59 ...where WebM is a subset
20:55:10 ...and it's explicitly cut down
20:55:30 ...so we'd need to do the same. People shouldn't have to experiment
20:55:39 ...use cases are also important
20:56:04 ...generally we don't see the model tag as something that makes it easier to draw 3d content
20:56:23 ...we're handling 3d content well today
20:56:33 ...we're focusing on narrower use cases
20:56:54 ...some of the things there, for instance privileged interactions
20:57:16 ...like an AR scenario where you'd not need to give depth information to the web site
20:57:25 ...so it would work with an untrusted website
20:57:48 ...the other thing is that you can interact with other privileged content like iframes
20:58:03 ...which is what we should be focusing on. Triage our efforts
20:58:22 ...and not focus on making something it can already do easier. Focus on what it can't do
20:58:41 ...it's going to be really tempting to show a demo
20:58:48 RRSAgent, please draft the minutes
20:58:50 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
20:58:57 ...dropping a model in an AR scene
20:59:00 q?
20:59:02 ...we can already do that
20:59:04 ack kdashg
20:59:06 ack Nick-Niantic
20:59:13 Nick-Niantic: obviously there's a lot here.
20:59:17 +1 to "focus on what doesn't demo well, rather than what does demo well."
20:59:20 ...echoing what other people said
20:59:23 RRSAgent, please draft the minutes
20:59:25 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
20:59:46 ...models are easy and accessible on the web today
21:00:02 ...with very little markup you can embed a model on the web today
21:00:24 ...we don't see a lot of people wanting to stop at a static model
21:00:42 ...most use cases require a dynamic presentation
21:01:05 ...for instance, changing a color on a model: you don't want to download a new model for each color
21:01:17 ...usually you have a bit of code to drive animations
21:01:41 ...running in offline mode is less compelling than something that is prebaked
21:01:52 marcosc: can you talk more about swapping in a model?
21:02:06 Nick-Niantic: I can show it on my screen
21:02:21 glTF has KHR_materials_variants that holds multiple materials for a single model
21:03:27 ...(demoing) this is an example of a website with different models
21:03:59 ...this is telling A-Frame to make changes
21:04:12 ...other cases are character cameras
21:04:25 (additional case study: model-viewer which uses glTF, but don't ask me how it works internally: https://modelviewer.dev/examples/scenegraph/#swapTextures)
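The variant swapping Nick demos above maps to glTF's KHR_materials_variants extension just mentioned: a root-level list of variant names plus, on each mesh primitive, mappings from a material index to variant indices. A rough sketch of resolving a variant name against the parsed glTF JSON - the data layout follows the published extension, while the helper itself is illustrative:

    interface VariantMapping { material: number; variants: number[] }

    // Pick the material index a primitive should use for a given variant
    // name, falling back to the primitive's default material.
    function materialForVariant(
      gltf: any,       // parsed glTF JSON (loosely typed for the sketch)
      primitive: any,  // a mesh primitive object from that JSON
      variantName: string
    ): number | undefined {
      const variants: Array<{ name: string }> =
        gltf.extensions?.KHR_materials_variants?.variants ?? [];
      const idx = variants.findIndex((v) => v.name === variantName);
      if (idx < 0) return primitive.material; // unknown variant: keep default
      const mappings: VariantMapping[] =
        primitive.extensions?.KHR_materials_variants?.mappings ?? [];
      const hit = mappings.find((m) => m.variants.includes(idx));
      return hit ? hit.material : primitive.material;
    }

    // e.g. materialForVariant(gltf, prim, "midnight") returns an index into
    // gltf.materials, so one download serves every color of the product.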
21:04:28 ...on the one hand, I agree that complexity can grow high
21:04:43 ...I don't agree that low complexity is what we want
21:04:51 marcosc: (????)
21:05:04 Nick-Niantic: enough functionality grows quickly
21:05:38 ...talking about 3D video: holograms are popular (volumetric captures)
21:05:55 ...for the needs of the market, there are a lot of formats to consider
21:06:26 marcosc: we don't know what's coming down the pipe
21:06:40 Nick-Niantic: yes, but we shouldn't limit ourselves too much at the start
21:06:59 ...a lot of the interesting cases with the video tag
21:07:11 ...applying video interestingly in a 3d space
21:07:34 ...where if you were to have a model tag, the question is how to get vertices and textures out of the model
21:07:40 ...so it's limited that way
21:08:20 ...we talked about glTF extensions, where they might grow and be extended over time
21:08:42 ...maybe we add semantic information inside the glTF
21:09:05 ...if the model tag is too limited, people will become frustrated
21:09:15 ...finally, we were talking about a11y
21:09:27 ...this could be embedded with the model
21:09:46 ...what we want is an annotated scene graph like what A-Frame lets you do
21:10:01 marcosc: what does A-Frame do with a11y?
21:10:21 Nick-Niantic: A-Frame lets you declare your scene as html in dom elements
21:10:38 ...this lets you hook into the browser a11y engine
21:11:09 ...it won't work out of the box today; it might require a new rendering engine
21:11:15 q?
21:11:43 ...in short, the key is that without a lot of care, a model element is not as useful as what's in the market today
21:11:51 ack Brandel
21:11:52 ...what is the better, more useful thing?
21:12:07 Brandel: as someone who plays on the internet a lot
21:12:17 ...we're always going to be disappointed
21:12:32 ...to that end, I'm not concerned.
21:12:56 ...what we need to find is the bare minimum that is useful
21:13:37 ...I was looking at the model proposal. we talk about using the environment map without needing a user request
21:13:47 ...or without needing access to the textures
21:14:19 ...on the complexity, we should aim at the simplest thing possible
21:14:35 ...and we should focus on the benefits
21:14:45 ...knowing that there is more content in the future
21:15:14 cwilso: my biggest concern is that if a lot of functionality is in the format, that is problematic
21:15:33 ...this moves interop out of the spec
21:15:49 ...implementations can put things together quickly
21:16:06 Yih has joined #immersive-web
21:16:07 ...Safari uses their USDZ component to implement their model
21:16:24 ...and now everyone else has to use the same component
21:16:40 ...there are massive layers of features that need to be implemented
21:16:53 q?
21:17:01 ...if hair rendering was added, the model spec didn't change but the implementation did
21:17:18 ...people don't like to implement multiple formats
21:17:34 ...focusing on what demos well is indeed the wrong thing
21:17:47 ...do people remember ActiveX?
21:17:57 ...people could build fallbacks but didn't
21:18:13 ...this kept Internet Explorer alive
21:18:34 ...baking this much complexity into an engine without a web spec is hard
21:18:41 q+
21:18:53 ...you don't want to expose things to user or developer code
21:19:03 ...the boundaries have to be part of the standard
21:19:16 ...I'm worried that this is going to create a massive interop fracture
21:19:32 ...HTML should have defined an image and video format
21:19:51 ...and an audio one, because we still don't have good ones today :-)
21:19:55 ack me
21:20:06 ack dylan
21:20:13 Dylan_XR_Access: we were talking about how much we want to push it
21:20:25 ...from a usability perspective
21:20:50 dulce has joined #immersive-web
21:20:54 ...I'm wondering, are there certain things that we bake into this tag?
21:21:02 ...is it controlled by the user?
21:21:16 ...should we define the core things that are part of the tag?
21:21:25 ...where does it all fit into it?
21:21:56 ack ada
21:22:19 adarose: one benefit is that it won't render the same on different devices
21:22:52 ...if I want to show a model on a watch I don't want to use expensive shaders or refractions
21:23:17 ...but if I show it on a high end computer, I would want all those expensive things turned on
21:23:32 ...if you want to be pixel perfect, webgl is the thing to do
21:23:42 ...different renditions are a feature and not a bug
21:23:46 q+
21:23:47 q?
21:23:50 q+ to point out but if it looks better on Apple watch than on Samsung watch...
21:23:55 bkardell_ has joined #immersive-web
21:24:16 Leonard: many of the engines already have an idea of device capabilities
21:24:39 ...the bigger issue is that they should look the same in the different browsers on the same device
21:24:42 present+
21:25:09 ...you can differentiate, but different browsers should have the same rendering
21:25:23 ...USD is not a format, it's an API
21:25:45 ...making a new format takes at least 2 years
21:26:05 marcosc: we estimate that any spec takes 5 years :-)
21:26:13 ack leonard
21:26:20 ack marcos
21:26:33 ...having a single format: we've not seen such a thing
21:26:55 ...we've seen disasters happen with formats. We've seen implementations becoming the standard
21:27:03 ...we generally understand what we want
21:27:18 ...if we do it in the w3c, we could all agree
21:27:32 ...we could decide today to just use USDZ
21:27:41 ...but it's going to be challenging
21:27:42 They did not all agree.
21:27:46 q?
21:28:07 Leonard: the model-viewer tag can do most of what you're talking about
21:28:24 ...you should have demos that show what model-viewer can't do
21:28:57 ...show the community what the model tag can do that can't be done with other capabilities
21:29:51 ...there was a discussion about glTF extensions; if the model tag allows them it would break the system
21:30:07 marcosc: we would only do that across browser vendors
21:30:28 ...like anything in ECMAScript: there's a standard, and we aim for the same shipping date
21:30:42 Leonard: so it's extensions for browsers?
21:30:45 ack Marisha
21:31:06 Marisha: why can't we just decide to not have 2 supported formats?
21:31:19 USD is not a format. It is an API
21:31:53 bajones: there are platform issues. adding USDZ is easy for Apple but hard for others
21:32:10 ...USD is not a standard. it's basically a black box
21:32:47 ...you can put a lot of things in USD but Apple will only render their own content
21:32:58 q+
21:33:06 Marisha: is there no desire for USDZ as a standard format?
21:33:19 bajones: there is no real standard
21:33:30 q+
21:33:32 Marisha: is there no document?
21:33:44 marcosc: there's a github repo and a reference renderer
21:33:48 q-
21:34:01 kdashg: this is not surmountable
21:34:30 ...in the video codec space, many millions of users can't decode h264 video because of patents
21:34:59 ...it's because authors just use their de facto assets
21:35:22 ...people choose the easiest and then users have problems
21:35:40 ...we as browser vendors can't tear things apart and repackage them
21:35:43 +1
21:35:50 q?
21:35:55 ack cwilso
21:35:55 cwilso, you wanted to point out but if it looks better on Apple watch than on Samsung watch...
21:36:27 cwilso: the problem with having 2 formats: does that mean that they are both required?
21:36:39 ...that means that they are not web standards
21:36:51 ...you end up exploring what works in browser A and not browser B
21:37:04 marcosc has joined #immersive-web
21:37:12 ...and we have a responsibility to make things interoperable
21:37:43 ...yes, things can look different on different devices, but it should be roughly the same on similar devices
21:37:54 adarose: let's wrap it up there
21:38:08 Thank you Ada. Are there any TODOs or takeaway tasks from this discussion?
21:38:31 https://hackmd.io/@jgilbert/imm-web-unconf
21:39:16 scribe: Marisha
topic: webxr#1321 Control over system keyboard's positioning
21:40:01 Emmanuel: we implemented keyboard integration for the user to trigger the keyboard and use it for input - first point of feedback was wanting to control where the keyboard appears
21:40:30 https://github.com/immersive-web/webxr/issues/1321
21:40:58 q?
21:41:02 q+
21:41:06 q+
21:41:09 q?
21:41:12 ack bajones
21:41:13 Emmanuel: We currently provide z-position but are looking for feedback about what folks think about availability for positioning
21:41:43 bajones: What level of control do native apps have around this (like OpenXR or old Oculus APIs)? Or do native apps invent their own wheel here
21:41:50 Emmanuel: Not sure what native apps do
21:42:12 rrsagent, publish minutes
21:42:13 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html atsushi
21:42:19 cabanier: There's maybe an OpenXR extension for this... Emmanuel, what do you do?
21:42:22 Emmanuel: This is brand new, not very mature
21:43:02 bajones: Maybe it's too soon to try to standardize?
21:43:18 cabanier: there are standards in Unity that are used as an Android-ism
21:43:53 cabanier: If you are in Android, you can specify where on the screen the input is, and the keyboard will try to move itself to that position
21:44:22 cabanier: In immersive, that goes away
21:44:29 q?
21:44:49 q+
21:45:37 bajones: The thing that makes sense is to specify the bounds/coords of the input rect. But maybe that's more complicated than what I'm thinking (what if someone specifies 3mi away) - unless you want devs to specify exact coordinates where they want the keyboard to appear
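A sketch of the kind of hint bajones describes above - the page declaring where its focused input lives so the system keyboard can avoid occluding it. Everything here (XRKeyboardHint, updateKeyboardPlacement) is invented for illustration; no such WebXR API exists:

    // Hypothetical: tell the UA where the focused input is in XR space, so
    // the system keyboard can position itself nearby without occluding it.
    interface XRKeyboardHint {
      space: XRSpace; // reference space the rect is expressed in
      rect: { x: number; y: number; width: number; height: number }; // meters
    }

    function hintKeyboardPlacement(session: XRSession, uiSpace: XRSpace) {
      const hint: XRKeyboardHint = {
        space: uiSpace,
        // A 40cm x 8cm input field, roughly chest height in the UI space.
        rect: { x: -0.2, y: -0.3, width: 0.4, height: 0.08 },
      };
      // A real UA would clamp unreasonable values (e.g. a rect 3 miles away)
      // and remain free to let the user grab and move the keyboard afterwards.
      (session as any).updateKeyboardPlacement?.(hint); // invented method
    }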
21:45:58 ack Nick-Niantic
21:45:59 Emmanuel: Right now the keyboard renders at the same depth as the cylinder
21:47:00 Nick-Niantic: When I think about the placement of content, there are the nuances of the current scene + the current viewer. Asking developers to navigate that complexity can be challenging. Could offload some of the complexity onto the user and let them determine a better spot
21:48:00 Nick-Niantic: We had a project with a dom-tablet where you can pull things in and out of the 3D space - the way this was moved around was by grabbing and moving it in a radius around yourself, and it follows you when you walk around. Making it easy for the user to move the keyboard is best.
21:48:02 q?
21:48:06 ack CharlesL
21:48:33 CharlesL: From an accessibility point of view, a person with low vision may need the keyboard to be in a very specific spot
21:48:37 q?
21:49:11 Dylan_XR_Access: We've heard from folks that to read things they usually bring things close to their face, but they often can't do that in an XR environment. Ideally we'd have system settings for this
21:49:34 cabanier: This is the system keyboard, so if it adds accessibility settings, you'd get that part for free, like high contrast or letter size
21:49:55 cabanier: It could be a pain for the user to have to move the keyboard
21:50:18 q+
21:50:37 bajones: The two things are not mutually exclusive - can make the keyboard not occlude the input, but also make it moveable for users.
21:51:00 bajones: The worst case scenario is two different platforms that have two different conventions for where the keyboard is placed, giving inconsistent results to users
21:51:34 q?
21:51:40 bajones: You don't want to rely on the user's control of the keyboard, but you should enable it
21:52:00 ack Nick-Niantic
21:52:02 Emmanuel: Team is still working on "follow" functionality vs. fixed with a toggle. This gets to the question of how we surface this to webxr devs
21:52:59 Nick-Niantic: *Showing demo on screen* This is the dom-tablet, it's not a burden, it's easy for users to use and place wherever they want
21:53:51 Nick-Niantic: If they get too far away from it, it will also follow the user. An idiom like this is useful. Also, we'd love this (dom content) as a native WebXR feature.
21:54:09 q+
21:54:14 q?
21:54:18 ack Dylan_XR_Access
21:54:45 q+
21:55:00 ack adarose
21:55:04 Dylan_XR_Access: Something that comes to mind when it comes to interaction - we don't want just pointing, we should have equivalents to tab, enter, gaze controls, etc., because there will be folks that have trouble pointing and need things like arrow keys
21:55:52 adarose: One heavily-requested feature has been DOM-overlay for VR, or some kind of DOM layer for XR that's interactive. But as much as it's desired, it's very difficult to implement. It's been discussed for years without a lot of movement.
21:56:01 Nick-Niantic: We can offer our existing implementation as a reference.
21:56:16 Dylan_XR_Access: What part of this is being handled by WebXR vs. the system?
21:56:46 adarose: There's a rectangle that the user is carrying around that has the HTML content on it, with all the accessibility features you'd expect for HTML.
21:57:10 adarose: Currently all we have is DOM Overlay, which is only applicable to handheld mixed reality experiences. It's difficult to establish what it should do in virtual reality
21:57:56 q+ 21:58:10 bajones: There's a demo for this and how you can take content and display it, but no one has described how this should work for virtual reality specifically 21:58:18 ack Emmanuel 21:59:09 Emmanuel: These are great discussions, touching on some of the accessibility points - one of the features for the system keyboard is a strip at the top that shows what content is in the input being modified. 21:59:19 q? 22:00:05 Manishearth_ has joined #immersive-web 22:00:17 q+ 22:00:34 Rigel: when thinking about text input in VR: hand tracking is becoming more popular, and the raycaster handheld approach means that the keyboard is beyond the reach of your own hands. But with hand tracking you want something more like touch typing, and you have to think about the distance from the user, have to think about input methods 22:01:08 bajones: If the system has been designed such that the keyboard can be accessed via touch typing, it should bring up a hands-friendly version of the keyboard. The system should know what input method is being used. 22:01:23 q- 22:01:43 topic: proposals#83 Proposal for panel distance API in VR 22:02:05 Bryce: I'm an engineer from the Browser team at Meta 22:02:20 https://github.com/immersive-web/proposals/issues/83 22:02:36 Bryce: This is outside the context of WebXR, it is about exposing the distance of a virtual 2D panel to the user 22:03:02 Bryce: This could be, for example, a weather application - what is displayed depends on how close the user is to the panel. 22:03:27 Bryce: Another example is a picture window effect, as you get closer to it you can see more and more of what is "outside" the picture window 22:03:30 q+ 22:03:38 q+ 22:04:04 ack mkeblx 22:04:04 Bryce: Do those examples make sense? At a high level - is there any precedent around this? Has it already been attempted? Just want to open up to the group for questions and considerations. 22:04:11 q+ 22:05:12 mkeblx: You alluded to the idea of a picture changing size - previous ideas in this group are things like a magic picture app - you don't have just the distance but also orientation and the user's position relative to the screen. Do people still want that even though we dropped it for a long time? And would your idea be a subset of our previous idea? 22:05:16 q+ 22:05:25 q+ 22:05:31 ack adarose 22:05:39 q+ 22:05:40 mkeblx: Another similar feature is the Magic Leap browser which exposed not just position but orientation via JavaScript 22:06:32 q? 22:06:33 adarose: One concern is that it could potentially be a privacy vulnerability. Maybe users don't want you to know if they're sitting or standing, where their head is in relation to the panel. I don't like the idea of giving user position to web pages. 22:06:36 ack Dylan_XR_Access 22:06:51 Brandel has joined #immersive-web 22:06:57 bajones_ has joined #Immersive-web 22:07:00 q+ 22:07:10 q+ 22:07:19 Dylan_XR_Access: For some folks, being able to get close is necessary to see something. If it suddenly changes, that could be frustrating to users. But if that's something the user could control or have a setting for, that could be a feature. 22:07:22 q? 22:07:25 ack Jared 22:07:35 Jared: What is a panel app? 22:07:59 Bryce: Panel app in this context is just a 2D browser outside of the context of WebXR. If you're in VR viewing a standard 2D browser window 22:08:01 q?
22:08:03 q+ 22:08:03 q+ 22:08:04 ack Nick-Niantic 22:08:48 Nick-Niantic: My understanding from previously is that there were non-immersive modes in the WebXR spec that were meant to handle cases like this. If you wanted to have DOM content but also have a magic window 22:09:50 bajones: Clarification - there is a non-immersive (inline) mode, but it does no tracking of any sort. The thing it gives you is the ability to use the same sort of render loop with immersive and non-immersive content. So you can use the XRSession's requestAnimationFrame. Nobody uses it much, I wish I hadn't spent so much time on it. 22:10:33 bajones: We talked a lot about the magic window mode, which involves tracking the user's position. There were privacy considerations, implementation questions. We could revisit that, but that doesn't sound like what's being discussed here. 22:11:22 Bryce: Yeah, in its simplest form it's just "how far away is the user in this virtual space". Following the discussion about XRSession, I was thinking it could be for devs who don't know anything about WebXR. It could be like the geolocation API that's just surfaced in the navigator. 22:11:25 q? 22:11:29 ack cabanier 22:11:59 That is what I was going to say actually 22:12:20 cabanier: So we don't really need to know how far away the user is to the centimeter. We just need to know: are they close, sorta far away, or really far away? It could be like a CSS media query to resolve some of the privacy considerations. We don't need to know exactly how far away they are in relation to the window. 22:12:42 cabanier: It could be something on navigator but could also be some CSS-y thing that automatically reformats itself 22:12:50 ack bajones_ 22:13:09 q- 22:13:22 bajones: I like the idea of a CSS media query in this context. It seems like the right abstraction. This isn't necessarily about how far away the panel is, more about the angular resolution of the panel (includes both how far away, how wide/tall, etc) 22:14:08 bajones: There is still some fingerprinting in the media query but you're not looking at what the user's head is doing. It seems like it slots in nicely with zoom level, what frame, etc. You could maybe call this perceptual width or something - how big the user perceives the page to be, and have CSS adjust to that. 22:14:13 q? 22:14:19 ack Brandel 22:15:06 Brandel: What might be confusing for folks: head tracking and how far away the element is are essentially the same question - it just depends on spatial/temporal resolution. A CSS media query is one approach, the other approach is to have exact xyz coordinates. While they are technically the same thing, they can be used to serve very different purposes. 22:15:32 Brandel: There was a discussion about having a universal virtual unit like a millimeter 22:15:49 Brandel: If there were reasonable limits on update frequency and such, it could be very useful 22:16:14 ack bkardell_ 22:16:21 adarose: You could even use the existing media queries, if you bin the information about panel size 22:16:30 https://www.youtube.com/watch?v=ES9jArHRFHQ is McKenzie's presentation 22:17:10 Brian: My immediate reaction is that this sounds like it would be ideal in CSS - media query lists, listeners. CSS already has lots of concepts related to perceptual distance and such calculations, since it is already used on televisions. This doesn't seem like it would be particularly hard, it fits well. 22:17:18 q+ 22:17:23 q?
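To make the binned-media-query idea concrete, a sketch of how a page might consume such a feature. The panel-distance feature and its values are invented for illustration and are not part of any CSS spec; matchMedia itself is a real API.

    // Hypothetical 'panel-distance' media feature with coarse bins
    // (e.g. 'near' | 'mid' | 'far'), binned as cabanier suggests to
    // limit fingerprinting; matchMedia is real, the feature is not.
    const nearQuery = window.matchMedia('(panel-distance: near)');
    function relayout(mq) {
      // mq.matches is true when the user is in the 'near' bin
      document.body.classList.toggle('up-close', mq.matches);
    }
    nearQuery.addEventListener('change', relayout);
    relayout(nearQuery); // apply the initial state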
22:17:23 adarose: This might be more for the CSS working group instead of here 22:17:28 ack cabanier 22:17:37 cabanier: Bryce first brought it to web apps, who told him to bring it here 22:17:44 wow I must have missed that 22:17:54 q+ 22:17:58 cabanier: But we're more looking to get people's opinions on it. Sounds like people don't have too many problems with it as a CSS media query with binning. 22:18:11 bryce can you share the css issue? 22:18:21 ack mkeblx 22:19:09 mkeblx: You mentioned the weather thing. But the Meta Quest browser stays at the same distance. What implementation are you imagining? 22:19:19 cabanier: Trying to do more mixed reality stuff, where people are expected to walk around more 22:19:50 cabanier: do you know what the css issue # is? I don't see a handle here that is bryce 22:19:51 q+ 22:19:52 Bryce: With mixed reality over time, you might have more scenarios where a panel is attached to physical space 22:20:00 ack Jared 22:20:28 Jared: If you utilized existing media queries via virtual screen size, there might be some good tools to play around with 22:20:31 @ bkardell_ : we didn't file a CSS issue yet. Bryce went to webapps first because he wanted to extend navigator 22:21:03 Bryce: I wanted to ask about fingerprinting risk - if there were a permission dialog, does this group handle that sort of thing? 22:21:16 adarose: Usually permission prompts are not determined by this group, they're left up to the browser 22:21:49 bajones: Usually specifications don't determine what is shown or said regarding permissions. We can sometimes say "user consent is needed for this feature" and mention permission prompts as an example, but we don't dictate that that is how consent must be given. 22:22:13 q? 22:22:42 adarose: No one on queue, should we wrap up for coffee break? 22:22:45 Bryce: sounds good to me 22:23:04 rrsagent, generate minutes 22:23:05 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Marisha 23:04:10 CharlesL has joined #immersive-web 23:05:21 present+ 23:05:29 Dylan_XR_Access has joined #immersive-web 23:05:34 bajones has joined #Immersive-Web 23:05:35 present+ 23:05:38 present+ 23:05:40 Brandel has joined #immersive-web 23:05:44 rigel has joined #immersive-web 23:05:46 mats_lundgren has joined #immersive-web 23:05:53 brycethomas has joined #immersive-web 23:05:53 present+ 23:06:12 Dat_Chu has joined #immersive-web 23:06:12 present+ 23:06:13 present+ 23:06:18 present+ 23:06:19 present+ 23:06:53 present+ 23:07:28 Marisha has joined #immersive-web 23:07:29 atsushi has joined #immersive-web 23:08:13 present+ 23:08:16 zakim, choose a victim 23:08:16 Not knowing who is chairing or who scribed recently, I propose lgombos 23:08:18 Nick-Niantic_ has joined #immersive-web 23:08:24 kdashg has joined #immersive-web 23:08:29 present+ 23:08:33 scribe: lgombos topic: webxr#1273 Next steps for raw camera access 23:08:45 https://github.com/immersive-web/webxr/issues/1273 23:08:56 lgombos has joined #immersive-web 23:09:05 q+ 23:09:15 Nick-Niantic_: next steps for raw camera access 23:09:37 ... goal is to discuss consent and developer use cases 23:10:17 ... reviewed the Google Chrome implementation and evaluated it for headsets; a challenge with the render loop 23:11:09 ... unlocks new use cases: reflections, adapting the scale of the screen, media sharing, image targets, QR codes
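For reference, the Chrome implementation under discussion exposes the camera image as a WebGL texture per view, roughly as in the sketch below. This is still a proposal (see the raw-camera-access explainer), not a W3C standard, and gl and refSpace are assumed to have been set up elsewhere.

    // Request the (non-standard) 'camera-access' feature, then read each
    // view's camera image as a WebGLTexture inside the render loop.
    const session = await navigator.xr.requestSession('immersive-ar', {
      requiredFeatures: ['camera-access'],
    });
    const glBinding = new XRWebGLBinding(session, gl); // gl: your WebGL context
    session.requestAnimationFrame(function onFrame(time, frame) {
      const pose = frame.getViewerPose(refSpace);      // refSpace: set up earlier
      for (const view of pose ? pose.views : []) {
        if (view.camera) {
          const cameraTexture = glBinding.getCameraImage(view.camera);
          // run detection / effects (markers, QR codes, sky maps) here
        }
      }
      session.requestAnimationFrame(onFrame);
    });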
23:11:11 vicki has joined #immersive-web 23:11:31 dulce has joined #immersive-web 23:11:57 ... sky effects demo 23:13:12 ... running a neural network in the background to build a sky map, create a cube map 23:13:41 "It's hard to get close to the sky" [citation needed] 23:14:08 ... In general Niantic cares about use cases outside (sky, ground, foliage) 23:15:23 ... marker-based AR demo 23:15:48 ... camera texture processing 23:16:04 q+ 23:16:09 ... part of why demos are not polished further, as the Chrome API is still experimental 23:16:21 Yih has joined #immersive-web 23:16:21 q? 23:16:26 ack bialpio_ 23:16:53 q+ 23:16:57 bialpio_: raw camera access (for smartphones) launched (no longer experimental) in late 2022 23:16:59 Nick, can you share the slides so we can add them to the meeting notes. Thanks! 23:17:25 Enabled by default in Chrome since M107 23:17:45 ... only smartphone-specific/capable APIs are released to stable 23:18:14 q+ to ask what API (OpenXR presumably) Meta uses to handle passthrough. 23:18:34 ack me 23:18:34 bajones, you wanted to ask what API (OpenXR presumably) Meta uses to handle passthrough. 23:19:15 ... other Chromium-based browsers running on headsets (HoloLens, Quest) do not support raw camera access 23:19:41 q+ 23:19:44 ... headsets typically do not expose the camera to the web 23:19:45 q? 23:20:12 Brandel has joined #immersive-web 23:20:18 ack cabanier 23:20:18 ... the API Nick-Niantic_ proposed is a simple adapter 23:20:41 cabanier: On Quest Pro nobody gets access to the camera feed 23:21:16 Nick-Niantic_: lots of advancements in SIMD execution of neural networks 23:21:34 ... 200 fps on handhelds with SIMD 23:21:35 sorry it's a little noisy here at the moment - is the question whether any device would give wolvic those permissions? 23:22:11 We could help with that... Also provide it a virtual means to do it 23:22:26 cabanier: unlikely to get realtime access to the camera even later... 23:22:34 q? 23:22:51 q+ 23:23:05 ... Nick-Niantic_ exploring other headset providers to expose raw camera access 23:23:23 ack cabanier 23:23:40 ack Yih 23:24:02 Yih: question regarding camera feed processing 23:24:29 q+ 23:24:40 Nick-Niantic_: slide 8... only meaning to show the middle demo, "location" 23:25:03 ... the point is 6DoF tracking on the phone 23:25:07 q? 23:25:11 ack Jared 23:25:49 Jared: interesting, helping... actual and virtual devices... what is the input to the algorithm? just color? 23:25:55 Nick-Niantic_: needs an RGB texture 23:26:14 ... FoV of the camera 23:26:38 Nick-Niantic_: will share the presentation 23:26:40 https://github.com/immersive-web/raw-camera-access/issues/11 23:26:57 Nick-Niantic_: these are the needs 23:27:27 q? 23:27:31 ack CharlesL 23:27:36 q+ 23:27:37 Jared: I can imagine an implementation in a virtual environment... that can later work on the headset 23:27:55 CharlesL: I was wondering... about adding a secondary camera 23:28:01 q? 23:28:04 ack bialpio_ 23:28:08 cabanier: cannot comment on future devices 23:28:30 bialpio_: we have been exploring marker detection... it is in the chromium repo, will link to it 23:28:52 ... we used the opencv marker tracking module 23:29:15 q+ 23:29:18 ... not easy to get a performant implementation 23:29:38 https://source.chromium.org/chromium/chromium/src/+/main:third_party/webxr_test_pages/webxr-samples/proposals/camera-access-marker.html 23:29:41 q? 23:29:46 ack cabanier 23:30:17 cabanier: when presenting stereo, how does that work with the camera feed? 23:30:37 bialpio_: the app could reproject 23:31:07 q+ 23:31:20 ... the user sees exactly what the website has information about
23:31:36 cabanier: for PT (passthrough)? 23:31:53 Nick-Niantic_: for PT, unlikely to have this problem 23:32:54 cabanier: is it timewarped? 23:33:25 bialpio_: ARCore introduces a lag... when image tracking is on 23:34:29 ... cannot use the camera for effects... you might get frames from the "future" 23:35:09 cabanier: we predict what the camera feed will be 23:36:46 cabanier: on HoloLens you can get access to one of the cameras... not all of them 23:37:09 bajones: HoloLens requirements are different... not the whole scene 23:38:25 Nick-Niantic_: timewarping has to happen at some point... the image and the timeline need to be aligned 23:38:34 cabanier: the API does not give you a snapshot 23:39:01 Nick-Niantic_: event-based API 23:39:34 ack Brandel 23:39:44 Nick-Niantic_: it does not have to run on every frame 23:41:04 q? 23:41:07 Brandel: does it have to be raw stereo? 23:41:13 q- 23:41:16 Nick-Niantic_: does not have to be stereo 23:41:19 q+ 23:41:56 ack Jared 23:41:56 Could be interesting to check out some of what is trending as being exposed for certain types of wearable XR in native APIs or extensions 23:41:57 https://registry.khronos.org/OpenXR/specs/1.0/html/xrspec.html#XR_HTC_passthrough 23:42:00 Nick-Niantic_: does not necessarily need to be shown to the user 23:42:17 Jared: similar concept, underlays 23:42:30 q? 23:42:34 ... can be used as a prototype 23:43:40 Sorry, I was mistaken. It doesn't give you access to the pixels. 23:45:07 adarose: new topic https://github.com/immersive-web/webxr/issues/892 23:45:16 q+ 23:45:43 zakim, choose a vixtim 23:45:43 I don't understand 'choose a vixtim', adarose 23:45:49 topic: webxr#892 Evaluate how/if WebXR should interact with audio-only devices. 23:45:54 zakim, choose a victim 23:45:54 Not knowing who is chairing or who scribed recently, I propose mjordan 23:46:15 mjordan has joined #immersive-web 23:46:24 present+ 23:46:34 scribenick: mjordan 23:46:53 ack bajones 23:47:19 RRSAgent, please draft the minutes 23:47:20 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_ 23:47:34 bajones: likes the point about AirPods as audio-only devices, fairly common 23:48:09 ... thinks that these are consuming 5.1 audio and hands-off? 23:48:22 q+ 23:49:10 ... doesn't seem like audio generated on the fly. Can get 3DoF pose data, but maybe not a way to get data back into the scene? 23:49:19 q+ 23:49:41 q+ 23:49:45 ... are they designed to be interacted with as controllers necessarily... 23:50:01 q+ 23:50:16 ... Bose devices were trying to explicitly be XR devices. 23:50:18 ack Manishearth_ 23:50:46 q+ 23:51:13 Manishearth_: Can have devices that are audio-only. Like a headset without lenses. Could be able to get poses from certain devices. 23:51:20 q+ 23:51:54 ... main benefit is that you could get pose-based control, as well as controller-based control. 23:52:44 ... an experience that you want to work everywhere might not need pose because you have other controls. But if you don't have those devices, could you initiate a session that is not backed by one of those devices? Might be good to look into. 23:52:49 ack Brandel 23:53:07 Brandel: headphones are looked at as display devices 23:53:36 ... publicly available APIs do return at least 3DoF, and sometimes acceleration. 23:54:05 q? 23:54:06 ... back to the Gamepad discussion earlier, you can get orientation from gamepads, and those can be considered.
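A purely speculative sketch of Manishearth_'s idea of a session not backed by a visual device: the 'immersive-audio' mode and the updateAudioListener helper are invented here for illustration and exist in no spec or implementation.

    // Invented 'immersive-audio' mode: poses without any rendered views.
    const session = await navigator.xr.requestSession('immersive-audio');
    const refSpace = await session.requestReferenceSpace('local');
    session.requestAnimationFrame(function onFrame(t, frame) {
      const pose = frame.getViewerPose(refSpace);
      if (pose) {
        // drive a spatial-audio listener from the (possibly 3DoF) pose
        updateAudioListener(pose.transform.orientation); // hypothetical helper
      }
      session.requestAnimationFrame(onFrame);
    });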
23:54:09 ack Jared 23:54:39 Jared: Anecdote - a ghost story narrative where it's audio-only with directional whispers, etc. 23:54:40 ack Nick-Niantic_ 23:54:59 Nick-Niantic_: Curious about expectations. 23:55:40 ... for an audio-only headset, you are looking at a device in real space, so if you go into immersive mode, what should happen? 23:55:58 ... What is the expected use case, interface, user experience? 23:56:31 adarose: Like an HTC Vive attached to a computer, maybe render a 3DoF view. 23:56:56 q+ 23:56:58 ... on the phone. Or maybe it doesn't render something, but you still get 3DoF audio. 23:57:18 ... could run independently on device, but maybe get audio transcription on device. 23:57:37 Nick-Niantic_: do you need an XR session for that? 23:58:13 q+ 23:58:14 bajones: probably inclined to treat it as a different type of session? 23:58:29 ... could get back poses every frame. 23:58:45 q+ 23:58:55 Nick-Niantic_: What would happen on Quest, when you ask for immersive audio? 23:59:26 bajones: might have limitations because of the device functionality. 23:59:58 ... maybe normalize around devices where this is the norm, or the expected use case? 00:00:42 ... if you're trying to get poses, you could do some interesting accessibility stuff; sensors might not be super accurate? 00:00:48 ack CharlesL 00:00:57 ... Would give it its own session type. 00:01:06 Brandel_ has joined #immersive-web 00:01:53 CharlesL: There is a link on how OpenXR should interact with audio-only devices, but not a lot of info there. Blind users do turn off their screens, so this seems reasonable. 00:02:01 q? 00:02:04 ack Dylan_XR_Access 00:02:37 Dylan_XR_Access: Being able to support things like spatial sound is necessary for a lot of experiences. 00:02:43 q? 00:02:43 q+ 00:02:49 ... should support those use cases. 00:03:13 ack cabanier 00:03:24 adarose: shouldn't do a different session type, it would say something about the user not wanting to view video. 00:03:24 q- 00:04:15 q+ 00:04:22 ... can get immersive sounds while not having an immersive session 00:04:41 ack cabanier 00:06:07 cabanier: can't find 5.1 support in browsers? Certain devices, or special video formats, may have that. What do we need to do to get 5.1 support in browsers? 00:06:18 ack bialpio_ 00:06:40 ... maybe manually decoding the audio streams? 00:06:58 ack bialpio_ 00:07:35 Chris_Wilson: Can be supported in web audio, but you need to use the 3D panner to do positioning; there's nothing that does 3D panning inside of 5.1 audio. 00:07:41 ack bialpio_ 00:08:08 bialpio_: I know we don't say how many views you get, but can we say we get only one? 00:08:51 bajones: you only get 1 or 2, unless you explicitly ask for them. 00:09:17 ... even if allowed, wouldn't want scenarios where you get 0 00:10:12 ... don't want to expose the fact that users are using accessibility settings. So you could advertise that the content is maybe audio-only, and put the acceptance choice back on the user. 00:10:38 ... try and make the page as unaware of what the user chooses as possible. 00:11:18 bialpio_: Be careful how you classify the session if there is a special session for this, so that it doesn't give that away. 00:12:01 adarose: should not be able to secretly overlay other audio over the user's expected streams 00:12:24 bialpio_: had to refactor part of the spec for this. 00:13:22 ... do we pause other audio if it's already running? 00:13:32 q- 00:13:42 ... sometimes background apps can play audio and sometimes not 00:14:11 ... sometimes confusing around ducking audio from other sources.
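The Web Audio pieces Chris_Wilson mentions are real API surface: a PannerNode positions a source in 3D, and the AudioContext listener can be steered from head orientation, wherever that orientation comes from. A minimal sketch, where source and headForward stand in for app-specific inputs:

    // PannerNode and AudioListener are real Web Audio APIs.
    const ctx = new AudioContext();
    const panner = new PannerNode(ctx, {
      panningModel: 'HRTF',
      positionX: 0, positionY: 1.5, positionZ: -2, // emitter 2m ahead of origin
    });
    source.connect(panner).connect(ctx.destination); // source: any AudioNode
    function setListenerForward(headForward) {       // stand-in pose input
      ctx.listener.forwardX.value = headForward.x;
      ctx.listener.forwardY.value = headForward.y;
      ctx.listener.forwardZ.value = headForward.z;
    }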
00:14:47 ??: we say exclusive audio, but maybe not exclusive-exclusive. Sometimes the OS can interrupt, etc. 00:15:14 cabanier: Chrome will sometimes keep running when the display is off 00:15:26 ... an audio session might be like that. 00:15:49 ??: exclusive used to be the term, but now it's immersive 00:16:14 q? 00:16:33 adarose: if a media thing wanted to differentiate, there would be a difference between directional audio and directional audio where moving your head did something. 00:17:27 rigel: walking down a street where you can have audio-only immersion would be cool. 00:17:56 q? 00:18:01 ack rigel 00:18:12 ... different elements in a scene have different emitters, but they're currently tied to phone position. Would be neat to get head pose instead of having to move the phone around. 00:18:24 ... today you need to move the phone around. 00:18:26 q? 00:18:53 q+ 00:19:01 ack Brandel_ 00:19:01 ??: on the issue for this topic, Jared had a link to someone getting motion info from the native side 00:19:04 q+ 00:19:19 brandel_: Can shake or nod and get that input from the headphones. 00:19:21 ack Jared 00:19:50 q? 00:19:53 Jared: Using tools like Unity, you can use things like colliders, which are helpful for making immersive audio experiences. 00:20:13 adarose: This seemed like a fun thing to end the day with, and this was a lovely discussion. 00:20:58 rrsagent, publish minutes 00:20:59 I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html atsushi 00:21:07 rrsagent, make log public 00:21:47 rrsagent, bye 00:21:47 I see no action items 00:59:59 s/??: For those of us with ownership/Marcos: For those of us with ownership/ 00:59:59 s/??: Yes, that would be great./Marcos: Yes, that would be great./ 00:59:59 s/scribe nick: adarose/scribenick: adarose/ 00:59:59 i/Dylan: another player that/scribenick: Manishearth_/ 00:59:59 s/one hting/one thing/ 00:59:59 s/agendum 3 -- semantic-labels -- taken up [from atsushi]//