IRC log of immersive-web on 2023-04-24
Timestamps are in UTC.
- 16:22:51 [RRSAgent]
- RRSAgent has joined #immersive-web
- 16:22:56 [RRSAgent]
- logging to https://www.w3.org/2023/04/24-immersive-web-irc
- 16:23:05 [atsushi]
- zakim, clear agenda
- 16:23:05 [Zakim]
- agenda cleared
- 16:28:02 [lgombos]
- lgombos has joined #immersive-web
- 16:30:30 [CharlesL]
- present+
- 16:33:01 [cabanier]
- cabanier has joined #immersive-web
- 16:33:12 [cabanier]
- present+
- 16:35:48 [dom]
- dom has joined #immersive-web
- 16:50:46 [atsushi]
- meeting: Immersive-Web WG/CG face-to-face day 1
- 16:50:51 [atsushi]
- chair: Ada
- 16:50:56 [atsushi]
- rrsagent, make log public
- 16:56:56 [adarose]
- adarose has joined #immersive-web
- 16:56:59 [Brandel]
- Brandel has joined #immersive-web
- 16:57:12 [etienne]
- etienne has joined #immersive-web
- 16:58:00 [Manishearth_]
- Manishearth_ has joined #immersive-web
- 16:58:26 [atsushi]
- agenda: https://github.com/immersive-web/administrivia/blob/main/F2F-April-2023/schedule.md
- 16:58:33 [Yonet]
- Yonet has joined #immersive-web
- 16:58:43 [marcosc]
- marcosc has joined #immersive-web
- 16:59:31 [Dylan_XR_Access]
- Dylan_XR_Access has joined #immersive-web
- 16:59:41 [marcosc]
- marcosc has changed the topic to: Immersive Web F2F - Cupertino
- 17:00:13 [Jared]
- present+
- 17:00:16 [Manishearth_]
- present+
- 17:00:16 [Dylan_XR_Access]
- present+
- 17:00:16 [Yonet]
- present+
- 17:00:19 [marcosc]
- present+
- 17:00:24 [etienne]
- present+
- 17:00:30 [bialpio]
- bialpio has joined #immersive-web
- 17:00:37 [lgombos]
- present+
- 17:00:40 [bialpio]
- present+
- 17:00:45 [Marisha]
- Marisha has joined #immersive-web
- 17:00:49 [bajones]
- bajones has joined #Immersive-Web
- 17:00:49 [gmz]
- gmz has joined #immersive-web
- 17:00:51 [Nick-Niantic]
- Nick-Niantic has joined #immersive-web
- 17:00:56 [bajones]
- present+
- 17:01:03 [Nick-Niantic]
- present+
- 17:01:09 [Marisha]
- present+
- 17:01:11 [DatChu]
- DatChu has joined #immersive-web
- 17:01:15 [felix_Meta_]
- felix_Meta_ has joined #immersive-web
- 17:01:20 [kdashg]
- kdashg has joined #Immersive-Web
- 17:01:22 [Jared]
- present+
- 17:01:25 [kdashg]
- present+
- 17:01:25 [mkeblx]
- mkeblx has joined #immersive-web
- 17:01:39 [rigel]
- rigel has joined #immersive-web
- 17:01:40 [mkeblx]
- present+
- 17:01:45 [mjordan]
- mjordan has joined #immersive-web
- 17:01:56 [rigel]
- present+
- 17:02:07 [mjordan]
- present+
- 17:02:08 [atsushi]
- rrsagent, publish minutes
- 17:02:09 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html atsushi
- 17:02:28 [vicki]
- vicki has joined #immersive-web
- 17:02:30 [felix_Meta_]
- present+
- 17:02:49 [atsushi]
- agenda+ webxr-gamepads-module#58 Add support for a PCM buffer to the gamepad actuator
- 17:02:59 [vicki]
- present+
- 17:03:08 [atsushi]
- agenda+ webxr#1320 Discuss Accessibility Standards Process
- 17:03:17 [Brandel]
- present+
- 17:03:18 [atsushi]
- agenda+ semantic-labels
- 17:03:27 [dulce]
- dulce has joined #immersive-web
- 17:03:31 [atsushi]
- agenda+ webxr#1317 Some WebXR Implementations pause the 2D browser page in XR, make this optional?
- 17:03:36 [adarose]
- present+
- 17:03:48 [atsushi]
- agenda+ navigation#13 Let's have a chat about Navigation at the facetoface
- 17:03:54 [Dylan_XR_Access]
- scribe: Dylan_XR_Access
- 17:04:09 [Mats_Lundgren]
- Mats_Lundgren has joined #immersive-web
- 17:04:11 [atsushi]
- agenda+ webxr#1273 Next steps for raw camera access
- 17:04:15 [Dylan_XR_Access]
- Introductions
- 17:04:26 [atsushi]
- agenda+ webxr#892 Evaluate how/if WebXR should interact with audio-only devices
- 17:04:30 [atsushi]
- zakim, list agenda
- 17:04:30 [Zakim]
- I see 7 items remaining on the agenda:
- 17:04:31 [Zakim]
- 1. webxr-gamepads-module#58 Add support for a PCM buffer to the gamepad actuator [from atsushi]
- 17:04:31 [Zakim]
- 2. webxr#1320 Discuss Accessibility Standards Process [from atsushi]
- 17:04:31 [Zakim]
- 3. semantic-labels [from atsushi]
- 17:04:32 [Zakim]
- 4. webxr#1317 Some WebXR Implementations pause the 2D browser page in XR, make this optional? [from atsushi]
- 17:04:32 [Zakim]
- 5. navigation#13 Let's have a chat about Navigation at the facetoface [from atsushi]
- 17:04:33 [Zakim]
- 6. webxr#1273 Next steps for raw camera access [from atsushi]
- 17:04:33 [Zakim]
- 7. webxr#892 Evaluate how/if WebXR should interact with audio-only devices [from atsushi]
- 17:04:35 [Dylan_XR_Access]
- Ada Rose Cannon, Apple; into declarative stuff
- 17:04:49 [Dylan_XR_Access]
- Nick, sr director at Niantic; AR web and geodata platform for devs
- 17:09:35 [atsushi]
- present+
- 17:13:16 [adarose]
- https://hackmd.io/@jgilbert/imm-web-unconf
- 17:13:16 [Dylan_XR_Access]
- additional introductions available on request
- 17:13:43 [atsushi]
- i/Ada Rose Cannon, Apple; into/topic: intro
- 17:14:04 [atsushi]
- zakim, take up agendum 1
- 17:14:04 [Zakim]
- agendum 1 -- webxr-gamepads-module#58 Add support for a PCM buffer to the gamepad actuator -- taken up [from atsushi]
- 17:14:18 [adarose]
- https://github.com/immersive-web/administrivia/blob/main/F2F-April-2023/schedule.md
- 17:15:08 [Dylan_XR_Access]
- Ada: First item on agenda is Add support for a PCM buffer to the gamepad actuator
- 17:15:13 [cabanier]
- https://github.com/WebKit/standards-positions/issues/1
- 17:15:18 [cabanier]
- https://github.com/w3c/gamepad/issues/186
- 17:15:43 [Dylan_XR_Access]
- Rik: had support for intensity and vibration of rumble/haptic on controller, but done through nonstandard API
- 17:16:01 [Dylan_XR_Access]
- ...Google wanted to extend API, Apple objected; should use .WAV file and leave implementation up to developer
- 17:16:03 [Yih]
- Yih has joined #immersive-web
- 17:16:04 [marcosc]
- q+
- 17:16:05 [alcooper]
- alcooper has joined #immersive-web
- 17:16:28 [Dylan_XR_Access]
- ...Can send multiple frequencies through motor; want to add API to pass audio buffer to controller
- 17:16:49 [Dylan_XR_Access]
- ...Haptic actuator is nonstandard, people want to get rid of it; but alternate proposals haven't been developed in two years
- 17:16:54 [Yih]
- present+
- 17:16:58 [Dylan_XR_Access]
- ...Also based on touch events, focused on mouse more than controller
- 17:17:29 [Dylan_XR_Access]
- ...Want API for it in WebXR, so instead of going through input profile, gamepad, haptic actuator, etc.; just go straight through WebXR
- 17:17:42 [Dylan_XR_Access]
- ...Complication is a constant source of problems
- 17:18:05 [Dylan_XR_Access]
- ...Really just need a method to play audio file
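Rik's "just play an audio file" proposal can be made concrete. A minimal sketch, assuming the nonstandard `GamepadHapticActuator.pulse(intensity, duration)` that currently ships (intensity in [0, 1]): the page downsamples a PCM-style sample buffer into a train of pulses, roughly what Rik later describes the Quest system doing internally. The window length and the mean-absolute-amplitude envelope are illustrative choices, not anything specced.

```javascript
// Downsample a PCM sample array into pulse() calls for a single-value
// actuator. Window length and envelope choice are illustrative only.
function pcmToPulses(samples, sampleRate, windowMs = 50) {
  const windowSize = Math.max(1, Math.round(sampleRate * windowMs / 1000));
  const pulses = [];
  for (let start = 0; start < samples.length; start += windowSize) {
    const chunk = samples.slice(start, start + windowSize);
    // Envelope-follow: mean absolute amplitude, clamped to pulse()'s [0, 1].
    const mean = chunk.reduce((acc, s) => acc + Math.abs(s), 0) / chunk.length;
    pulses.push({ intensity: Math.min(1, mean), durationMs: windowMs });
  }
  return pulses;
}

// In a browser that exposes the NONSTANDARD actuator (e.g. current Chromium),
// the train could then be replayed against an XR input source's gamepad:
async function playOnActuator(actuator, pulses) {
  for (const { intensity, durationMs } of pulses) {
    await actuator.pulse(intensity, durationMs); // nonstandard API
  }
}
```

Usage (hypothetical): `playOnActuator(inputSource.gamepad.hapticActuators[0], pcmToPulses(samples, 8000))`.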
- 17:18:07 [bajones]
- q+
- 17:18:38 [Dylan_XR_Access]
- Marcos: putting on web apps working group chair hat, I work on gamepad API with colleagues at Google
- 17:18:58 [CharlesL]
- ack marcosc
- 17:19:02 [Dylan_XR_Access]
- ...objection was shared, and based on the idea that Xbox live folks were using dual rumble
- 17:19:22 [Dylan_XR_Access]
- ...dual rumble was supported in chrome; objected that this is a terrible design, all you do is pass enum and get things to rumble
- 17:19:25 [Emmanuel]
- Emmanuel has joined #immersive-web
- 17:19:47 [Dylan_XR_Access]
- ...no fine-grained haptics there. Implemented in WebKit, Safari, but all found it abhorrent as a working group
- 17:20:22 [Dylan_XR_Access]
- ...Putting Apple hat on instead, would object to it moving because compared to core haptics, using only audio to represent haptics is not good enough; can't get fidelity you need
- 17:20:49 [Dylan_XR_Access]
- ...must synchronize audio and haptics together; not sure what a WAV file would lead to on a gamepad
- 17:20:49 [cabanier]
- q+
- 17:21:13 [Dylan_XR_Access]
- ...for more complicated haptic devices, there are different regions, multiple actuators; proposal from microsoft is more region-based, e.g. a glove with actuator for each finger
- 17:22:03 [adarose]
- q?
- 17:22:05 [Dylan_XR_Access]
- ...In web apps, we claimed actuator part because of gamepad; want to figure out in this space whether it's the right time to do generalization
- 17:22:25 [Dylan_XR_Access]
- ...Minefield with regards to IPR as well. It's a new area for the web, fraught with potential issues
- 17:22:51 [Dylan_XR_Access]
- ...e.g. vibration API is not in webkit because you can do vibrations that feel like system vibrations, alerts; could be scary
- 17:23:22 [Dylan_XR_Access]
- ...Together, many issues that increase complexity; just sending an audio stream isn't good enough
- 17:23:40 [adarose]
- ack bajones
- 17:23:40 [CharlesL]
- +1 to include region based tactile as well
- 17:23:41 [adarose]
- q+
- 17:23:46 [Dylan_XR_Access]
- ...But acknowledge that a lot of devices do take audio input. Need to find a happy medium
- 17:24:35 [Dylan_XR_Access]
- Brandon: In addition to being concerned that this would map to devices we're trying to support, I feel strongly that putting an API on object A when it likely belongs on object B because we aren't getting what we want from group B is not the right direction
- 17:25:09 [Dylan_XR_Access]
- ...Could be applicable to any gamepad; we should be improving this for all gamepad-like objects
- 17:25:23 [Dylan_XR_Access]
- ...Would want to see evidence that what we're doing only applies to webXR devices
- 17:25:52 [Dylan_XR_Access]
- ...PSVR2 has rumble in the headset. Could see argument for "let's give the session itself as a proxy for the device the ability to rumble" (though an edge case right now)
- 17:26:19 [adarose]
- q?
- 17:26:21 [Dylan_XR_Access]
- ...Don't just try to leapfrog bureaucracy using the spec - shouldn't take exclusive ownership of this capability
- 17:26:23 [adarose]
- ack cabanier
- 17:26:24 [marcosc]
- marcosc has joined #immersive-web
- 17:26:32 [marcosc]
- q+
- 17:26:40 [Dylan_XR_Access]
- Rik: Some frustration because haptic actuator has been festering for years. Shipped it nonstandard, leaving us in bad situation
- 17:27:24 [adarose]
- q?
- 17:27:27 [Dylan_XR_Access]
- ...Some frustration over lack of progress. OpenXR supports ___ CCM, with plenty of experiences that use API without problems. Not sure if there's something missing by playing an audio file
- 17:27:59 [Dylan_XR_Access]
- Ada: From a politics standpoint, is there anything we can do as a group to encourage discussion? "Festering" is an unfortunately accurate verb
- 17:28:11 [Manishearth_]
- q+
- 17:28:22 [CharlesL]
- q+
- 17:28:44 [Dylan_XR_Access]
- ??: For those of us with ownership over gamepad, we meet once a month Thurs at 4pm; could be a good time to grab Microsoft folks and push discussion
- 17:28:56 [adarose]
- ack adarose
- 17:29:10 [adarose]
- ack marcosc
- 17:29:57 [Dylan_XR_Access]
- ...On the scope questions that came up, targeting gamepads, writing instruments, etc.; could be overly generic. How much of this is an XR issue?
- 17:30:34 [Dylan_XR_Access]
- ...Folks at Apple adamant that audio isn't going to cut it. Need better synchronization
- 17:30:49 [adarose]
- q+ to ask about a web audio node
- 17:30:50 [etienne]
- etienne has joined #immersive-web
- 17:30:53 [Dylan_XR_Access]
- ...Must synchronize haptics to audio itself. Renderers need to sync with each other, which is challenging
- 17:31:05 [adarose]
- ack Manishearth_
- 17:31:28 [Dylan_XR_Access]
- Manish: Heard a bunch of political/technical reasons for trickiness; sounds like there might also be a lack of people to do the work
- 17:31:47 [bajones]
- q+ to point out compatibility challenges if only targeting the highest end haptics
- 17:32:16 [Brandel]
- q+
- 17:32:17 [Dylan_XR_Access]
- ...Quite a bit of interest here, in this group. Worth wondering if there's a way for people in this group to submit proposal to help
- 17:33:05 [Dylan_XR_Access]
- ??: Yes, that would be great. Have been wanting to rage-rewrite it for a while, it's a mess. But it's a matter of resource allocation - need testing framework, etc.
- 17:33:31 [Dylan_XR_Access]
- ...Would be great to apply resources from multiple companies, have a nice base to apply future WebXR work as well
- 17:33:36 [adarose]
- ack CharlesL
- 17:34:10 [Manishearth_]
- q+
- 17:34:14 [Dylan_XR_Access]
- Charles: From accessibility POV, having only an audio API would be an issue. Having multiple ways to target different regions could be very beneficial if e.g. audio is only coming from your right or left
- 17:34:20 [dino7]
- dino7 has joined #immersive-web
- 17:34:24 [adarose]
- ack CharlesL
- 17:34:31 [adarose]
- ack adarose
- 17:34:31 [Zakim]
- adarose, you wanted to ask about a web audio node
- 17:34:51 [Dylan_XR_Access]
- Ada: This is probably wrong group for this, but could be cool if it was a web audio node
- 17:35:01 [Dylan_XR_Access]
- Manish: as hacky as that sounds, it might be the best way to do this
- 17:35:06 [adarose]
- q?
- 17:35:09 [Manishearth_]
- q-
- 17:35:10 [adarose]
- ack bajones
- 17:35:10 [Zakim]
- bajones, you wanted to point out compatibility challenges if only targeting the highest end haptics
- 17:36:15 [Dylan_XR_Access]
- Brandon: Want to caution against the perfect being the enemy of the good. In some cases, you've just got a little motor that buzzes
- 17:36:17 [marcosc]
- q+
- 17:36:39 [Dylan_XR_Access]
- ...Would be a shame if we ignore pressing need for haptics in devices available today because people want to be architectural astronauts
- 17:36:56 [Dylan_XR_Access]
- ...Balance to be made between quick and dirty, vs planning for the future
- 17:36:56 [adarose]
- q?
- 17:37:09 [adarose]
- ack Brandel
- 17:37:18 [cabanier]
- q+
- 17:37:40 [Dylan_XR_Access]
- Brandel: on topic of devices that exist today, Xbox One has 4 actuators, with intention of spatializing a haptic moment. The accessibility controller also has haptics
- 17:38:06 [Dylan_XR_Access]
- ...need a higher level signal to make judgments on what spatialization entails
- 17:38:14 [adarose]
- q?
- 17:38:20 [adarose]
- ack marcosc
- 17:38:45 [Dylan_XR_Access]
- Marcos: Sony are editors of gamepad spec, have asked them to take a look at uploading audio from the web
- 17:38:57 [Dylan_XR_Access]
- ...Concern that comes up is that controllers were never designed to take random files from the web
- 17:39:10 [Dylan_XR_Access]
- ...From a security perspective, not sure whether harm can be done. e.g. overloading the motor
- 17:39:42 [Dylan_XR_Access]
- ...iPhone considered a gamepad as well
- 17:39:42 [adarose]
- q?
- 17:39:45 [adarose]
- ack cabanier
- 17:40:19 [Dylan_XR_Access]
- Rik: for reference, Quest Pro controller has 4 haptic actuators, including a fancy one; all take audio, system downsamples to do something reasonable
- 17:40:28 [Dylan_XR_Access]
- Brandel: does it expose relative position?
- 17:40:39 [Dylan_XR_Access]
- Rik: no, the gamepad is just supposed to know which one is which
- 17:41:11 [Dylan_XR_Access]
- Marcos: have a demo that could show how it does work with audio. How it synchronizes, etc.
- 17:41:16 [bajones]
- bajones has joined #Immersive-web
- 17:41:28 [Dylan_XR_Access]
- Rik: Everything is synchronized to display time. Pass it a time, it plays at that time.
- 17:41:35 [Dylan_XR_Access]
- Marcos: Send it like a URL?
- 17:41:49 [Dylan_XR_Access]
- Rik: No, it's a web audio buffer. Already in memory
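The buffer hand-off Rik describes can be sketched with standard Web Audio plus one invented call. Only `makeSine` and `toAudioBuffer` use real, standard APIs; `xrSession.playHapticBuffer` is hypothetical — no such WebXR method exists today.

```javascript
// Synthesize PCM samples in memory (standard JS, no browser needed).
function makeSine(freqHz, durationS, sampleRate) {
  const samples = new Float32Array(Math.round(durationS * sampleRate));
  for (let i = 0; i < samples.length; i++) {
    samples[i] = Math.sin(2 * Math.PI * freqHz * i / sampleRate);
  }
  return samples;
}

// Browser-only: wrap the samples in a Web Audio AudioBuffer, which is
// "already in memory" and cheap to hand off, as noted in the discussion.
function toAudioBuffer(ctx, samples, sampleRate) {
  const buffer = ctx.createBuffer(1, samples.length, sampleRate); // mono
  buffer.copyToChannel(samples, 0);
  return buffer;
}

// Hypothetical usage, if a WebXR-level API existed, synchronized to display
// time as Rik describes ("pass it a time, it plays at that time"):
//   const buf = toAudioBuffer(new AudioContext(), makeSine(170, 0.1, 8000), 8000);
//   xrSession.playHapticBuffer(inputSource, buf, xrFrame.predictedDisplayTime);
```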
- 17:42:02 [Dylan_XR_Access]
- Ada: We should set a cross-group meeting
- 17:42:32 [Dylan_XR_Access]
- Marcos: next meeting on Thursday May 11th
- 17:43:25 [atsushi]
- zakim, take up agendum 2
- 17:43:25 [Zakim]
- agendum 2 -- webxr#1320 Discuss Accessibility Standards Process -- taken up [from atsushi]
- 17:44:00 [Manishearth_]
- scribenick: Manishearth_
- 17:44:16 [mjordan]
- mjordan has joined #immersive-web
- 17:44:34 [Manishearth_]
- Dylan: prior a11y discussions: webxr has *some* control over this but it's fundamentally a low level system
- 17:44:44 [atsushi]
- rrsagent, this meeting spans midnight
- 17:44:53 [Manishearth_]
- .... we should figure out what of this is under our scope, and what falls under other groups
- 17:45:29 [Manishearth_]
- ... case study: charles & i are a part of an NSF team, making nonverbal communication in XR accessible to low-vision people. making gestures, physical proximity, and 3d/2d content and turning that into sound and haptics
- 17:46:04 [Yih_]
- Yih_ has joined #immersive-web
- 17:46:12 [Manishearth_]
- ... some things here we can help handle, some things like gestures or emoji are beyond the webxr level
- 17:46:50 [Dylan_XR_Access]
- Resource: How Do You Add Alternative Text and Metadata to glTF Objects? https://equalentry.com/accessibility-gltf-objects/
- 17:47:19 [adarose]
- q?
- 17:47:20 [Manishearth_]
- .... <missed>
- 17:47:28 [Jared]
- q+
- 17:47:36 [Manishearth_]
- .... could create a task force from XR a11y, separate from tuesday meetings
- 17:48:03 [Manishearth_]
- ... can bring recs to this group as a whole, and bring in the APA/etc
- 17:48:19 [Manishearth_]
- Charles: <last two lines>
- 17:48:32 [adarose]
- q+ to ask about standardisation work we can do in this group
- 17:48:49 [adarose]
- ack Jared
- 17:48:50 [Manishearth_]
- Dylan: A lot of the current screenreaders in VR are about pointing at what you want and OCRing what you see, as opposed to looking at "everything in the space"
- 17:48:53 [Jared]
- https://www.w3.org/2019/08/inclusive-xr-workshop/
- 17:49:25 [Manishearth_]
- Jared: back in 2019 there was a workshop. there have been quite a few shifts around responsibilities in the w3c
- 17:49:43 [Manishearth_]
- ... could be interesting for this group to have one resource for what the responsibilities currently are
- 17:50:07 [Manishearth_]
- ... i've had a hard time discovering what that is. would be good to come up with consensus on the current state of things
- 17:50:11 [adarose]
- q?
- 17:50:28 [Dylan_XR_Access]
- XR Access github: https://bit.ly/xraccess-github
- 17:51:06 [Nick-Niantic]
- q+
- 17:51:15 [adarose]
- ack adarose
- 17:51:15 [Zakim]
- adarose, you wanted to ask about standardisation work we can do in this group
- 17:51:29 [Manishearth_]
- Dylan: one thing is that we don't have things like a list of legal responsibilities for XR, and that's one of the problems
- 17:51:39 [Manishearth_]
- ... good to have minimum guidelines around this
- 17:51:56 [Manishearth_]
- ada: 100%, if we had such minimal guidelines we could start building the things we need so people can satisfy them
- 17:52:23 [Manishearth_]
- ... also this is a good group to do this work, to do it in the group or as a separate task force formed from this group
- 17:53:09 [Manishearth_]
- ... something mentioned last week, might be a good idea to ... like the a11y object model ... the visual parts of that model are quite tied in, but nothing like that for WebGL. Giving people the option to generate that themselves will be useful
- 17:53:25 [CharlesL]
- q+
- 17:53:25 [adarose]
- q?
- 17:53:29 [adarose]
- ack Nick-Niantic
- 17:53:43 [Manishearth_]
- ... and then if there are minimum-viable standards later, we can say "hey we made this easy for you" (and if you don't do it, there's the stick)
- 17:53:56 [Manishearth_]
- Nick: when we talk about a11y we talk about alt text, ARIA tags, ... markup
- 17:54:13 [Manishearth_]
- ... as ada said we now have webgl/gpu which don't know anything about what they're rendering
- 17:54:30 [adarose]
- q+
- 17:54:36 [Manishearth_]
- ... but you also have frameworks like a-frame/etc that integrate with DOM/etc
- 17:55:04 [Manishearth_]
- ... and they can perhaps do more semantic a11y stuff
- 17:55:11 [Manishearth_]
- ... otoh there's pushback against them for being heavy on the DOM
- 17:55:19 [adarose]
- ack CharlesL
- 17:55:42 [Manishearth_]
- Nick: in other words; do you think we should make a standard like a-frame, or something else?
- 17:55:46 [Manishearth_]
- q+
- 17:56:18 [Manishearth_]
- adarose: would like an imperative API, where you build some kind of tree
- 17:56:35 [Manishearth_]
- ... probably has access to hit boxes / etc
- 17:56:56 [Manishearth_]
- Nick: could it be declarative? like a json file?
- 17:57:03 [Manishearth_]
- ada: i guess you could and then parse it into tree
- 17:57:29 [Manishearth_]
- ada: part of my instinct is to keep the DOM for stuff that is rendered in DOM
- 17:57:37 [Manishearth_]
- ... especially as we get more DOM integration
- 17:58:17 [Dylan_XR_Access]
- q+
- 17:58:17 [Manishearth_]
- ... a-frame has shown you can have a nice matchup bw the DOM tree and the scenegraph
- 17:58:17 [adarose]
- scribenick: adarose
- 17:59:07 [cabanier]
- q+
- 17:59:36 [cwilso]
- present+
- 17:59:50 [adarose]
- Manishearth_: I would prefer an imperative API; I would not want to standardise A-Frame for a11y. For a DOM-based API, more ARIA tags that declarative frameworks like A-Frame could use would be nice. But an imperative API would work for everyone. While I think the DOM-based API is fine, I wouldn't want to force everyone through it.
- 18:00:40 [adarose]
- ... there are lots of tools for those scenarios; I don't want people doing a11y in XR to be stuck with that approach. Imperative APIs can be integrated into DOM-based APIs without additional cost
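The imperative API ada and Manish discuss — a library building a semantic tree with labels and hit boxes for an otherwise-opaque WebGL scene — might look something like the sketch below. Every name here (`A11yNode`, `describe`, the roles) is invented for illustration; nothing like this is specced yet.

```javascript
// Hypothetical imperative accessibility tree for a rendered 3D scene.
class A11yNode {
  constructor(label, { role = 'generic', bounds = null } = {}) {
    this.label = label;   // text a screenreader could speak
    this.role = role;     // e.g. 'button', 'person', 'landmark'
    this.bounds = bounds; // hit box in scene coordinates, for spatial queries
    this.children = [];
  }
  append(child) { this.children.push(child); return child; }
}

// Depth-first flattening: roughly what a screenreader needs to present the
// scene as a traversable list (the "1-dimensional" representation Manish
// notes the 2D web already has).
function describe(node, depth = 0, out = []) {
  out.push('  '.repeat(depth) + `${node.role}: ${node.label}`);
  for (const child of node.children) describe(child, depth + 1, out);
  return out;
}

// A library (three.js, a Unity web export, etc.) would populate the tree:
const scene = new A11yNode('Museum lobby', { role: 'landmark' });
scene.append(new A11yNode('Ticket desk', { role: 'button' }));
scene.append(new A11yNode('Guide avatar', { role: 'person' }));
```

A declarative source (e.g. Nick's JSON-file suggestion) could be parsed into the same tree, as ada notes.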
- 18:00:40 [Manishearth_]
- q?
- 18:00:41 [Jared]
- q+
- 18:00:45 [Manishearth_]
- ack Manish
- 18:00:48 [adarose]
- ack adarose
- 18:00:51 [adarose]
- ack Manishearth_
- 18:00:59 [Manishearth_]
- scribe+
- 18:01:00 [adarose]
- ack Dylan_XR_Access
- 18:01:15 [CharlesL]
- q+
- 18:01:19 [Manishearth_]
- Dylan: another player that we should keep in mind here is screenreaders
- 18:01:31 [Manishearth_]
- ... there's gonna be a big q of ... when they get this, how do they interpret it
- 18:01:43 [Manishearth_]
- ... what are they going to do with it
- 18:02:15 [Manishearth_]
- ... would be very curious to see what the differences are when it comes to how they acquire their content, and how different screenreaders fare when fed these things
- 18:02:47 [adarose]
- q?
- 18:02:47 [Manishearth_]
- ... if there's a way we can make these experiences at least navigable from a user exp ... relatively similar, so people aren't coming to this completely confused as to the way it was built
- 18:02:50 [Manishearth_]
- q?
- 18:03:09 [Nick-Niantic]
- q+
- 18:03:13 [Manishearth_]
- ada: i think things like unity, when they're targeting the web, ...
- 18:03:27 [Manishearth_]
- .... things in the session or on the document itself , they should be able to use it
- 18:04:09 [adarose]
- q?
- 18:04:11 [adarose]
- ack cabanier
- 18:04:16 [Dylan_XR_Access]
- q+
- 18:04:18 [Manishearth_]
- ... because it's a new rendering mode, existing screenreaders would have to write additional APIs to hook into it. needs to be easily accessible, not deeply ingrained in a way that you wouldn't get from the DOM tree + executed JS
- 18:05:11 [Manishearth_]
- cabanier: not sure if we ever wrote down results of a TPAC session about a lot of this
- 18:05:37 [Manishearth_]
- ada: hopefully minuted. wasn't our meeting, might be an a11y group
- 18:05:54 [Manishearth_]
- cabanier: at the time we thought we had something that covers most of what is needed by webxr
- 18:06:01 [cwilso]
- There was a workshop
- 18:06:13 [CharlesL]
- q?
- 18:06:17 [Manishearth_]
- ada: going to make a repo to start work here. it's going to have to be impld in the browser
- 18:06:30 [Brandel]
- cwilso: Do you have the link to the Webex?
- 18:06:38 [cwilso]
- https://www.w3.org/2019/08/inclusive-xr-workshop/papers/XR_User_Requirements_Position_Paper_draft.html
- 18:06:46 [bajones]
- q+ To point out https://github.com/WICG/aom
- 18:07:12 [Manishearth_]
- cwilso: <is banging on the gates>
- 18:07:35 [Manishearth_]
- fetchez la vache
- 18:07:45 [CharlesL]
- q?
- 18:08:17 [adarose]
- q?
- 18:08:21 [adarose]
- ack Jared
- 18:08:57 [adarose]
- q?
- 18:09:10 [Manishearth_]
- q+
- 18:09:18 [Marisha]
- Marisha has joined #immersive-web
- 18:09:40 [Manishearth_]
- Jared: what kind of process exists to ensure we follow the success criteria that each spec has to have an a11y section
- 18:10:12 [Manishearth_]
- ada: we generally ensure that the webxr APIs are more accessible than what they are building on
- 18:10:36 [adarose]
- ack CharlesL
- 18:10:38 [Manishearth_]
- ... big problem is that devs aren't really using stuff we have at the per-spec level, doing something like this might work but nobody else is doing that kind of work
- 18:10:48 [Yih]
- Yih has joined #immersive-web
- 18:10:58 [Manishearth_]
- Charles: The concept i was thinking of was the w3c registry
- 18:11:11 [Manishearth_]
- ... screenreaders already know how to navigate the DOM, that might make sense
- 18:11:33 [Manishearth_]
- ... as long as the new portions of the DOM get updated as you move around
- 18:12:07 [Manishearth_]
- ... parallel with the publishing group in the w3c, created a separate group
- 18:12:19 [Manishearth_]
- ada: regarding last point; pretty much all of our specs are done in parallel
- 18:12:25 [adarose]
- ack Nick-Niantic
- 18:12:27 [Manishearth_]
- ... so a module would fit in very well
- 18:12:54 [Manishearth_]
- Nick: conversation earlier, may want to consider a spec that's not only for web devs but also useful for unity/etc people
- 18:13:23 [Manishearth_]
- ... on one hand a thorny problem to solve at an api level. thinking of GLTF as a format; maybe a way to do a11y tags is as a part of the gltf spec
- 18:13:37 [Manishearth_]
- ... and then you have browser libs/etc that read source information in that scenegraph
- 18:14:07 [Manishearth_]
- ... not perfect, if you're not using GLTF and doing runtime stuff, there's no real recourse
- 18:14:17 [Manishearth_]
- ada: for the model tag discussion we're going to need this kind of thing
- 18:14:19 [Manishearth_]
- q?
- 18:14:21 [adarose]
- ack Dylan_XR_Access
- 18:14:57 [Manishearth_]
- Dylan: we can connect with the devs working with screenreaders
- 18:15:10 [Manishearth_]
- ... other thing is we can work with unity/etc people who need to integrate it
- 18:15:22 [Manishearth_]
- ... also need to figure out where we expose it at each level
- 18:15:39 [adarose]
- ack bajones
- 18:15:39 [Zakim]
- bajones, you wanted to point out https://github.com/WICG/aom
- 18:15:59 [Manishearth_]
- q?
- 18:16:19 [Manishearth_]
- bajones: at tpac 2022 we had a meeting with the a11y object model group, part of WICG
- 18:16:36 [Manishearth_]
- ... part of the programmatic extension of ARIA
- 18:16:48 [Manishearth_]
- ... can we make imperative canvas/webgl stuff more accessible
- 18:17:25 [Manishearth_]
- ... mostly just "everyone defines this through js". problem becomes "how do we motivate that". should continue to interface with them, they were quite interested in working with us
- 18:18:19 [Manishearth_]
- ... second point: idk how well this would apply here. One of the things we did to the webxr api was an abundance of labels. This is just for development purposes; so you can have good error messages
- 18:18:35 [Manishearth_]
- s/webxr/webgpu
- 18:18:44 [Manishearth_]
- ... appealing to dev's selfish nature here ... works!
- 18:19:00 [Dylan_XR_Access]
- q+
- 18:19:09 [Manishearth_]
- ... anything we can do to make object-picking, debugging, etc easier; whatever carrot we can dangle, that would prob be good
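The "selfish carrot" bajones describes is real in WebGPU: every descriptor accepts an optional `label`, and implementations echo it back in error messages. The `a11yRegistry` below is an invented illustration of reusing those same developer labels as accessibility metadata; only the `label` field itself is a real API.

```javascript
// Invented helper: record a human-readable description alongside a label.
const a11yRegistry = new Map();

function register(label, description) {
  a11yRegistry.set(label, description);
  return label;
}

// Browser-only (never invoked here): WebGPU's real label field, which shows
// up in validation errors — the debugging carrot bajones describes.
function makeLabeledBuffer(device) {
  return device.createBuffer({
    label: register('player-mesh-vertices', 'The player avatar geometry'),
    size: 1024,
    usage: GPUBufferUsage.VERTEX,
  });
}
```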
- 18:19:35 [adarose]
- q?
- 18:19:38 [CharlesL]
- +1 to the a11y labels carrot for devs!
- 18:19:39 [Manishearth_]
- ada: reminded of Google driving SEO with this
- 18:19:40 [adarose]
- ack Manishearth_
- 18:20:09 [Yih]
- Yih has joined #immersive-web
- 18:21:19 [adarose]
- ack Dylan_XR_Access
- 18:21:25 [Manishearth_]
- manish: note on process; we do have
- 18:21:57 [Manishearth_]
- ... a11y at CR review. if we want to do more than what the review requires we can also do that, tricky. generally in favor of having an a11y-focused model that other specs build on
- 18:22:10 [Manishearth_]
- Dylan: making things accessible makes it more readable to machines too
- 18:22:26 [Manishearth_]
- ... one thing we do is to get univs to teach a11y but also get people to work on these kinds of challenges
- 18:22:52 [CharlesL]
- q+
- 18:22:56 [Manishearth_]
- ... if there is oss code from this group, that's something we're interested in making easier to access
- 18:23:27 [Manishearth_]
- ... encourage people to reach out!!!
- 18:23:28 [adarose]
- q?
- 18:23:31 [adarose]
- ack CharlesL
- 18:23:55 [Dylan_XR_Access]
- Prototype for the people project - would be happy to add anything from this conversation that needs additional development muscle: https://xraccess.org/workstreams/prototype-for-the-people/
- 18:23:56 [Jared]
- I'm interested in that. Working on OSS WebXR samples now with lots of people in the community.
- 18:24:12 [adarose]
- q?
- 18:24:14 [Manishearth_]
- Dylan: what about building a11y checker tools
- 18:24:30 [Manishearth_]
- ada: currently not much existing , a lot of tooling is around rendering
- 18:24:47 [jfernandez]
- jfernandez has joined #immersive-web
- 18:25:12 [Manishearth_]
- s/Dylan/Charles
- 18:25:32 [Manishearth_]
- Charles: might end up becoming a legal requirement, even.
- 18:25:38 [Manishearth_]
- ada: really pro there being a11y standards for XR
- 18:25:51 [Dylan_XR_Access]
- q+
- 18:26:01 [felix_meta__]
- felix_meta__ has joined #immersive-web
- 18:26:04 [Manishearth_]
- ... lots of places won't do a11y unless legally mandated to do so
- 18:26:09 [adarose]
- q?
- 18:26:27 [Manishearth_]
- ... unsure if it's us or a different group
- 18:26:47 [adarose]
- q?
- 18:26:49 [Manishearth_]
- RRSAgent, please draft the minutes
- 18:26:50 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
- 18:26:56 [adarose]
- ack Dylan_XR_Access
- 18:26:56 [Nick-Niantic]
- q+
- 18:26:57 [Manishearth_]
- Dylan: def agree
- 18:27:24 [Manishearth_]
- ... we need to surface text, even, at the AOM level/etc
- 18:27:40 [Manishearth_]
- ... do we have info for the xaur group
- 18:27:53 [Manishearth_]
- ... e.g. if you're in a social vr setting you should be able to tell where peoples' avatars are
- 18:28:17 [adarose]
- ack Nick-Niantic
- 18:28:24 [Manishearth_]
- ... ensure that the right concerns get directed to the right group
- 18:28:45 [Manishearth_]
- Nick: q for googlers: relatively recently Google transitioned Docs from being DOM-based to canvas-based
- 18:29:06 [Manishearth_]
- ... improves compat and smoothness, but now you have to reinvent a11y
- 18:29:33 [Manishearth_]
- bajones: idk about what efforts went through to make it accessible
- 18:30:20 [Manishearth_]
- ... i was under the impression that it had happened but recently i went spelunking and there was still some DOM there
- 18:30:40 [Manishearth_]
- ... so the transition may not be as complete
- 18:30:49 [Manishearth_]
- Nick: hm. find-in-page at least doesn't work
- 18:31:08 [Manishearth_]
- bajones: do not expect it was done in a way that was necessarily easy to replicate outside of google
- 18:31:29 [CharlesL]
- q?
- 18:32:04 [Dylan_XR_Access]
- q+
- 18:32:12 [adarose]
- ack Dylan_XR_Access
- 18:32:22 [bajones]
- A relevant link for the Docs canvas transition: https://workspaceupdates.googleblog.com/2021/05/Google-Docs-Canvas-Based-Rendering-Update.html
- 18:32:22 [Manishearth_]
- Nick: interesting that Docs is kinda in the opposite situation where they're moving from a structured model to 2d rendering
- 18:32:33 [Yih]
- Yih has joined #immersive-web
- 18:32:51 [adarose]
- q+
- 18:33:53 [Manishearth_]
- Dylan: path forward: do we work with folks like unity/8thwall/etc to come up with the solution? can we require users to use something
- 18:34:03 [Manishearth_]
- Nick: yeah even figuring out the level at which to do this is hard
- 18:34:22 [adarose]
- ack adarose
- 18:34:25 [adarose]
- q+
- 18:34:27 [adarose]
- ack adarose
- 18:34:30 [Manishearth_]
- Nick: At least for the 2d web the browser knows everything about what's going on, we're nowhere near that here
- 18:34:52 [Manishearth_]
- ada: one approach i'd like to take is have a go at speccing out an API to let libraries add the info needed
- 18:35:35 [Manishearth_]
- ... "this is a thing we're proposing, a11y, SEO, etc", showing it to the various people who it's relevant to
- 18:35:42 [Manishearth_]
- ... "does this fit with what you're building"
- 18:36:04 [Manishearth_]
- ... then we can approach the model people with "these libraries have ways to add things to rendering, but these models are opaque blobs"
- 18:36:30 [CharlesL]
- q+
- 18:36:33 [Manishearth_]
- q+
- 18:36:48 [Manishearth_]
- ada: even if we do something like that, it won't be useful in all situations
- 18:37:52 [Manishearth_]
- ... "there is a fox person in front of you with red ears and ..." is not necessarily as useful as "there is person A in front of you, they are walking away, slightly frowning" in many contexts
- 18:38:10 [Manishearth_]
- Dylan: our nsf grant is helping figure that out
- 18:38:34 [adarose]
- q?
- 18:38:37 [adarose]
- ack CharlesL
- 18:38:38 [Manishearth_]
- ada: have opinions about avatars on the web, think we need to drive standardization before we get into a problem
- 18:38:55 [Manishearth_]
- Charles: reach out to the various a11y groups at tpac?
- 18:39:06 [adarose]
- q?
- 18:39:08 [Manishearth_]
- ada: good call, haven't started assembling an agenda but can
- 18:39:11 [Yonet]
- q+
- 18:40:04 [Dylan_XR_Access]
- q+
- 18:40:26 [adarose]
- q?
- 18:40:29 [adarose]
- ack Manishearth_
- 18:41:09 [Brandel]
- Brandel has joined #immersive-web
- 18:43:52 [adarose]
- ack Yonet q?
- 18:43:59 [adarose]
- ack Yonet
- 18:45:25 [Manishearth_]
- Manish: a big diff b/w the 2d web and XR is that the 2d web can be represented as a roughly 1-dimensional thing (a traversable tree) with some jumping around, whereas for XR that's very ... not true; what is and isn't important, and how that changes over *time*, leads to trickiness, and different applications will want to highlight different things. we do need something low level
- 18:45:41 [Manishearth_]
- Yonet: <offer to help Dylan and also solicit other help>
- 18:45:44 [Manishearth_]
- q?
- 18:45:44 [adarose]
- ack Dylan_XR_Access
- 18:46:27 [Manishearth_]
- Dylan: to give a sneak preview of the stuff we're doing, we did an AR thing to use e.g. the hololens for real spaces, to e.g. help blind people navigate to the right bus stop
- 18:46:52 [Manishearth_]
- ... when you try to make everything audible at once everything is irrelevant
- 18:47:01 [Jared]
- I am interested in participating in the accessibility initiative too.
- 18:47:10 [Yonet]
- Great
- 18:47:49 [Manishearth_]
- Dylan: would like help setting the group up
- 18:48:43 [atsushi]
- zakim, take up agendum 3
- 18:48:43 [Zakim]
- agendum 3 -- semantic-labels -- taken up [from atsushi]
- 18:49:18 [etienne]
- etienne has joined #immersive-web
- 18:49:44 [CharlesL]
- scribe+
- 18:50:00 [CharlesL]
- agenda?
- 18:50:01 [adarose]
- https://github.com/immersive-web/semantic-labels/issues/4
- 18:50:12 [CharlesL]
- zakim, take up item 3
- 18:50:12 [Zakim]
- agendum 3 -- semantic-labels -- taken up [from atsushi]
- 18:50:59 [Manishearth_]
- cabanier: planes give you the different surfaces; hit testing lets you point rays at things and see the intersections
- 18:51:09 [Jared]
- Is there a link for this issue?
- 18:51:21 [CharlesL]
- Rik: the quest browser gives back planes etc. in the real world, but you're not sure what you are hitting; the user has to manually set up the room and what those objects are, table, chair, etc.
- 18:51:33 [Manishearth_]
- ... but you don't know what you're actually hitting. in quest the user tells us what their things are when they set stuff up (manually). we want to expose that to webxr
- 18:51:48 [Manishearth_]
- ... so you know if something is a door or window or something
- 18:51:56 [Yonet]
- Jared https://github.com/immersive-web/semantic-labels/issues/4
- 18:52:27 [CharlesL]
- … update the two existing specs: in the array of attributes, a single DOMString attribute.
- 18:52:31 [bajones]
- q+
- 18:52:40 [CharlesL]
- … set up a repo that defines all that.
- 18:52:52 [Manishearth_]
- RRSAgent, please draft the minutes
- 18:52:54 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
- 18:53:00 [adarose]
- q?
- 18:53:02 [adarose]
- ack bajones
- 18:53:32 [CharlesL]
- bajones: topic came up before. only expose metadata on hits, correct? Rik: Yes
- 18:54:03 [CharlesL]
- … hit tests could get a rough idea, as you point you can have items call out
- 18:54:37 [CharlesL]
- … curious about expected use cases, if I can only get back the real item you are pointing at in the real world
- 18:54:59 [bialpio]
- q+
- 18:55:07 [CharlesL]
- Rik: Planes API, Meshes API. you can query all the planes in a scene
- 18:55:31 [CharlesL]
- … the quest browser gives the website a link to the privacy policy on what data you are giving up.
- 18:56:00 [CharlesL]
- … if you are putting furniture in a room you put it on the floor, and likewise a painting should be put on a wall and not a window.
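A sketch of how a page might act on such labels. The `semanticLabel` attribute name and the frame-loop usage in the comment are assumptions based on the proposal under discussion, not settled API; only the small placement helper is concrete:

```javascript
// Hypothetical: assumes detected planes/meshes gain a string label attribute
// (called semanticLabel here) drawn from a small registry of values.

// Surfaces that floor-standing virtual furniture may legitimately rest on.
const FLOOR_LIKE = new Set(["floor", "desk"]);

// Decide whether a labeled surface is a valid anchor for furniture.
function canPlaceFurnitureOn(label) {
  // A missing or unknown label is treated as "don't place", not as a floor.
  return FLOOR_LIKE.has(label);
}

// Illustrative frame-loop usage (not runnable outside an XR session):
// for (const plane of frame.detectedPlanes) {
//   if (canPlaceFurnitureOn(plane.semanticLabel)) { /* anchor the chair */ }
// }
```

The same shape would let a painting-placement check accept "wall" but reject "window", per the example above.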
- 18:56:15 [Yonet]
- ack bialpio
- 18:57:20 [CharlesL]
- bialpio: know there are some products that exist that label context: a "mask" of what the user sees would annotate every pixel. if devices do operate like this, how do we expose this info through the API? where is the sky, etc.
- 18:57:41 [bajones]
- q+
- 18:58:04 [felix_meta_]
- felix_meta_ has joined #immersive-web
- 18:58:30 [CharlesL]
- … wonders about an annotated buffer; we may not know where all the pixels are. for table-top board games, where is the table? how do we integrate with a buffered approach? limited APIs limit to a bitmask: sky vs. wall vs. window.
- 18:58:55 [CharlesL]
- Rik: going outside is still unsolved for VR. tied to the room, even walking between rooms.
- 18:59:20 [CharlesL]
- … not really implemented correctly. Semantic Labelling comes from OpenXR.
- 18:59:59 [CharlesL]
- … would be an optional label.
- 19:00:20 [Yonet]
- ack bajones
- 19:00:26 [Nick-Niantic]
- q+
- 19:00:45 [CharlesL]
- bajones: assume real world meshing paired with semantic labels, would this help with the tagged buffer?
- 19:01:19 [CharlesL]
- ???: will there be a viewport like the sky?
- 19:02:04 [CharlesL]
- bajones: if I am in my living room and I can label couch / chair, but when I go outside I won't know there is a mountain vs. sky.
- 19:02:45 [CharlesL]
- ???: No confidence level could be useful, tagged buffer could expose confidence level. I am looking for problems here not sure if they are real.
- 19:02:59 [CharlesL]
- … we need to make sure the API is flexible
- 19:03:18 [CharlesL]
- bajones: masking out sky, star chart AI.
- 19:03:32 [CharlesL]
- … anyone know what they are using for masking out the sky in those applications.
- 19:05:09 [Yonet]
- ack Nick-Niantic
- 19:05:16 [CharlesL]
- Nick: we employ two scenes and one with a mask over it.
- 19:05:21 [Yonet]
- https://github.com/immersive-web/semantic-labels
- 19:05:31 [CharlesL]
- … do you have a list of the semantic labels?
- 19:05:36 [Yonet]
- https://github.com/immersive-web/semantic-labels#list-of-semantic-labels-for-webxr
- 19:05:43 [lgombos]
- q+
- 19:05:47 [CharlesL]
- Rik: you can add more.
- 19:05:50 [Dylan_XR_Access]
- Desk, couch, floor, ceiling, wall, door, window, other
- 19:06:19 [Yonet]
- ack lgombos
- 19:06:34 [CharlesL]
- … "other" is undefined right now, empty
- 19:06:58 [CharlesL]
- … if you manually draw a table it won't have the semantics. one label per object.
- 19:07:41 [CharlesL]
- Ada: is it an array of 1 item? e.g. table, and round table, brown table.
- 19:08:01 [CharlesL]
- Rik: we should not invent it now.
- 19:08:28 [CharlesL]
- … confidence level: I don't like that it pushes the decision to the developer; avoiding a confidence level would be good.
- 19:08:35 [bialpio]
- q+
- 19:10:03 [CharlesL]
- Nick: confidence level: content fades out along the edges, having confidence level is helpful. per pixel confidence level.
- 19:10:26 [CharlesL]
- Rik: Depth.
- 19:11:09 [CharlesL]
- ??: ARCore could give you depth information. one issue about that: consider how to expose this. it was one buffer with both confidence level and data, but they changed that.
- 19:11:11 [Dylan_XR_Access]
- q?
- 19:11:24 [Yonet]
- ack bialpio
- 19:11:29 [Dylan_XR_Access]
- q+
- 19:11:45 [CharlesL]
- bialpio: the OpenXR extension is coming from Meta.
- 19:11:58 [CharlesL]
- … who implements the extension?
- 19:12:11 [Yonet]
- ack Dylan_XR_Access
- 19:12:13 [bialpio]
- q+
- 19:12:47 [CharlesL]
- Dylan: a11y impact: being able to label what's in the user's env. is very important, and where the edges are; edge enhancement around the borders is very important.
- 19:12:59 [Yonet]
- ack bialpio
- 19:13:06 [CharlesL]
- Rik: quest we do this so you don't trip over the edge of an object.
- 19:13:49 [Nick-Niantic]
- q+
- 19:14:04 [CharlesL]
- bialpio: with computer vision, if we do expose confidence levels, an app may ignore a table with 30% confidence and not render it; leaving it up to the AI is probably not a good idea.
- 19:14:17 [Dylan_XR_Access]
- "The difference between something that might go wrong and something that can't possibly go wrong is that when the thing that can't possibly go wrong goes wrong, it's generally much harder to get at and fix." -Douglas Adams
- 19:14:34 [bajones]
- q+
- 19:14:38 [CharlesL]
- … making sure we don't paint ourselves into a corner; make sure sky detection, blending sky vs. not-sky, works for this.
- 19:14:59 [Yonet]
- ack Nick-Niantic
- 19:15:32 [CharlesL]
- Nick: hit test for meshes / planes as headsets go outdoors. building meshes outside may be challenging. it may be useful to have labels per vertex, not per mesh: this region of a scene is a bush or tree.
- 19:15:58 [CharlesL]
- Rik: could have 1000s of vertices
- 19:16:13 [CharlesL]
- Nick: could be used to place content in a smart way.
- 19:16:55 [CharlesL]
- … having labels per plane could be useful but outdoors could have multiple meshes for multiple objects.
- 19:16:58 [Brandel]
- Brandel has joined #immersive-web
- 19:17:06 [CharlesL]
- Rik: Is there hardware?
- 19:17:06 [Brandel]
- q?
- 19:17:21 [CharlesL]
- Nick: there are classifiers for mesh generation from scanning.
- 19:18:38 [CharlesL]
- bajones: which methods expose which data? Are these semantic labels / classifications, with the ability to add more later? planes, meshes, pixels: seems to make sense. propose: we should have a registry of semantic labels.
- 19:19:41 [CharlesL]
- … image-based masks: different pixels could be labelled, with confidence levels etc. give each of these values an enum; we should have concrete values, an integer value for each label.
- 19:20:00 [CharlesL]
- … Sounds like semantic labels is Yes.
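A minimal sketch of the registry idea proposed here. The label strings echo the ones listed earlier in the session; the integer assignments and helper names are invented for illustration, since no numbering has been agreed:

```javascript
// Hypothetical registry: each semantic label gets a fixed small integer so a
// per-pixel or per-vertex label buffer can store labels compactly.
const LABEL_REGISTRY = ["other", "floor", "ceiling", "wall", "door", "window", "desk", "couch"];

// Encode a label string as its registry index; unknown labels map to "other".
function labelToCode(label) {
  const code = LABEL_REGISTRY.indexOf(label);
  return code === -1 ? 0 : code;
}

// Decode a (hypothetical) Uint8Array label buffer back into strings.
function decodeLabelBuffer(buffer) {
  return Array.from(buffer, (code) => LABEL_REGISTRY[code] ?? "other");
}
```

Collapsing unknown labels to "other" mirrors the point that "other" is currently the undefined catch-all.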
- 19:20:44 [Yonet]
- ack bajones
- 19:20:50 [CharlesL]
- rrsagent, draft minutes
- 19:20:51 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html CharlesL
- 19:21:26 [bajones_]
- bajones_ has joined #Immersive-Web
- 19:21:39 [Dylan_XR_Access_]
- Dylan_XR_Access_ has joined #Immersive-web
- 19:23:21 [Jared_]
- Jared_ has joined #immersive-web
- 19:26:45 [CharlesL]
- CharlesL has joined #immersive-web
- 19:34:18 [dino7]
- dino7 has joined #immersive-web
- 20:24:03 [lgombos]
- lgombos has joined #immersive-web
- 20:24:55 [CharlesL]
- CharlesL has joined #immersive-web
- 20:25:23 [Jared]
- Jared has joined #immersive-web
- 20:26:35 [Marisha]
- Marisha has joined #immersive-web
- 20:27:02 [Brandel]
- Brandel has joined #immersive-web
- 20:27:28 [bajones]
- bajones has joined #Immersive-web
- 20:28:33 [atsushi]
- atsushi has joined #immersive-web
- 20:30:11 [adarose]
- adarose has joined #immersive-web
- 20:30:28 [Brandel]
- present+
- 20:30:29 [adarose]
- zakim, choose a victim
- 20:30:29 [Zakim]
- Not knowing who is chairing or who scribed recently, I propose Brandel
- 20:30:35 [cabanier]
- scribenick: cabanier
- 20:30:40 [adarose]
- zakim, choose a victim
- 20:30:40 [Zakim]
- Not knowing who is chairing or who scribed recently, I propose lajava
- 20:30:44 [Leonard]
- Leonard has joined #immersive-web
- 20:30:46 [Leonard]
- ... email: Dylan [AT] xraccess [DOT] org
- 20:31:01 [Leonard]
- present+
- 20:31:02 [bialpio_]
- bialpio_ has joined #immersive-web
- 20:31:36 [jfernandez]
- present+
- 20:31:36 [marcosc]
- marcosc has joined #immersive-web
- 20:31:36 [Marisha]
- present+
- 20:31:37 [adarose]
- present+
- 20:31:42 [CharlesL]
- present+
- 20:31:42 [lgombos]
- present+ Laszlo_Gombos
- 20:31:43 [marcosc]
- present+
- 20:31:43 [Jared]
- present+
- 20:31:45 [Dat_Chu]
- Dat_Chu has joined #immersive-web
- 20:31:49 [cabanier]
- topic: Model Element
- 20:31:51 [kdashg]
- kdashg has joined #immersive-web
- 20:31:53 [kdashg]
- present+
- 20:32:03 [cabanier]
- marcosc: not much has been done since the last meeting
- 20:32:13 [cabanier]
- ... mostly because the needed stuff is easy
- 20:32:27 [cabanier]
- ... the issue is that we need to agree that model is a good idea
- 20:32:41 [Leonard]
- q+
- 20:32:43 [cabanier]
- ... I was waiting for mozilla's standards position
- 20:32:54 [cabanier]
- ... I think cwilso has an opinion
- 20:32:54 [Jared]
- Is there an issue for this topic?
- 20:33:05 [cabanier]
- ... there are more questions that I've been grappling with
- 20:33:09 [cwilso]
- s/I'm unsure/I think/
- 20:33:13 [cabanier]
- ... like is this a media element?
- 20:33:29 [cabanier]
- ... is it like a 3d video? What about a11y?
- 20:33:44 [cabanier]
- ... how do we describe the accessibility of it?
- 20:34:04 [cabanier]
- ... we have a bunch of web content that moves and that has a11y content
- 20:34:14 [cabanier]
- ... one of the elephants in the room is the format
- 20:34:25 [cabanier]
- ... I'm not pushing but we have gltf and usdz
- 20:34:34 [cabanier]
- ... and we designed it to be format-agnostic
- 20:34:46 [Jared]
- Okay, is this related to https://modelviewer.dev/ ?
- 20:34:48 [cabanier]
- ... there's going to be an industry push for a standard format
- 20:35:06 [cabanier]
- ... how are we going to work out the format issues? We're going to have a conversation about that
- 20:35:09 [adarose]
- q?
- 20:35:15 [cabanier]
- ... this is roughly where we're at
- 20:35:16 [Marisha]
- q+
- 20:35:28 [cabanier]
- ... in webkit we landed width/height attributes
- 20:35:30 [bajones]
- q+
- 20:35:39 [adarose]
- ack Leonard
- 20:35:40 [cabanier]
- ... (ccormack worked on that for half a day)
- 20:35:43 [dulce]
- dulce has joined #immersive-web
- 20:35:45 [cabanier]
- ... feedback please
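For reference, a hedged sketch of the kind of markup being discussed. Only width/height are mentioned as landed; the `<source>`-style format fallback and the `type` values are assumptions modeled on `<video>`, shown as one possible answer to the format question rather than agreed behavior:

```html
<!-- Sketch only: <source> selection and the type values are assumptions;
     width/height are the attributes mentioned as landed in WebKit. -->
<model width="400" height="300">
  <source src="chair.usdz" type="model/vnd.usdz+zip">
  <source src="chair.glb" type="model/gltf-binary">
  <!-- Fallback content for browsers without <model> support -->
  <img src="chair.png" alt="A brown leather armchair">
</model>
```

A `<video>`-style selection algorithm would let authors ship both formats during a transition, at the cost of exactly the dual-authoring burden raised later in this session.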
- 20:35:57 [kdashg]
- q+
- 20:36:00 [cabanier]
- Leonard: fundamentally this is a good idea, to display 3d formats
- 20:36:09 [cabanier]
- ... but there are a whole bunch of issues
- 20:36:09 [Yih]
- Yih has joined #immersive-web
- 20:36:24 [cabanier]
- ... like how to ensure render quality, animation, interactivity
- 20:36:34 [cabanier]
- ... how do you get the camera in the scene
- 20:36:45 [cabanier]
- ... it's really hard to solve these issues
- 20:36:50 [marcosc]
- q+
- 20:37:02 [cabanier]
- ... the formats are going to be an issue but the concept should work out first
- 20:37:19 [cabanier]
- marcosc: I didn't really mean it's that easy :-)
- 20:37:26 [Dylan_XR_Access]
- Dylan_XR_Access has joined #immersive-web
- 20:37:30 [cabanier]
- ... you are right that rendering is quite hard
- 20:37:33 [Nick-Niantic]
- Nick-Niantic has joined #immersive-web
- 20:37:39 [cabanier]
- ... and those are things that I need help with defining
- 20:37:39 [Nick-Niantic]
- q+
- 20:37:39 [adarose]
- ack Marisha
- 20:37:47 [cabanier]
- ... I'm unsure if they will be easy
- 20:37:50 [marcosc]
- q-
- 20:37:53 [Brandel]
- q+
- 20:38:23 [cabanier]
- Leonard: in the issues, the specs are out of sync (??)
- 20:38:30 [cabanier]
- marcosc: we don't have versions in html
- 20:38:54 [cabanier]
- ... it's not trivial. there are test suites that may have fidelity
- 20:39:05 [cabanier]
- ... there is no notion of versions
- 20:39:14 [cabanier]
- ... specs can move faster than implementations
- 20:39:27 [cabanier]
- ... there is nothing process-wise preventing us from making progress quickly
- 20:39:46 [cabanier]
- ... I don't want to merge things in the spec without other implementor feedback
- 20:39:58 [adarose]
- ack Marisha
- 20:39:58 [cabanier]
- ... I don't want to add prototype-y stuff
- 20:40:19 [cabanier]
- Marisha: it came up earlier that webxr is a black box
- 20:40:38 [cabanier]
- ... there is a huge number of developers that can't participate because it's so complicated
- 20:40:48 [Dylan_XR_Access]
- q?
- 20:40:59 [adarose]
- ack bajones
- 20:40:59 [cabanier]
- ... the web is inherently semantic so the model would be very helpful
- 20:41:21 [cabanier]
- bajones: I think that the desire is understandable
- 20:41:32 [cabanier]
- ... especially in light of the a11y discussion
- 20:41:43 [cabanier]
- ... for developers that don't want to do the imperative thing
- 20:41:55 [cabanier]
- ... but my issue is that this feels like an unbounded space
- 20:42:26 [cabanier]
- ... working on webgpu and webxr, when talking about the model tag, I ask: what can the web already do that does that?
- 20:42:46 [cabanier]
- ... three.js, babylon can add on new modules and grow in complexity forever
- 20:42:53 [cabanier]
- ... which is ok for a user space library
- 20:43:13 [cabanier]
- ... but I'm not comfortable with that on a web spec
- 20:43:22 [cwilso]
- +1
- 20:43:25 [cabanier]
- ... I don't want it to become unreal engine in a tag
- 20:43:32 [cwilso]
- q+
- 20:43:35 [cabanier]
- ... is there a reasonable way to cap that complexity
- 20:43:49 [cabanier]
- ... is there something that we're willing to limit it to?
- 20:44:13 [cabanier]
- ... I don't know what the escape valves would look like
- 20:44:31 [cabanier]
- ... getting gpu buffers from the model tag is likely not a solution
- 20:44:49 [cabanier]
- ... we'd feel much better if there was a clear scope of work
- 20:44:52 [Leonard]
- q+
- 20:44:56 [cabanier]
- marcosc: I couldn't agree more
- 20:45:19 [cabanier]
- ... I thought you were going to mention the video element
- 20:45:29 [cabanier]
- ... which is what I'm envisioning
- 20:45:42 [cabanier]
- ... given a file, render something on the screen
- 20:46:01 [cabanier]
- bajones: I've had conversation about glb/usdz
- 20:46:17 [cabanier]
- ... because people think that you can just extend it
- 20:46:34 [cabanier]
- ... we really don't want to add things like hair rendering
- 20:46:46 [Leonard]
- USD connectors are the extension mechanism
- 20:46:53 [cabanier]
- ... even gltf has a bunch of extensions
- 20:46:54 [adarose]
- q+
- 20:47:06 [cabanier]
- ... things like refraction index, thickness of glass
- 20:47:15 [cabanier]
- ... there should be a line
- 20:47:30 [cabanier]
- ... and there's a temptation to keep pushing the line
- 20:48:02 [cabanier]
- dulce: physics is a big problem in xr and you will always be pushing that
- 20:48:35 [cabanier]
- bajones: for context, babylon worked with the havok team so now we have high quality physics for the web
- 20:48:46 [cabanier]
- ... do physics need to be part of the web?
- 20:49:07 [cabanier]
- ... will this reduce the complexity? People will want to push the line
- 20:49:13 [Dylan_XR_Access]
- q+
- 20:49:19 [cabanier]
- Marisha: do you see a cap?
- 20:49:28 [cabanier]
- bajones: I don't know what that is
- 20:49:39 [cabanier]
- ... but it shouldn't be infinity
- 20:50:02 [vicki]
- vicki has joined #immersive-web
- 20:50:10 [cabanier]
- ... I think we can find something that doesn't require us to build a new silo
- 20:50:24 [cabanier]
- marcosc: how did video cope with that?
- 20:50:48 [cabanier]
- bajones: mpeg spec has a lot of extensions that nobody implements
- 20:51:03 [cabanier]
- ... if we look at how video is actually used, they are very complex
- 20:51:15 [cabanier]
- ... but in terms of behavior they are well bounded
- 20:51:35 [cabanier]
- ... nobody expects all the pixels have physics attributes
- 20:51:43 [cabanier]
- ... which could be reasonable for the model tag
- 20:52:01 [cabanier]
- marcosc: why don't I take that back to the usd team?
- 20:52:13 [cabanier]
- ... how do we limit it so it doesn't get out of hand
- 20:52:16 [Leonard]
- glTF team does not want to limit the capabilities
- 20:52:46 [mats_lundgren_]
- mats_lundgren_ has joined #immersive-web
- 20:52:50 [cabanier]
- bajones: in the case of gltf, we'd likely support the base spec and a limited set of extensions
- 20:53:02 [cabanier]
- ... I'm sure there's a similar set for usd
- 20:53:12 [adarose]
- q?
- 20:53:12 [cabanier]
- marcosc: that is important to take back
- 20:53:35 [cabanier]
- kdashg: the format is the primary concern
- 20:53:50 [Manishearth_]
- Manishearth_ has joined #immersive-web
- 20:54:01 [Manishearth_]
- RRSAgent, please draft the minutes
- 20:54:02 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
- 20:54:08 [cabanier]
- ... it would be bad if authors had to provide 2 formats so it works everywhere
- 20:54:08 [cabanier]
- ... we need 1 path forward
- 20:54:28 [cabanier]
- ... whatever we do, we're still going to be subsetting it
- 20:54:42 [cabanier]
- ... this is what happened with webm and matroska
- 20:54:44 [Manishearth_]
- scribenick: cabanier
- 20:54:59 [cabanier]
- ... where webm is a subset
- 20:55:10 [cabanier]
- ... and it's explicitly cut down
- 20:55:30 [cabanier]
- ... so we'd need to do the same. People shouldn't have to experiment
- 20:55:39 [cabanier]
- ... use cases are also important
- 20:56:04 [cabanier]
- ... generally we don't see the model tag as something that makes it easier to draw 3d content
- 20:56:23 [cabanier]
- ... we're handling 3d content well today
- 20:56:33 [cabanier]
- ... we're focusing on narrower use cases
- 20:56:54 [cabanier]
- ... some of the things there. For instance privileged interactions
- 20:57:16 [cabanier]
- ... like an AR scenario where you'd not need to give depth information to the web site
- 20:57:25 [cabanier]
- ... so it would work with untrusted website
- 20:57:48 [cabanier]
- ... the other thing is that you can interact with other privileged content like iframes
- 20:58:03 [cabanier]
- ... which is what we should be focusing on. Triage our efforts
- 20:58:22 [cabanier]
- ... and not focus on making something it can already do easier. Focus on what it can't do
- 20:58:41 [cabanier]
- ... it's going to be really tempting to show a demo
- 20:58:46 [Manishearth_]
- s/Is there an issue for this topic?/scribe: cabanier/
- 20:58:48 [Manishearth_]
- RRSAgent, please draft the minutes
- 20:58:50 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
- 20:58:57 [cabanier]
- ... dropping a model in an AR scene
- 20:59:00 [adarose]
- q?
- 20:59:02 [cabanier]
- ... which we can already do
- 20:59:04 [adarose]
- ack kdashg
- 20:59:06 [adarose]
- ack Nick-Niantic
- 20:59:13 [cabanier]
- Nick-Niantic: obviously there's a lot here.
- 20:59:17 [cwilso]
- +1 to "focus on what doesn't demo well, rather than what does demo well."
- 20:59:20 [cabanier]
- ... echoing what other people say
- 20:59:20 [Manishearth_]
- s/scribenick cabanier/scribenick: cabanier/
- 20:59:23 [Manishearth_]
- RRSAgent, please draft the minutes
- 20:59:25 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
- 20:59:46 [cabanier]
- ... models are easy and accessible on the web today
- 21:00:02 [cabanier]
- ... with very little markup you can embed a model on the web today
- 21:00:24 [cabanier]
- ... we don't see a lot of wanting to stop at a model
- 21:00:42 [cabanier]
- ... most use cases require a dynamic presentation
- 21:01:05 [cabanier]
- ... for instance, change a color on a model, you don't want to download a new model for each color
- 21:01:17 [cabanier]
- ... usually you have a bit of code to draw animations
- 21:01:41 [cabanier]
- ... running in offline mode is less compelling than something that is prebaked
- 21:01:52 [cabanier]
- marcosc: can you talk more about swapping in a model?
- 21:02:06 [cabanier]
- Nick-Niantic: I can show it on my screen
- 21:02:21 [Leonard]
- glTF has KHR_materials_variants that holds multiple materials for a single model
- 21:03:27 [cabanier]
- ... (demoing) this is an example of a website with different models
- 21:03:59 [cabanier]
- ... this is telling aframe to make changes
- 21:04:12 [cabanier]
- ... other cases are character cameras
- 21:04:25 [bialpio_]
- (additional case study: model viewer which uses glTF, but don't ask me how it works internally: https://modelviewer.dev/examples/scenegraph/#swapTextures)
- 21:04:28 [cabanier]
- ... on the one hand, I agree that complexity can grow high
- 21:04:43 [cabanier]
- ... I don't agree that low complexity is what we want
- 21:04:51 [cabanier]
- marcosc: (????)
- 21:05:04 [cabanier]
- Nick-Niantic: enough functionality grows quickly
- 21:05:38 [cabanier]
- ... talking about 3D video, holograms are popular. (volumetric captures)
- 21:05:55 [cabanier]
- ... for the need of the market, there are a lot of formats to consider
- 21:06:26 [cabanier]
- marcosc: we don't know what's coming down the pipe
- 21:06:40 [cabanier]
- Nick-Niantic: yes but we shouldn't limit ourselves too much at the start
- 21:06:59 [cabanier]
- ... there are a lot of interesting cases with the video tag
- 21:07:11 [cabanier]
- ... applying video interestingly in a 3d space
- 21:07:34 [cabanier]
- ... where if you were to have a model tag, the question is how to get vertices and textures out of the model
- 21:07:40 [cabanier]
- ... so it's limited that way
- 21:08:20 [cabanier]
- ... we talked about gltf extensions, where they might grow and be extended over time
- 21:08:42 [cabanier]
- ... maybe we add semantic information inside the gltf
- 21:09:05 [cabanier]
- ... if the model tag is too limited, people will become frustrated
- 21:09:15 [cabanier]
- ... finally, we were talking about a11y
- 21:09:27 [cabanier]
- ... this could be embedded with the model
- 21:09:46 [cabanier]
- ... what we want is an annotated scene graph like what aframe lets you do?
- 21:10:01 [cabanier]
- marcosc: what does aframe do with a11y?
- 21:10:21 [cabanier]
- Nick-Niantic: aframe lets you declare your scene as html in dom elements
- 21:10:38 [cabanier]
- ... this lets you hook into the browser a11y engine
- 21:11:09 [cabanier]
- ... it won't work out of the box today but it might require a new rendering engine
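A hedged illustration of the point being made: because an A-Frame scene is ordinary DOM, ARIA attributes can be authored directly on entities; whether assistive technology actually surfaces them during an immersive session is exactly the open question noted above. The attribute values here are illustrative:

```html
<a-scene>
  <!-- ARIA attributes ride along on the DOM entities that declare the scene -->
  <a-box position="0 1 -3" color="#4CC3D9"
         role="img" aria-label="A blue cube"></a-box>
  <a-entity gltf-model="#robot" role="img"
            aria-label="A small robot, waving"></a-entity>
</a-scene>
```

This is the authoring side only; a `<model>` element with an opaque asset has no equivalent per-node hook today.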
- 21:11:15 [adarose]
- q?
- 21:11:43 [cabanier]
- ... in short the key is: without a lot of care, a model element is not as useful as what's in the market today
- 21:11:51 [adarose]
- ack Brandel
- 21:11:52 [cabanier]
- ... what is the better more useful thing
- 21:12:07 [cabanier]
- Brandel: as someone who plays on the internet a lot
- 21:12:17 [cabanier]
- ... we're always going to be disappointed
- 21:12:32 [cabanier]
- ... to that end, I'm not concerned.
- 21:12:56 [cabanier]
- ... what we need to find is the bare minimum that is useful
- 21:13:37 [cabanier]
- ... I was looking at the model proposal. we talk about using the environment map without needing a user request
- 21:13:47 [cabanier]
- ... or without needing access to the textures
- 21:14:19 [cabanier]
- ... on the complexity, we should aim at what is the simplest thing possible
- 21:14:35 [cabanier]
- ... and we should focus on the benefits
- 21:14:45 [cabanier]
- ... knowing that there is more content in the future
- 21:15:14 [cabanier]
- cwilso: my biggest concern is that if a lot of functionality is in the format, that is problematic
- 21:15:33 [cabanier]
- ... this moves interop to a spec
- 21:15:49 [cabanier]
- ... implementations can put things together quickly
- 21:16:06 [Yih]
- Yih has joined #immersive-web
- 21:16:07 [cabanier]
- ... safari uses their usdz component to implement their model
- 21:16:24 [cabanier]
- ... and now everyone else has to use the same component
- 21:16:40 [cabanier]
- ... there are massive layers of features that need to be implemented
- 21:16:53 [Dylan_XR_Access]
- q?
- 21:17:01 [cabanier]
- ... if hair rendering was added, the model spec didn't change but the implementation did
- 21:17:18 [cabanier]
- ... people don't like to implement multiple formats
- 21:17:34 [cabanier]
- ... focusing on what demos well is indeed the wrong thing
- 21:17:47 [cabanier]
- ... do people remember activex?
- 21:17:57 [cabanier]
- ... people could build fallbacks but didn't
- 21:18:13 [cabanier]
- ... this kept internet explorer alive
- 21:18:34 [cabanier]
- ... baking this much complexity into an engine without a web spec is hard
- 21:18:41 [marcosc]
- q+
- 21:18:53 [cabanier]
- ... you don't want to expose things to user or developer code
- 21:19:03 [cabanier]
- ... the boundaries have to be part of the standard
- 21:19:16 [cabanier]
- ... I'm worried that this is going to create a massive interop fracture
- 21:19:32 [cabanier]
- ... HTML should have defined an image and video format
- 21:19:51 [cabanier]
- ... and an audio one because we still don't have good ones today :-)
- 21:19:55 [cwilso]
- ack me
- 21:20:06 [cwilso]
- ack dylan
- 21:20:13 [cabanier]
- Dylan_XR_Access: we were talking about how much we want to push it
- 21:20:25 [cabanier]
- ... from usability perspective
- 21:20:50 [dulce]
- dulce has joined #immersive-web
- 21:20:54 [cabanier]
- ... I'm wondering, are there certain things that we bake into this tag?
- 21:21:02 [cabanier]
- ... is it controlled by the user?
- 21:21:16 [cabanier]
- ... should we define the core things that are part of the tag?
- 21:21:25 [cabanier]
- ... where does it all fit into it?
- 21:21:56 [cwilso]
- ack ada
- 21:22:19 [cabanier]
- adarose: one benefit is that it won't render the same on different devices
- 21:22:52 [cabanier]
- ... if I want to show a model on a watch I don't want to use expensive shader or refractions
- 21:23:17 [cabanier]
- ... but if I show it on a high end computer, I would want all those expensive things turned on
- 21:23:32 [cabanier]
- ... if you want to be pixel perfect, webgl is the thing to do
- 21:23:42 [cabanier]
- ... different renditions is a feature and not a bug
- 21:23:46 [Marisha]
- q+
- 21:23:47 [adarose]
- q?
- 21:23:50 [cwilso]
- q+ to point out but if it looks better on Apple watch than on Samsung watch...
- 21:23:55 [bkardell_]
- bkardell_ has joined #immersive-web
- 21:24:16 [cabanier]
- Leonard: many of the engines already have an idea of device capabilities
- 21:24:39 [cabanier]
- ... the bigger issue is that they should look the same in different browsers on the same device
- 21:24:42 [bkardell_]
- present+
- 21:25:09 [cabanier]
- ... you can differentiate, but different browsers should have the same rendering engine
- 21:25:23 [cabanier]
- ... usd is not a format, it's an api
- 21:25:45 [cabanier]
- ... making a new format takes at least 2 years
- 21:26:05 [cabanier]
- marcosc: we estimate that any spec takes 5 years :-)
- 21:26:13 [cwilso]
- ack leonard
- 21:26:20 [cwilso]
- ack marcos
- 21:26:33 [cabanier]
- ... having a single format, we've not seen such a thing
- 21:26:55 [cabanier]
- ... we've seen disasters happen with formats. We've seen implementations becoming the standard
- 21:27:03 [cabanier]
- ... we generally understand what we want
- 21:27:18 [cabanier]
- ... if we do it in the w3c, we could all agree
- 21:27:32 [cabanier]
- ... we could decide today to just use usdz
- 21:27:41 [cabanier]
- ... but it's going to be challenging
- 21:27:42 [bajones]
- <narrator>: They did not all agree.
- 21:27:46 [adarose]
- q?
- 21:28:07 [cabanier]
- Leonard: the modelviewer tag can do most of what you're talking about
- 21:28:24 [cabanier]
- ... you should have demos that show what modelviewer can't do
- 21:28:57 [cabanier]
- ... show the community what the model tag can do that can't be done with other capabilities
- 21:29:51 [cabanier]
- ... there was a discussion about gltf extensions, if the model tag allows them it would break the system
- 21:30:07 [cabanier]
- marcosc: we would only do that across browser vendors
- 21:30:28 [cabanier]
- ... like anything in ecmascript: there's a standard and vendors aim for the same shipping date
- 21:30:42 [cabanier]
- Leonard: so it's extensions for browser?
- 21:30:45 [adarose]
- ack Marisha
- 21:31:06 [cabanier]
- Marisha: why can't we just decide to not have 2 supported formats?
- 21:31:19 [Leonard]
- USD is not a format. It is an API
- 21:31:53 [cabanier]
- bajones: there are platform issues. adding usdz is easy for apple but hard for others
- 21:32:10 [cabanier]
- ... usd is not a standard. it's basically a black box
- 21:32:47 [cabanier]
- ... you can put a lot of things in usd but apple will only render their own content
- 21:32:58 [Leonard]
- q+
- 21:33:06 [cabanier]
- Marisha: is there no desire to make USDZ a standard format?
- 21:33:19 [cabanier]
- bajones: there is no real standard
- 21:33:30 [Leonard]
- q+
- 21:33:32 [cabanier]
- Marisha: is there no document?
- 21:33:44 [cabanier]
- marcosc: there's a github repo and a reference renderer
- 21:33:48 [Leonard]
- q-
- 21:34:01 [cabanier]
- kdashg: this is not surmountable
- 21:34:30 [cabanier]
- ... the video codec space, many millions of users can't decode h264 video because of patents
- 21:34:59 [cabanier]
- ... it's because authors just use their de facto assets
- 21:35:22 [cabanier]
- ... people choose the easiest and then users have problems
- 21:35:40 [cabanier]
- ... we as browser vendors can't tear things apart and repackage it
- 21:35:43 [cwilso]
- +1
- 21:35:50 [adarose]
- q?
- 21:35:55 [adarose]
- ack cwilso
- 21:35:55 [Zakim]
- cwilso, you wanted to point out but if it looks better on Apple watch than on Samsung watch...
- 21:36:27 [cabanier]
- cwilso: the problem with having 2 formats, does that mean that they are both required?
- 21:36:39 [cabanier]
- ... that means that they are not web standards
- 21:36:51 [cabanier]
- ... you end up exploring what works in browser a and not browser b
- 21:37:04 [marcosc]
- marcosc has joined #immersive-web
- 21:37:12 [cabanier]
- ... and we have a responsibility to make things interoperable
- 21:37:43 [cabanier]
- ... yes, things can look different on different devices but it should be roughly the same on similar devices
- 21:37:54 [cabanier]
- adarose: let's wrap it up there
- 21:38:08 [Leonard]
- Thank you Ada. Are there any TODOs or takeaway tasks from this discussion?
- 21:38:31 [adarose]
- https://hackmd.io/@jgilbert/imm-web-unconf
- 21:39:16 [Marisha]
- scribe: Marisha
- 21:40:01 [Marisha]
- Emmanuel: we implemented keyboard integration for the user to trigger keyboard and use for input - first point of feedback was wanting to control where the keyboard appears
- 21:40:18 [atsushi]
- zakim, take up agendum 5
- 21:40:18 [Zakim]
- agendum 5 -- navigation#13 Let's have a chat about Navigation at the facetoface -- taken up [from atsushi]
- 21:40:30 [adarose]
- https://github.com/immersive-web/webxr/issues/1321
- 21:40:43 [atsushi]
- zakim, take up agendum 6
- 21:40:43 [Zakim]
- agendum 6 -- webxr#1273 Next steps for raw camera access -- taken up [from atsushi]
- 21:40:58 [bajones]
- q?
- 21:40:58 [atsushi]
- s|agendum 5 -- navigation#13 Let's have a chat about Navigation at the facetoface -- taken up [from atsushi]||
- 21:41:02 [bajones]
- q+
- 21:41:06 [Nick-Niantic]
- q+
- 21:41:07 [atsushi]
- s|agendum 6 -- webxr#1273 Next steps for raw camera access -- taken up [from atsushi]||
- 21:41:09 [adarose]
- q?
- 21:41:12 [adarose]
- ack bajones
- 21:41:13 [Marisha]
- Emmanuel: We currently provide z-position but looking for feedback about what folks think about availability for positioning:
- 21:41:43 [Marisha]
- bajones: What level of control do native apps have around this (like OpenXR or old Oculus APIs)? Or do native apps invent their own wheel here
- 21:41:50 [Marisha]
- Emmanuel: Not sure what native apps do
- 21:42:06 [atsushi]
- i/Emmanuel: we implemented/topic webxr#1321 Control over system keyboard's positioning
- 21:42:12 [atsushi]
- rrsagent, publish minutes
- 21:42:13 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html atsushi
- 21:42:19 [Marisha]
- cabanier: There's maybe an OpenXR extension for this... Emmanuel what do you do?
- 21:42:22 [Marisha]
- Emmanuel: This is brand new, not very mature
- 21:43:00 [atsushi]
- s/topic webxr#1321/topic: webxr#1321/
- 21:43:02 [Marisha]
- bajones: Maybe it's too soon to try to standardize?
- 21:43:18 [Marisha]
- cabanier: there are standards in Unity that are used as an Android-ism
- 21:43:53 [Marisha]
- cabanier: If you are in Android, you can specify where on the screen the input is, and the keyboard will try to move itself to that position
- 21:44:22 [Marisha]
- cabanier: In immersive, that goes away
- 21:44:29 [adarose]
- q?
- 21:44:49 [CharlesL]
- q+
- 21:45:37 [Marisha]
- bajones: The thing that makes sense is to specify the bounds/coords of the input rect. But maybe that's more complicated than what I'm thinking (what if someone specifies 3mi away) - unless you want devs to specify exact coordinates where they want keyboard to appear
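One way to address bajones' "what if someone specifies 3mi away" worry is for the runtime to sanitize any page-supplied placement hint before honoring it. A minimal sketch — the placement-hint API shape is entirely hypothetical; nothing like it is specified today, and only the clamping logic is meaningful:

```javascript
// Hypothetical runtime-side sanity check for a page-supplied system-keyboard
// placement hint. Positions are in metres, in a viewer-relative space.

// Clamp a requested keyboard position so it stays within a comfortable
// radius of the viewer, regardless of what the page asked for.
function clampKeyboardPosition(requested, viewer, maxDistance = 1.5) {
  const dx = requested.x - viewer.x;
  const dy = requested.y - viewer.y;
  const dz = requested.z - viewer.z;
  const dist = Math.hypot(dx, dy, dz);
  if (dist <= maxDistance || dist === 0) return { ...requested };
  const s = maxDistance / dist;
  return {
    x: viewer.x + dx * s,
    y: viewer.y + dy * s,
    z: viewer.z + dz * s,
  };
}

// A hint ~3 miles (4828 m) away collapses to 1.5 m in front of the viewer.
const viewer = { x: 0, y: 1.6, z: 0 };
const clamped = clampKeyboardPosition({ x: 0, y: 1.6, z: -4828 }, viewer);
```

This keeps the developer-facing API simple (just ask for a position) while leaving the runtime free to enforce comfort and accessibility constraints.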
- 21:45:58 [adarose]
- ack Nick-Niantic
- 21:45:59 [Marisha]
- Emmanuel: Right now the keyboard renders at the same depth as the cylinder
- 21:47:00 [Marisha]
- Nick-Niantic: When I think about the placement of content, there are the nuances of the current scene + the current viewer. Asking developers to navigate that complexity can be challenging. Could offload some of the complexity onto the user and let them determine a better spot
- 21:48:00 [Marisha]
- Nick-Niantic: We had a project with a dom-tablet where you can pull things in and out of the 3D space - the way this was moved around was by grabbing and moving it in a radius around yourself, and follows you when you walk around. Making it easy for the user to move the keyboard is best.
- 21:48:02 [adarose]
- ack CharlesL q?
- 21:48:06 [adarose]
- ack CharlesL
- 21:48:33 [Marisha]
- CharlesL: From an accessibility point of view, a person with low vision may need keyboard to be in a very specific spot
- 21:48:37 [adarose]
- q?
- 21:49:11 [Marisha]
- Dylan_XR_Access: We've heard from folks that to read things they usually bring things close to their face, but they often can't do that in an XR environment. Ideally we'd have system settings for this
- 21:49:34 [Marisha]
- cabanier: This is the system keyboard so if it adds accessibility settings, you'd get that part for free, like high contrast or text size
- 21:49:55 [Marisha]
- cabanier: It could be a pain for the user to be able to move the keyboard
- 21:50:18 [Nick-Niantic]
- q+
- 21:50:37 [Marisha]
- bajones: The two things are not mutually exclusive - can make the keyboard not occlude the input, but also make it moveable for users.
- 21:51:00 [Marisha]
- bajones: The worst case scenario is two different platforms that have two different conventions for where the keyboard is placed, giving inconsistent results to users
- 21:51:34 [adarose]
- q?
- 21:51:40 [Marisha]
- bajones: You don't want to rely on user's control of the keyboard but should enable it
- 21:52:00 [adarose]
- ack Nick-Niantic
- 21:52:02 [Marisha]
- Emmanuel: Team is still working on "Follow" functionality vs fixed w/ a toggle. This gets to the question of how we surface this to webxr devs
- 21:52:59 [Marisha]
- Nick-Niantic: *Showing demo on screen* This is the dom-tablet, it's not a burden, it's easy for users to use and place wherever they want
- 21:53:51 [Marisha]
- Nick-Niantic: If they get too far away from it it will also follow the user. An idiom like this is useful. Also we'd love this (dom content) as a native WebXR feature.
- 21:54:09 [Dylan_XR_Access]
- q+
- 21:54:14 [adarose]
- q?
- 21:54:18 [adarose]
- ack Dylan_XR_Access
- 21:54:45 [adarose]
- q+
- 21:55:00 [adarose]
- ack adarose
- 21:55:04 [Marisha]
- Dylan_XR_Access: Something that comes to mind when it comes to interaction - we don't want just pointing, should have equivalents to tab, enter, gaze controls, etc, because there will be folks that have trouble pointing and need things like arrow keys
- 21:55:52 [Marisha]
- adarose: One heavily-requested feature has been DOM-overlay for VR, or some kind of DOM layer for XR that's interactive. But as much as it's desired, it's very difficult to implement. It's been discussed for years without a lot of movement.
- 21:56:01 [Marisha]
- Nick-Niantic: We can offer our existing implementation as a reference.
- 21:56:16 [Marisha]
- Dylan_XR_Access: What part of this is being handled by the WebXR vs the system?
- 21:56:46 [Marisha]
- adarose: There's a rectangle that the user is carrying around that has the HTML content on it, with all the accessibility features you'd expect for HTML.
- 21:57:10 [Marisha]
- adaores: Currently all we have is DOM Overlay which is only applicable to handheld mixed reality experiences. It's difficult to establish what it should do in virtual reality
- 21:57:37 [Marisha]
- s/adaores/adarose/
- 21:57:56 [Emmanuel]
- q+
- 21:58:10 [Marisha]
- bajones: There's a demo for this and how you can take content and display it, but no one has described how this should work for virtual reality specifically
- 21:58:18 [adarose]
- ack Emmanuel
- 21:59:09 [Marisha]
- Emmanuel: These are great discussions, touching on some of the accessibility points - one feature of the system keyboard is a strip at the top that shows the content of the input being modified.
- 21:59:19 [adarose]
- q?
- 22:00:05 [Manishearth_]
- Manishearth_ has joined #immersive-web
- 22:00:17 [Manishearth_]
- q+
- 22:00:34 [Marisha]
- Rigel: when thinking about text input in VR: hand tracking is becoming more popular, the raycaster handheld approach means that the keyboard is beyond the reach of your own hands. But with hand tracking you want something more touch type, and have to think about the distance from the user, have to think about input methods
- 22:01:08 [Marisha]
- bajones: If the system has been designed such that the keyboard can be accessed via touch typing, it should bring up a hands-friendly version of the keyboard. The system should know what input method is being used.
- 22:01:23 [Manishearth_]
- q-
- 22:01:43 [atsushi]
- topic: proposals#83 Proposal for panel distance API in VR
- 22:01:52 [Marisha]
- TOPIC: Proposal for panel distance API
- 22:01:54 [adarose]
- Proposal for panel distance API in VR
- 22:02:05 [Marisha]
- Bryce: I'm an engineer from the Browser team at Meta
- 22:02:20 [adarose]
- https://github.com/immersive-web/proposals/issues/83
- 22:02:36 [Marisha]
- Bryce: This is outside the context of WebXR, it is about exposing the distance of a virtual 2D panel to the user
- 22:03:02 [Marisha]
- Bryce: This could be like for a weather application - what is displayed depends on how close the user is to the panel.
- 22:03:27 [Marisha]
- Bryce: Another example is a picture window effect, as you get closer to it you can see more and more what is "outside" the picture window
- 22:03:30 [mkeblx]
- q+
- 22:03:38 [adarose]
- q+
- 22:04:04 [adarose]
- ack mkeblx
- 22:04:04 [Marisha]
- Bryce: Do those examples make sense? At a high level - is there any precedent around this? Has it already been attempted? Just want to open up to the group for questions and considerations.
- 22:04:11 [Dylan_XR_Access]
- q+
- 22:05:12 [Marisha]
- mkeblx: You alluded to the idea of a picture changing size - previous ideas in this group are things like a magic picture app - you don't have just the distance but also orientation and user's position relative to the screen. Do people still want that even though we dropped it for a long time? And would your idea be a subset of our previous idea
- 22:05:16 [Jared]
- q+
- 22:05:25 [Nick-Niantic]
- q+
- 22:05:31 [adarose]
- ack adarose
- 22:05:39 [cabanier]
- q+
- 22:05:40 [Marisha]
- mkeblx: Another similar feature is the magic leap browser which exposed not just position but orientation via Javascript
- 22:06:32 [adarose]
- q?
- 22:06:33 [Marisha]
- adarose: One concern is that it could potentially be a privacy vulnerability. Maybe users don't want you to know if they're sitting or standing, where their head is in relation to the panel. I don't like the idea of giving user position to web pages.
- 22:06:36 [adarose]
- ack Dylan_XR_Access
- 22:06:51 [Brandel]
- Brandel has joined #immersive-web
- 22:06:57 [bajones_]
- bajones_ has joined #Immersive-web
- 22:07:00 [bajones_]
- q+
- 22:07:10 [Brandel]
- q+
- 22:07:19 [Marisha]
- Dylan_XR_Access: For some folks, being able to get close is necessary to see something. If it suddenly changes, that could be frustrating to users. But if that's something the user could control or have a setting for, that could be a feature.
- 22:07:22 [adarose]
- q?
- 22:07:25 [adarose]
- ack Jared
- 22:07:35 [Marisha]
- Jared: What is a panel app?
- 22:07:59 [Marisha]
- Bryce: Panel app in this context is just a 2D Browser outside of the context of WebXR. If you're in VR viewing a standard 2D browser window
- 22:08:01 [adarose]
- q?
- 22:08:03 [bkardell_]
- q+
- 22:08:03 [bialpio_]
- q+
- 22:08:04 [adarose]
- ack Nick-Niantic
- 22:08:48 [Marisha]
- Nick-Niantic: My understanding from previously is that there were non-immersive modes to the WebXR spec that were meant to handle cases like this. If you wanted to have DOM content but also have a magic window
- 22:09:50 [Marisha]
- bajones: Clarification - there is a non-immersive (inline) mode, but it does no tracking of any sort. The thing it gives you is the ability to use the same sort of render loop with immersive and non-immersive content. So you can use the XRSession's requestAnimationFrame. Nobody uses it much, I wish I hadn't spent so much time on it.
- 22:10:33 [Marisha]
- bajones: We talked a lot about the magic window mode, that involves tracking users position. there were privacy considerations, implementation questions. We could revisit that, but that doesn't sound like what's being discussed here.
- 22:11:22 [Marisha]
- Bryce: Yeah, in its simplest form it's just "how far away is the user in this virtual space". Following discussion about XRSession, I was thinking it could be for devs who don't know anything about WebXR. It could be like the geolocation API that's just surfaced in the navigator.
- 22:11:25 [adarose]
- q?
- 22:11:29 [adarose]
- ack cabanier
- 22:11:59 [bkardell_]
- That is what I was going to say actually
- 22:12:20 [Marisha]
- cabanier: So we don't really need to know how far away the user is to the centimeter. We just need to know are they close, sorta far away, or really far away. It could be like a CSS media-query to resolve some of the privacy considerations. We don't need to know exactly how far away they are in relation to the window.
- 22:12:42 [Marisha]
- cabanier: It could be something on navigator but could also be some CSS-y thing that automatically reformats itself
- 22:12:50 [adarose]
- ack bajones_
- 22:13:09 [bialpio_]
- q-
- 22:13:22 [Marisha]
- bajones: I like the idea of a CSS media query in this context. It seems like the right abstraction. This isn't necessarily about how far away the panel is, more about the angular resolution of the panel (includes both how far away, how wide/tall, etc)
- 22:14:08 [Marisha]
- bajones: There is still some fingerprinting in the media query but you're not looking at what the user's head is doing. It seems like it slots in nicely with Zoom level, what frame, etc. You could maybe call this perceptual width or something - how big the user perceives the page to be, and have CSS adjust to that.
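The binning idea from cabanier and bajones can be made concrete: expose the panel's apparent angular size in a few coarse buckets rather than an exact distance. A sketch — the bucket names, thresholds, and any `perceptual-width`-style media query are hypothetical; only the geometry is standard:

```javascript
// Sketch of the coarse-binning idea for a panel-distance media query:
// expose apparent angular width in buckets to limit fingerprinting.

// Angular width (degrees) of a panel of physical width `w` metres
// viewed head-on from `d` metres away.
function angularWidthDeg(w, d) {
  return 2 * Math.atan(w / (2 * d)) * (180 / Math.PI);
}

// Reduce the continuous value to coarse buckets, as a media query might.
function perceptualBucket(w, d) {
  const deg = angularWidthDeg(w, d);
  if (deg > 60) return "near";
  if (deg > 20) return "medium";
  return "far";
}

// e.g. a 1 m wide panel:
perceptualBucket(1, 0.5); // fills much of the view -> "near"
perceptualBucket(1, 5);   // small in the view -> "far"
```

Because the page only ever sees the bucket, head movement within a bucket is invisible to it, which is the privacy property adarose asked for.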
- 22:14:13 [adarose]
- q?
- 22:14:19 [adarose]
- ack Brandel
- 22:15:06 [Marisha]
- Brandel: What might be confusing for folks: Head tracking and how far away the element is are essentially the same question - just depends on spatial/temporal resolution. CSS media query is one approach, other approach is to have exact xyz coordinates. While they are technically the same thing, they can be used to serve very different purposes.
- 22:15:32 [Marisha]
- Brandel: There was a discussion about having a universal virtual unit like a millimeter
- 22:15:49 [Marisha]
- Brandel: If there were reasonable limits on update frequency and such, it could be very useful
- 22:16:14 [adarose]
- ack bkardell_
- 22:16:21 [Marisha]
- adarose: You could even use the existing media queries, if you bin the information about panel size
- 22:16:30 [Brandel]
- https://www.youtube.com/watch?v=ES9jArHRFHQ is McKenzie's presentation
- 22:17:10 [Marisha]
- Brian: My immediate reaction is it sounds like it would be ideal in CSS like media-query list, listeners, CSS already has lots of things related to things like perceptual distance and calculations, since it is already used in televisions. This doesn't seem like it would be particularly hard, it fits well.
- 22:17:18 [cabanier]
- q+
- 22:17:23 [adarose]
- q?
- 22:17:23 [Marisha]
- adarose: This might be more for the CSS working group instead of here
- 22:17:28 [adarose]
- ack cabanier
- 22:17:37 [Marisha]
- cabanier: Bryce first brought it to web apps, who told him to bring it here
- 22:17:44 [bkardell_]
- wow I must have missed that
- 22:17:54 [mkeblx]
- q+
- 22:17:58 [Marisha]
- cabanier: But we're more looking to get people's opinions on it. Sounds like people don't have too many problems with it as a CSS media query with binning.
- 22:18:11 [bkardell_]
- bryce can you share the css issue?
- 22:18:21 [adarose]
- ack mkeblx
- 22:19:09 [Marisha]
- mkeblx: You mentioned the weather thing. But the Meta Quest browser stays at the same distance. What implementation are you imagining?
- 22:19:19 [Marisha]
- cabanier: Trying to do more mixed reality stuff, where people are expected to walk around more
- 22:19:50 [bkardell_]
- cabanier: do you know what the css issue # is? I don't see a handle here that is bryce
- 22:19:51 [Jared]
- q+
- 22:19:52 [Marisha]
- Bryce: With mixed reality over time, you might have more scenarios where a panel is attached to physical space
- 22:20:00 [adarose]
- ack Jared
- 22:20:28 [Marisha]
- Jared: If you utilized existing media queries via virtual screen size, there might be some good tools to play around with
- 22:20:31 [cabanier]
- @ bkardell_ : we didn't file a CSS issue yet. Bryce went to webapps first because he wanted to extend navigator
- 22:21:03 [Marisha]
- Bryce: I wanted to ask about fingerprinting risk - if there were a permission dialog, does this group handle that sort of thing?
- 22:21:16 [Marisha]
- adarose: Usually permission prompts are not determined by this group, left up to the Browser
- 22:21:49 [Marisha]
- bajones: Usually specifications don't determine what is shown or said regarding permissions. We can sometimes say "user consent is needed for this feature" and mention permission prompts as an example but we don't dictate that that needs to be how consent needs to be given.
- 22:22:13 [adarose]
- q?
- 22:22:42 [Marisha]
- adarose: No one on queue, should we wrap up for coffee break?
- 22:22:45 [Marisha]
- Bryce: sounds good to me
- 22:23:04 [Marisha]
- rrsagent, generate minutes
- 22:23:05 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Marisha
- 23:04:10 [CharlesL]
- CharlesL has joined #immersive-web
- 23:05:21 [adarose]
- present+
- 23:05:29 [Dylan_XR_Access]
- Dylan_XR_Access has joined #immersive-web
- 23:05:34 [bajones]
- bajones has joined #Immersive-Web
- 23:05:35 [CharlesL]
- present+
- 23:05:38 [Dylan_XR_Access]
- present+
- 23:05:40 [Brandel]
- Brandel has joined #immersive-web
- 23:05:44 [rigel]
- rigel has joined #immersive-web
- 23:05:46 [mats_lundgren]
- mats_lundgren has joined #immersive-web
- 23:05:53 [brycethomas]
- brycethomas has joined #immersive-web
- 23:05:53 [cabanier]
- present+
- 23:06:12 [Dat_Chu]
- Dat_Chu has joined #immersive-web
- 23:06:12 [mats_lundgren]
- present+
- 23:06:13 [rigel]
- present+
- 23:06:18 [Dat_Chu]
- present+
- 23:06:19 [Jared]
- present+
- 23:06:53 [bialpio_]
- present+
- 23:07:28 [Marisha]
- Marisha has joined #immersive-web
- 23:07:29 [atsushi]
- atsushi has joined #immersive-web
- 23:08:13 [Marisha]
- present+
- 23:08:16 [adarose]
- zakim, choose a victim
- 23:08:16 [Zakim]
- Not knowing who is chairing or who scribed recently, I propose lgombos
- 23:08:18 [Nick-Niantic_]
- Nick-Niantic_ has joined #immersive-web
- 23:08:24 [kdashg]
- kdashg has joined #immersive-web
- 23:08:29 [kdashg]
- present+
- 23:08:33 [Manishearth_]
- scribe: lgombos
- 23:08:45 [adarose]
- https://github.com/immersive-web/webxr/issues/1273
- 23:08:56 [lgombos]
- lgombos has joined #immersive-web
- 23:09:05 [bialpio_]
- q+
- 23:09:15 [lgombos]
- Nick-Niantic_: next step for raw camera access
- 23:09:37 [lgombos]
- ...goal to consent and developer use cases
- 23:10:17 [lgombos]
- ... reviewed the Google Chrome implementation and evaluated it for headsets. challenge with the render loop
- 23:11:09 [lgombos]
- ...unlock new use cases.. reflections, adapting scale of screen, media sharing, image target, qr code
- 23:11:11 [vicki]
- vicki has joined #immersive-web
- 23:11:31 [dulce]
- dulce has joined #immersive-web
- 23:11:57 [lgombos]
- ...skyeffects demo
- 23:13:12 [lgombos]
- ...running nn in background, to build sky map, create cube map
- 23:13:41 [bajones]
- "It's hard to get close to the sky" [citation needed]
- 23:14:08 [lgombos]
- ... In general Niantic cares about use cases outside (sky, ground, foliage)
- 23:15:23 [lgombos]
- ... marker based ar demo
- 23:15:48 [lgombos]
- ... camera texture processing
- 23:16:04 [cabanier]
- q+
- 23:16:09 [lgombos]
- ... part of why demos are not polished further as Chrome API is still experimental
- 23:16:21 [Yih]
- Yih has joined #immersive-web
- 23:16:21 [adarose]
- q?
- 23:16:26 [adarose]
- ack bialpio_
- 23:16:53 [Yih]
- q+
- 23:16:57 [lgombos]
- bialpio_: raw camera access (for smartphone) launched (no longer experimental) in late 2022
- 23:16:59 [Yonet]
- Nick, can you share the slides so we can add them to the meeting notes. Thanks!
- 23:17:25 [alcooper]
- Enabled by default for Chrome since M107
- 23:17:45 [lgombos]
- ... only smartphone specific/capable api's are released to stable
- 23:18:14 [bajones]
- q+ to ask what API (OpenXR presumably) Meta uses to handle passthrough.
- 23:18:34 [bajones]
- ack me
- 23:18:34 [Zakim]
- bajones, you wanted to ask what API (OpenXR presumably) Meta uses to handle passthrough.
- 23:19:15 [lgombos]
- ... other Chromium based browser running on headset (Hololens, Quest) do not support raw camera access
- 23:19:41 [Jared]
- q+
- 23:19:44 [lgombos]
- ... headsets typically do not expose camera for web
- 23:19:45 [adarose]
- q?
- 23:20:12 [Brandel]
- Brandel has joined #immersive-web
- 23:20:18 [adarose]
- ack. cabanier
- 23:20:18 [lgombos]
- ... the API Nick-Niantic_ proposed is a simple adapter
- 23:20:41 [lgombos]
- cabanier: On Quest Pro nobody gets access to camera feed
- 23:21:16 [lgombos]
- Nick-Niantic_: lots of advancements in SIMD execution of neural networks
- 23:21:34 [lgombos]
- ... 200 fps on handhelds with SIMD
- 23:21:35 [bkardell_]
- sorry it's a little noisy here at the moment - is the question whether any device would give wolvic those permissions?
- 23:22:11 [Jared]
- We could help with that... Also provide it a virtual means to do it
- 23:22:26 [lgombos]
- cabanier: unlikely to get realtime access to the camera even later..
- 23:22:34 [Brandel]
- q?
- 23:22:51 [CharlesL]
- q+
- 23:23:05 [lgombos]
- ... Nick-Niantic_ is exploring whether other headset providers will expose raw camera access
- 23:23:23 [adarose]
- ack cabanier
- 23:23:40 [adarose]
- ack Yih
- 23:24:02 [lgombos]
- Yih: question regarding camera feed processing
- 23:24:29 [bialpio_]
- q+
- 23:24:40 [lgombos]
- Nick-Niantic_: slide 8.. only meaning to show middle demo "location"
- 23:25:03 [lgombos]
- ... point is 6DoF tracking on the phone
- 23:25:07 [adarose]
- q?
- 23:25:11 [adarose]
- ack Jared
- 23:25:49 [lgombos]
- Jared: interesting to help with.. actual and virtual devices.. what is the input to the algorithm? just color?
- 23:25:55 [lgombos]
- Nick-Niantic_: needs rgb texture
- 23:26:14 [lgombos]
- ... FoV of camera
- 23:26:38 [lgombos]
- Nick-Niantic_: will share the presentation
- 23:26:40 [Nick-Niantic_]
- https://github.com/immersive-web/raw-camera-access/issues/11
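For context, a sketch of how a page might consume the raw camera feed per Chrome's experimental shapes (the `camera-access` feature descriptor, `view.camera`, and `XRWebGLBinding.getCameraImage`) — these are experimental and may change, and the throttle helper is invented here to reflect Nick-Niantic_'s point that processing need not run every frame:

```javascript
// Per-frame camera texture access plus simple throttling, assuming
// Chrome's experimental raw camera access API. Browser-only except
// for the pure throttle helper.

// Run expensive camera processing only on every `n`th invocation.
function makeThrottle(n) {
  let count = 0;
  return () => count++ % n === 0;
}

// Browser-only: not executed in this snippet.
async function startSession(gl) {
  const session = await navigator.xr.requestSession("immersive-ar", {
    requiredFeatures: ["camera-access"],
  });
  const binding = new XRWebGLBinding(session, gl);
  const refSpace = await session.requestReferenceSpace("local");
  const shouldProcess = makeThrottle(4); // e.g. every 4th frame

  session.requestAnimationFrame(function onFrame(time, frame) {
    const pose = frame.getViewerPose(refSpace);
    for (const view of pose ? pose.views : []) {
      if (view.camera && shouldProcess()) {
        // WebGLTexture holding the current camera image for this view.
        const tex = binding.getCameraImage(view.camera);
        // ... feed `tex` to sky segmentation, marker detection, etc.
      }
    }
    session.requestAnimationFrame(onFrame);
  });
}
```

Throttling matters because, as discussed above, neural-network workloads like sky segmentation don't need every frame, and skipping frames keeps the render loop responsive.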
- 23:26:57 [lgombos]
- Nick-Niantic_: these are the needs
- 23:27:27 [adarose]
- q?
- 23:27:31 [adarose]
- ack CharlesL
- 23:27:36 [cabanier]
- q+
- 23:27:37 [lgombos]
- Jared: I can imagine an implementation in a virtual environment.. that later can work on the headset
- 23:27:55 [lgombos]
- CharlesL: I was wondering .. to add a secondary camera
- 23:28:01 [adarose]
- q?
- 23:28:04 [adarose]
- ack bialpio_
- 23:28:08 [lgombos]
- cabanier: can not comment on future devices
- 23:28:30 [lgombos]
- bialpio_: we have been exploring marker detection.. it is in chromium repo, will link to it
- 23:28:52 [lgombos]
- ... we used opencv marker tracking module
- 23:29:15 [Brandel]
- q+
- 23:29:18 [lgombos]
- ... not easy to get a performant implementation
- 23:29:38 [bialpio_]
- https://source.chromium.org/chromium/chromium/src/+/main:third_party/webxr_test_pages/webxr-samples/proposals/camera-access-marker.html
- 23:29:41 [adarose]
- q?
- 23:29:46 [adarose]
- ack cabanier
- 23:30:17 [lgombos]
- cabanier: presenting stereo, how does that work with camera feed
- 23:30:37 [lgombos]
- bialpio_: the app could reproject
- 23:31:07 [adarose]
- q+
- 23:31:20 [lgombos]
- ... the user sees exactly what the website has information about
- 23:31:36 [lgombos]
- cabanier: for PT ?
- 23:31:53 [lgombos]
- Nick-Niantic_: for PT, unlikely to have this problem
- 23:32:54 [lgombos]
- cabanier: is it timewarped ?
- 23:33:25 [lgombos]
- bialpio_: ARCore introduces a lag .. when image tracking is on
- 23:34:29 [lgombos]
- ... can not use camera for effects.. you might get frames from "future"
- 23:35:09 [lgombos]
- cabanier: we predict what camera feed will be
- 23:36:46 [lgombos]
- cabanier: hololens you can get access to one of the camera.. not all of them
- 23:37:09 [lgombos]
- bajones: hololens requirements are different.. not the whole scene
- 23:38:25 [lgombos]
- Nick-Niantic_: timewarping has to happen at some point.. image and timeline need to be aligned
- 23:38:34 [lgombos]
- cabanier: api does not give you a snapshot
- 23:39:01 [lgombos]
- Nick-Niantic_: event based API
- 23:39:34 [adarose]
- ack Brandel
- 23:39:44 [lgombos]
- Nick-Niantic_: it does not have to run on every frame
- 23:41:04 [adarose]
- q?
- 23:41:07 [lgombos]
- Brandel: does it have to be raw stereo
- 23:41:13 [adarose]
- q-
- 23:41:16 [lgombos]
- Nick-Niantic_: does not have to be stereo
- 23:41:19 [Jared]
- q+
- 23:41:56 [adarose]
- ack Jared
- 23:41:56 [Jared]
- Could be interesting to check out some of what is trending as being exposed for certain types of wearable XR in native APIs or extensions
- 23:41:57 [Jared]
- https://registry.khronos.org/OpenXR/specs/1.0/html/xrspec.html#XR_HTC_passthrough
- 23:42:00 [lgombos]
- Nick-Niantic_: does not necessarily need to be shown to the user
- 23:42:17 [lgombos]
- Jared: similar concept, underlays
- 23:42:30 [adarose]
- q?
- 23:42:34 [lgombos]
- ... can be used as a prototype
- 23:43:40 [Jared]
- Sorry, I was mistaken. It doesn't give you access to the pixels.
- 23:45:07 [lgombos]
- adarose: new topic https://github.com/immersive-web/webxr/issues/892
- 23:45:16 [bajones]
- q+
- 23:45:43 [adarose]
- zakim, choose a vixtim
- 23:45:43 [Zakim]
- I don't understand 'choose a vixtim', adarose
- 23:45:49 [atsushi]
- topic: webxr#892 Evaluate how/if WebXR should interact with audio-only devices.
- 23:45:54 [adarose]
- zakim, choose a victim
- 23:45:54 [Zakim]
- Not knowing who is chairing or who scribed recently, I propose mjordan
- 23:46:15 [mjordan]
- mjordan has joined #immersive-web
- 23:46:23 [adarose]
- scribe nick: mjordan
- 23:46:24 [mjordan]
- present+
- 23:46:34 [cabanier]
- scribenick: mjordan
- 23:46:53 [adarose]
- ack bajones
- 23:47:19 [Manishearth_]
- RRSAgent, please draft the minutes
- 23:47:20 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html Manishearth_
- 23:47:34 [mjordan]
- bajones: likes point about AirPods as audio only devices, fairly common
- 23:48:09 [mjordan]
- ... thinks that these are consuming 5.1 audio and hands-off?
- 23:48:22 [Manishearth_]
- q+
- 23:49:10 [mjordan]
- ... doesn't seem like audio generated on the fly. Can get 3-dof pose data, but maybe not a way to get data back into the scene?
- 23:49:19 [Brandel]
- q+
- 23:49:41 [Jared]
- q+
- 23:49:45 [mjordan]
- ... are they designed to be interacted with as controllers necessarily...
- 23:50:01 [Nick-Niantic_]
- q+
- 23:50:16 [mjordan]
- ... Bose devices were trying to explicitly be XR devices.
- 23:50:18 [adarose]
- ack Manishearth_
- 23:50:46 [CharlesL]
- q+
- 23:51:13 [mjordan]
- Manishearth_: Can have devices that are audio only. Like a headset without lenses. Could be able to get poses from certain devices.
- 23:51:20 [Dylan_XR_Access]
- q+
- 23:51:54 [mjordan]
- ... main benefit is that you could get pose based control, as well as controller based control.
- 23:52:44 [mjordan]
- ... experience that you want to work everywhere might not need pose because you have other controls. but if you don't have those devices, could you initiate session that is not backed by one of those devices. Might be good to look into.
- 23:52:49 [adarose]
- ack Brandel
- 23:53:07 [mjordan]
- Brandel: headphones are looked at as display devices
- 23:53:36 [mjordan]
- ... public available apis do return at least 3dof, and sometimes acceleration.
- 23:54:05 [adarose]
- q?
- 23:54:06 [mjordan]
- ... back to Gamepad discussion earlier, you can get orientation from gamepads, and those can be considered.
- 23:54:09 [adarose]
- ack Jared
- 23:54:39 [mjordan]
- Jared: Anecdote - ghost story narrative where it's audio only with directional whispers, etc.
- 23:54:40 [adarose]
- ack Nick-Niantic_
- 23:54:59 [mjordan]
- Nick-Niantic_: Curious about expectation.
- 23:55:40 [mjordan]
- ... for audio-only headset, you are looking at a device in real space, so if you go into immersive mode, what should happen?
- 23:55:58 [mjordan]
- ... What is the expected use case, interface, user experience?
- 23:56:31 [mjordan]
- adarose: Like HTC vive attached to computer, maybe render a 3-dof view.
- 23:56:56 [cabanier]
- q+
- 23:56:58 [mjordan]
- ... on the phone. Or maybe doesn't render something, but you still get 3dof audio.
- 23:57:18 [mjordan]
- ... could run independently on device, but maybe get audio transcription on device.
- 23:57:37 [mjordan]
- Nick-Niantic_: do you need an XR session for that?
- 23:58:13 [bialpio_]
- q+
- 23:58:14 [mjordan]
- bajones: probably inclined to treat as a different type of session?
- 23:58:29 [mjordan]
- ... could get back poses every frame.
- 23:58:45 [rigel]
- q+
- 23:58:55 [mjordan]
- Nick-Niantic_: What would happen on quest, when you ask for immersive audio?
- 23:59:26 [mjordan]
- bajones: might have limitations because of the device functionality.
- 23:59:58 [mjordan]
- ... maybe normalize around devices where this is the norm, or expected use case?
- 00:00:42 [mjordan]
- ... if you're trying to get poses, you could do some interesting accessibility stuff, sensors might not be super accurate?
- 00:00:48 [adarose]
- ack CharlesL
- 00:00:57 [mjordan]
- ... Would give it its own session type.
- 00:01:06 [Brandel_]
- Brandel_ has joined #immersive-web
- 00:01:53 [mjordan]
- CharlesL: There is a link on how OpenXR should interact with audio only device, but not a lot of info there. Blind users do turn off their screens to this seems reasonable.
- 00:02:01 [adarose]
- q?
- 00:02:04 [adarose]
- ack Dylan_XR_Access
- 00:02:37 [mjordan]
- Dylan_XR_Access: Being able to support things like spatial sound is necessary for a lot of experiences.
- 00:02:43 [adarose]
- q?
- 00:02:43 [Jared]
- q+
- 00:02:49 [mjordan]
- ... should support those use cases.
- 00:03:13 [adarose]
- ack cabanier
- 00:03:24 [mjordan]
- adarose: shouldn't do a different session type, would say something about the user not wanting to view video.
- 00:03:24 [Jared]
- q-
- 00:04:15 [Jared]
- q+
- 00:04:22 [mjordan]
- ... can get immersive sounds while not having an immersive session
- 00:04:41 [adarose]
- ack cabanier
- 00:06:07 [mjordan]
- cabanier: can't find 5.1 support in browsers? Certain devices, or special video formats may have that. What do we need to do to get 5.1 support in browsers?
- 00:06:18 [adarose]
- ack bialpio_
- 00:06:40 [mjordan]
- ... maybe manually decoding the audio streams?
- 00:06:58 [adarose]
- ack bialpio_
- 00:07:35 [mjordan]
- Chris Wilson: Can be supported in web audio, but need to use the 3d panner to do positioning; there's nothing that does 3d panning inside of 5.1 audio.
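[Editor's note: for reference, Web Audio's PannerNode does handle per-source 3d positioning (though, as noted above, not inside a 5.1 stream); a minimal sketch, with the spec's inverse-distance gain formula pulled out as a standalone helper so the attenuation is easy to inspect:]

```javascript
// Gain predicted by Web Audio's "inverse" distance model
// (see the PannerNode distanceModel attribute).
function inverseDistanceGain(distance, refDistance = 1, rolloffFactor = 1) {
  const d = Math.max(distance, refDistance); // clamped below refDistance
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}

// Position one mono source in 3d space around the listener.
// Browser-only; PannerNode is referenced lazily so the helper above
// stays usable outside a browser.
function spatializeSource(audioCtx, sourceNode, [x, y, z]) {
  const panner = new PannerNode(audioCtx, {
    panningModel: "HRTF",   // head-related transfer function panning
    distanceModel: "inverse",
    refDistance: 1,
    rolloffFactor: 1,
    positionX: x,
    positionY: y,
    positionZ: z,
  });
  sourceNode.connect(panner).connect(audioCtx.destination);
  return panner;
}
```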
- 00:07:41 [adarose]
- ack bialpio_
- 00:08:08 [mjordan]
- bialpio_: I know we don't say how many views you get, but can we say we get only one?
- 00:08:51 [mjordan]
- bajones: you only get 1 or 2, unless you explicitly ask for them.
- 00:09:17 [mjordan]
- ... even if allowed, wouldn't want scenarios where you get 0.
- 00:10:12 [mjordan]
- ... don't want to expose the fact that users are using accessibility settings. So you could advertise that the content is maybe audio-only, and put the acceptance choice back on the user.
- 00:10:38 [mjordan]
- ... try and make the page as unaware of what the user chooses as possible.
- 00:11:18 [mjordan]
- bialpio_: Be careful how you classify the session if there is a special session for this, so that it doesn't give that away.
- 00:12:01 [mjordan]
- adarose: should not be able to secretly overlay other audio over user's expected streams
- 00:12:24 [mjordan]
- bialpio_: had to refactor part of the spec for this.
- 00:13:22 [mjordan]
- ... do we pause other audio if they're already running?
- 00:13:32 [Jared]
- q-
- 00:13:42 [mjordan]
- ... sometimes background apps can play audio and sometimes not
- 00:14:11 [mjordan]
- ... sometimes confusing around ducking audio from other sources.
- 00:14:47 [mjordan]
- ??: we say exclusive audio, but maybe not exclusive-exclusive. Sometimes the OS can interrupt, etc.
- 00:15:14 [mjordan]
- cabanier: chrome will sometimes keep running when display is off
- 00:15:26 [mjordan]
- ... audio session might be like that.
- 00:15:49 [mjordan]
- ??: exclusive used to be the term, but now it's immersive
- 00:16:14 [adarose]
- q?
- 00:16:33 [mjordan]
- adarose: if a media thing wanted to differentiate, there would be a difference between directional audio and directional audio where moving your head did something.
- 00:17:27 [mjordan]
- rigel: walking down a street where you can have audio-only immersion would be cool.
- 00:17:56 [adarose]
- q?
- 00:18:01 [adarose]
- ack rigel
- 00:18:12 [mjordan]
- ... different elements in a scene have different emitters, but currently tied to phone position. Would be neat to get head pose instead of having to move the phone around.
- 00:18:24 [mjordan]
- ... today need to move phone around.
- 00:18:26 [adarose]
- q?
- 00:18:53 [Brandel_]
- q+
- 00:19:01 [adarose]
- ack Brandel_
- 00:19:01 [mjordan]
- ??: on the issue for this topic, Jared had a link to someone getting motion info from the native side
- 00:19:04 [Jared]
- q+
- 00:19:19 [mjordan]
- brandel_: Can shake or nod and get that input from the headphones.
- 00:19:21 [adarose]
- ack Jared
- 00:19:50 [adarose]
- q?
- 00:19:53 [mjordan]
- Jared: Using tools like Unity, you can use things like colliders, which are helpful for making immersive audio experiences.
- 00:20:13 [mjordan]
- adarose: This seemed like a fun thing to end the day with, and this was a lovely discussion.
- 00:20:58 [atsushi]
- rrsagent, publish minutes
- 00:20:59 [RRSAgent]
- I have made the request to generate https://www.w3.org/2023/04/24-immersive-web-minutes.html atsushi
- 00:21:07 [atsushi]
- rrsagent, make log public
- 00:21:47 [atsushi]
- rrsagent, bye
- 00:21:47 [RRSAgent]
- I see no action items