13:46:14 RRSAgent has joined #immersive-web 13:46:18 logging to https://www.w3.org/2023/04/25-immersive-web-irc 13:46:28 rrsagent, draft minutes 13:46:29 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html CharlesL 13:57:59 rrsagent, make logs public 14:51:15 cabanier has joined #immersive-web 16:52:05 atsushi has joined #immersive-web 16:58:18 CharlesL has joined #immersive-web 17:00:23 zakim, clear agenda 17:00:23 agenda cleared 17:00:28 rrsagent, make log public 17:00:38 rrsagent, publish minutes 17:00:39 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 17:00:53 meeting: Immersive-web WG/CG face-to-face 2023/04 Day 2 17:00:56 chair: Ada 17:01:07 zakim, who is here 17:01:07 atsushi, you need to end that query with '?' 17:01:11 zakim, who is here? 17:01:11 Present: Leonard, cabanier, CharlesL, bialpio, ada, etienne, yonet, atsushi, lajava, bkardell_, Manishearth_, Dylan_XR_Access, marcosc, lgombos, bajones, Nick-Niantic, Marisha, 17:01:15 ... Jared, kdashg, mkeblx, rigel, mjordan, felix_Meta_, vicki, Brandel, adarose, Yih, cwilso, jfernandez, Laszlo_Gombos, mats_lundgren, Dat_Chu, bialpio_ 17:01:15 On IRC I see CharlesL, atsushi, cabanier, RRSAgent, brycethomas, bkardell_, Leonard, dino7, jfernandez, general_j, sharonmary6, vianka, OverTime, sentinel1975, rzr, maxass99, 17:01:18 ... LawrenceKincheloe, Matthew, NicolaiIvanov, helixhexagon, fernansd, [old]freshgumbubbles, etropea73101, dietrich, Chrysippus, SergeyRubanov, bemfmmhhj, babage, KevinBush, Zakim, 17:01:18 ... Manishearth, mounir, ada, sangwhan, iank_, scheib, hyojin, cwilso 17:01:31 zakim, bye 17:01:31 leaving. As of this point the attendees have been Leonard, cabanier, CharlesL, bialpio, ada, etienne, yonet, atsushi, lajava, bkardell_, Manishearth_, Dylan_XR_Access, marcosc, 17:01:31 Zakim has left #immersive-web 17:01:34 ... 
lgombos, bajones, Nick-Niantic, Marisha, Jared, kdashg, mkeblx, rigel, mjordan, felix_Meta_, vicki, Brandel, adarose, Yih, cwilso, jfernandez, Laszlo_Gombos, mats_lundgren, 17:01:34 ... Dat_Chu, bialpio_ 17:01:44 Zakim has joined #immersive-web 17:01:44 zakim, who is here? 17:01:44 Present: (no one) 17:01:45 On IRC I see CharlesL, atsushi, cabanier, RRSAgent, brycethomas, bkardell_, Leonard, dino7, jfernandez, general_j, sharonmary6, vianka, OverTime, sentinel1975, rzr, maxass99, 17:01:45 ... LawrenceKincheloe, Matthew, NicolaiIvanov, helixhexagon, fernansd, [old]freshgumbubbles, etropea73101, dietrich, Chrysippus, SergeyRubanov, bemfmmhhj, babage, KevinBush, 17:01:45 ... Manishearth, mounir, ada, sangwhan, iank_, scheib, hyojin, cwilso 17:01:53 rrsagent, publish minutes 17:01:55 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 17:02:02 agenda: https://github.com/immersive-web/administrivia/blob/main/F2F-April-2023/schedule.md 17:02:15 previous meeting: https://www.w3.org/2023/04/24-immersive-web-minutes.html 17:02:20 day 1 minutes -> https://www.w3.org/2023/04/25-immersive-web-minutes.html 17:02:23 rrsagent, publish minutes 17:02:24 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 17:02:39 present+ 17:03:41 bajones has joined #Immersive-Web 17:07:17 yonet has joined #immersive-web 17:07:18 Dat_Chu has joined #immersive-web 17:07:24 present+ 17:07:28 present+ 17:07:30 Present+ 17:07:32 Marisha has joined #immersive-web 17:07:34 present+ 17:07:36 etienne has joined #immersive-web 17:07:44 marcosc has joined #immersive-web 17:07:49 s|2023/04/25-immersive|2023/04/24-immersive| 17:07:51 rrsagent, publish minutes 17:07:53 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 17:07:59 present+ 17:07:59 adarose has joined #immersive-web 17:08:00 zakim, who is here? 
17:08:00 Present: cabanier, CharlesL, Dat_Chu, bajones, Marisha, etienne 17:08:01 present+ 17:08:01 Brandel has joined #immersive-web 17:08:02 On IRC I see adarose, marcosc, etienne, Marisha, Dat_Chu, yonet, bajones, Zakim, CharlesL, atsushi, cabanier, RRSAgent, brycethomas, bkardell_, Leonard, dino7, jfernandez, 17:08:02 ... general_j, sharonmary6, vianka, OverTime, sentinel1975, rzr, maxass99, LawrenceKincheloe, Matthew, NicolaiIvanov, helixhexagon, fernansd, [old]freshgumbubbles, etropea73101, 17:08:06 ... dietrich, Chrysippus, SergeyRubanov, bemfmmhhj, babage, KevinBush, Manishearth, mounir, ada, sangwhan, iank_, scheib, hyojin, cwilso 17:08:15 bialpio has joined #immersive-web 17:08:16 mkeblx has joined #immersive-web 17:08:23 present+ 17:08:29 scribe: Marisha 17:08:30 alcooper has joined #immersive-web 17:08:45 s|2023/04/25-immersive|2023/04/24-immersive| 17:08:48 rrsagent, publish minutes 17:08:49 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 17:08:56 present+ 17:08:57 new spec: https://immersive-web.github.io/real-world-meshing/mesh-detection.html 17:09:04 TOPIC: Add support for 3D geometry 17:09:08 agenda+ real-world-geometry#38 Add support for 3D geometry 17:09:13 cabanier: I made a spec in the new repo 17:09:18 agenda+ Proposals - Some sort of local shared space 17:09:31 agenda+ proposals#84 Immersive Capable/WebXR meta tag 17:09:40 ... We don't support meshes like the hololens does but you can create shapes that come back as 3D geometry in your room 17:09:47 agenda+ webxr#1264 Proposal: lower friction to enter WebXR sessions 17:10:02 agenda+ webxr#1317 Some WebXR Implementations pause the 2D browser page in XR, make this optional? 17:10:08 present+ 17:10:08 ... with the 2D planes API there's no practical way to return them but we want to be able to return these objects 17:10:14 ...
They are not fine meshes, they are outlines 17:10:22 agenda+ navigation#13 Let's have a chat about Navigation at the facetoface 17:10:24 agenda+ Unconference 17:10:28 ... Should there be an attribute on the mesh itself that can do basic hit testing or object detection? 17:10:34 q? 17:10:42 ... I'm not sure if the hololens supports multiple meshes or a single mesh 17:10:42 q+ 17:10:50 present+ 17:10:52 ... The API right now has separate meshes, a table, a chair, a room 17:10:53 rrsagent, publish minutes 17:10:54 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 17:11:00 q+ 17:11:00 ... But I'm not sure if the hololens segments the meshes 17:11:08 Nick-Niantic has joined #immersive-web 17:11:11 ... On magic leap it just returned the whole thing 17:11:14 ack bajones 17:11:22 bajones: A couple of observations looking through the spec 17:11:52 ... First, re Hololens: what I recall is that they would have submeshes not associated with any particular object 17:11:58 evias has joined #immersive-web 17:12:00 ... And then the submeshes would update over time 17:12:20 ... So that'd be one of my first concerns, I can see on the XR Mesh you've defined that there's a last change time 17:12:42 ... I would want something where it could go through and indicate 'these are the meshes that have changed since last time' so I'm not scanning through every mesh every time 17:12:56 q+ 17:13:03 ... that would require me to keep a list of all meshes that I've seen before and compare against the list every single frame 17:13:07 q+ to ask about the can be used for occlusion property changing as meshes refine 17:13:09 ... would be cool to refine down to an event 17:13:18 ... I also recall that meshes on the Hololens had a unique ID to help with that process 17:13:37 ... And the last point of feedback is that vertices are a frozen array of DOMPointReadOnly 17:13:52 ...
I would suggest that there's very little point to this, we probably want that to be a float32 array 17:14:25 ... since I want to put them into float32 anyway, it would make javascript easier but at this point the 3D world involves pushing things into buffers so much that it wouldn't be a problem 17:14:37 ... Since we're dealing with potentially large arrays 17:14:45 cabanier: Some of your feedback also applies to planes 17:15:03 ... For planes I could do one thing and meshes I would have to do something else 17:15:14 q? 17:15:17 bajones: I expect there to be more meshes than planes in the environment, especially in the Hololens 17:15:29 ... the Meta approach right now seems a bit different 17:15:56 ... If you want it to refine further like to Magic Leap or Hololens (which seems reasonable) then I would expect a high volume of meshes and changes to be coming through 17:15:58 ... more than planes 17:16:12 ... But I'm not opposed to making those approaches more unified 17:16:23 ack bialpio 17:16:39 bialpio: To respond to some of Brandon's comments, we're thinking about ID for planes as well 17:16:46 ... but the conclusion was that since planes are live objects, we're just updating for you 17:16:55 ... if you want to assign some kind of ID, you can 17:16:59 ... but it's not a best practice 17:17:07 ... but you can use the JavaScript equality operator to compare things 17:17:21 ... it puts the responsibility on you to hold on to the objects from the last callback to compare against 17:17:29 ... to evaluate meshes that are new or are removed 17:17:45 ... They will be able to emulate the event based approach 17:18:02 ... If we do want to have an event-based API, do we want to have it in a context where we can issue GL calls? 17:18:16 ... if not, we should stick to the approach of letting the developers emulate the event-based approach 17:18:31 ... The other point was for planes, we expect a lower amount of data, we just give the outline of the plane 17:18:59 ...
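[Editor's note] The Float32Array suggestion above comes down to avoiding a per-frame flattening copy on the app side. If vertices arrive as a FrozenArray of DOMPointReadOnly, every consumer has to repack them before handing them to WebGL/WebGPU or a physics engine, roughly like this (a hypothetical helper, not spec text):

```javascript
// Flatten an array of point-like objects ({x, y, z}) into the tightly
// packed Float32Array that GPU buffer uploads and physics engines expect.
// If the spec hands out a Float32Array directly, this copy disappears.
function flattenVertices(points) {
  const out = new Float32Array(points.length * 3);
  for (let i = 0; i < points.length; i++) {
    out[i * 3] = points[i].x;
    out[i * 3 + 1] = points[i].y;
    out[i * 3 + 2] = points[i].z;
  }
  return out;
}
```

With potentially many meshes updating per frame, skipping this copy is the whole argument for a Float32Array in the IDL.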
Can we make sure this design works with whatever Hololens is doing? 17:19:09 ... their data is likely to be wavier than a user annotating their environment 17:19:17 ... especially with Hololens having a dynamic environment 17:19:28 ... In other versions of the Mesh API we were concerned with that 17:19:40 ... Just want to do the right thing here so we won't need a different API to support Hololens 17:19:54 cabanier: So you think the spec should say the user agent should simplify the mesh? 17:20:14 bialpio: If Hololens wants to use it without having to simplify the mesh, they just won't use it 17:20:45 ... If there is a device that wants to implement this API but also has concerns similar to Hololens, we don't want to have to scratch this API and invent a new one that would support the future 17:20:50 cabanier: What would that API look like? 17:20:56 bialpio: That's the question 17:21:14 ... We have this design from Magic Leap I don't remember what was on it 17:21:26 cabanier: There was a fan mesh that was changed while walking 17:21:37 bialpio: I remember a lot of conversation about how to do the updates efficiently 17:21:47 ... Do we have anyone from Microsoft in the room? 17:21:58 q? 17:22:06 cabanier: Things would also go away and come back, which is different from now 17:22:17 bialpio: I'm not saying this API won't work for them, I just don't know if it will 17:22:38 bajones: I assume that as long as we can be assured that the process of advertising new, updated, and removed meshes can be detected and responded to, 17:22:44 ... the actual content of this doesn't seem like it's that bad 17:22:55 ... the thing you want to avoid is putting the users in a position where they can't tell if things have changed 17:23:00 ... so they just upload every buffer every frame 17:23:10 ... I do think the float32 array would be a requirement here 17:23:18 ... the most flexible way to shove things off to the GPU 17:23:34 ...
We don't know if the user wants to send this to a physics engine or something but they take float arrays too 17:23:50 bialpio: The basic thing is that Hololens submeshing, they can just model it as a distinct XR mesh 17:23:56 bajones: and that's how the Hololens already does things 17:24:03 bialpio: If you don't have any concerns... 17:24:18 bajones: I will look through their documentation and make sure I remember correctly but yes 17:24:29 ... I think your argument for why we don't need the same integer index is compelling 17:24:35 ... You can do comparison or set your own as you want it 17:24:44 ... as long as we are returning the same XR Mesh javascript wrapper every frame 17:24:53 ... that probably takes care of itself 17:24:57 ... removal might be the trickiest 17:25:05 bialpio: You will see something that is stale and not in the new XR frame 17:25:14 ... it is bookkeeping that the app needs to do 17:25:27 ... Maybe then one more question: Do we intend to attach a semantic label to it ever? 17:25:28 cabanier: yes 17:25:38 bialpio: that might throw a wrench into Hololens design 17:25:54 bajones: The only question I would have about semantic bits is do we expect any devices to reasonably support them? 17:25:56 q+ 17:25:59 cabanier: Not yet but in the future yes 17:26:17 bajones: On Hololens where they generate a mesh that has no semantic meaning, it just comes through as an unknown or empty string 17:26:20 There are semantic labels for the mesh on HoloLens. 17:26:25 ack Nick-Niantic 17:26:37 Nick-Niantic: I want to share our API for this 17:26:41 Yih has joined #immersive-web 17:26:44 present+ 17:26:51 ... (displaying screen) This is from our developer documentation 17:27:03 q+ 17:27:04 ... It explains what we give today for meshes when you use our web-based systems 17:27:26 ... basically we have meshfound events, and the meshfound event has an ID, has a transform of the mesh, and geometry index array and attributes 17:27:34 ...
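[Editor's note] The bookkeeping bialpio describes — holding on to last frame's set of live mesh objects and diffing it by JavaScript object identity to find additions and removals — can be sketched as follows. (The `detectedMeshes` attribute name in the usage note is taken from the mesh-detection draft; the helper itself is hypothetical.)

```javascript
// Diff the current frame's mesh set against the previous frame's, relying
// on the UA returning the same wrapper object for the same real-world
// mesh every frame, as discussed above. Works on any pair of Sets.
function diffMeshes(previous, current) {
  const added = [];
  const removed = [];
  for (const mesh of current) {
    if (!previous.has(mesh)) added.push(mesh); // new this frame
  }
  for (const mesh of previous) {
    if (!current.has(mesh)) removed.push(mesh); // stale, no longer tracked
  }
  return { added, removed };
}
```

In an `XRFrame` callback one would call this with something like `frame.detectedMeshes`, keep the current set around as next frame's `previous`, and use each mesh's last-changed time to pick out updates among the survivors — which is how an app can emulate the event-based approach without the spec providing events.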
the attributes have a float32 position array, float32 color array 17:27:38 ... This is what we give people 17:27:46 bajones: What do you use the color for and where does it come from? 17:27:59 Nick-Niantic: These meshes come from previous scans either from developer or community 17:28:09 bajones: How high res are they? 17:28:16 Nick-Niantic: Whatever the resolution of the mesh is - relatively coarse 17:28:29 ... Color is useful for visualizing the mesh in the space, it's not high fidelity 17:28:45 ... I don't know that it's required but it is what we give today 17:29:06 ... Our meshupdates have anchor, position, rotation to account for drift 17:29:14 ... you can update the position of a mesh 17:29:31 ... I'm not sure how Oculus handles drift within a session but it might be useful to tweak location 17:29:41 ... Event for no longer tracking a mesh as well 17:29:47 q- 17:30:06 ... typically we don't have semantic labels, but we can give per-vertex semantic labels within a mesh 17:30:18 ... our preferred method of semantic labels is through a dense mask of the image 17:30:22 ... since you can test any portion of the image 17:30:57 ... I'm not sure where I stand for usefulness of labeling vertices with semantic info, but if we wanted to, having granularity on the submesh level would be valuable for us to remain compatible with WebXR APIs 17:31:21 q- 17:31:23 ... It's better to have callbacks for these things than to provide them every frame 17:31:29 ack yonet 17:31:45 yonet: There's more than one way to mesh on Hololens, and semantic labels are part of it like for windows or ceilings 17:31:55 q+ 17:32:00 ... Portions of a mesh are labeled, it has to be a closed mesh 17:32:13 Nick-Niantic: One way to do semantic labels is a map from vertices to semantic label 17:32:25 cabanier: So every mesh would have an array of submeshes 17:32:25 q? 17:32:33 Nick-Niantic: That just came to mind just now 17:32:40 ...
Not sure how it would work with multiple disconnected meshes 17:32:56 cabanier: Accounting for drift, a mesh has an anchor associated with it, so it won't drift 17:33:01 ack cabanier 17:33:03 Nick-Niantic: Okay so it's being kept consistent internally 17:33:17 cabanier: I asked if it should return a fine mesh or not? 17:33:28 adarose: It seems like a fine idea, but it would have to change over time as it gets updated 17:33:39 cabanier: Hololens could use it for occlusion 17:33:46 Nick-Niantic: Maybe could be an optional feature 17:34:00 ... Could use meshes for things like physics 17:34:14 ... even occlusion may be improved with a coarse mesh even though fine is better for that 17:34:18 dino7 has joined #immersive-web 17:34:28 ... if the user requests a mesh quality level, then you don't have to give it to them 17:34:35 cabanier: We don't have to stop them but could inform them 17:34:43 Brandel_ has joined #immersive-web 17:34:44 ack bajones 17:34:50 q+ 17:35:12 bajones: Nick, in your system you have an update event for the anchor moving around. How do you handle actual updates to the mesh itself? 17:35:22 Nick-Niantic: Currently our meshes are based on pregenerated user scans so they don't update 17:35:35 ... I pushed the team to allow update API but they didn't add it 17:36:22 bajones: I do wonder, looking through Microsoft mapping, they have two systems now for doing this, 17:36:36 ... They have a breakdown between spatial mapping and scene understanding APIs 17:36:43 Yih has joined #immersive-web 17:36:47 ... the scene understanding SDK gives back static versions of spatial mapping data 17:37:02 ... Is there value to call out whether a particular mesh will be static or not 17:37:15 ... There is some certainty about whether it would be static or dynamic 17:37:21 ... Metadata would be considered mostly static 17:37:36 cabanier: People can run room setup while in XR session so mesh could disappear 17:37:41 q?
17:38:00 bajones: You could say meshes have been removed and put in, while still static 17:38:05 ack Brandel_ 17:38:18 Brandel_: That makes sense, especially as different purposes and functions for meshes arise 17:38:34 ... I don't love the terms fine and coarse because over time, 640x480 was once high-res 17:38:42 ... Would suggest more explicit terms for the function of the data 17:39:03 ... Otherwise we have to get into doublefine and ultrafine, which are fun but not good names 17:39:10 bajones: It's hard to prescribe a particular use for a mesh 17:39:24 ... For example it might be useful for physics, but it might just be a box 17:39:40 rigel has joined #immersive-web 17:39:49 ... Tabletop and refrigerator would be good for occlusion but not a statue 17:39:55 ... Fine and coarse doesn't seem like the right distinction 17:40:02 ... marking as something for occlusion doesn't seem right either 17:40:30 Nick-Niantic: The thing that makes sense to me is that you are trying to give an indication of 'don't spend too much work to get this data to me' vs 'spend extra work to make this data better' 17:40:38 ... You're looking at a performance tradeoff 17:40:55 ... they don't care about how fine is too fine, vs spending work on the data 17:41:07 ... I don't know if that applies to Oculus since users are defining the meshes 17:41:15 ... someone could draw their mesh very carefully for a fine mesh 17:41:26 cabanier: I could leave the spec as is 17:41:40 ... if it becomes an issue in the future we could modify it 17:41:52 adarose: Maybe a "could be better" designation 17:41:57 q+ 17:42:04 bajones: Are there going to be systems where you can meaningfully turn that dial? 17:42:11 ... where the user can be as detailed as they want 17:42:27 ack bialpio 17:42:29 ... if someone is scanning, they can specify the resolution? 17:42:34 Nick-Niantic: There are different resolutions 17:42:52 bialpio: Does it matter from the perspective of the app? 17:43:08 ...
if all the meshes are static, I could use a different version of the code, but if things are going to change, it doesn't matter whether one thing is changing or all of them 17:43:14 ... Does that help us with design here? 17:43:40 ... It sounds like most of our current devices for this API would be handing out static data 17:43:53 ... don't know about Microsoft looking into this implementation, maybe it doesn't matter 17:44:11 cabanier: With the example of planes, static on device, it would break on Android 17:44:30 bialpio: but I can't assume that this data won't change, I have to handle it in my code 17:44:47 ... The assumption that things won't change doesn't come from us, it comes from experience 17:44:56 cabanier: Sounds like we should not have a static designation 17:45:08 bajones: I can't think of a specific thing that would drastically change 17:45:20 ... You can tell a GPU whether it's going to be a static or dynamic mesh 17:45:30 ... but it's not going to matter if you're not updating the same buffer 17:45:37 ... unless you can assume that indices are never going to change 17:45:49 bialpio: for Planes we do guarantee that positions can change without touching vertices 17:46:15 ... we say that change of position will not affect last-update time since pose is a property of a pair of things and not the thing 17:46:26 bajones: I would expect the same to be true here 17:46:47 ... for data that you're shoving off to the GPU, if there were a dynamic mesh, you couldn't guarantee the same number of vertices every time 17:46:54 ... so you're going through the same process as a brand new mesh 17:47:00 ... same for the physics systems 17:47:15 ... In that case, it's a good observation that there's not too much difference in how the user will handle that. 17:47:18 q?
17:47:55 bajones: Dom Float array, you want to make sure we indicate the meshes are going to be the same object frame to frame if they represent the same data 17:47:58 cabanier: that's already in there 17:48:13 bialpio: We don't have a way to annotate it in IDL since we have sets, we cannot say if you have the same object 17:48:23 s|day 1 minutes -> https://www.w3.org/2023/04/25-immersive-web-minutes.html|day 1 minutes -> https://www.w3.org/2023/04/24-immersive-web-minutes.html 17:48:27 zakim, take up agendum 2 17:48:27 agendum 2 -- Proposals - Some sort of local shared space -- taken up [from atsushi] 17:48:30 agenda:https://github.com/immersive-web/proposals/issues/82 17:48:32 rrsagent, publish minutes 17:48:33 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 17:48:39 TOPIC: Proposals - Some sort of local shared space 17:48:44 scribe: Marisha 17:48:53 s/agenda:https/issue -> / 17:48:55 rrsagent, publish minutes 17:48:56 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 17:49:33 adarose: We talked about shared anchors for a long time, but persistent and shared anchors were a future thing 17:49:33 ... It would be good to do some work to do a shared space, a lot of people would like it 17:49:34 ... How would we do this? I think there's different options 17:50:02 q+ 17:50:07 ... We could build an example where you print out a piece of paper with three numbers, use that to generate an offset reference space that both people agree on, and just have that as an example that people can do 17:50:16 Thanks atsushi 17:50:16 ... That would be a way to get people started with doing this kind of thing 17:50:23 s/issue -> /issue -> https/ 17:50:28 ... but it might be nice to standardize something for creating a shared reference space 17:50:55 ... Could have it so that the same space could be maintained for future sessions 17:51:05 ... 
like going from one domain to another without having to reset the space 17:51:17 q+ 17:51:25 ... People would like it and it's not necessarily worth waiting for shared anchors 17:51:27 q+ 17:51:27 ack bialpio 17:51:44 bialpio: If we are okay with requiring that the users are coordinating, hit testing on the same thing would be the easiest way 17:51:54 ... would have to maintain the same orientation.. 17:52:00 adarose: Yes would align on three points 17:52:19 bialpio: Image tracking is something we don't have an API for but it becomes easier if it only needs to work for a few frames initially 17:52:26 ... based on camera access experiment 17:52:37 ... might not be performant enough to do the entire session 17:53:00 ... could establish space, create an anchor, then assume that users have the anchor in the correct space 17:53:16 ... is there any other API that we could leverage? Planes on ARCore wouldn't work since they're not static 17:53:28 ... Depth sensing might give you some information you could correlate but that'd be difficult 17:53:33 q? 17:53:38 ack bajones 17:53:51 bajones: Agree with Piotr, 17:54:02 ... if you want to approach this with guidelines of how to set this up with existing mechanisms, 17:54:13 ... we already have examples like "observer mode" 17:54:39 ... you could do the same thing here. This would make a spectacular demo to have people standing around with their Android phones, and people with a Quest Pro, and they're all seeing the same thing in the same space 17:54:52 q+ 17:54:53 ... This would be the coolest way to show off why the web matters in this space 17:55:03 ... Would be worthwhile, I want to see this on Twitter 17:55:16 ... My biggest worry is that it'd be a bit flaky 17:55:45 ... In terms of what you say about going between sites, at last TPAC we talked about per-device caching or saving anchors. Is that something you've implemented? 
17:55:51 cabanier: Yes it's shipped, but it only works per origin 17:56:05 adarose: It's something we'd have to do as a user agent 17:56:21 ... it'd be a user agent guide or wizard which would then give you the session 17:56:39 bajones: That would be the mechanism, if each device can retain its sense of where the shared marker is 17:56:47 ... It would take some coordination between the sites 17:57:00 ... This would be one of the most compelling demos I can think of for XR on the web 17:57:04 ... especially between mobiles and headsets 17:57:17 ... I want to help organize and coordinate that if it could realistically happen 17:57:26 q? 17:57:29 adarose: Would be good for "help wanted" post 17:57:29 ack cabanier 17:57:46 cabanier: From our perspective there are a lot of people who have multiple devices at home, we are focusing more on mixed reality and social mixed reality 17:57:50 ... so we are interested in exploring this area 17:57:56 ... if we can do it with a polyfill that'd be awesome 17:58:05 ... our systems do support shared anchors but they are a pain to set up 17:58:14 bajones: We're not even talking about shared anchors at this point 17:58:21 cabanier: more like a shared coordinate space 17:58:47 bajones: If I print out a big A B and C in the corners, and you have an app where you point at A, B, and C, and generates three anchors, triangulates them 17:59:05 ... If I can do that on Quest Pro, then on ARCore, they can presumably both start sharing their approximation of the same space 17:59:10 ... but it wouldn't require anything new 17:59:17 cabanier: If you want to skip even that step... 17:59:25 bajones: That would be awesome but is a much harder technical problem 17:59:50 adarose: even some sort of shared function? 17:59:57 bajones: might not be consistent enough 18:00:15 cabanier: The thing could take three hit tests 18:00:40 adarose: If there are two different sites but they have a flipped orientation... 
would be good to have a function on the session that generates space from three points 18:00:51 ... it's not trivial 18:01:02 ... doing it with anchors and it is updating, this space will probably change over time 18:01:14 Brandel has joined #immersive-web 18:01:14 ... If it's something that takes care of tracking those anchors and making the space for you 18:01:20 ... but we probably need to show the value of it 18:01:21 q? 18:01:23 ack bialpio 18:01:43 bialpio: I want to also mention cross-origin persistent anchors... they will have to be browser-coordinated, user would have to agree to it, 18:02:12 ... one idea of how we do this is to use the origin of a local space, localize it in a consistent manner when the user agrees to use that information 18:02:21 q+ 18:02:23 ... might be the simplest thing to do to maintain that space across multiple sessions 18:02:31 ... Not sure if that would work for ARCore 18:02:34 q+ 18:02:47 ... Might need to have some manner of conveying the information to the experience that the space is stable 18:03:09 ... with the example of some games, the site can assume the space is consistent for a user, but the challenge is how do they coordinate with each other? 18:03:19 adarose: Maybe some kind of menu option to "resync space" 18:03:29 ... if you sit down with another person, could both resync space 18:03:34 q? 18:03:37 ack Brandel 18:03:58 Brandel: Presumably all the devices we're talking about have magnetometers, compasses and IMUs that can detect gravity.. 18:04:01 cabanier: I don't think we do 18:04:32 Brandel: Everyone agrees on gravity, it's pretty stable 18:04:51 ... if magnetometers were more reliable across devices, you could use that for helping anchor spaces 18:05:08 cabanier: If you know the gravity, how does that help? 
18:05:18 Brandel: You know which way is up, and then you'd know which direction people were in 18:05:27 cabanier: For refinement, I see 18:05:43 q+ 18:05:50 Brandel: If you have three points, someone could be looking from underneath. But if you have gravity and people are looking down, that's unambiguous 18:06:03 adarose: ABC helps too with the right-hand rule 18:06:06 q? 18:06:27 cabanier: I'm not sure if you can use the local space and just transfer the local space 18:06:46 ... We could have a different space, something you can request, and if it's not there it is the local space 18:06:53 bajones: You're allowed to not have local space be exactly where you start 18:07:03 bialpio: I think experiences might assume you start at 0 18:07:23 bajones: I remember stressing about this text when we wrote it because we didn't want local space to shift between pages or between refresh of the page 18:07:45 ... "A local space represents tracking space with the origin near the viewer at the time of creation" 18:07:52 bialpio: Something the platforms can tweak but is close enough to you... 18:08:09 bajones: If you're doing it with Cardboard or devices where that's the mode you're working in, it probably just uses 0, 0, 0 every time 18:08:24 ... but for the Quest it doesn't matter whether the origin is here or slightly off here (gesturing) 18:08:24 q? 18:08:28 ack cabanier 18:08:31 ack mkeblx 18:08:56 mkeblx: The three points on a plane, piece of paper, another idea is to start in the same orientation, start in the same chair 18:09:14 q+ 18:09:23 bajones: I thought about the same thing, but the most likely use you're going to get for that, you're going to put a paper on a flat surface, it's hard to guarantee everyone is standing in the same space 18:09:31 ... They could all configure themselves at roughly the same time 18:09:42 ack bialpio 18:09:58 bialpio: At least for ARCore, you probably need to ensure we are already in reliable tracking state 18:10:12 ...
At session start there might be a phone wave thing where you want the user to go through that first before they start walking around 18:10:26 ... I don't know how much relying on the same starting point would work there 18:10:37 q? 18:10:38 ... but if we are in a reliable tracking state and the user is going to be in the same place, maybe could work with that 18:11:12 adarose: five minute break! 18:11:40 adarose: Wait do people think we should make an API for this? 18:11:46 bajones: Might be better first as a library 18:12:04 ... Do we have image tracking enabled on Android? 18:12:17 bialpio: Behind a flag or raw camera access in a way I cannot comment on without knowledge 18:12:36 bajones: That could just speed it up, I don't believe the Meta Quest Pro has any sort of image... 18:12:54 ... you could still do image recognition without saying the image you've registered has an anchor, without giving away the camera data 18:13:00 vicki has joined #immersive-web 18:13:27 ... Does Meta have image recognition? 18:13:33 Marisha: I don't think so, at least not released 18:13:41 rrsagent, generate minutes 18:13:43 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html Marisha 18:14:49 atsushi has joined #immersive-web 18:17:12 alcooper_ has joined #immersive-web 18:20:14 alcooper has joined #immersive-web 18:23:41 present+ 18:26:10 etienne has joined #immersive-web 18:26:33 present+ 18:26:34 present+ 18:26:34 present+ 18:26:36 Brandel has joined #immersive-web 18:26:38 present+ 18:26:41 present+ 18:26:42 present+ 18:26:49 bajones has joined #Immersive-Web 18:26:51 present+ 18:26:52 Yih has joined #immersive-web 18:26:58 present+ 18:27:04 scribenick: Brandel 18:27:54 zakim, take up agendum 3 18:27:54 agendum 3 -- proposals#84 Immersive Capable/WebXR meta tag -- taken up [from atsushi] 18:28:10 ada: One from me, but a continuation from Apr '22 18:28:38 ...
It would be handy if there were a way to identify that this page is WebXR-enabled 18:29:10 ... indicating the features you want to support, conferring potential benefits like SEO for XR-prioritized search 18:29:10 q+ 18:29:44 ... and used for ambient badging in search results, or on a URL bar 18:30:12 is this on github? 18:30:32 ... so the UA could also invoke the XR session rather than having the user hunt around for the button to invoke 18:30:41 ah https://github.com/immersive-web/proposals/issues/84 18:30:49 q+ 18:31:06 ... and for archives and researchers, it would be possible to more easily identify pages that happen to have been XR in some kind of future past 18:31:07 q+ 18:31:36 ... so this is a general query about the value and function of this - is it useful, where would you use it, where would it go? In an HTTP header, a web manifest, where? 18:31:39 ack Nick-Niantic 18:31:59 Nick-Niantic: I got caught on one of the first things you said - declaring ahead of time the required permissions for a page 18:32:14 ... The inability to 'pre-flight' permissions causes us grief 18:32:50 ... we want to build an HTML lobby that gets everyone ready re:permissions so that players can launch into experiences with all the prerequisite permissions 18:33:17 ... it's a shift from the initial remit of the discussion but important to us and related 18:33:49 Nick-Niantic: what is meant by 'ambient badging'? 18:34:23 Ada: Ambient badging is a property on things like PWAs, indicating the "appness" of a page, allowing UAs to invoke actions related to that 18:34:45 q? 18:35:04 ... so for an approach like that, you could have the browser take on that responsibility, but we have a separate session for that 18:35:27 bkardell_: I am on the call! It is hard to hear 18:35:55 q? 18:36:03 ack cabanier 18:36:10 ada: It was suggested that Wolvic has ambient badging of XR capability that could allow these things to launch 18:36:17 bkardell_: ..maybe?
18:36:43 cabanier: I think the badging just guesses that an experience is XR-capable 18:37:01 q+ 18:37:08 bajones: I would imagine that most pages are searching for isSessionSupported, which is a good guess that the page intends to use it 18:37:42 cabanier: showing the button has the drawback that it may not account for the loading of prerequisite assets - often pages need a lot of things to go into XR. 18:38:15 cabanier: the search engines would need to be updated to leverage this 18:38:32 q? 18:38:34 ... and people could abuse this, so that could be a problem 18:38:36 ack bajones 18:39:10 q+ 18:39:20 bajones: We can't have nice things - anything that allows people to game SEO becomes effectively meaningless once adequately abused 18:39:25 q+ 18:39:53 ... and if you're using this to put a badge in the URL bar etc, without a hard obligation to invoke the session, that can be weird 18:40:18 ... it seems like the page should need to do a little more work in order to guarantee that this hinting is accurate 18:40:48 ... PWAs are a little different, because of the difference in context and function - the manifest is a better guarantee of expected functionality 18:41:03 ... I'm not sure of the importance and needs of archiving 18:41:21 ... if you're not using JS to scan through web pages, as an archivist today, you're probably not doing it properly 18:41:58 ... that said, I don't _object_ to it - I just question whether it will do the thing it's intended for 18:42:00 q 18:42:01 q+ 18:42:07 ack CharlesL 18:42:27 q-- 18:42:32 q- 18:42:35 q+ 18:42:38 CharlesL: Discoverability via a meta tag would help, and for ePubs in the future 18:43:12 q? 18:43:16 ack bialpio 18:43:18 ... in which case, schema.org would probably want to be involved - it looks like "virtual location" is the only similarly-defined attribute today 18:43:49 bialpio: we already have a schema proposal, whose status isn't known, that may include this information 18:44:20 ...
model-viewer has some representation, but I don't know whether it has what we want - but it's likely a step in the right direction 18:44:24 q? 18:44:24 https://schema.org/3DModel 18:44:28 ack marcosc 18:45:00 marcosc: As someone close to the web manifest, it's useful to talk about the difference between "app" and "page", since an app can span pages 18:45:26 q? 18:45:30 ack bkardell_ 18:45:32 ... And we should question if / how this is a new display mode. Within Apple we often use OG as a representation as well 18:46:02 bkardell_: I am finding it hard to hear, so I'm not sure if marcosc and I are saying the same thing 18:46:29 q+ 18:46:54 ... I was going to ask - there is a radical range of things that can be done with XR, so is there a single attribute that is relevant to identify XR pages / apps with? 18:46:55 ack marcosc 18:47:23 marcosc: checking API calls isn't a great proxy for actual, legitimate use because of the prevalence of fingerprinting 18:48:16 ... in the past, we've had things like that - simple boolean checks typically haven't changed the UI of systems, so we shouldn't do too much 18:48:35 adarose: So in summary, it would be useful but shouldn't change things in a way to alter the presentation 18:49:22 marcosc: to reiterate bajones' point, the page ought to have to do work in order to make it a fair guarantee that this is legitimately a function of the page 18:49:22 q? 18:49:53 adarose: Last issue before lunch, and then lunch! 18:50:48 [technical setup to share] 18:50:59 https://lists.w3.org/Archives/Public/public-immersive-web/2023Apr/0006.html 18:51:32 [anticipation builds] 18:52:55 zakim, take up agendum 5 18:52:55 agendum 5 -- webxr#1317 Some WebXR Implementations pause the 2D browser page in XR, make this optional?
-- taken up [from atsushi] 18:52:55 cabanier: This is about the 'offer session', similar to the 'request session' 18:53:07 zakim, take up agendum 4 18:53:07 agendum 4 -- webxr#1264 Proposal: lower friction to enter WebXR sessions -- taken up [from atsushi] 18:53:31 ...it's like request session, in that it allows for the inclusion of optional extras 18:53:52 ...the button appears in the URL bar. That's the whole demo! 18:54:05 bajones: This was the best demo we've seen all day 18:54:12 q? 18:54:33 cabanier: I haven't added to the spec via PR, but did have some questions. I have written them down. 18:54:43 q+ 18:55:07 ... what happens when you call 'offerSession' multiple times? Do we reject the promises of the earlier sessions? 18:55:29 ...similarly, what do we do about iframes that pass in and out of existence? 18:55:38 is it necessary to support it in an iframe? 18:56:01 q+ to ask about abort signals 18:56:04 ... I think that there's only one offerSession in action at a time, and it can't be revoked by the offerer 18:56:22 ... this might require user education, since they could be tripped up without it 18:56:31 ack bajones 18:57:28 bajones: multiple session calls should probably just override prior ones. Libraries might misbehave, but *shrugs* 18:58:05 ... we should have a sense that the `offerSession` establishes some pending promise on the page, and it should be able to go away with the page offering 18:58:25 q+ 18:58:30 ... I'm not sure that it's critical to be able to make a call to cancel the 'offer' 18:58:56 q+ 18:59:01 ... it seems like users might want to cancel the offer and dismiss the chip, but that should be non-normative and up to the UA 18:59:14 ... we wouldn't be the first, so that can be followed the same way 18:59:38 q+ 18:59:43 bajones: 'the chip' refers to the button and surrounding information 18:59:55 q+ 19:00:15 ...
a promise, rather than a callback or an event, seems like a better match for the 'requestSession' approach we take today 19:01:02 ... promises feel like the right primitive here - it does mean that sessions would need to be 're-offered' if one is accepted and then dropped 19:01:48 bajones: sessions would need to be re-offered after a session has ended 19:02:04 bialpio: unless the UA has the 2D _and_ the XR session present 19:02:17 marcosc: can we see the code again? 19:02:29 vicki has joined #immersive-web 19:02:38 ack adarose 19:02:38 adarose, you wanted to ask about abort signals 19:02:43 bajones: It's the same as requestSession, but doing so in a deferred manner 19:03:05 adarose: There would be ways to hide this signal, like offering a null session 19:03:36 ... a hack would be to invoke an iframe that constructs an overriding offerSession, and then terminates it in order to clear it 19:03:56 ... so it would be good for us to give developers the right way to deal with this 19:04:30 ... a use case includes a time-sensitive offer based on the availability of another participant in an XR session, which could be terminated by someone getting bored 19:04:52 bajones: `fetch` has a way of being terminable, via some kind of signal 19:04:55 CharlesL has left #immersive-web 19:04:57 Emmanuel has joined #immersive-web 19:05:24 ...
we could support the same thing here, we'd just need to explicitly opt into our use of it 19:05:57 bajones: it would require attending to an 'abort' signal, which we might want to incorporate into the 'requestSession' syntax as well 19:06:18 marcosc: We need to have a good reason for taking the agency away from the user on this 19:06:47 adarose: It's not trivial, but not impossible, to write code to wipe out a session, and it will be messy 19:07:16 marcosc: there is a similar problem in payments, where a payment sheet has to be cancelled by destroying the establishing context 19:07:39 adarose: for this API, I would suggest using the (standard) abort controller 19:07:51 bajones: the fetch API covers this approach 19:08:31 q? 19:08:37 adarose: Quest devices might be able to offer both immersive AR and immersive VR - giving users the ability to choose 19:09:02 ack Nick-Niantic 19:09:05 bajones: you _could_ make the chip a drop-down - but many people's "AR" sessions are basically just VR 19:09:10 q- 19:09:23 Nick-Niantic: I am a little confused about what this is in service of 19:09:36 Abort Controller docs -> https://developer.mozilla.org/en-US/docs/Web/API/AbortController 19:09:54 q+ to ask about offering unsupported sessions 19:10:13 ... generally when a UA manages things, developers have to put big CSS arrows to point to the window chrome in order to draw user attention to it 19:10:55 ... relying on devices to self-report capabilities is fraught, because many phones mis-report their capabilities, or do so in ways we are not familiar with 19:11:08 CharlesL has joined #immersive-web 19:11:12 q+ to discuss issues with developers ONLY calling offerSession 19:11:28 ... the case where this makes the most sense to me is on a Windows machine with a connected headset, where you are _sending_ it to your HMD 19:12:06 ... in an oculus[sic], you probably want to get into that immersion ASAP, e.g. in a PWA 19:12:18 q+ to provide user problem it solves 19:12:27 ...
but I don't see why this is a meaningful addition 19:12:55 cabanier: Last year we looked into all the problems with the button presentation for these things - often very small, or non-presenting at all 19:13:27 ... including a failure for a 2D page to account for a resized window at all 19:13:28 Maybe it's like controls in media element/full screen? 19:14:09 q? 19:14:11 Nick-Niantic: I have _less_ reservation based on the presence of requestSession, but still don't see the criticality of this as a solution 19:14:17 q- 19:14:17 ack bialpio 19:14:34 cabanier: we have seen many people struggle with this, so we feel we are solving a problem people have 19:15:20 bialpio: I'm concerned by the 'last-writer' resolution for this - is this safe? 19:15:28 cabanier: you still need to explicitly grant XR permission to iframes, so that's under some degree of control 19:15:58 bialpio: It still seems like debugging could get difficult, to track down who the ultimate offerer is. Maybe we could broker that permission explicitly 19:16:37 bajones: we have the "XR Spatial Tracking" permission policy, we need to continue to respect that 19:17:06 ... Matterport / Sketchfab / etc do a lot of their work through iframes, so we need to keep letting them do that without random ads being able to jump in arbitrarily 19:18:07 bajones: we would apply the conventions that come from abort controllers and the requestSession syntax 19:18:24 ... I'm still not convinced that we need this to be abortable, but could be convinced 19:19:02 ... maybe on a gallery page where some content is XR capable and some are not - but it sounds complicated 19:19:12 q? 19:19:20 ack alcooper 19:19:50 alcooper: 2 things: Initially I thought this sounds great, but some of Nick's comments made me wonder: 19:20:07 ... why isn't developer education our first priority to solve this problem? 19:20:15 ... (bigger buttons etc) 19:20:37 ... and second, what is the strategy for landing this, where does it go? 19:21:13 ...
it seems like requiring UAs to add things to their OS chrome is a big ask 19:21:14 q? 19:21:17 ack adarose 19:21:17 adarose, you wanted to ask about offering unsupported sessions 19:21:54 adarose: when you click on the button in the URL bar, do you bypass the permission prompt? (answer: no) 19:22:32 ... if it didn't entail additional permissions, would it be possible to bypass permissions? 19:22:43 q? 19:22:47 ack bajones 19:22:47 bajones, you wanted to discuss issues with developers ONLY calling offerSession 19:22:55 cabanier: maybe, but anything additional like hands or room would necessarily require a prompt 19:22:59 q+ to ask about offering unsupported sessions 19:23:29 bajones: In the past I have worried that if this is supported unevenly, we end up with the opposite problem 19:24:18 ... if Meta devices support it well, but then Android devices are still only relying on requestSession, then the developer-selected user signals become fragmented 19:24:27 q+ 19:24:45 cabanier: This is similar to the problem of having uneven support for modules like layers 19:25:13 bajones: Yes, but it's slightly different in the default expectations of if/how things can fail 19:25:55 ... If the method simply doesn't exist, developers should notice that it can't be used - I will have to think more about the consequences 19:26:12 cabanier: It wouldn't be my expectation that everyone would use this API 19:26:36 bajones: There's a difference between using the API and not exposing the end-point though 19:26:54 q? 19:26:57 ack adarose 19:26:57 adarose, you wanted to ask about offering unsupported sessions 19:27:05 ... we have allowed people to sign on to things we don't intend to implement user-facing end-points for, for the sake of developer convenience 19:27:44 adarose: in your example you offer a session without checking that it's supported - what would happen if it wasn't supported? 19:27:52 cabanier: it would be rejected immediately 19:28:59 q? 19:29:05 ack bialpio 19:29:23 ...
unsupported sessions wouldn't overwrite supported sessions and kick them out 19:30:08 bialpio: I want to refer back to the browser UI real estate as being scarce - we would want the spec to say that this isn't always supported 19:30:21 q+ 19:30:30 ... I would want the site to know that this failed, so it's not relying on it 19:31:13 ...this is now the biggest API that allows you to do the same thing in two different ways, which is an entire alternate entry point 19:31:56 ... which can encourage developers to have divergent ways of getting in, with potentially different permissions and code paths 19:32:19 ... in practice, there are a lot of things we do only when resources are explicitly requested 19:32:50 ... our "SessionSupported" is only looking at Android, rather than e.g. ARCore presence 19:33:47 ... I would need to look more deeply into the 'offer' timeframe to make appropriate determinations about which sessions can overwrite which ones 19:33:54 q+ 19:34:00 zakim, close the queue 19:34:00 ok, adarose, the speaker queue is closed 19:34:10 cabanier: The request sessions are lightweight on Quest 19:34:25 bialpio: so are ours, but the spec allows us to be optimistic about the scope of capability 19:35:40 ... but we need to determine the satisfiability of potentially overriding offerSession actions, to decide whether an unsupported session bumps a supported one 19:35:57 cabanier: I think we can just bump it 19:36:34 bajones: there might be some benefits that come from showing things inside trusted / scarce UI, but it is limited 19:37:03 ... you don't get to choose from an unbounded range of options there - you should probably use requestSession for that 19:37:56 ... we don't want sites to rely exclusively on this affordance.
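[scribe note: the offerSession shape discussed above is still a proposal (immersive-web/webxr#1264), so the following is only a sketch; the signature is an assumption modeled on requestSession, and the `signal` option is hypothetical, mirroring the fetch()-style AbortSignal pattern adarose and bajones raise]

```javascript
// Sketch only: offerSession is a proposal, not a shipped API. The
// signature is an assumption modeled on requestSession; the `signal`
// option is hypothetical, mirroring fetch()'s AbortSignal pattern.
// `xr` is injected (in a page it would be navigator.xr).
async function offerVR(xr, signal) {
  try {
    // The UA may surface a chip in trusted browser chrome; the promise
    // resolves only if/when the user accepts the offer.
    return await xr.offerSession('immersive-vr', {
      optionalFeatures: ['local-floor'],
      signal, // hypothetical: lets the page withdraw the offer
    });
  } catch (err) {
    // Rejected: unsupported, superseded by a later offer, or aborted.
    return null;
  }
}

// Withdrawing a time-sensitive offer (adarose's example):
//   const controller = new AbortController();
//   const pending = offerVR(navigator.xr, controller.signal);
//   ...
//   controller.abort(); // the other participant got bored
```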
it helps accessibility etc - but this should probably just be for the minimum, most basic action that can be taken 19:38:28 bialpio: But does that mean that we are incentivizing developers to build experiences that target the smallest set of capabilities possible? 19:39:05 bajones: those options can be negotiated, but the UA has the ability to make those decisions and it doesn't break page logic to have those change 19:39:31 ... e.g. the UA could simply mute that request, if it's being too loud 19:39:53 ... if a user has repeatedly rejected the request 19:40:13 adarose: Let's put lots of pins in this to get back to in the unconference time 19:40:27 agenda? 19:40:47 Yih: I have been trying to gauge the scope of where these things are decided, on UA vs. page 20:47:46 CharlesL has joined #immersive-web 20:49:37 yonet has joined #immersive-web 20:56:35 Brandel has joined #immersive-web 20:58:10 vicki has joined #immersive-web 20:58:21 etienne has joined #immersive-web 20:58:53 present+ 20:58:59 marcosc has joined #immersive-web 20:59:02 present+ 20:59:03 present+ 20:59:03 adarose has joined #immersive-web 20:59:05 present+ 20:59:08 present+ 20:59:08 present+ 20:59:17 present+ 20:59:21 present+ 20:59:23 zakim, who is here? 20:59:23 Present: cabanier, CharlesL, Dat_Chu, bajones, Marisha, etienne, adarose, Brandel, bialpio, mkeblx, atsushi, Yih, vicki, Nick-Niantic, yonet, marcosc 20:59:25 On IRC I see adarose, marcosc, etienne, vicki, Brandel, yonet, CharlesL, Emmanuel, bajones, alcooper, atsushi, dino7, Nick-Niantic, mkeblx, bialpio, Zakim, cabanier, RRSAgent, 20:59:25 ... brycethomas, bkardell_, Leonard, jfernandez, general_j, sharonmary6, vianka, OverTime, sentinel1975, rzr, maxass99, LawrenceKincheloe, Matthew, NicolaiIvanov, helixhexagon, 20:59:29 ...
fernansd, [old]freshgumbubbles, etropea73101, dietrich, Chrysippus, SergeyRubanov, bemfmmhhj, babage, KevinBush, Manishearth, mounir, ada, sangwhan, iank_, scheib 21:00:01 zakim, choose a victim 21:00:01 Not knowing who is chairing or who scribed recently, I propose Dat_Chu 21:00:03 Marisha has joined #immersive-web 21:00:06 present+ 21:00:07 zakim, choose a victim 21:00:07 Not knowing who is chairing or who scribed recently, I propose cabanier 21:00:13 present+ 21:00:15 zakim, choose a victim 21:00:15 Not knowing who is chairing or who scribed recently, I propose Yih 21:00:41 Yih has joined #immersive-web 21:00:45 Jared has joined #immersive-web 21:00:50 present+ 21:01:41 present+ 21:02:19 https://github.com/immersive-web/webxr/issues/1317 21:02:22 q+ 21:02:36 adarose: Been working on attaching DOM things to WebGL things via CSS, to animate properties of an object per frame. Cool, apart from when entering WebXR, specifically in the Meta Quest Browser: the page doesn't update CSS queries each frame anymore, and custom properties that are being animated stop. 21:03:24 q+ To take a time machine back to ye ol' 2017 https://github.com/immersive-web/webxr/issues/225 21:03:27 lgombos has joined #immersive-web 21:03:35 present+ Laszlo_Gombos 21:03:35 zakim, open queue 21:03:35 ok, adarose, the speaker queue is open 21:03:49 q+ 21:03:55 q- 21:04:47 q+ 21:04:52 ack bajones 21:04:52 adarose: I would like an option to not background the page, so it isn't put to sleep 21:04:56 https://immersive-web.github.io/webxr/#xr-animation-frame 21:05:43 q+ 21:06:34 bajones: Mostly surfacing previous discussion on this topic (referencing documentation link), headset and window may not coincide with each other 21:06:40 https://github.com/immersive-web/webxr/issues/225 21:07:08 bajones: ... previous conversation in May 2017, 21:07:28 q+ 21:07:58 q? 21:08:01 ack Nick-Niantic 21:08:05 bajones: ...
it was discussed quite extensively in the past (referencing above two links) 21:09:01 Nick-Niantic: We also want the page to update in a timely fashion, even in the background; we generally support this notion. 21:09:08 q+ 21:09:27 ack mkeblx 21:11:04 q? 21:11:09 ack Marisha 21:11:10 mkeblx: (shows demo of animation illustrating point) 21:11:24 Marisha: What is the current status of DOM layers and is there overlap here? 21:11:32 q+ 21:11:47 cabanier: The main page would still be frozen, but the DOM layer would be animating 21:12:01 q+ 21:12:08 q+ 21:12:09 ack Brandel 21:12:17 scribe: Yih 21:13:04 q? 21:13:07 ack cabanier 21:13:14 Brandel: You can use the DOM to process things, CSS animations may be what is missing 21:13:47 cabanier: Switching to another tab will also not work; CSS animations will be frozen on the backgrounded tab 21:14:01 q? 21:14:29 cabanier: Framerate may also not be smooth 21:14:57 adarose: The animations do look smooth, not an issue 21:15:11 adarose: ... not using requestAnimationFrame 21:16:03 adarose: ... using the headset to query values 21:16:44 cabanier: If requestAnimationFrame is running, and the framerate changes, it wouldn't be a problem 21:18:18 adarose: Most web browsers can run at 120fps 21:18:50 cabanier: Would be nice if only the CSS animations were being run, without redrawing the entire page 21:19:11 cabanier: ... Not sure what group to ask 21:19:28 adarose: If we're in immersive web, the page shouldn't count as being backgrounded 21:19:35 q? 21:19:50 ack bajones 21:20:38 bajones: In general, the browser should recognize that the DOM is being displayed, similar to DOM layers 21:21:41 q+ 21:22:09 bajones: ... The DOM is being read and reproduced, but not recognized by the system. Possibly the solution is ubiquitous DOM layers. If I'm querying CSS, it should return the animated value. 21:22:40 bajones: ...
this is possibly outside the purview of this group; it's worth finding the group to implement the fix to what sounds like a broken situation 21:23:32 adarose: It's a valid situation for CSS animations to continue playing, such as in a backgrounded tab 21:24:03 bajones: The question I have: is this actually intentional behavior? 21:24:15 q? 21:24:18 ack bialpio 21:24:20 adarose: Could be a spec or compatibility issue 21:24:56 q+ 21:25:08 bialpio: Does it matter that the XR rAF is using predictive/display time while CSS might be using a different timeline? FPS might not matter, but it'll be a slightly different timeline 21:25:52 adarose: If I had to do something time sensitive, I wouldn't use CSS animations; the timeline isn't as important 21:27:27 q? 21:27:31 ack mkeblx 21:27:38 bialpio: Might be the case that this was accidentally omitted 21:29:05 adarose: There is a significant performance impact when attempting to do this with a large number of elements; a big part of the frame budget is checking for updates 21:29:35 mkeblx: As a developer, one takes on the performance impact as an accepted tradeoff 21:30:38 adarose: I try to cover most cases, but account for contingencies. Despite the performance overhead, using page logic to control WebGL is an ideal approach 21:31:24 adarose: ... the goal is to not put the page to sleep, and keep it alive 21:33:35 ack cabanier 21:33:38 adarose: ... for highest browser compat, would like a simple way of getting this solved 21:34:45 cabanier: Something when you request a session sounds like a reasonable push. Don't know if we need to talk to different groups about this, sounds like something we can solve in isolation. 21:35:47 adarose: It could be something like turning on a property by default. 21:36:44 +q 21:36:46 ack bajones 21:36:49 I play a 1s audio loop on my android device, otherwise my music player keeps dropping the actual sound (any player) 21:36:51 q+ 21:38:15 q?
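[scribe note: a minimal sketch of the pattern adarose describes, driving WebGL values from CSS-animated custom properties sampled once per frame; the style reader is injected so the logic is shown without a browser, and the property name `--spin` and the Three.js-style objects in the comment are hypothetical]

```javascript
// Sketch of sampling a CSS-animated custom property each frame.
// `readProperty` stands in for a browser call such as
//   el => getComputedStyle(el).getPropertyValue('--spin')
// which is exactly what stops updating when the page is backgrounded.
function makeSampler(readProperty) {
  return function sample(el) {
    const value = parseFloat(readProperty(el));
    return Number.isNaN(value) ? 0 : value; // fall back if unset
  };
}

// In an XR session (hypothetical element and Three.js-style object):
//   const sample = makeSampler(
//     el => getComputedStyle(el).getPropertyValue('--spin'));
//   session.requestAnimationFrame(function onFrame(time, frame) {
//     cube.rotation.y = sample(cubeEl);
//     session.requestAnimationFrame(onFrame);
//   });
```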
21:38:17 bajones: CSS animations across browsers do not seem to pause timing when I switch tabs; that seems to be consistent, not retaining state. The CSS animation seems to jump forward to meet the timeline, so this doesn't seem to be a concern. 21:38:18 ack mkeblx 21:38:39 mkeblx: If we change this behavior by default, what is the expected result? 21:39:54 mkeblx: ... is there a downside to doing this? 21:41:03 adarose: A developer could be doing something improper, perhaps we can follow Three.js's lead here 21:42:03 adarose: Could a background page that's still running impact WebXR? 21:42:07 cabanier: Yes, it could 21:42:40 adarose: I'll try putting something together and put it forward to get input. 21:42:42 ack Nick-Niantic 21:43:04 Nick-Niantic: Are we going to make time to discuss DOM overlay later? 21:43:13 https://hackmd.io/@jgilbert/imm-web-unconf 21:43:20 adarose: This is a great unconference topic. 21:44:45 Jared has joined #immersive-web 21:44:48 present+ 21:46:02 ada: A at a last face to face, we discussed having a means to go from one page to another while staying in WebXR; there have been some ideas, like waiting for an event on the new page, and Brandon had proposed a more complex solution. I think it would be good to talk about it again. 21:46:08 q? 21:47:38 bajones: At TPAC and the last face to face we had feedback. The proposal that I made was me going "pie in the sky": let's solve navigation and a11y at the same time. I like the core idea but maybe not solving all the things at the same time. 21:48:20 bajones: in the proposal that was made last time, capturing the session should be the means. Rick's proposal had a session granted event. 21:48:56 bajones: The fact that this is ambiguous is problematic. 21:49:35 cabanier: Session granted is a trigger. In the session granted event handler you request a session and continue on. 21:49:45 bajones: I thought session granted provided you with a session. 21:49:51 cabanier: No. 21:50:02 bajones: Do you have many people that are trying to use that?
21:50:11 cabanier: I don't have data on that 21:50:42 ada: I encounter this a lot when I am debugging A-Frame pages. A-Frame supports it and it is turned on by default. Not that I know many multi-page A-Frame sites. 21:50:52 cabanier: That is the first time I have seen it used. 21:51:04 Ada: I am tempted to add it to a project, rather than having to refresh a lot. 21:52:01 bajones: I was just reviewing the readme on navigation so far. We need to update it. It indicates you get a session from the session granted event. I am going to assume that when you fire off the session granted event and you don't respond.. 21:52:16 cabanier: You have an amount of time to respond. 21:53:25 bajones: I have a request for that. The session granted readme is out of date. If you could list out the mechanics to have it reflect what the Meta experiment is doing, it would be helpful. Given that, I am going to leave the catching side alone then if it has been working well for you. And some of my assumptions were wrong. We should go back to have you initiate the session. 21:53:50 cabanier: You can navigate however you want to. If you don't resume the session in a number of seconds you get kicked out. 21:53:59 bajones: that is currently limited to same session? 21:54:03 cabanier: same origin 21:55:16 bajones: None of this is new from the slide decks previously. The thing this kind of misses out on, that we would want in a complete nav solution: it doesn't provide a way to give the user context about where they are going when they navigate, and it doesn't give the user a way to know that an action is a navigating action. So I think that, I don't think we should take that ability away. Web pages can do this, a mouse move can be a val[CUT] 21:56:06 We also have a href that we can use.. we should have a specific gesture that says, okay, I want to do a navigation gesture now, and then go to the last place they indicated as the navigation target.
21:56:41 Those two things allow you to determine where the navigation target is, and the nav is predictable and owned by the platform. I can then hold down the menu button for two seconds and navigate away. 21:56:57 You have more chances for the UA to surface something, and context about where you are going (bank, etc) 21:57:05 Dat_Chu has joined #immersive-web 21:57:43 Those are the core elements that I feel somehow should be present. I don't know or care about the API that should provide them. There are two slide decks, from TPAC and elsewhere: ones in which we mark up everything, and others where we mark up just the navigation elements. 21:57:48 q? 21:57:59 Starting with where you are going to navigate seems like a good place to start. 21:58:05 Brandel_ has joined #immersive-web 21:58:44 q+ 21:58:56 cabanier: The issue is that every time you navigate everything goes dark; there could be several seconds of dark. The website you navigate to may not render or be ready to render, especially the first time. If we have a solution, it needs to somehow allow preloading. And the page can say I am ready. 21:59:11 q+ 21:59:57 So you can have a seamless transition; I don't know what that would look like. I looked at something that Google proposed that was portals. It was a noninteractive page: it is being loaded and displayed, but doesn't know it. It could be built on and be shown in a non-interactive DOM layer. Maybe a meta DOM that allows you to navigate to that. 22:00:10 At that point session granted could fire and seamlessly take over. 22:00:21 That kind of solves the problem. You want the navigation to be seamless. 22:01:28 bajones: So, I generally agree. I have so many fanciful wonderful ideas on how we could bridge the two sites to make them nicer. There was a Twitter thread about endless elevators. I think those are ideas that we can add on top of the core of navigating. I would like us to not always stop at the point where we figured out the core, but to build on top more concretely. I like the idea of portals.
22:02:07 If we have something like a broadcast to the page, this is where you will go if you navigate now. This can be a signal that I, as a UA, could be proactive about at that point, but you don't always want to do that. If everything is an anchor, you can't preload all of them. 22:02:32 The gesture is also something in which you need to press a button for a second or two. There is nothing to stop you from thinking: they may abandon the gesture, but I am going to preload now. 22:03:15 If you do the nav this way you have more control to go in and out; otherwise you hit a black wall and the page just disappears, maybe you can show a spinner. If a user initiates a gesture it gives you more chance as the UA to smooth it over, and the browser can do things like show the skybox or whatever. 22:03:26 ack Brandel 22:04:01 brandel: There are a bunch of things you alluded to, an explicit path you are going to or a specific action for that specific thing. Makes me wonder whether there aren't trusted UI or trusted events. 22:04:06 Do you have that? 22:05:33 bajones: We have had bits of trusted UI; it's been iffy. We had a floating thing that was telling you, but it was annoying, so then we made it https-only. There was a thing where you could ask for permission for the camera in XR, and I know during the time we had the browser for Daydream we did have UI for that. It may have backgrounded the session. There are ways to do that. We are in a scenario where we take over your whole view. There is no gau[CUT] 22:05:53 the pixels are being rendered by the UI, if there is a prominent browser it becomes easier to spoof. 22:06:42 An environment that the browser can make that is recognizable, the browser can always render on top. Positioning is tricky; they can try and put stuff above and below. There are ways that you can be more assured the thing is the browser. As for a concrete guarantee, we don't really have that option. 22:06:50 brandel: do you have it, Rick?
22:06:56 cabanier: we don't 22:08:11 bajones: the reason we want the gesture is to know that it is explicit; the button on the Quest, when I am in a session, will bring me to a panel. I have no doubt it came from the OS because the page cannot override or intercept. That button will always take me to that page. If the other side is held for two seconds and then it switches away, and says you are about to navigate to evil.com, the only way I could get that is from the browser. 22:08:30 While anything rendered into the scene is difficult to trust, you can know for certain that this came from the browser.. 22:08:34 q? 22:09:02 ada: something that is new, but was very pie in the sky when we last talked about this, I don't know if it has landed landed: the fade transition effect. 22:09:22 https://developer.mozilla.org/en-US/docs/Web/API/PageTransitionEvent 22:09:26 In 2D. Once we know we will navigate from one page to another. The 2D web has been doing this. There will be a link in IRC. 22:09:49 This is a part of the page transition API that allows you to animate smoothly between pages and it does the animation. I think we could hook into these events. 22:10:13 q? 22:10:17 ack adarose 22:10:20 Instead of a 2D transition, a developer could know that they can do a nice transition for an interstitial state. I think it could be nice. 22:10:38 we can probably end that conversation there if there isn't much to add. Coffee? 22:10:54 After this are the unconference items. Then it's the end 22:10:57 https://hackmd.io/@jgilbert/imm-web-unconf 22:11:08 There is a doc, I will paste it in IRC. You can add stuff to it if you want.
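[scribe note: the navigation flow cabanier describes can be sketched as below; 'sessiongranted' comes from the immersive-web navigation proposal and is not a shipped standard, so the shape is an assumption, and the event target and XR object are injected for illustration (in a page they would be window and navigator.xr)]

```javascript
// Sketch of the flow described above: after a same-origin navigation,
// the destination page receives a 'sessiongranted' event and must
// promptly re-request the session or be kicked back to 2D.
// 'sessiongranted' is a proposal (immersive-web/navigation), not a
// shipped standard; target/xr/onSession are injected for illustration.
function installSessionGrantedHandler(target, xr, onSession) {
  target.addEventListener('sessiongranted', () => {
    // Per the discussion, the event does NOT hand you a session; it is
    // a trigger, and the UA only honors the request for a short window.
    xr.requestSession('immersive-vr').then(onSession);
  });
}

// In a page (A-Frame enables an equivalent handler by default):
//   installSessionGrantedHandler(window, navigator.xr, startScene);
```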
22:28:29 adarose has joined #immersive-web 22:29:42 present+ 22:30:56 etienne has joined #immersive-web 22:32:34 Marisha has joined #immersive-web 22:32:37 present+ 22:45:40 Yih has joined #immersive-web 22:46:16 rrsagent, generate minutes 22:46:18 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html Marisha 22:49:41 i/ada: A at a last face to face,/scribe+ Jared 22:49:45 etienne_ has joined #immersive-web 22:49:51 rrsagent, publish minutes 22:49:52 I have made the request to generate https://www.w3.org/2023/04/25-immersive-web-minutes.html atsushi 22:50:03 etienne__ has joined #immersive-web 22:50:44 Nick-Niantic: cabanier mentionned that DOM-layers will have a different document, what are the ramifications? 22:51:01 scribe: etienne__ 22:51:08 cabanier: each DOM layer is like a dialog 22:51:29 ... fullscreen only works for a single element, so it's a better fit 22:52:14 Nick-Niantic: so if we want to bring something from the original page to the 3D session, how can we keep things in sync with the dom-layer 22:52:38 ... so could the DOM layer be the whole document? 22:52:49 cabanier: people might want multiple DOM layers 22:53:19 ... you have to provide the URL (same origin, no foreign content) when requesting the DOM layer 22:53:32 postmessage? 22:53:34 Nick-Niantic: using a separate URL could be problematic 22:54:43 cabanier: you could create an empty dialog then populate it from the main page 22:55:02 brandon: but what about event handlers etc... if we move an element to a different document 22:55:33 Nick-Niantic: if something moves to the dialog then back, what will break? 22:55:47 brandel: the CSS cascade will change 22:56:02 cabanier: you could getComputedStyle() on everything before sending it to the dialog 22:56:30 Nick-Niantic: but that wouldn't work well after class changes etc...
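The getComputedStyle() workaround cabanier describes could look like the sketch below. The `snapshotStyles` helper is hypothetical; the injectable `getStyle` parameter exists purely so the pure logic is illustrable outside a browser (in a real page it defaults to `getComputedStyle`). As Nick-Niantic notes, the snapshot goes stale once classes change:

```javascript
// Sketch: freeze an element's computed style as inline style before
// moving it into the DOM layer's document, since the old document's
// CSS cascade stops applying there.
function snapshotStyles(el, props, getStyle = (e) => getComputedStyle(e)) {
  const computed = getStyle(el);
  for (const prop of props) {
    // Inline the resolved value so it survives the document move.
    el.style.setProperty(prop, computed.getPropertyValue(prop));
  }
  return el;
}
```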
22:56:50 ada: looking at the adopt node spec, it doesn't mention event listeners 22:57:11 cabanier: if they are on the same element they should just work 22:57:31 Nick-Niantic: so the recommendation is to load `about:blank` then programmatically populate everything 22:57:49 Marisha: how does the origin work for `about:blank` 22:58:36 ada: we might need to explicitly state that it'll work for same-origin and data URLs 22:59:09 Nick-Niantic: the existing "dom tablet" concept should be implementable with enough framework work 22:59:45 cabanier: still afraid that it'll be too much of a pain and people will roll their own 23:00:14 Nick-Niantic: what about clicks / events? 23:00:19 cabanier: event handlers should fire 23:01:33 cabanier: the author will have to intercept the select event, and relay where on the DOM layer's quad it should be dispatched 23:01:48 ada: if it's coming from the xr select event it should be trusted 23:02:07 brandel: but is the target unambiguous? 23:02:38 cabanier: that's why we have the same origin and security restrictions, you would be lying to yourself :D 23:03:26 cabanier: should the fullscreen API work there? 23:03:34 ada: it would be handy for videos 23:03:40 brandon: I have a demo! 23:04:21 vicki has joined #immersive-web 23:04:23 ... demoing moving a button between windows 23:04:47 ... the script continues to work, event bindings work, but the style doesn't move between windows (obviously) 23:07:57 q? 23:08:40 adarose: we should make a repo and have a place for people to file issues 23:08:55 cabanier: there might be a PR to layers 23:09:21 Marisha: why do we need to pass a URL instead of passing a document fragment directly? 23:09:32 cabanier: we need it to get a new document 23:09:46 brandel: documents have a path, need dimensions etc...
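The `about:blank`-then-populate flow discussed above can be sketched as follows. DOM layers are a proposal, not a shipped API, so `moveIntoLayer` and the `layerDoc` handle are hypothetical; the one standard piece is `document.adoptNode()`, which moves a node between documents — listeners live on the node and survive the move (as brandon's demo showed), while the old document's stylesheet cascade does not:

```javascript
// Hypothetical sketch: adopt a node from the main page into the DOM
// layer's (about:blank) document and attach it there.
function moveIntoLayer(layerDoc, node) {
  const adopted = layerDoc.adoptNode(node); // detaches from the old document
  layerDoc.body.appendChild(adopted);
  return adopted;
}
```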
23:10:24 i/Nick-Niantic: cabanier mentionned/scribe+ etienne__/ 23:10:34 cabanier: yes you need to pass dimensions too 23:11:10 https://www.w3.org/TR/webxr-dom-overlays-1/#xr-overlay 23:11:26 cabanier: but dom _overlays_ work similarly to fullscreen so you don't need to pass dimensions 23:11:39 ... and you can only have one 23:12:14 adarose: if you only want one element, could you get the same treatment as overlays 23:12:17 q+ 23:12:20 ... oh but it wouldn't work for events 23:12:22 ack bajones 23:12:28 q+ 23:13:20 bajones: 1. how do we determine the rasterization size? which is distinct from the size of the element in space, and needs a limit 23:14:02 ... 2. interaction wise how do I know what I'm pointing at? which ties into the keyboard integration from yesterday 23:14:06 q- 23:14:56 ... the keyboard blurs the scene, but that maybe wouldn't work for dom panels. but maybe it's a good place to start 23:16:32 ... the user needs to be confident about where they're pointing 23:17:22 adarose: but again, with the same origin limitations as a developer you can't do anything you couldn't do before 23:17:47 ... no need for a user input hack 23:18:22 bajones: correct, but still concerned. do we need an explicit mode switch depending on what the user is interacting with 23:18:51 ... what would actually trigger the pointer events? 23:19:13 cabanier: needs to be specified, but would probably be an API on the layer 23:19:48 Nick-Niantic: could the browser do everything here? 23:20:13 cabanier: how? the layer could be occluded (and we can't read back the depth buffer) 23:20:42 q? 23:20:51 cabanier: when casting a ray, only the author knows if you hit the layer 23:22:19 q+ 23:22:20 adarose: accessibility wise, we have access to the accessibility tree here, could we do anything based on gaze? move a virtual cursor? 23:22:41 ack mkeblx 23:22:46 cabanier: things like hover events are going to be complex 23:23:08 mkeblx: we need more than click interactions (scroll etc...)
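The relay cabanier describes (the author intercepts the select event and tells the layer where on its quad it landed) needs a UV-to-pixel mapping along these lines. Everything here is an assumption for illustration: the function name, and the convention that the hit is a UV pair in [0,1] with origin at the quad's top-left, mapped onto the layer's rasterization size:

```javascript
// Hypothetical helper: convert a ray/quad hit (UV in [0,1], origin at
// the quad's top-left) into integer pixel coordinates on a layer of
// the given rasterization size, ready to synthesize a pointer event.
function uvToLayerPixels(u, v, layerWidth, layerHeight) {
  // Clamp so grazing hits at the quad's edge still land inside the layer.
  const clamp01 = (x) => Math.min(1, Math.max(0, x));
  return {
    x: Math.min(layerWidth - 1, Math.floor(clamp01(u) * layerWidth)),
    y: Math.min(layerHeight - 1, Math.floor(clamp01(v) * layerHeight)),
  };
}
```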
23:23:39 adarose: maybe the events should be treated as non-artificial events 23:23:59 ... so more things would work (sliders etc...) 23:24:23 ... and then we could just do pointer events 23:24:55 brandel: users of this API would tailor the interactions for this setup 23:25:20 Nick-Niantic: but we would like to make the effort minimal for developers, and have things working on phones etc.... 23:25:54 bajones: I would expect an even split between people building XR UIs with the DOM, and people bringing existing UIs in 23:27:10 cabanier: the main document might have the same restrictions. so you couldn't have 3rd party scripts on the main document handling events from the DOM layer 23:27:46 Nick-Niantic: the use case we care the most about is having the outer page controlling the DOM layer's content 23:28:00 ... so anything you can do on your own page should work here 23:30:54 ... for scroll alone, it seems that it would be much easier if a controller could be assigned to the DOM layer, letting the browser handle event dispatch (not the author) 23:31:40 bajones: the more I think about it, the more I think we should have a mode switch where the OS takes over (and handles input events) 23:32:36 ... on daydream the platform conventions for scrolling were different than on quest 23:32:57 ... so the author couldn't reproduce it while respecting the OS conventions 23:32:58 q? 23:33:20 adarose: this feels like the onbeforeselect event 23:33:58 ... both get the event but the xr content can ignore them 23:34:48 bajones: on the quest, do you show a cursor or just a ray? 23:34:52 Marisha: yes we show a cursor 23:35:56 ... for a 2D browser 23:36:59 bajones: the idea would be to have the OS cursor be the "source of truth" for where the input event will be dispatched 23:37:14 ... the web content stays in control of the ray 23:38:05 Marisha: we might have some implementation issues, but theoretically it could work 23:38:30 brandel: where do we put the limit?
would the document also get devicemotion? 23:38:39 ... and resize 23:38:50 cabanier: currently you can't resize a layer 23:39:30 adarose: one thing developers might use this for is a custom DOM-layer based keyboard (to avoid the blur?) 23:40:01 ... and for the extra control 23:40:19 brian: some people do that on mobile already 23:40:51 bajones: native apps too 23:41:18 q? 23:41:51 adarose: let's wrap up, it's a very cool thing 23:42:17 cabanier: need to confirm if the CSP needs to be set on the parent document or not 23:42:34 adarose: which would break people loading three from a CDN etc... 23:44:05 brandel: (and others) the CSP is set in the HTTP header, not the page, so github pages would break too 23:44:25 cabanier: points out that a meta tag is also available 23:44:43 brandel: but then who wins? 23:45:57 adarose: we definitely don't want things like a bank's iframes being loaded 23:46:03 ... but scripts should be fine? 23:46:34 cabanier: the spec says that if you use the meta tag, you could load scripts _before_ the meta tag and it'd work 23:47:08 adarose: it would be nice if only the subpage had the restrictions 23:59:59 s|new spec: https://immersive-web.github.io/real-world-meshing/mesh-detection.html|new repo (real-world-meshing) -> https://immersive-web.github.io/real-world-meshing/| 23:59:59 s/TOPIC: Proposals - Some sort of local shared space// 23:59:59 s/Thanks atsushi// 23:59:59 s|agendum 4 -- webxr#1264 Proposal: lower friction to enter WebXR sessions -- taken up [from atsushi]|| 23:59:59 s|agendum 5 -- webxr#1317 Some WebXR Implementations pause the 2D browser page in XR, make this optional? -- taken up [from atsushi]|agendum 4 -- webxr#1264 Proposal: lower friction to enter WebXR sessions -- taken up [from atsushi] 23:59:59 i|https://github.com/immersive-web/webxr/issues/1317|topic: webxr#1317 Some WebXR Implementations pause the 2D browser page in XR, make this optional?| 23:59:59 i/adarose: Been working on attaching/scribe+ Yih/ 23:59:59 s/bajones: ... 
/... /g 23:59:59 s/adarose: ... /... /g 23:59:59 s/mkeblx: ... /... /g 23:59:59 s/cabanier: ... /... /g 23:59:59 s|https://hackmd.io/@jgilbert/imm-web-unconf|topic: unconference| 23:59:59 s|https://hackmd.io/@jgilbert/imm-web-unconf|topic: navigation| 23:59:59 s/We also have a href that we can use.. we should /... We also have a href that we can use.. we should / 23:59:59 s/Those two things allow you to determine where the navigation/... Those two things allow you to determine where the navigation/ 23:59:59 s/You have more chances for the UA to mention something and/... You have more chances for the UA to mention something and/ 23:59:59 s/Those are the core elements that I feel somehow should be present. I d/... Those are the core elements that I feel somehow should be present. I d/ 23:59:59 s/Starting with where you are going to navigate and seems/... Starting with where you are going to navigate and seems/ 23:59:59 s/So you can have a seamless transition, I don't know how that w/... So you can have a seamless transition, I don't know how that w/ 23:59:59 s/At that point session granted could fire and seam/... At that point session granted could fire and seam/ 23:59:59 s/That kind of solves the problem. You want the na/... That kind of solves the problem. You want the na/ 23:59:59 s/If we have something like a broadcast to the page, this is where/... If we have something like a broadcast to the page, this is where/ 23:59:59 s/The gesture is also something in which you need to press a/... The gesture is also something in which you need to press a/ 23:59:59 s/If you do the nav this way you have more control to go in and o/... If you do the nav this way you have more control to go in and o/ 23:59:59 s/Do you have that?/... Do you have that?/ 23:59:59 s/the pixels are being rendered by the UI, if there is a promin/... the pixels are being rendered by the UI, if there is a promin/ 23:59:59 s/An environment that the browser can make that is recognizable, th/... 
An environment that the browser can make that is recognizable, th/ 23:59:59 s/While it is difficult to render it into the scene it is difficult to trust, but yo/... While it is difficult to render it into the scene it is difficult to trust, but yo/ 23:59:59 s/In 2d. Once we know the navigate from o/... In 2d. Once we know the navigate from o/ 23:59:59 s/This is a part of the page transition api that allows you to anim/... This is a part of the page transition api that allows you to anim/ 23:59:59 s/Instead of a 2d transition we could have a developer know that th/... Instead of a 2d transition we could have a developer know that th/ 23:59:59 s/we can probably end that conversation there if there isn/... we can probably end that conversation there if there isn/ 23:59:59 s/After this is unconference items. Then it's the e/... After this is unconference items. Then it's the e/ 23:59:59 s/There is a doc, I will paste it in IRC. You can add stuff /ada: There is a doc, I will paste it in IRC. You can add stuff / 23:59:59 i|There is a doc, I will paste it in IRC. You can add stuff |unconference topic doc -> https://hackmd.io/@jgilbert/imm-web-unconf| 23:59:59 i|cabanier mentionned that DOM-layers will have a differ|subtopic: DOM layers|
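For reference on the CSP discussion near the end of the session: the meta-tag delivery cabanier mentions looks like the fragment below. The policy value is illustrative only, and whether the parent document or the DOM layer's document must carry it is exactly the open question raised above. Per the CSP spec, scripts loaded before the meta tag in the document are not covered by it:

```html
<!-- Hypothetical policy, delivered via <meta> for pages that cannot
     set HTTP headers (e.g. static hosting such as GitHub Pages). -->
<meta http-equiv="Content-Security-Policy"
      content="script-src 'self' https://cdn.example.com">
```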