12:35:32 RRSAgent has joined #immersive-web
12:35:36 logging to https://www.w3.org/2023/09/11-immersive-web-irc
12:35:43 zakim, this is Immersive Web
12:35:44 got it, cwilso
12:35:48 present+ myles
12:35:54 present+
12:35:59 present+
12:35:59 present+
12:35:59 scribenick: bkardell_
12:36:04 present+
12:36:09 present+ Piotr_Bialecki
12:36:09 scribenick: bkardell_
12:36:12 rrsagent, make log public
12:36:13 present+
12:36:14 present+
12:36:45 myles___ has joined #immersive-web
12:38:10 takashi_m has joined #immersive-web
12:38:10 present+
12:38:10 present+
12:39:15 Maud has joined #immersive-web
12:39:15 sangwhan has joined #immersive-web
12:41:04 test
12:41:04 s/test//
12:41:04 rrsagent, publish minutes
12:41:35 I have made the request to generate https://www.w3.org/2023/09/11-immersive-web-minutes.html atsushi
12:41:44 ada: Real World Meshing and Real World Geometry, maybe we can move from the CG to the WG
12:41:44 alcooper_ has joined #immersive-web
12:41:44 present+ Mark_Foltz
12:41:44 Jesse_Jurman has joined #immersive-web
12:41:46 ada: WebXR WebGPU Binding? We'll have to check into that one
12:42:37 present+
12:42:43 agenda: https://github.com/immersive-web/administrivia/tree/main/TPAC-2023#agenda
12:42:49 bradleyn has joined #immersive-web
12:42:54 ada: Some of the other CG topics we have - I'm not sure, I'll do a quick list and you can q+ ... capture, computer vision, detached elements, front-facing camera, geo-alignment, marker tracking
12:44:11 ada: the HoloLens... I'm not sure if they are supporting WebXR anymore - that was one blocker in the past, what they supported. It broke and we're not sure what the plans are with it.
12:44:11 q?
12:45:01 ada: for now let's move that to the CG; it is still being active. Navigation, occlusion, raw camera access... is that out behind a flag in Chrome?
12:45:14 bialpio: it is enabled by default
12:45:23 ada: do you think we should move it into the WG?
12:45:43 bialpio: I have no problem with it if we meet the criteria
12:46:35 bialpio: it is pretty unique in that it is mainly an OS issue for mobile, and pretty much with this flavor of the API that means some kind of Chromium currently
12:47:24 ada: let's keep it in the CG for now
12:48:05 ada: In the Working Group, we can look at some stuff and move it to CR, or for things that are in CR, what can move to REC
12:48:09 Leonard has joined #immersive-web
12:48:15 present+
12:48:22 ada: Anchors is on Oculus Quest and Android - that would be good to move to CR
12:48:31 q+
12:48:41 ada: Hit test is on both of those and Wolvic, we could move them
12:49:01 ada: is it in Wolvic?
12:49:10 I don't know - is it, jfernandez?
12:49:18 jfernandez: no, I don't think so
12:50:22 ada: WebXR - there is still a little bit of work; it could be the first spec fully finished by this WG...
12:51:15 ada: the WebXR AR module and Gamepads module seem like they are very stable, maybe we can move those to REC
12:51:35 ada: it would be good to move WebXR hand input
12:51:46 i/ada: Real World Meshing and Real World/topic: Administrivia: Review big wins since last TPAC, Accessibility task force, and moving standards forward [WebXR 1333]/
12:51:51 ack bialpio
12:52:22 bialpio: We did have a couple of additions to specs that I know, for example, Chrome doesn't implement. Persistence I think, hit testing with semantic labels
12:52:32 cabanier: it's optional
12:53:02 bialpio: There might be some parts of the spec that have been stable for a while but they aren't in all of the browsers yet - do we split it into modules or what?
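For reference, a minimal sketch of how a page opts into the anchors and hit-test capabilities discussed above; 'anchors' and 'hit-test' are the feature descriptors defined by the WebXR Anchors and Hit Test modules, and availability varies by browser and device:

    // Request an immersive AR session, treating both modules as optional so the
    // session can still start on devices that do not support them.
    async function startSession() {
      if (!navigator.xr || !(await navigator.xr.isSessionSupported('immersive-ar'))) {
        return null;
      }
      const session = await navigator.xr.requestSession('immersive-ar', {
        optionalFeatures: ['anchors', 'hit-test'],
      });
      return session;
    }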
12:53:48 q+
12:53:53 ada: Our next charter isn't until next year - I don't mind if we add it to that or... 1) split it into levels, 2) try to get the implementations to all of them by July, or 3) put it in as more in our charter
12:54:03 bialpio: as long as it is not a blocker
12:54:25 ack cabanier
12:54:27 ada: It will go out to more people to ask for objections and so on... This group just needs to think it is ready for that
12:54:29 ack cabanier
12:54:41 cabanier: at some point we talked about becoming living specs?
12:54:52 ada: does someone know the process for that?
12:54:56 [slide 16]
12:55:17 Dan: our current charter ends at the end of October, we're in the process of rechartering - please comment on the proposed draft charter
12:55:27 dontcallmeDOM_: you are in the wrong channel!!
12:55:30 dontcallmeDOM_: I think you are in the wrong channel
12:55:31 :)
12:56:00 ada: I will take an action item to look into how we do that
12:56:07 yonet: I will look into that
12:56:22 ada: it's probably not applicable to all of the specs, but certainly some of them.
12:57:29 ada: charles had an a11y task force issue - I thought it would be good to have a repo with all of them... Right now we just have a label. It might get more good people involved if it wasn't just a thing that happens with labels in the background
12:57:44 cabanier: Did you mention the depth sensing module?
12:57:58 ada: it's in the WG already, but it is Chrome only still so not a great one to move to CR
12:58:46 ada: Anyone can agenda any items they want
12:59:02 ada: 45 minutes on the tag... I think we have an issue
12:59:11 agenda: https://github.com/immersive-web/model-element/issues/69
12:59:34 topic: https://github.com/immersive-web/model-element/issues/69
12:59:40 s|agenda: https://github.com/immersive-web/model-element/issues/69||
12:59:48 ada: introductions by dino
13:00:18 Can the speaker be closer to a mic?
13:00:55 TY, much better
13:01:09 Now no audio
13:01:29 Room cameras appear to have frozen as well
13:01:40 Yes
13:02:24 dino: (presents slides)... I'm Dean, there are a few other folks...
13:03:12 dino: we announced this product recently, but we've been working on it for years... WebXR is supported at launch, it's not quite ready... Our model for controllers is "you don't have any"... so we have some things
13:04:11 dino: we don't want WebXR to be the only way you work with the Web. It needs to follow the normal web principles: safe by default, interoperable, privacy
13:04:24 dino: we want to say "it's not a different web" - it lets us extend the existing features, and add some new features
13:04:40 dino: it's really important to get high quality, realistic rendering...
13:05:02 dino: that's why we suggested some things need to be more in control of the OS or the browser itself
13:05:27 dino: we think you shouldn't have to be a coder or a 3D artist - people should be able to make minor enhancements on existing pages
13:06:05 dino: specifically, gaze and eye tracking - we cannot provide every web page this information. That would be horrible
13:07:49 dino: (shows some videos of visionOS) This is a flat page on visionOS, and you can see as we move around it is just like a floating window in space you can walk around it...
... we can show that the illusion of 3D on the page is broken, and lots of things just want a model - you could do this with a WebGL canvas, but that requires environment lighting and head tracking, etc.
13:08:53 dino: you can see the <model> element - and it looks correctly 3D, and you can see if we walk around it is kind of an illusion and you're looking into a portal... This is why we gave some feedback about why we didn't _just_ want to do WebGL
13:10:01 dino: there are just lots of limitations too -- retained mode vs immediate mode... retained mode has been tried a few times before, and all of them have failed -- so why should it work now. We think there are just a lot more APIs out there that all sort of share a common subset of what they agree upon as a sort of model
13:11:14 dino: the next thing though is consistent interoperable rendering... real time 3D renderers are just different than many other things. But recently, in the last few years, the industry beyond W3C has been working on this and we think it makes it possible
13:11:37 https://materialx.org/
13:11:40 dino: MaterialX is something from ILM that is declarative
13:12:19 dino: it describes the glTF rendering model - you see you have a Fresnel effect - it's a very very powerful shading system built to be used in feature films as well as real time rendering
13:12:42 dino: it's also sort of procedural - you can see introducing some noise...
13:12:55 [slide 4]
13:13:26 dontcallmeDOM_ has left #immersive-web
13:13:38 vicki has joined #immersive-web
13:13:39 dino: next: What about file formats. Strangely, when we first made this proposal this is what everyone seemed to care about - I suggested we should recommend them the way the HTML spec does.
13:14:12 dino: Apple and a few other companies recently started a standards body around USD...
13:14:19 Alliance for Open USD: https://aousd.org/
13:14:43 q?
13:14:53 I do, but will save them for later
13:15:33 q+
13:16:03 bradley: You say we can't give you head tracking or eye tracking - could we put noise into the signal?
13:16:14 dino: we tried this, the answer is "no".
13:16:56 dino: you need both to be very accurate and we run into vestibular issues. Things that are 'slightly off' are strangely way worse than things that are way off
13:17:09 ack cabanier
13:17:26 cabanier: you showed MaterialX -- are you proposing that as part of the model standard or...
13:18:10 dino: to be clear, I am just trying to show there are other standards bodies working on real time physically based models -- this is something we could adopt or build on.
13:18:12 q?
13:18:19 q+
13:18:30 ack cwilso
13:18:52 cwilso: I have to disagree that the format doesn't matter. You would never try to build a browser today and not support JPEG
13:19:13 q+
13:19:21 q+
13:19:40 cwilso: and that is a really simplistic case. I'm not sure that this is going to happen.
... If we are aiming for interoperability, great - but I'm not sure that happens in a world where we don't support the same formats or there isn't at least a common format
13:20:12 dino: It definitely does matter, the HTML spec doesn't _say_ that it requires JPEG, but things work themselves out
13:20:24 cwilso: because of licensing issues at the time
13:20:34 dino: there have been others since, PNG, for example
13:20:52 dino: there are divides in which browsers support which formats
13:20:57 cwilso: I don't like that
13:21:02 myles___: I don't understand
13:21:26 ack Brandel
13:21:27 ack Brandel
13:21:41 cwilso: if we wind up in a world where Apple supports X and Google supports Y, but neither supports something common, I think we have failed
13:22:18 q?
13:22:22 ack cabanier
13:22:24 Brandel: It is really just about whether it is the place of the specification to say that -- not that it is not something to discuss. It's about whether it should go into the spec at this time
13:23:18 cabanier: the reason we all objected to USD was because there was no standard for it, but it sounds like this is being worked on and it can be developed in such a way as to run nicely in web browsers... If it can do all of these things, I don't see why we cannot support that
13:23:30 cabanier: maybe glTF comes later?
13:23:47 IW has joined #immersive-web
13:23:53 q+
13:23:58 IW has left #immersive-web
13:23:59 dino: I think you should consider USD a superset of glTF
13:24:16 bradleyn has joined #immersive-web
13:24:49 ack bialpio
13:24:50 cabanier: personally I don't care that there are so many, it's a bit of a pain. It might be a problem later, but we can also tackle them later... If we wanted to break it open... we can do it later. Maybe by then the format will be better fleshed out
13:26:13 q+
13:26:22 bialpio: maybe a question related to Open USD... dino, you mentioned that it was a "subset of USD" - does that mean that the work that the alliance will be doing is spec'ing and starting from scratch or... how would other implementers interact with that process -- is Meta or Google a part of that standards body? How will we be able to provide feedback? As a person who hasn't been in standardization so long, I'm not sure. I guess I will
13:26:24 have to implement both
13:26:32 q+
13:28:05 dino7: the alliance just started, it has only a few members. I believe that once the charter is in place it would be expanded. I'm not sure. The goal of the alliance is to make a more clear specification for what USD is. It is effectively explaining all of the things a web specification would. USD did not describe them - so doing that makes it more interoperable. Does it make sense?
13:28:09 vicki has joined #immersive-web
13:28:11 ack Leonard
13:29:28 Leonard: Alliance for Open USD just started so I expect it will take several years to get to a spec. From what I understand it is not starting from scratch but taking existing work which is done mainly by example. There is active work between kronos and ? ... It's not anticipated that glTF will be able to make a lossless loop
13:30:46 s/kronos/AOUSD
13:31:23 Leonard: there are features in each format that aren't present in the others
13:31:34 q?
13:31:38 ack cwilso
13:32:15 Items of present: AOUSD is just starting. It will take at least a couple of years to get a written specification
13:32:52 cwilso: I want to be transparent about why this is important. I worked at MS when we did -- we convinced people this would be interoperable. It never was. Pragmatically we never got that to work.
... <object> has a great fallback mechanism, but basically no one ever used it because it was so powerful, and what do you fall back to
13:33:09 ... Active work between USD (AOUSD) and Khronos (glTF) [mostly in Metaverse Standards Forum] to handle the interchange and differences
13:33:13 cwilso: where you need to fit this in is in HTML, so the bar there is in WHATWG
13:33:16 ack dino
13:33:31 dino7: So what do you think we should do? What is the alternative?
13:34:11 ... It does not seem possible to make a lossless loop between USD and glTF. There are features of one that are not present in the other.
13:34:11 dino has joined #immersive-web
13:34:27 cwilso: start with a format that has wider adoption, or stay on the track that you're on and focus on getting wider adoption. I'm not trying to paint this the wrong way -- but is there even another parser structure for USD that has been implemented more than once? How much is specified? How much are you going to chop off for 'web USD'?
13:34:33 ... glTF is actively working with OpenPBR (https://www.aswf.io/blog/academy-software-foundation-announces-openpbr-a-new-subproject-of-materialx/)
13:34:57 ... to use/incorporate their standard.
13:35:10 cwilso: You're not going to use the same renderers for a web page and a cinema movie, are you? This is tough because there are lots of capabilities of products...
13:35:28 ... OpenPBR is a sub-project of MaterialX and Academy Software Foundation (ASWF).
13:35:38 dino: I'm glad you said that - because it sounds like your main concern is about interop...
13:35:51 cwilso: Yeah, it's been a long and painful process for videos
13:36:06 q+
13:36:11 q+
13:36:38 dino: I should have had a slide that it's going to be hard and it's going to take a while. Part of the reason I used MaterialX is that it is one of the first ones where people are starting to agree like this...
13:37:13 q?
13:37:14 q+
13:37:23 ack Leonard
13:37:30 dino: If I sent a 16K video to a feature phone it's not going to work - it just happens to be that with 3D we hit that more quickly
13:38:35 Leonard: I had a question on the video you showed of the flat screen, but the model had 3D characteristics... for at least the near future, actually flat screens will be the predominant mode of consuming web pages -- does this benefit them? or
13:39:49 ack ada
13:39:53 dino: that is what we were trying to do - it has to be visible on a flat screen because that is almost all screens, at least currently. It needs to be designed for the whole range, just like anything for the web should be. You might not get the same beautiful rendering... the same way you do with stereoscopic images, maybe
13:40:13 ada: your comment about the hard work being done in the interop stage - would that be here or AOUSD?
13:41:11 dino: I think cwilso's comments about some of this belonging in HTML apply too. I guess that's the question I have for everyone, where should it happen. This seems like the most appropriate place in terms of interest
13:41:22 q?
13:41:25 ack alcooper
13:41:27 ada: implementer-type interested people are probably in this room, yeah
13:41:57 q+
13:42:13 Note: Credits were missing from the 'Damaged Helmet' model. Details listed below.
13:42:19 Taken from https://github.com/KhronosGroup/glTF-Sample-Assets/blob/main/Models/DamagedHelmet/README.md
13:42:28 q?
13:42:28 Brandel: I think there are things common to both glTF and USD that would be criticisms of both.
13:42:28 ack cabanier
13:42:32 License & credits:
13:42:32 © 2018, ctxwing.
CC BY 4.0 International - ctxwing for Rebuild and conversion to glTF
© 2016, theblueturtle_.
CC BY-NC 4.0 International - theblueturtle_ for Earlier version of model
13:42:58 q+
13:43:04 s/Brandel/alcooper
13:43:08 q+
13:43:12 dino has joined #immersive-web
13:43:26 ack ada
13:43:37 cabanier: I am concerned about the complexity - the place to worry about that, though, is probably in the formats. We tried to be clear about how complex it could be
13:44:10 q?
13:44:21 ada: do you think the formats themselves should include information about how to handle the lower level of detail use cases?
13:44:44 ack Leonard
13:45:26 Leonard: Is this similar to the idea we talked about several years ago when we talked about creating a favicon with lower level of detail
13:46:30 cabanier: It wasn't a standard, but it was kind of similar
14:04:12 break over, we are starting again
14:04:40 etienne has joined #immersive-web
14:04:58 zakim, choose a vitim
14:04:58 I don't understand 'choose a vitim', ada
14:05:26 present+
14:05:41 ada: We can crack on with other topics from the discussion.
14:05:56 vicki has joined #immersive-web
14:06:30 alex cooper: I was reading over your topics, and proposed topics, and can we discuss why we're building this stuff? It might help inform discussions later
14:06:30 ada: I'm open to doing that
14:06:30 s/alex cooper/alcooper
14:06:35 cabanier: Is this about specific topics?
14:06:39 ada: who raised this issue?
14:07:20 Maud has joined #immersive-web
14:07:41
14:08:00 rrsagent, publish minutes
14:08:02 I have made the request to generate https://www.w3.org/2023/09/11-immersive-web-minutes.html atsushi
14:08:08 https://github.com/immersive-web/model-element/issues/70
14:08:14 yonet has joined #immersive-web
14:08:24 Brandel has joined #immersive-web
14:08:26 s/https:/topic: https:/
14:08:32 present+
14:08:39 Leonard: There are a lot of discussions about <model> since it was proposed 2 years ago. People have talked about why it needs to be done. I have never seen written descriptions about what we're trying to do that differs from existing things.
14:09:20 q+
14:09:43 Leonard: In the video, that was the best I've seen about why <model> rather than model-viewer. Also, privacy protections. It would be good to see this written down in the issues. I've spent a long time interacting with the issues. Why I wrote this issue is to understand why this is being proposed and its history, why existing systems are not being used, and why other systems are not sufficient
14:09:46 i/ada: We can crack on/scribe+ myles___
14:10:05 Leonard: Let's get these things discussed instead of complaining about particular cases
14:10:16 rrsagent, publish minutes
14:10:17 I have made the request to generate https://www.w3.org/2023/09/11-immersive-web-minutes.html atsushi
14:10:24 ada: Should we discuss this now? Or is this ongoing work?
14:10:47 q+
14:11:05 Leonard: Without thought, it would be difficult to produce what I'm looking for during the next hour. This is important to do for further discussion.
14:11:17 Leonard: If a number of people can go out and address this and provide specifics as to why something does or should not work, the rest of us can address use cases
14:11:29 ack dino
14:11:46 dino: I think Apple can take this action. Now that we can publicly talk about the justification. We'll handle it.
14:11:59 dino: I'm not joking. We will do it.
14:12:01 q+
14:12:05 ack cabanier
14:12:07 Leonard: That's great, thank you
14:12:20 q+
14:12:26 cabanier: Dean's presentation covered a lot of what Leonard is asking for.
... Updating the explainer to include the info will be good
14:12:33 ada: Can you share the presentation?
14:12:38 dino: I can share a modified version of it
14:12:49 ack alcooper
14:13:11 dino: Keynote files are OK, right? Wait, nevermind, let's not argue about formats
14:13:11 alcooper: I'd like to see alternatives considered fleshed out more.
14:13:11 dino: Sure thing.
14:13:11 ack cwilso
14:14:24 cwilso: Off the cuff, Leonard, the original explainer for <model> probably explains that question better than anything else that's pointed to. It does talk about that a little bit. To be clear, I don't see model-viewer as a competitor to <model>. <model> is trying to satisfy use cases that are more integral to the language. model-viewer is "can we drop a model into a page and make it interoperable in many different implementations, at
14:14:24 around ~2015". There are some lessons we collected from that.
14:15:11 q?
14:15:11 dino: I agree, and I felt bad using the model-viewer page to show that. I only did that because there were so many people saying "we shouldn't do <model> and do model-viewer instead", but we can learn from model-element. <model> doesn't replace model-viewer.
14:15:18 q+
14:15:39 dino: The explainer did go into that. There's a random thread by weinig on Twitter (X?) that explained the rationale behind <model>. He was getting a lot of feedback.
14:15:48 ada: Please preserve that. Twitter might not be around forever
14:15:51 q?
14:15:53 dino: I'll try to figure it out
14:15:53 ack alcooper
14:16:37 alcooper: Something that was mentioned is that he's had a hard time trying to keep that API surface down. One concern for <model> is to not have every browser re-implement a game engine. It's something to keep in miind.
14:16:47 q?
14:16:55 https://twitter.com/samweinig/status/1445464463067398154
14:16:58 Thanks for addressing it
14:17:02 ^^ that was Sam's twitter thread
14:17:05 s/miind/mind/
14:17:25 but we'll make an update to the explainer to include more justification
14:17:27 Topic: Specifying an image-based light
14:18:30 gombos has joined #immersive-web
14:18:34 Brandel: In our exploration and experimentation with <model>, it's become clear that even in environments where they've provided it, the model as it currently exists is within the context of the page. There's a strong indication that the lighting is the lighting from the page rather than the world. It's important to be able to control lighting. Marketing organizations in particular want to control this.
14:19:09 vicki has joined #immersive-web
14:19:13 Brandel: In that context it would be important for us to be able to specify an environment map or image-based light. It seems pretty important that that format be an HDR format. To my knowledge, nobody has HDR at this point. There was a breakout session about HDR. I just wanted to raise it in here at this time.
14:19:18 q?
14:19:31 q+
14:19:52 ack Leonard
14:20:08 Leonard: I agree with that. Is this the right working group? Or is that something that should be solely done with a 3D context?
14:21:22 bajones has joined #Immersive-Web
14:21:49 yonet_ has joined #immersive-web
14:21:53 q?
14:22:11 q+
14:22:24 Agree with need for HDR. Where is the best forum to handle it. Here if it is 3D only; other WG for general browser support
14:22:31 Brandel: I don't want to exclude raising it elsewhere. Are there discussions that we need to have? I thought we could pre-populate that with views about the case for it, and some of the attributes/properties of an HDR format might not be familiar to folks
14:22:36 q?
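A minimal illustrative sketch of the idea raised above, assuming a hypothetical environmentmap attribute on the proposed <model> element; neither the attribute name nor this API shape is specified anywhere, and the file names are placeholders:

    // Same model file, lit by a page-supplied HDR image-based light rather
    // than the page's own lighting. Element behavior and attribute are
    // hypothetical sketches, not an implemented API.
    const model = document.createElement('model');
    model.setAttribute('src', 'product.usdz');           // model file
    model.setAttribute('environmentmap', 'studio.hdr');  // hypothetical IBL attribute
    document.body.appendChild(model);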
14:22:39 ack bajones
14:22:40 q+
14:22:52 ack brandon
14:23:05 bajones: hi
14:23:11 ack bajones
14:23:19 ... If it is general browser support, then we need to make sure it is not just 8-bit colors. There is precedent for 3D-only: KTX
14:23:52 ... does not display in a browser. It requires a GPU to decompress and display, so it is used on 3D models, and not web pages
14:24:11 bajones: HDR is something that has had a lot of ongoing discussions in WebGPU, and video and 2D canvas and CSS and such. I've sat through many of their presentations and I understand a little better now, but not enough to explain it to a group like this. But it does sound like there is at least some headway being made there in terms of having different image sources being able to output to the browser that can be displayed on a variety
14:24:11 of displays and has all the attributes that people are looking for.
14:25:40 ntim has joined #immersive-web
14:25:40 bajones: Please don't try to re-invent any wheels here. It's a dense topic. I think we will generally be able to, from a straight WebXR point of view, rely on the output of these groups to feed into any comparative content that we create. For <model>, I think that you should generally be able to just provide some of these attributes that would otherwise go to a video or other images, and provide it as part of a model or part of
14:25:40 the embedding tag, and piggyback on what they're doing there. I wish I was in a better position to have more details on what format that is
14:25:40 bajones: I think we should lean into what they're doing and provide a consistent surface for HDR across the web.
14:25:40 q?
14:25:40 Brandel: Cool
14:25:40 +1 to Brandon's comment
14:26:23 Brandel: One difference about our use of HDR is we have no expectation of displaying the whole gamut range of the file. We use it for an estimation. We need to think about encoding wildly different intensities than anyone else, because it's a precursor step to the ultimate display. But it's great to have that conversation.
14:27:41 bajones: Are you talking about the materials, or lighting?
14:27:41 Brandel: Lighting.
14:27:41 bajones: The materials would make use of that as well.
14:27:41 bajones: My understanding is many times when you're dealing with HDR content, the materials don't actually contain much HDR information. There's not a whole lot of value going into a particular material and saying "this is 300% red." But oftentimes, the HDRness is just being able to capture the full range of illumination that's being applied to those materials.
14:27:41 Brandel:
14:27:46 ack /url 1
14:28:03 bajones: I don't think we need special HDR materials. You can still embed 10/10/10/2 or something like that. But HDR isn't necessary for materials.
14:28:04 q?
14:28:07 ack cabanier
14:28:08 ack cabanier
14:28:19 cabanier: Why does the browser need to specify this global lighting image? Can it not be part of the USD file itself?
14:28:35 q+
14:28:36 cabanier: If we're going to do this, HDR becomes a problem. Now we have to define how it is on a browser that supports HDR and one that doesn't.
14:28:55 ack dino
14:29:05 cabanier: I think this might spiral out of control and make implementation, how to define it, much harder.
14:29:05 cabanier: Can it just be part of the format?
14:29:25 dino: Are you talking about IDLs? You could definitely embed image-based lighting into a file format. It is possible definitely in USD. You might want different IDLs for the same model in different circumstances.
14:29:29 cabanier: Could you post different models?
14:29:35 dino: Yes, but they would be different files.
14:29:41 dino: You'll want it external to the file.
14:30:31 dino: You want to say "how does this green look when in a restaurant showing off to my friends, vs when I'm outdoors, vs when I'm in my living room"
14:30:31 q+
14:30:31 cabanier: It's too much effort to have different models for different conditions?
14:30:31 q+
14:30:33 ada: if it's a big file, it's duplicated.
14:30:38 dino: Sometimes you _do_ want to embed it in the file.
14:30:58 s/idl/image-based-lighting
14:31:05 cabanier: Is this such a big use case that it's worth accounting for?
14:31:22 q- (Dean is making my point right now)
14:31:55 cabanier: Is this such a big use case that it's worth accounting for?
14:31:55 dino: The case is: the 3D models we show on apple.com, they have lighting that's specifically picked by the designers, specifically for product showcase. If you took that same model, and viewed it in AR, you want the realistic lighting of the environment.
14:31:55 dino: You want to use the same file in different places
14:31:55 cabanier: You want it to look OK in a place where there is no lighting?
14:32:41 bradleyn has joined #immersive-web
14:33:05 Brandel: Even if we were to package an IDL on a particular model, it would make sense in AR Quick Look use on a phone to use the estimated lighting that comes from the system. In a webpage, it's in a portal. It's not immediately clear that it exists in the world. The presentation context has more opinion about what kind of lighting and color it should take on
14:33:10 Brandel: there are 2 different things that can be done with the model, but while people can probably use a custom image-based lighting for an environment map in an AR view, it doesn't necessarily make sense to carry it along.
14:33:39 Brandel: For HDR, it's important to have an HDR image-based light, simply because the sun is very bright. The illumination component that comes from the sun, it's way brighter than whatever device you're on.
14:33:40 q?
14:33:48 ack Leonard
14:35:05 Leonard: You might not get bright light, halogen or LED; you only want to have a single model, because if there are changes to the model you have to propagate them out. In the glTF context, you don't embed lighting in the model file. The player can choose it as necessary. What you might want to use in a room vs when you take that same model and walk outside with it.
14:35:21 Leonard: Whether the browser can display an HDR image is separate. There is work in other working groups, even out of the W3C, to make it work. We shouldn't go there until they are done.
14:35:29 ack bajones
14:36:01 bajones: I am here. I tried to take myself off the queue. You want different lighting on the page vs an immersive environment.
14:36:16 ada: break for coffee. Come back in 15 minutes
14:41:51 RRSAgent, make minutes
14:41:52 I have made the request to generate https://www.w3.org/2023/09/11-immersive-web-minutes.html caoxiaoran
14:42:00 myles has joined #immersive-web
14:47:02 caoxiaoran has left #immersive-web
14:55:05 Maud has joined #immersive-web
14:58:38 yonet has joined #immersive-web
14:58:47 etienne has joined #immersive-web
14:59:46 Can you fix up the room cameras?
15:01:24 myles has joined #immersive-web
15:01:55 We have all gotten so good at zoom calls, we don't know how to deal with F2F!
15:02:46 scribenick: cabanier
15:03:50 topic: Camera controls, being able to move the pivot point rather than just tumbling around the world origin.
15:04:30 Brandel: currently, the proposal has a camera control that consists of pitch/yaw/scale
15:04:44 ... this is because it's generally understood that people don't like roll
15:04:58 atsushi has joined #immersive-web
15:05:09 ... but also having full 360 controls is not appropriate for stereo displays
15:05:19 ... but we should talk about other platforms
15:05:45 ... the intent of the WebKit implementation might need further constraints
15:05:48 q+
15:05:54 ack bajones
15:05:57 ... what is the bare minimum for model
15:06:18 bajones: the pitch/yaw/scale is for orbit camera controls?
15:06:38 ... yes, that is appropriate
15:06:57 ... for targeting, we need to make sure that the element itself is able to predict the center
15:07:12 ... because very often there is a mismatch
15:07:21 ... and you never find it
15:07:32 ... the element needs to identify the center of the object
15:07:42 ... and the external bounds and center itself
15:07:48 ntim has joined #immersive-web
15:07:55 ... otherwise people will have a bad time
15:08:09 ... it might be good to have an override
15:08:32 ... there's an obvious thing that people want but there's a niche thing that others want
15:08:45 q?
15:08:50 q+
15:08:51 q+
15:08:53 gombos has joined #immersive-web
15:09:00 ack Leonard
15:09:34 q+
15:09:34 Leonard: does this discussion limit the depth of field?
15:09:37 q+
15:09:43 ... like the focal distance
15:09:54 ack Brandel
15:10:17 Brandel: the goal is to define the point of interaction
15:10:39 ... there's no notion of depth of field
15:10:54 ... there's no dref or focal point support on our devices
15:11:01 ... that would be a separate aspect
15:11:11 ... (???)
15:11:13 q?
15:11:20 ack bialpio
15:11:21 vicki has joined #immersive-web
15:11:26 q++ bialpio
15:11:31 bajones: for depth of field, we don't want to apply it automatically
15:11:34 q- +
15:11:48 ... it's something that the embedder will want to set automatically
15:11:59 ... we don't want to lock the ability
15:12:06 ... it would be a separate control
15:12:19 ... and it would be like setting a focal plane
15:12:45 ... this is an artistic choice so should be left to the person embedding or creating the model
15:13:02 ack bialpio
15:13:04 ATM glTF does not have a facility for depth of field
15:13:07 ... it's not something that this would interfere with. It's a separate control
15:13:23 bialpio: how will we expose the camera control?
15:13:51 q+
15:13:57 ... do we think it's a problem that the site has to account for both?
15:14:18 ... is it a hybrid model where we allow turning even if it's turned off?
15:14:34 ack bajones
15:14:35 ack bajones
15:14:41 ack Brandel
15:14:58 Brandel: the current representation has controls inline on the page
15:15:13 ... so there's a capability to rotate the model
15:15:59 q+
15:16:03 ... (??) our users like to mess with the camera
15:16:23 ... it is possible to have camera controls on all views
15:16:25 ... I
15:16:41 ... I'm interested how people deal with those transforms
15:16:41 q?
15:16:44 ack bajones
15:16:48 q+
15:17:05 q+ bajones
15:17:08 ack Leonard
15:17:13 q+ I actually did want to say something this time!
15:17:19 Leonard: are the camera controls on the camera or do they rotate the object?
15:17:35 q+ to actually say something this time!
15:17:57 Brandel: it's a pitch/yaw/scale on the object itself
15:18:09 Leonard: so it's like walking away from it
15:18:27 Brandel: yes, by user interacting or walking
15:18:34 Leonard: (???)
15:18:39 Brandel: yes, it's the same
15:18:44 ack bajones
15:18:44 bajones, you wanted to actually say something this time!
15:19:21 bajones: it feels like the camera controls are needed. depth of field would be hard in stereoscopic
15:19:39 ... how do you determine how deep into the page the object is?
15:19:50 ... because that might change the way you interact with it
15:20:10 ... is it a magic window or protruding from the page?
15:20:16 ... are there controls for that?
15:20:25 q?
15:21:01 Brandel: currently it is inset into the page. It's reasonable to alter that but I don't have an answer for that
15:21:31 ... it might be reasonable to specify a pivot point
15:22:14 bajones: I'm making an assumption that if the object protrudes the page, it will be clipped
15:22:30 ... it won't satisfy everyone
15:22:49 ... (???)
15:23:16 q?
15:23:17 ... we need to work on a way to not have things pushed out of the page
15:23:48 topic: Extra Camera controls? 2D browsers might want to control FOV etc
15:23:51 I disagree with Brandon, but not enough to bring it up verbally. I think you allow objects to be closer than the near-clipping plane of the page...
15:24:56 ... If you have a wall-mounted sculpter, it is likely that you want the page to be wall-aligned with the sculpter coming out of the wall (aka page)
15:25:08 Brandel: it's hard to see what the common denominator is
15:25:10 q?
15:25:17 q+
15:25:21 ack bajones
15:25:43 s/sculpter/sculpture/g
15:25:44 bajones: I think it's one of the areas to see what the MVP is
15:26:24 ... but I'm unsure if we need to dive into this for the first version
15:26:45 ... maybe we can start by saying what you see in 3D is on the page but stereoscopic
15:26:56 ... then allow developer feedback to go from there
15:27:08 ... the potential for feature creep is large
15:27:25 ... my general leaning is to focus on the minimum that is still useful
15:27:53 ... we can add a lot of capabilities that nobody is going to use
15:28:13 ... it's a slow process but that's ok since that is how the web works
15:28:31 topic: Can they have background images? BG color yes. Can the portal be transparent
15:29:05 Brandel: it would be nice for the portal to have the same color
15:29:22 ... I'm unsure if the background image should be part of the spec
15:29:35 ... it's not as good to control with an image background
15:30:00 q+
15:30:13 ... having a transparent background makes it hard because of the depth
15:30:21 q+
15:30:21 ... if anyone has examples
15:30:27 ack Leonard
15:30:45 Leonard: does transparent mean that the background is the camera?
15:31:13 Brandel: for this proposal, I envision that the (??)
15:31:26 Leonard: so you'd see the page elements behind the canvas?
15:31:29 Brandel: yes
15:31:33 ack bajones
15:32:15 bajones: yes, like a canvas can be transparent and float on top of a block of text
15:32:17 q+
15:32:44 ... then either the text needs to be in stereo mode
15:33:00 q+
15:33:13 ... my inclination is that if you have a transparent background, you don't get stereoscopic
15:33:30 ... so the default maybe should be to not be transparent
15:33:49 ... the other thing that I want to mention is image background
15:34:00 ... you could surround it with a sphere
15:34:08 dino has joined #immersive-web
15:34:26 ... but that could be problematic if you move around
15:35:01 ... I think you can make the argument that there's a cubemap-style image
15:35:07 ... not sure if it's part of the MVP
15:35:42 ... maybe an environment map style image
15:36:05 q?
15:36:06 ... I see that use case but not sure if it's needed right away
15:36:11 ack cabanier
15:36:19 q-
15:36:24 Agree with Brandon
15:36:38 cabanier: would it make sense if the model punches out a hole in the page
15:36:43 ... on image (environment) background
15:36:53 bajones: that would make a big change between 2D and 3D
15:37:08 ... I don't think developers would expect that
15:37:16 q?
15:37:27 ... so we'd like to have an explicit option for that
15:37:29 q+
15:37:40 ack bradleyn
15:38:11 q+
15:38:12 bradleyn: what if you give it a stretch property so you can push it in?
15:38:17 ack bajones
15:38:27 bajones: that would be very tricky to specify
15:39:01 ... you can imagine that as you scroll there are 3D elements that scroll over text
15:39:44 ... but in most cases it can overlay text on the page
15:39:44 q+
15:39:44 ... given your description you would have a black hole effect
15:39:59 ... when you view this in an immersive environment, you'd sink it into the space
15:40:34 ... so the text would sink into the background
15:40:34 Call it the black-hole effect!
15:40:34 ... that would be really awkward
15:41:33 ... maybe there's a use for that type of effect but specifying transparent should not trigger this
15:41:52 ack Jesse_Jurman
15:42:04 Jesse_Jurman: is this exclusive to the model element?
15:42:17 ... or is this something that any element can opt into?
15:42:26 ... maybe you can do this in CSS
15:42:36 q+
15:42:39 ... so text or an image could do this
15:43:03 ack Brandel
15:43:07 Brandel: in this context, it's around the model on the page
15:43:33 ... we don't have a way for people to knock out pixels
15:43:49 ... this is in terms of what we do with the concept of transparency
15:44:19 ... what you propose is a general problem
15:44:35 q+
15:44:42 ... the model element is the first stereoscopic element on the page
15:45:23 ack Leonard
15:46:02 q+
15:46:03 Leonard: is this similar to CSS z-index
15:46:23 ack bajones
15:46:46 bajones: this is similar to compositing in a stereoscopic effect
15:46:47 q+
15:47:11 ... but that would not work at all because z-index expects to be drawn in 2D
15:47:24 ... if you use this as a cue for actual depth, this won't work
15:47:32 ... it would be great
15:47:43 2D HTML/CSS has the top layer
15:47:48 ... but you don't want to use existing cues of the page
15:48:36 ... maybe something that's at the top of the page to make you opt in
15:49:01 ... you want to have the developer say that they want this
15:49:28 ... we can expect the vast number of developers to still develop in 2D
15:49:56 ack cabanier
15:51:44 cabanier: a number of years ago, there was a proposal and an implementation from Magic Leap to have 3D transforms apply in actual 3D. I believe the WebKit team said that they would implement this, and it would satisfy your request
15:52:53 topic: Do we want to expect visible transport controls, or will it all be done in custom JS?
15:52:59 present+
15:53:01 present+
15:53:01 rrsagent, publish minutes
15:53:02 I have made the request to generate https://www.w3.org/2023/09/11-immersive-web-minutes.html atsushi
15:53:08 present+
15:53:15 zakim, choose a victim
15:53:15 Not knowing who is chairing or who scribed recently, I propose Leonard
15:53:34 zakim, choose a victim
15:53:34 Not knowing who is chairing or who scribed recently, I propose takashi_m
15:53:50 zakim, choose a victim
15:53:50 Not knowing who is chairing or who scribed recently, I propose yonet
15:54:15 zakim, choose a victim
15:54:15 Not knowing who is chairing or who scribed recently, I propose cabanier
15:54:51 scribenick: bialpio
15:55:10 meeting: Immersive Web WG/CG TPAC Day 1
15:56:08 Brandel: models can have animations
15:56:52 ...: animations can have a duration specified, so models can be understood as media elements
15:57:03 q+
15:57:10 ...: how do we think the minimal treatment of animations should be presented in the MVP?
15:57:15 ack Leonard
15:57:35 Leonard: not all models are intended to have their animations autoplayed
15:57:48 ...: they can be played in response to events, they can have multiple tracks
15:58:04 ...: we should not spec that they should autoplay their first tracks
15:58:17 q?
15:58:18 Brandel: ???
15:58:19 q+
15:58:27 ack bajones
15:58:50 bajones: there is a common use case for video where you have an embedded player
15:59:03 ...: secondary uses are video that is being used as a visual element in the page
15:59:18 (I just said that a lot of people use