W3C

– DRAFT –
Immersive Web Community Group

16 June 2020

Attendees

Present
alexturn, bajones, cabanier, cwilso, kip, Leonard, Manishearth, trevorfsmith, yonet
Regrets
-
Chair
cwilson
Scribe
kip, manishearth

Meeting minutes

layers#154 first pass at hit testing; discuss the current hit test proposal

<cabanier> https://github.com/immersive-web/layers/pull/154

<cwilso> https://github.com/immersive-web/layers/issues/163

cabanier: i've created a strawman proposal for hit testing in layers

cabanier: which describes what layer was hit, and where on the layer it was hit, along with where in 3d space it was hit

cabanier: also extended XRFrame so that you can pass a space + an array and it returns a list of those dictionaries

cabanier: since everything is known there it can be synchronous

cabanier: manish suggested an event based api but the issue is that the event may fire too late
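
A minimal sketch of that shape, as an editor's illustration (assuming WebXR typings for the XR* types; the interface and member names here are invented, not taken from PR #154):

```typescript
// Hypothetical shapes only -- names are illustrative, not from PR #154.
interface XRLayerHit {
  layer: XRCompositionLayer;   // which layer was hit
  u: number;                   // where on the layer, normalized to [0, 1]
  v: number;
  point: DOMPointReadOnly;     // where in 3D space, in the query space
}

// The PR extends XRFrame; shown here as a free function taking the frame.
// Everything needed is already known client-side, so the call can be
// synchronous and simply returns a list of hit dictionaries.
declare function hitTestLayers(
  frame: XRFrame,
  space: XRSpace,
  layers: XRCompositionLayer[]
): XRLayerHit[];
```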

bajones: we should note that we have skipped a bit ahead in the agenda. rik is discussing the second agenda item

bajones: for hit testing itself it's tempting to find parallels with world geometry

<kip> [Just saw that Zakim chose me while reconnecting...]

<kip> Will scribe...

bajones: but different in that the client side knows all the info in this case

brandon: You can probably successfully offload this to a library

brandon: because as long as we do a decent job of defining what shape the parameters it takes in and gives out are, the math behind this is a bit of a pain, but it is not anything that can't be solved in JS
… Not certain that handing this off to a library is the right way to go, but it is a valid option for this case

bajones: the math for this is probably something that you could offload to a JS library, and that might be a perfectly valid option

bajones: by the same token we don't need a big long subscription/event-based thing when we're talking about this

bajones: because everything can be figured out on the client side
… so not entirely sure if the event is necessary here

<kip> Thanks for scribing Manishearth, I'll take over during your talk

bajones: unless there was some kind of DOM input on the layer and there was a security reason

bajones: final thought. i see a formulation of "find hit test", which will loop through all layers
… i wonder if given the way you could use this, and given that everything is client side, we should make this a property of the layer itself, "for this frame this space/etc, where did i hit?"
… then we don't need to worry about sequences etc

bajones: not aware of any native API equivalents to this that might dictate this particular design
… something i'm missing here, rik?

cabanier: i don't think so. this won't be done in the native api
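
Brandon's per-layer variant, sketched against the same hypothetical types (again illustrative, not from the PR):

```typescript
// Hypothetical: a per-layer query instead of a frame-level call that
// loops over a sequence of layers -- "for this frame and this space,
// where did I hit?"
interface XRCompositionLayerWithHitTest extends XRCompositionLayer {
  hitTest(frame: XRFrame, space: XRSpace): XRLayerHit | null;
}
```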

manishearth: Clarifications
… Not suggesting a subscription events based model
… Would prefer a pointing ray field on XRInputSources
… Used to trigger events, similar to select events
… Come with their own frame
… Can do most operations you want to
… Idea is to have a preferred pointing ray on XRInputSource anyways
… For example, application is doing offset stuff
… Not using target ray space, but using offset
… Useful for DOM Overlay
… Click events based on offset ray rather than target ray space
… WebXR spec does not specify pointing ray
… Having field is useful. Can have event api
… Subscription based model seems heavy
… For utility, per frame thing, here's space and ray...
… Give me intersection in this space. That would work
… Event thing would be there anyway, due to being needed for DOM Overlay
… Unifying would be nice
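
The field Manish describes might look something like this (purely illustrative; as he notes, the WebXR spec does not currently define a pointing ray):

```typescript
// Illustrative only: a UA-provided pointing ray on the input source that
// may be offset from targetRaySpace (e.g. for DOM Overlay click events).
interface XRInputSourceWithPointingRay extends XRInputSource {
  preferredPointingRay?: XRRay; // hypothetical name, not in any spec
}
```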

Rik: Makes a lot of sense to have hit test on layer
… Could be multiple layers
… Equirect layer wouldn't want to hit test. Just quads and cylinders
… Could be fine with that change
… Originally proposed doing this in a library. At the time people objected
… Library might be the way to go
… Didn't want to revisit
… Could be that some UAs might place the layers slightly differently. Would all fall down

bajones: Couple of notes based on what was said
… First, Rik saying that different UAs place layers differently
… Would be concerned if that were the case; could happen, but ideally shouldn't
… Good way to get bugs. Might see seams or black edges
… In this case (Manish's), there may be an intersection point other than the pointing ray
… Use for basic physics
… Someone could do a multiuser environment. Raytrace another user's pointer to a surface, from the network
… Detaching from input sources is a decent idea
… To Rik's idea on a library: given that it can be a library,
… Layering module is already complicated
… Would not object personally. Good idea, but maybe for the backburner. See what kind of feedback once layers are out in the open
… Take time to build up an intersection library ourselves as we need to anyway. See what works for people
… Additive layer on,
… Will have a better idea later

Manish: Want to mention that would prefer a library
… Pushing for the discussion to happen as DOM Overlay gives events... screen space... multiple things giving X,Y coordinates
… Realized that this may be potential option
… Potential of unification interests me. If not doing that, then library is fine

alex: Chiming in for the library
… Something inside the UA feels like an opportunity for subtle differences to creep in
… Apps will generally need to raycast against layers and scene geometry
… Will need to do both predictably
… Expect them to raycast against their scene representation
… They just happen to use layers to render part of the scene, but would need to do that in their engine and have a predictable outcome

Rik: Getting started on a polyfill for layers
… One of the first things would be to make hit testing part of that polyfill
… If others are interested, I can work with them
… Otherwise I will work on this
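
As a taste of what the core of such a polyfill would involve, here is a self-contained sketch of ray-versus-quad intersection in the quad's local space (a real library would also transform the ray by the inverse of the layer's pose and handle cylinder layers):

```typescript
// Intersect a ray with a quad of half-extents (halfW, halfH) lying in the
// z = 0 plane of its own local space. Inputs are assumed to have already
// been transformed into that local space.
type Vec3 = { x: number; y: number; z: number };

function rayQuadIntersect(
  origin: Vec3, dir: Vec3, halfW: number, halfH: number
): { u: number; v: number; t: number } | null {
  if (Math.abs(dir.z) < 1e-8) return null; // ray parallel to the quad plane
  const t = -origin.z / dir.z;             // solve origin.z + t * dir.z = 0
  if (t < 0) return null;                  // intersection is behind the ray
  const x = origin.x + t * dir.x;
  const y = origin.y + t * dir.y;
  if (Math.abs(x) > halfW || Math.abs(y) > halfH) return null;
  // Normalized layer coordinates, with (0, 0) at the top-left like an image.
  return { u: (x + halfW) / (2 * halfW), v: (halfH - y) / (2 * halfH), t };
}
```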

cwilso: Next item
… Cover 163, Rik?

Rik: Don't have solution to this one

layers#158 Layers on devices with views of different resolutions; how should we handle views from different devices (i.e. eye displays + observer)

Rik: Right now if you create a texture array, we assume every view has the same resolution
… Not always the case
… Camera resolution may be different than the displays
… Don't know yet how to solve this for the spec
… Should we throw, or make things more complicated and allow both texture arrays and separate textures?
… Sounds confusing
… Would like to hear from alex

manish: Also, should let alex go first...
… Unfortunate result: the suggestion was to throw for texture arrays
… Content supporting multiple views may not realize that using texture arrays is broken here
… May be nice to request "give me texture arrays, otherwise give me normal views"
… This may require an api change so that things can be backed by multiple texture arrays
… When getting a view you get both a texture array and an index
… Not sure which approach is better or necessary
… Ideally content written for two views will not automatically break
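
A sketch of what "texture arrays where possible, plain textures otherwise" could mean for app code, assuming a sub-image shape roughly like the draft layers spec (the member names here are assumptions):

```typescript
// Assumed sub-image shape, loosely based on the draft layers spec.
interface SubImage {
  colorTexture: WebGLTexture;
  imageIndex?: number; // present when backed by a texture array
  viewport: { x: number; y: number; width: number; height: number };
}

// Attach the right render target for a view, whichever backing it has.
function bindViewTarget(
  gl: WebGL2RenderingContext, fb: WebGLFramebuffer, sub: SubImage
) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  if (sub.imageIndex !== undefined) {
    // Texture-array backed: attach this view's slice.
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                               sub.colorTexture, 0, sub.imageIndex);
  } else {
    // Separate-texture backed (e.g. an observer view at another resolution).
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, sub.colorTexture, 0);
  }
  gl.viewport(sub.viewport.x, sub.viewport.y,
              sub.viewport.width, sub.viewport.height);
}
```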

Alex: I would hope to get to that, with others chiming in...
… Speaking to how HoloLens does it
… Optimal path to hit with the layers spec -- use a texture array for the main stereo view. Use mono for separate things
… Not a third element for the first person view; it is separate from the texture array
… That's the way the hardware works
… Previous comment Manish made... Having a way for the app to request a texture array, but the UA can offer something else. Reduces axes of ambiguity
… If we know the UA is unable to support it... OpenXR-backed runtimes support texture arrays, but some may be faster than others
… The runtime is required to support apps that use those paths
… Should WebXR require both a texture array path and a non-texture array path?
… Once you have a promise that texture arrays will work

bajones: I think that for us we are not worried about any UA that does not support texture arrays
… Should be baseline at this point
… The only reason for not requiring texture arrays across the board is to support WebGL 1 based apps
… If we didn't care about WebGL 1 we would say "texture arrays all the way"
… Scenario to consider is not just the HoloLens observer view, but also ... [missed] have four views... higher-resolution inset views
… People working on Varjo
… In both cases, said that all of the primary views that you allocate are the same resolution
… In Varjo, the smaller view is higher density but the same size. Can be allocated in one go
… HMD vendors seem to be aware of this
… Encounter a lot of that, but maybe can't count on it, such as with the HoloLens observer
… Manish was wondering if we could return another texture array. The answer is yes
… The API as it is now: here's a view and a layer, give me a WebGL sub-image
… Could contain a texture array or something else
… Would strongly recommend that if a developer uses texture arrays, we always support texture arrays. Or if plain textures, always support that
… Reduces complexity
… Maybe have primary views in a texture array
… This will break down if developers are trying to do multiview rendering. Trying to render all views in one batch
… All but one would be in the same texture
… Some other conversations around those observer views have a lot of discussion about people opting into that
… Make a contract with the UA that says, "I am handling these if you give them to me"
… If that's the case and we're willing to be more explicit
… Clearly demarcate which views are non-primary, then we could get into a comfortable situation where we guarantee that primary views are part of the same texture array
… Maybe we allow multiple viewports per layer
… There's some more discussion there
… Do allow for these secondary views to be formatted a bit differently
… Need to be aware of that when you develop. Make sure the app handles it properly, and test
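
The explicit contract Brandon describes could surface as an opt-in at session creation, along these lines (the feature name is illustrative, not something specced here):

```typescript
// Hypothetical opt-in: the app promises to handle any non-primary views
// the UA surfaces, which may be formatted differently from the primary
// stereo pair. Assumes WebXR typings for navigator.xr.
const session = await navigator.xr!.requestSession("immersive-vr", {
  optionalFeatures: ["secondary-views"], // illustrative feature name
});
```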

alex: Comment... Great point about Varjo
… Two notions in WebXR
… Primary view configs

<alexturn> OpenXR Varjo primary quad views: https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XR_VARJO_quad_views

alex: Mono view device or stereo view device
… The extension enables a new primary config with four views
… Independently, a notion of secondary views

<alexturn> OpenXR secondary views: https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XR_MSFT_secondary_view_configuration

alex: The secondary view system being proposed revives some things from the spec that were excised for 1.0
… Bringing them back as an extension
… Introduces more machinery for secondary views
… May be a simpler opt-in
… Give me 4 primary views in texture arrays
… Option to provide primary and secondary arrays with different resolutions and render paths
… Can't just be another element in the texture array
… Okay for it to be a separate context... This is where we are landing in OpenXR

Rik: Sounds like we are all coming to agreement that we should not throw
… And for primary views we should create a texture array
… Secondary views should also be a texture array, but a separate one
… Should the page opt in for secondary views?
… There was pushback
… Now we are starting to differentiate
… With the layers spec
… In order to do that we need to define that somewhere
… Should be defined in the WebXR spec

manish: This dovetails in
… My PR on the spec ties into what OpenXR calls secondary views
… Brandon and I are concerned about the primary and secondary language
… Additional views concept. In the layers spec we can say "here's an additional thing you should handle"
… We could have a line like this here as well
… Not all cases have exactly two views; e.g. in a C.A.V.E. system / first person system, the device can reproject
… Give you that view without surfacing it to JS
… In a C.A.V.E. it has to give all views; it can't extrapolate
… Similar situation where we want to make a distinction between multiple primary views and additional views that can be ignored
… The concept of additional views handles this

Rik: Under the impression that additional views means you have more than two views
… In a C.A.V.E. you can have 4... Doesn't seem to match up.
… Primary views are what the observer sees.

manish: The way I tried to spec it lets that distinction exist
… Not clear, but the way it is specced, additional views means "not your normal views". Can change that
… to "primary / secondary"

Rik: Changes make sense

manish: Concept invented for this may not be useful for layers. Can change that concept so it works

bajones: To say really quickly, I do think the distinction between primary and secondary views is more necessary given this conversation

manish: Can do that

Rik: Can change the layers spec so it is not so confusing

manish: Quick question - I am defining primary views as things that you must render to. Do we expect a case where primary views have different resolutions?
… Maybe in a C.A.V.E. system? Might be other systems?
… Are we kicking the can down the road for C.A.V.E. systems here?

bajones: Not aware of any systems where primary views are of a different resolution, even if of a different size
… Even in a cave system, it is a cube. Sides all the same size
… Don't see where it would need to differ
… OpenXR is ideally where things are trending
… In that scenario, the implementer gets to choose the resolution that is passed down
… We can probably just decide that they are the same resolution, but under-rezzed
… The compositor will figure that out
… Not overly concerned, but haven't given it much thought

alexturn: Haven't explored a 3-element texture array on HoloLens
… Haven't dug deep there
… Reasonable to do an extra pass. Not quite doing multiview
… I think it would be interesting to explore
… Want to prove out that we can get to reasonable performance
… To see if a single texture works for all of them
… On the other side, for Varjo and StarVR...
… Wide-left and wide-right...
… A nuance for additional discussion
… Meaningful for the runtime to know in advance... if you opt in
… If not opting in, for StarVR, stereo covers the central display and some of the wide displays
… If you didn't opt in, you get the same for all displays
… Core 0,1 views (L,R inner displays) are not exactly that mapping
… Need to know as a runtime/UA which FOV to give you
… Keep in mind that it is useful / valuable for the runtime to know if you opt in to operate the primary views

Rik: Move to next item?

Rik: Yes

layers#163 Should non-projection layers support more than 2 views? [from cabanier]

Rik: Currently only 2 views
… One for left and one for right
… getSubImage gives one image
… Working on supporting more views
… Updating the layers spec
… Associating stereo with a view is an incorrect assumption
… Stereo means left or right; it doesn't have anything to do with the view
… Created a PR

<cabanier> https://github.com/immersive-web/layers/pull/165

Rik: To fix this
… This change makes it so views are only applicable to projection layers
… And you have to pass an XREye to getSubImage
… Depending on whether stereo or mono, pass in left or right
… Manish commented on the PR
… Big change
… Want to make sure everyone is okay with this change
… Let me know if you have concerns
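
As described, usage after the PR would read roughly like this (a sketch; the view-based accessor name for projection layers is an assumption on the editor's part):

```typescript
// Sketch of usage after PR #165: views apply only to projection layers;
// other layers take an XREye instead.
function renderLayers(
  binding: XRWebGLBinding, frame: XRFrame, pose: XRViewerPose,
  projLayer: XRProjectionLayer, quadLayer: XRQuadLayer
) {
  // Projection layer: one sub-image per view (accessor name assumed).
  for (const view of pose.views) {
    const sub = binding.getViewSubImage(projLayer, view);
    // ... render the scene for this view into sub ...
  }
  // Quad layer: pass an eye -- "left"/"right" if stereo, "none" if mono.
  const leftSub = binding.getSubImage(quadLayer, frame, "left");
  const rightSub = binding.getSubImage(quadLayer, frame, "right");
  // ... render the quad's content into leftSub / rightSub ...
}
```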

bajones: I haven't reviewed the PR yet
… Will do that after the call
… Worth calling out, as a sanity check, the assumption in 163
… Where we are saying that for any non-projection layer, having more than two views
… doesn't make sense
… Mostly going to be used for static 3d images or movies
… Where there isn't more than two views available
… My assumption on top of that is that if you wanted to do something like a portal
… The layering system might not be the best place for that
… Probably want to use more traditional means with stenciling
… Worth doing a quick feel of the room to see if everyone agrees with those assertions
… With quad and cylinder, anything more than strict L+R stereo does not make sense
… Anyone disagree with that?

manish: My initial pushback was based on not understanding why non-projection layers would want stereo
… The 3d nature of that would be very weird
… In the case of taking existing content formatted like that, maybe useful
… Agree with the PR

alexturn: Generally I think this will cover most use cases
… If you make a quad layer, you can say if it is on the left-eye view or right-eye view, but you don't control view visibility
… Might be okay
… May use a quad to do an observer view. Maybe the observer view renders differently
… Do we need that flexibility for things like quad layers, or do we say that a quad appears in all views?
… Maybe complexity can be added later if we need it. Calling out that this limitation exists with this approach

Rik: I don't think this is a limitation
… The quad layer will still be composited
… If it is in the world space of the camera
… It will be in the correct spots
… The only thing that would... if it's a stereo layer, which one would you pick?

rik: do we have cases where we would want to exclude the quad layers?
… If just projection layers, then the app may exclude arbitrary geometry in the scene to tweak for observers
… Lost flexibility... Once using a quad layer, the content must appear in all views
… Don't have a concrete example of what would be blocked
… There is functionality in OpenXR that is not represented here
… Not a strong objection, but it is an artifact to note here
… Worth filing an issue for async discussion

Minutes manually created (not a transcript), formatted by scribe.perl version 121 (Mon Jun 8 14:50:45 2020 UTC).

Maybe present: alex, brandon, Manish, Rik