W3C

– DRAFT –
Immersive-Web WG/CG call

02 November 2021

Attendees

Present
cabanier
Regrets
-
Chair
yonet
Scribe
laford

Meeting minutes

layers#135 Depth testing across layers requested by cabanier

cabanier: Two options to expose depth testing layers
… boolean field that determines support
… or feature that you opt into on session creation time
… if you opt in it will always be enabled and projection layers decide to opt in or opt out
… others are rendered in normal order (submission order)
… made a PR for option 2. It's more flexible for UAs to polyfill it
… if it's optional and it is returned as not supported, you activate the polyfill
… First: opt in on layer creation
… Second: opt in on session creation time
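A rough sketch of option 2 (opting in at session creation time). The feature string 'layers-depth-sorting' and the helper below are illustrative placeholders, not names from any spec or PR:

```javascript
// Hypothetical helper that builds the session-init dictionary.
// 'layers-depth-sorting' is an assumed feature name for illustration only.
function buildSessionInit(wantDepthSorting) {
  const optionalFeatures = ['layers'];
  if (wantDepthSorting) optionalFeatures.push('layers-depth-sorting');
  return { optionalFeatures };
}

// In a page this would be used roughly like:
//   const session = await navigator.xr.requestSession(
//     'immersive-vr', buildSessionInit(true));
// If the feature is requested but not granted, the page would
// activate the polyfill instead, as discussed above.
```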

bajones: For non-projection layers, this is an all or nothing thing?
… what would be the behavior if you mix and match types of layers, with different depth sort options?

cabanier: OpenXR is not defined that way. Every layer can pick whether or not it's to be depth sorted
… don't need that flexibility though, so makes sense to globally opt in

bajones: That makes the most sense to me as well
… would be nice to sanity check OpenXR's design decision
… is there a use case that required that?
… Would like the name of the feature to include the word 'layers'

laford: layers in OpenXR was a Facebook-driven spec

layers#265 A feature that toggle stereo on runtime. https://github.com/immersive-web/layers/issues/265

cabanier: Someone was integrating media layers. Required a stereo equirect layer. Combined with controllers rendering, they are not correctly mixed with the stereo equirect layer
… no way to quickly switch to mono
… Could there be an attribute to switch back to mono temporarily
… We do not want a mono layer to be able to switch to stereo

bajones: switching layers should be a fairly lightweight operation
… should take place within a frame
… would there be a downside to just keeping 2 layers in memory and swapping for any given frame
… in this case based on controller presence
… though the layers are probably allocating texture memory
… 1.5x the memory may be an issue on low end devices
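A minimal sketch of the alternative bajones describes: keep a mono and a stereo layer allocated and pick one each frame based on controller presence. The layer objects and helper name are placeholders; real code would use XRMediaBinding layers:

```javascript
// Illustrative per-frame layer selection; selectLayers is a hypothetical
// helper, and the layer arguments stand in for real XR layer objects.
function selectLayers(stereoLayer, monoLayer, controllersVisible) {
  // Fall back to mono while controllers render, so they mix correctly
  // with the non-media content; otherwise present the stereo layer.
  return [controllersVisible ? monoLayer : stereoLayer];
}

// Each frame, the result would be assigned to session.updateRenderState's
// layers array. The cost is keeping both layers' textures resident
// (roughly the 1.5x memory concern raised above).
```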

cabanier: the problem is what do you populate the mono layer with
… issues with webgl

bajones: we want 2 separate things. The format of the media stream and the presentation modality
… doesn't seem terrible to toggle at runtime

cabanier: is there something more elegant than adding an attribute?

bajones: do we need / want the granularity to choose which eye to show

cabanier: not the ask. Just want the non-media stream content to not look weird

bajones: seems like a good compromise; not necessary for other layers

webxr#1203 Provide statistics to help guide performance https://github.com/immersive-web/webxr/issues/1203

cabanier: provide statistics to discover gpu / cpu headroom

RafaelCintron_: The closest we have is timing gpu operations in webgl
… AAA games have settings to achieve perf or best visuals
… or fine grained graphical feature control

cabanier: some have automatic detection, e.g. look at hardware or run frames and guess best settings
… or database mapping system setup to settings
… we don't want to go there though

alcooper: I've seen some chat on this one on the issue
… Definitely something that we want to be careful exposing due to fingerprinting data it introduces
… can probe system info by reading statistics

RafaelCintron_: webgl has timer query to time individual webgl operations

cabanier: can time how long animation loop and gl operations take
… seems like you can already get what is being proposed in webgl
… what is the timing attack surface here?

bajones: Disjoint timer query and analogous webgpu functions are sufficient
… don't cover system compositing time and browser overhead
… would hope that is minimal though
… even if so, what is the right way to report it?
… The oculus dev tools have similar info, so perhaps a combination of existing web tools + system tools is sufficient

cabanier: developers then may develop for specific systems

bajones: the assumption would be that if the overall time spent by the browser + system is within an order of magnitude of each other, then webgl timings should be analogous to the actual headroom ± ~10%
… if we can come to a rough rule of thumb then webgl alone should be sufficient, otherwise we should take a closer look
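The rule of thumb above could be sketched as a simple estimate: pad the GPU time measured via WebGL timer queries (e.g. EXT_disjoint_timer_query) by ~10% for unmeasured browser/compositor overhead and subtract from the frame budget. The helper and the overhead factor are illustrative assumptions, not a proposed API:

```javascript
// Hypothetical headroom estimate from WebGL timings alone.
// frameBudgetMs: per-frame budget (e.g. ~11.1 ms at 90 Hz).
// measuredGpuMs: app GPU time from a timer query.
// overheadFactor: assumed ~10% pad for browser + compositor time.
function estimateHeadroomMs(frameBudgetMs, measuredGpuMs, overheadFactor = 1.1) {
  return frameBudgetMs - measuredGpuMs * overheadFactor;
}
```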

<yonet> https://docs.google.com/document/d/1WAZAmA7TGi_2QAR6o_jr8hOPM7ybzrKUt80tAajzyhg/edit?usp=sharing

<yonet> https://www.w3.org/groups/wg/immersive-web/calendar

<yonet> Thank you atsushi

Minutes manually created (not a transcript), formatted by scribe.perl version 136 (Thu May 27 13:50:24 2021 UTC).


Maybe present: alcooper, bajones, laford, RafaelCintron_