W3C

– DRAFT –
Immersive Web Working Group Teleconference

14 June 2022

Attendees

Present
ada, cabanier, Leonard, yonet
Regrets
-
Chair
-
Scribe
leonard, yonet

Meeting minutes

<atsushi> WBS of charter review

<laford> unable to join the webex meeting

<laford> stuck on "connecting"

<ada> oh that's odd

<ada> it seems to be working laford

<atsushi> AC rep list

<laford> I'm in

webxr-hand-input #117 Provide a signal that hands are doing a system gesture

https://github.com/immersive-web/webxr-hand-input/issues/117

Rik: Oculus Browser hand input used to stop working during system gestures. Now hand inputs stay on whether a system gesture is in progress or not.
… wants to provide a signal for when the system is processing a system gesture, so the application knows not to do its own hand-movement recognition

<laford> c:

Rik: Concerned about adding another event because of the large number of events

<Zakim> ada, you wanted to say that gamepad is mistake

Lachlan: Would like more clarity on why events are bad

Rik: There are a lot of authoring issues with events. Mostly it seems that authors are not handling them correctly.

<cabanier> flag: XR_HAND_TRACKING_AIM_SYSTEM_GESTURE_BIT_FB

<ada> who's here?

<bajones> Hold on, tech difficulties

Brandon: Generally I think it is a little difficult to look at some of these events and make an apples-to-apples comparison.

… It is not something that comes up often enough that people are motivated to handle it. I'm hesitant to make the comparison.

Brandon: I want to make sure it is not easy to accidentally ignore.

Brandon: We do have the target ray mode on input sources, which a lot of apps check.

Brandon: There are a lot of ifs; with all the content out there, it is hard to say this won't break anything.

… Both a flag and an event would be ignorable by developers.

Ada: In my experience dealing with hand input, it seems that frequently the only time devs check the target ray mode is when the app starts.

Ada: If it is not tracked-pointer, it is gaze, and you go down a different code path.

Ada: Apps would have to be significantly refactored to account for the target ray mode being able to change at any time.

Brandon: It would not surprise me if the frameworks are only checking the target ray mode once.

Ada: From a pure developer point of view, a flag would be simpler.
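
For illustration, a minimal sketch (not from the minutes) of the refactor Ada describes: re-reading targetRayMode every frame instead of branching on it once at startup. The frame-loop wiring and the comments are assumptions.

```ts
// Sketch: checking targetRayMode on every frame so a change (for example
// from "tracked-pointer" to "gaze") is picked up immediately.
function onXRFrame(time: DOMHighResTimeStamp, frame: XRFrame) {
  const session = frame.session;
  for (const inputSource of session.inputSources) {
    // Many apps only branch on this once when the session starts;
    // re-reading it per frame is the refactor under discussion.
    switch (inputSource.targetRayMode) {
      case "tracked-pointer":
        // draw a pointer ray for this controller or hand
        break;
      case "gaze":
        // fall back to head-gaze targeting
        break;
    }
  }
  session.requestAnimationFrame(onXRFrame);
}
```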

<Zakim> ada, you wanted to concede

Cabanier: I proposed this in the context of hands, but maybe we should have the flag for all of the input sources.

Ada: What about when the user is doing something with screen input?

Brandon: So many of our devices use gesture input right now. I think the native input can turn into a back gesture; I need to look.

Brandon: The thing I want to point out: looking at the properties on the input source now (handedness, target ray mode, the profiles, and the spaces),

… technically those are all immutable.

… While the space object itself is immutable, its pose changes, and you query the spaces every frame.

… If any of those properties change, the input source is removed and re-added. It is nice to be able to say everything is immutable. That makes me lean towards an event.

Brandon: Maybe it is clearer if it's an event.

Rik: It would not be an attribute, but a function instead.

Brandon: I would feel better about that than a straight attribute.

Rik: You are right, everything else is immutable.

Brandon: Spitballing on this: let's set aside whether we have a gesture signal, and whether it is an event or a flag.

… The target ray space is there, but you might not be able to get a pose from it at some point.

Brandon: If we stop giving out the target ray pose when there is a system gesture, how much would we be breaking?

… Say I am using a hand to interact with buttons. If you take away the target ray space, the ray disappears, and any interactions based on it will go away naturally.

… The failure mode that comes to mind is an app with logic that first checks the target ray and, if it is not there, assumes there is no hand. I haven't seen logic like that.

… I don't think we should worry too much about that case.

Rik: I guess that's something we should try out first.

Brandon: getPose can always return null.

Rik: If you can track the controller or the hands, we can assume we have a target ray space.
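
A minimal sketch of that pattern, assuming hypothetical drawRay/hideRay helpers: XRFrame.getPose() may already return no pose, so an app coded this way would degrade gracefully if the pose were withheld during a system gesture.

```ts
// Hypothetical helpers; stand-ins for whatever the app uses to render.
declare function drawRay(source: XRInputSource, transform: XRRigidTransform): void;
declare function hideRay(source: XRInputSource): void;

function updatePointer(frame: XRFrame, inputSource: XRInputSource,
                       refSpace: XRReferenceSpace) {
  const pose = frame.getPose(inputSource.targetRaySpace, refSpace);
  if (!pose) {
    // No pose this frame: tracking loss today, or (under the proposal)
    // a system gesture. Hide the ray rather than assuming "no hand".
    hideRay(inputSource);
    return;
  }
  drawRay(inputSource, pose.transform);
}
```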

Brandon: I still think you could do a blur without this. If we think this is the safer thing to do, people might pay more attention.

… Maybe we could say we wouldn't be breaking a certain percentage of the apps, but I am not sure we can say that.

Rik: We might be able to gather some data on this.

Rik: To recap: we should always have a method that says (tbd). We need to experiment with not giving out the target ray while you are doing a system gesture.
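
The method name is explicitly TBD above; purely as an illustration of "a function instead of an attribute", a hypothetical shape might look like this (the name isPerformingSystemGesture is an assumption):

```ts
export {}; // make this file a module so the global augmentation compiles

declare global {
  interface XRInputSource {
    // Hypothetical API shape only; the real name and placement are TBD.
    // A method, queried on demand, keeps every stored property of
    // XRInputSource immutable, which was Brandon's concern with an attribute.
    isPerformingSystemGesture?(): boolean;
  }
}

function shouldRunHandRecognition(inputSource: XRInputSource): boolean {
  // Skip the app's own hand-movement recognition while the system is
  // consuming the hand for a system gesture (the problem Rik opened with).
  return !inputSource.isPerformingSystemGesture?.();
}
```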

layers#283 Alpha property for XRCompositionLayer

Rik: Next topic, a library called doVR?

… Right now they use layers, and you can't control their opacity.

Rik: It is not surfaced in the WebXR layers API.

Ada: What is the advantage of having opacity?

Rik: With a media layer there is no control. If you render to a WebGL layer, right now you render everything through your own shader anyway.
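
A sketch of that workaround for WebGL layers, with assumed uniform and varying names: the app scales alpha in its own fragment shader, a hook a media layer does not offer.

```ts
// GLSL fragment shader (held in a TS string): fade content by scaling
// alpha. uTexture, uOpacity, and vUV are assumed names, not from the minutes.
const fragmentSrc = `
  precision mediump float;
  uniform sampler2D uTexture;
  uniform float uOpacity; // 0.0 = invisible, 1.0 = fully opaque
  varying vec2 vUV;
  void main() {
    vec4 color = texture2D(uTexture, vUV);
    gl_FragColor = vec4(color.rgb, color.a * uOpacity);
  }
`;
```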

Ada: I guess it is for debugging too.

Brandon: There are multiple ways that alpha blending can happen. Is that something that is exposed in OpenXR, and what are the options?

Rik: There are multiple options.

Rik: Composition layer color scale/bias is an extension.

Brandon: That one is used not only for alpha but also for a tint.

Rik: There is also composition layer alpha blend, which lets you control how the background is treated.

Rik: Currently WebXR offers only source-over blending.

Brandon: If I draw partially transparent pixels, will it only render with that alpha?

<Zakim> ada, you wanted to ask about mixBlendMode

Brandon: I don't have a particular problem with exposing alpha. I just don't want to get overly complex with different layer blending options.

<ada> https://developer.mozilla.org/en-US/docs/Web/CSS/mix-blend-mode#syntax

Ada: It would be nice if you got the same blend modes as CSS's mix-blend-mode.

Rik: That we cannot do.

Brandon: A lot of these mix blend modes can be represented by OpenGL and WebGPU.

… I think it is more likely that we take what WebGL or WebGPU does:

… you are describing how the values are slotted in rather than using human-readable names.
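
In WebGL terms, a minimal sketch of that difference: blending is expressed as source/destination factors rather than named modes. The additive pairing for the fog example below is an assumption about intent.

```ts
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl")!;
gl.enable(gl.BLEND);

// Source-over, the one blend WebXR composition offers today (per Rik),
// for non-premultiplied alpha:
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

// Additive, the kind of mode Ada's fog-over-clouds example would want:
gl.blendFunc(gl.SRC_ALPHA, gl.ONE);
```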

Ada: Not all implementations are going to be based on OpenXR. If we can have some kind of blend mode, we could have a unified way for the web.

Rik: It can be done.

Ada: If we had a cloud layer and fog with an additive mode, that would be nice.

… It would be cool to have more of these modes.

Ada: WebGL has a list of modes for when it composites.

Rik: Meta is the only one that supports that.

Ada: If you were to use it, it would work, but it is not in the WebXR spec.

Rik: It is not in the WebXR spec, but it is in the OpenXR spec.

<bajones> +1

<ada> +1

Rik: Everybody OK with opacity?

<cabanier> +1

+1

Ada: Go ahead and update the spec.

<Ren> +1
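
For illustration, a hypothetical use of the opacity from layers#283 once the spec update lands; the attribute name and its 0..1 range are assumptions, and the binding/space setup is assumed to already exist.

```ts
declare const xrGlBinding: XRWebGLBinding;  // assumed existing layers setup
declare const refSpace: XRReferenceSpace;

const quad = xrGlBinding.createQuadLayer({
  space: refSpace,
  viewPixelWidth: 1024,
  viewPixelHeight: 512,
});

// Hypothetical attribute proposed in layers#283: fade the whole layer
// without re-rendering its contents.
(quad as any).opacity = 0.5;
```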

Do we have anything else we want to bring up?

Sorry Ren, I tried to get Ada's attention, but I was muted, I guess.

Ada, I think I didn't change the scribe.

<ada> oh no

Do you mean the minutes don't have mine?

OK. I'll look into it.

<ada> thanks for updating that atsushi

and just type it in

Minutes manually created (not a transcript), formatted by scribe.perl version 185 (Thu Dec 2 18:51:55 2021 UTC).
