W3C

– DRAFT –
Immersive Web Community Group Teleconference

22 Sep 2020

Attendees

Present
alexturn, atsushi, bajones_, Brett, cabanier, cwilso, dino, kip, Lachlan_Ford, Manishearth, nick-8thwall, yonet
Regrets
-
Chair
yonet
Scribe
cabanier, Manishearth

Meeting minutes

webxr-hand-input#50 Move to WG

Manishearth: turns out we already did this

cwilso: that was easy

Manishearth: we want to move to FPWD so we will send out a call for consensus

cwilso: you can send out the call and we'll use that

<cwilso> https://‌github.com/‌immersive-web/‌webxr-input-profiles/‌issues/‌178

webxr-input-profiles#178 Provide assets for the left and right hand

<Manishearth> https://‌github.com/‌immersive-web/‌webxr-input-profiles/‌issues/‌178

cabanier: I logged this one and there was some feedback from alex and Manishearth

Lachlan_Ford: as Alex said, it's a hierarchy
… the way we do it is different from Oculus
… for us it's based on raw triangles
… the extension we provide in OpenXR exposes indices and vertices

bajones_: it seems intense when it comes to doing this every frame
… but you do have a skeletal aspect?

Lachlan_Ford: yes. it's derived from the mesh

cabanier: so from the oculus perspective, it's the same as HoloLens
… we also have the hand mesh and skeletal information
… but this is to drive the joints spec
… not a mesh spec (which has been discussed a bit)
… knowing that there is a hand, and knowing that there are joints, you just want to draw a hand
… as brandon said meshes are kinda expensive for xr, so joints make better sense
… we should have *a* model, not one for each vendor
… especially since we all track the same joints
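
For concreteness, a minimal sketch of what joint-driven hand rendering can look like on the app side, assuming the map-shaped XRHand API from webxr-hand-input; refSpace and drawSphere are assumed app-side helpers, not part of the spec:

    // Assumed to exist elsewhere in the app:
    declare const refSpace: XRReferenceSpace;  // from requestReferenceSpace()
    declare function drawSphere(transform: XRRigidTransform, radius: number): void;

    function onXRFrame(time: number, frame: XRFrame) {
      for (const inputSource of frame.session.inputSources) {
        if (!inputSource.hand) continue;  // not a tracked hand
        for (const jointSpace of inputSource.hand.values()) {
          const pose = frame.getJointPose(jointSpace, refSpace);
          // Each joint reports its own radius, so a sphere per joint
          // already gives a plausible hand without any mesh data.
          if (pose) drawSphere(pose.transform, pose.radius);
        }
      }
      frame.session.requestAnimationFrame(onXRFrame);
    }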

Lachlan_Ford: it's questionable what an app would do differently based on which hand they get

Lachlan_Ford: on the point of a new mesh every frame: you can get away with it because you can map it straight into a vertex buffer

bajones_: a vertex and index buffer per frame?

Lachlan_Ford: no, a fixed number of indices per frame
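
On the fixed-indices point, the usual streaming pattern uploads the index buffer once and re-uploads only vertex positions each frame. A minimal WebGL sketch (gl, handIndices, and the buffer sizes are illustrative; shader and attribute setup omitted):

    declare const gl: WebGL2RenderingContext;
    declare const handIndices: Uint16Array;  // fixed topology, known up front
    const MAX_VERTEX_BYTES = 4096 * 3 * 4;   // illustrative upper bound

    // One-time setup: indices never change, so upload them once.
    const indexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, handIndices, gl.STATIC_DRAW);

    // Allocate the vertex buffer once, sized for the largest update.
    const vertexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, MAX_VERTEX_BYTES, gl.DYNAMIC_DRAW);

    // Per frame: stream new vertex positions into the same buffer and draw.
    function drawHandMesh(vertices: Float32Array) {
      gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
      gl.bufferSubData(gl.ARRAY_BUFFER, 0, vertices);
      gl.drawElements(gl.TRIANGLES, handIndices.length, gl.UNSIGNED_SHORT, 0);
    }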

bajones_: kinda directed at Rik, but Lachlan_Ford, if you have something to contribute that would help
… i know on the oculus side there are assets that are distributed for a canonical hand mesh
… curious if you know under what circumstances to use the generated vs canonical mesh
… is there a recommendation for why users should use one over the other
… kinda get a sense of when one is used over the other

bajones_: first thing that comes to mind is that if you are building a game you want an appropriately themed hand, so you will use an imported asset
… but outside of that, are there cases you know

cabanier: so for example the hands you see in oculus use the mesh, not joints

Manishearth: I brought up per vendor hands
… and mostly because I didn't know what the question was
… I'm not opposed to it. Just get one hand model done and then we can figure out the rest later

Lachlan_Ford: my understanding was that it was for recognizing gestures
… there's gesture support
… so I was thinking that was what input-profiles was for

Manishearth: no.
… it's not just what buttons are supported
… for webxr, select and squeeze are overlapping

<bajones_> "In the case of hands, you don't have many buttons" [citation needed]

Manishearth: right now there's only one
… we could add separate meshes
… but it's an option for us

cabanier: when it comes to the name, right now when you expose hands as a controller, we call it "Oculus hands"
… if an API uses the input profiles repo it won't find that
… has there been a discussion about the name?

alexturn: where does the string oculus appear?

bajones_: the profile array?

alexturn: yeah so "oculus hands" isn't a valid profile name

alexturn: we have already defined "generic-hand-select"
… if oculus was exposing further data we could have oculus-hand-select and that could fall back
… i see this as totally orthogonal though
… e.g. in openxr we have hand+dir and hand+joints
… idk if i would tie vendor to the way we do joints
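
A sketch of how that fallback could play out against the ordered profiles array on an XRInputSource; "oculus-hand-select" is hypothetical here, while "generic-hand-select" is the generic profile Alex mentions:

    // inputSource.profiles is ordered most-specific first, e.g.
    // ["oculus-hand-select", "generic-hand-select"] (vendor entry hypothetical).
    declare const knownProfiles: Map<string, unknown>;  // profiles the app ships assets for

    function pickProfile(inputSource: XRInputSource): string | undefined {
      // First match wins: the vendor profile if the app knows it,
      // otherwise the generic fallback further down the array.
      return inputSource.profiles.find((p) => knownProfiles.has(p));
    }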

nick-8thwall: wanted to inform the discussion with stuff from 8thwall face effects
… with a mesh that covers the face
… and renders it
… meshes are different, need to be specified ahead of time and covered with properly mapped textures
… so you have to come up with assets that cover every encounter
… if the end users are still generating meshes the uvs need to be specified ahead of time, so you wouldn't need indices each frame, just vertex positions
… a thing that comes up often in face effects is that you need texture effects
… for hands you may need a high fidelity mesh, so you might need an inverse map
… some things may fall into an uncanny valley situation wrt a robocop-themed hand
… idk from a UX POV if having a high fidelity hand to go with a prefab texture is strongly preferable to being able to have a generic hand model
… that can be appropriately skinned

nick-8thwall: main thing is, by exposing meshes we have a per-spec multiplication of effort
… indices need to be pre baked anyway

cabanier: in response to alexturn, it seems like we should have a common model
… and everyone agrees on that
… wanna be sure if folks agree it should be based on joints?

alexturn: yes, and i think that we should require joints
… (if you don't have joints no dice)

cabanier: yes

cabanier: also, do you have a model? we don't have extra joints

alexturn: unfortunately no, because for rendering we use the mesh
… with a lot of the properties nick was mentioning

<alexturn> OpenXR hand mesh (MSFT): https://‌www.khronos.org/‌registry/‌OpenXR/‌specs/‌1.0/‌html/‌xrspec.html#XR_MSFT_hand_tracking_mesh

alexturn: we give you that separately per frame
… currently an MSFT vendor extension, not cross-vendor, but it's a way people can do hand meshes
… we tend to render the hand mesh every frame

bajones_: addressing nick's concerns: the facial mesh rendering is an interesting topic here
… alcooper has been looking at it on our end
… it is a little different: hard to come up with a reasonable representation
… just blasting out vertices each frame ends up being more concise
… so you don't have quite the same issues as facial rendering, but a lot of the same issues
… can imagine that someone who had no other options can do their own hand mesh with their own stable uvs
… it just won't line up perfectly

bajones_: the case of the oculus default mesh: it doesn't have metacarpals, but i think you can just plug in metacarpals and not skin them to anything
… point being if you have a hand mesh that's close enough we can work with that
… final thing i want to say is that i would really like to have some asset available in the input-profiles repo. think it would be useful. would not want it to be automatic
… the same way controllers do now
… good thing to have it in the library, but to get it to ship out automatically needs a separate impl that is joint aware
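
For reference, this is roughly how controller assets flow out of the repo today via the motion-controllers package; the fetchProfile signature is from memory and the CDN path is illustrative, so treat both as assumptions. A hand entry would resolve the same way, but only for apps that are joint-aware:

    import { fetchProfile } from "@webxr-input-profiles/motion-controllers";

    // Illustrative base path to the repo's published asset bundle.
    const basePath =
      "https://cdn.jsdelivr.net/npm/@webxr-input-profiles/assets/dist/profiles";

    async function loadControllerAsset(inputSource: XRInputSource) {
      // Walks inputSource.profiles and returns the best-matching profile
      // plus the path of its GLB asset for the app's glTF loader.
      const { profile, assetPath } = await fetchProfile(inputSource, basePath);
      return assetPath;
    }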

alexturn: either joints or hand mesh can get you a good quality result, so if we hit the same articulation bar it should be fine
… some differences here between AR and VR. we believe they can share the same APIs, but apps will build different experiences
… e.g. in AR it is critical the app match the true size of the hand, in VR there is more flexibility
… so you can play a lot more with that
… e.g. you might be wearing a big gauntlet
… also enables easy retargeting
… so in vr you have a lot more flexibility there

Manishearth: I wanted to mention to Nick
… right now we have no mesh API and we're not adding it to the joints repo
… we're just figuring out if we should have a mesh in the repo
… and the hand mesh should be something you opt into
… We already had issues with performance when it comes to hands
… and each mesh should be relative to a wrist
… I agree with Brandon that we can begin with the Oculus mesh

Lachlan_Ford: I wanted to get an understanding of where this model will live
… so it's a glb that you pull down and render

bajones_: yes. we have an assets folder on github.
… it will live on that same spot
… and it will be available for everyone to download
… we need to make sure that it doesn't get automatically pushed. We don't want non-skinned hands to show up there
… how can we extend that library so hand-aware applications can use it

Lachlan_Ford: I want to know the motivation for standardization

bajones_: it works this way
… it's really about framework builders having an appropriately licensed mesh
… I don't advocate that they will switch to that

Lachlan_Ford: the way you describe it, the user can improve upon the baseline

bajones_: it's often a sanity check
… we're not advocating an authoritative hand

Manishearth: we aren't standardizing anything. It's just a JS library
… we're saying: we just put a mesh here and people can standardize on that

cabanier: so i think we talked a little about how we don't wanna use this model all the time, otherwise we get "claw hands"
… i think we're only going to show hands as controllers if the author opted in to joints

cabanier: e.g. on oculus you have controllers and can put them down and use your hands
… rn you HAVE to pick up controllers to use webxr
… unless the user changes some settings
… in the future we can make it so that hands are exposed ONLY IF the author requests hand-input
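
On the API side that opt-in is the "hand-tracking" feature descriptor from webxr-hand-input; a minimal sketch:

    // Hands only show up as input sources if the page asked for them;
    // otherwise the UA can keep requiring physical controllers.
    const session = await navigator.xr!.requestSession("immersive-vr", {
      optionalFeatures: ["hand-tracking"],
    });

    session.addEventListener("inputsourceschange", (event: XRInputSourcesChangeEvent) => {
      for (const source of event.added) {
        if (source.hand) {
          // hand tracking granted: switch to joint-based rendering
        }
      }
    });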

alexturn: maybe we should have this in the spec

alexturn: we might wanna make clear that UAs intend to operate this way

cabanier: will open an issue

nick-8thwall: just responding to something alexturn said quite a while back: he was talking about how we'd enable ar as well as vr for rendering a mesh
… for face effects even though you have a high res mesh
… it's easier to expose a landmark api
… e.g. when you talk about setting a finger to a location
… when you have points on the mesh, you have some special landmarks

alexturn: and yeah that's the api that we'll have first in webxr
… you get joints plus the tip
… for collider purposes apps would use joints, and rigging is mostly for rendering
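
A sketch of that landmark-style use with joints only, e.g. using the index tip for a poke interaction (joint name from the map-shaped webxr-hand-input API; the collision helpers are hypothetical):

    declare const refSpace: XRReferenceSpace;
    declare function intersectsButton(position: DOMPointReadOnly, radius: number): boolean;
    declare function pressButton(): void;

    function checkPoke(frame: XRFrame, inputSource: XRInputSource) {
      // Treat one named joint as a landmark rather than consuming a mesh.
      const tip = inputSource.hand?.get("index-finger-tip");
      if (!tip) return;
      const pose = frame.getJointPose(tip, refSpace);
      if (pose && intersectsButton(pose.transform.position, pose.radius)) {
        pressButton();  // collider-style interaction driven purely by joints
      }
    }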

Manishearth: like Alex said, there is no mesh API
… it's just a mesh so you can skin the hand
… we only expose joints. 25 points per hand
… what oculus and Hololens do is different
… even if the content doesn't opt into hand, you should expose a hand controller

Lachlan_Ford: there are cases where you want to render the hand

Manishearth: not rendering a controller

alexturn: people will render the gltf
… any code that uses the motion controller model is going to have to be updated for the hand model

???
… the ua would remove direct delivery for the accessibility feature

alexturn: I'm torn if the default hand mesh in the input profiles will create confusion
… you can roll your own hand.
… maybe it makes more sense to put it in its own location

AOB

yonet: any updates from anyone?

kip: my last day with mozilla is this week
… but my github name will stay the same
… other people from mozilla might join especially from the hubs team
… if you want a face-to-face call, especially about lighting estimation, I'll be happy to create some time for that


atsushi: thanks for patching up the minutes!

<atsushi> np ;)

Minutes manually created (not a transcript), formatted by scribe.perl version 123 (Tue Sep 1 21:19:13 2020 UTC).

Diagnostics

Succeeded: i/yonet: any updates from anyone?/topic: AOB

Succeeded: i/cabanier: so from the oculus perspective,/scribenick: Manishearth/

Succeeded: i/cabanier: when it comes to the name, right now when you expose hands/scribenick: Manishearth/

Succeeded: i/bajones_: addressing nick's concerns: the facial mesh/scribenick: Manishearth/

Succeeded: i/Manishearth: I wanted to mention to Nick/scribenick: cabanier/

Succeeded: i/cabanier: so i think we talked a little about/scribenick: Manishearth/

Succeeded: i/Manishearth: like Alex said,/scribenick: cabanier/

Succeeded: s/e.g. in AR it is critical the app match the true size of the hand, in VR there is more flexibility/... e.g. in AR it is critical the app match the true size of the hand, in VR there is more flexibility/

Succeeded: s/--- and each mesh should be relative to a wrist/... and each mesh should be relative to a wrist/

Succeeded: s/the ua would remove direct delivery for the accessibility feature/... the ua would remove direct delivery for the accessibility feature/