W3C

- DRAFT -

Immersive Web Community Group

26 Feb 2019

Attendees

Present
trevorfsmith, cwilso, blair, cconiglio, adarose, bertf, josh_marinacci, brandon, chris, joel, ravi, atsushi, alexturn
Regrets
Chair
Trevor F. Smith
Scribe
rik, cabanier

Contents

    Topics
        Next steps for CV in AR: #4 (Blair)
        Spatial favicons explainer feedback (Ravikiran)
        Discuss timing and next steps for navigation: #517
    Summary of Action Items
    Summary of Resolutions

<josh_marinacci> could someone send me the call-in link again? :(

<cwilso> scribenick: rik

<cabanier> scribenick:cabanier

<josh_marinacci> hmm. I can hear nothing

<josh_marinacci> yes. I'm on the web version

trevorfsmith: welcome.
... on the agenda: next steps for CV
... (going over agenda topics)
... are there other agenda items?

Next steps for CV in AR: #4 (Blair)

<trevorfsmith> https://github.com/immersive-web/proposals/issues/4

blair: so, there's been a lot of discussion on this
... should we move this to a proposals repo?

<bajones> +present

blair: there were some open questions
... should we join up with webrtc?
... webrtc is already there. getusermedia already requests camera access
... but webrtc is hairy
... so coupling cv to webrtc will be hard
... or we do something in webxr
... and merge with webrtc later
... and this is what most people seem to want to go for
... if we're willing to decouple from webrtc, we can go that way
... getting video frames into the web page could be straightforward
... and a lot of people want to see this happen
... are other people here interested

bajones: it was always my understanding that WebRTC would give us sync with the audio stream
... and I don't know how important that is

blair: to push back on this, getting the video frame is just a very small problem for CV
... we do have to support video in a texture
... webrtc and streaming in the long run should give us access to the frames
... if it can give us a compressed stream, we could do a remote chat
... for WebAR, one use case is to have a remote worker
... it may be enough if we can just have camera access during an xr session
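
A minimal sketch of the "camera access during an XR session" idea blair describes, using the shipping getUserMedia API and a WebGL texture upload; pairing this with a WebXR session is an assumption here, not anything specified in these minutes:

    // Sketch (TypeScript): grab the device camera with getUserMedia and copy each
    // frame into a WebGL texture so page script can run CV on it or composite it.
    async function startCameraTexture(gl: WebGLRenderingContext): Promise<() => void> {
      // Triggers the usual camera permission prompt.
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });

      // Play the stream in an off-screen <video> element so frames can be sampled.
      const video = document.createElement("video");
      video.srcObject = stream;
      await video.play();

      const texture = gl.createTexture();

      // Call once per rendered frame to upload the latest camera image.
      return () => {
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
      };
    }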

<blair> (ha, forgot about the queue ... or did I?)

cwilso: so, I want to second what blair said
... the problem is syncing pose with info from webrtc
... there's work in chrome to detect shapes and images
... that api would do some of the use cases and scenarios

<blair> +1 on the shape detection API ... it would be AWESOME if we could use that as a vehicle to doing some native, cross platform simple CV

cwilso: so we should be clear on what we're trying to do
... because pushing streams of data might be just to get camera access
... which is not that hard/our problem

<blair> can't put stuff on people's faces convincingly and STABLY over time if we don't do 3D

trevorfsmith: I was hesitant to open a repo
... because it should be based on a clear direction that we're unified on
... is it video-to-texture, CV, ...?
... so a general cv repo is not going to be helpful

alexturn: I think I'm putting things together
... so maybe it's better to break it into 3 things
... computer vision could be looking for a QR code
... for the remote worker type, the video camera could be higher FOV but lower framerate

blair: it's interesting to break it down that way
... Brandon thinks about it as the video mix part and it could be one thing we talk about

<trevorfsmith> close queue

blair: how do we expose a video feed?

<bajones> Blair's right. My thinking on this has been very ARCore-centric.

blair: how do we expose a video texture to a session

<trevorfsmith> close the queue

blair: but video streaming is not cv unless you had no choice
... here, I'm thinking, are there cameras and can I access them?

... and can I process that feed?

blair: as for the shape API, I'd love to have a common way to recognize shapes, images, and QR codes

cwilso: the shape detection api hooks into that today

<cconiglio> Most use cases I've seen over the past year from clients have been object detection/classification with a little bit of segmentation in there.

cwilso: is that API a band-aid for a 2D world? Can it give a 3D pose?
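
For reference, a minimal sketch of the Shape Detection API cwilso mentions; BarcodeDetector ships in Chrome, but the ambient declaration below and the idea of feeding it camera frames from an XR session are assumptions, and as noted its results are 2D boxes rather than 3D poses:

    // Sketch (TypeScript): feature-detect BarcodeDetector and look for QR codes in a frame.
    declare class BarcodeDetector {
      constructor(options?: { formats?: string[] });
      detect(source: ImageBitmapSource): Promise<Array<{ rawValue: string; boundingBox: DOMRectReadOnly }>>;
    }

    async function findQRCodes(frame: ImageBitmap) {
      if (typeof BarcodeDetector === "undefined") return []; // not supported in this browser
      const detector = new BarcodeDetector({ formats: ["qr_code"] });
      const codes = await detector.detect(frame);             // 2D bounding boxes only
      return codes.map(c => ({ value: c.rawValue, box: c.boundingBox }));
    }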

trevorfsmith: let's move this issue and ask blair to break it into repos

Leonard: (agreeing)

<trevorfsmith> open the queue

Spatial favicons explainer feedback: (Ravikiran)

Ravi: we have discussed this at tpac
... and most people were in agreement with the format

<cwilso> https://github.com/immersive-web/spatial-favicons/blob/master/explainer.md

Ravi: so I went ahead and wrote an explainer
... and added some images. There are 2 things that may need more collaboration
... one of them is the sizes attribute and how we add the third dimension
... and the other one is constraints and guidance
... glTF has a wide variety of features and we want to limit them
... mostly for efficiency and performance
... and animations. What do you do with them?
... some people are ok with the initial pose, but others had other ideas
... the microsoft docs talk a lot about level of detail
... I looked at the various docs and came up with a list of guidance
... telling authors to be careful about the number of polygons, etc.
... since it can bog down the system
... would like to collaborate with other people so we can come to a conclusion

trevorfsmith: for the implementors, have you looked at this?

Leonard: so, looking at this, without animation, how would this look different?
... from a 2d image

bajones: it would show up as volumetric when you're in an AR/VR scene
... some UAs will rotate the favicon around
... the volumetric nature would help in this environment

Leonard: this should be added to the explainer

Ravi: I agree

alexturn: this is definitely an interest for us
... LOD vs. size
... favicons are selected on quality
... in the place where we use the icons, it can be a bookmark but we can use it in other locations
... so we end up downloading all the sizes (LOD = level of detail)

<bajones> +q

alexturn: this is why we have the lod extension so we only have to download it once

cwilso: I added an issue on the repo.
... because it is hard to add to the web app manifest

<cwilso> https://github.com/immersive-web/spatial-favicons/issues/4

cwilso: gltf is not an image type. We need to find out who the best person is

Ravi: yes

cwilso: Marcos C from mozilla might be a good person to contact

bajones: the spec says that the size should be in meters
... the size specifies how it should be displayed
... so the ua selects based on that size?

Ravi: no, the size reflects the actual size
... and I didn't want to make the parsing complicated

<trevorfsmith> close the queue

bajones: I think the explainer should say that the size is the bounds of the object within the file
... so it should also say that 0,0,0 should be in the center
... is the expectation that the asset is always scaled down?
... or is it beneficial to do some clipping?

Ravi: the sizes attribute allows the UA to pick and choose from the favicons
... it's not a mandatory thing, but most UAs can download the model and calculate the bounding box
... (???)

cwilso: the size in the actual icon link is a hint that the asset is roughly that size
... and then the ua will generally scale down the asset to fit
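
A hypothetical sketch of the markup and UA-side selection being discussed; the glTF link element and the "0.1x0.1x0.1" sizes value in meters are invented for illustration and are not the explainer's final syntax:

    // Sketch (TypeScript). Imagined markup:
    //   <link rel="icon" href="/favicon.png" sizes="32x32">
    //   <link rel="icon" href="/favicon.glb" type="model/gltf-binary" sizes="0.1x0.1x0.1">
    function pickSpatialIcon(doc: Document): { url: string; hintMeters: number[] } | null {
      for (const link of Array.from(doc.querySelectorAll<HTMLLinkElement>('link[rel="icon"]'))) {
        if (link.type !== "model/gltf-binary") continue;
        // Per the discussion above, sizes is only a hint; the UA would still compute
        // the model's real bounding box and scale it down to fit its slot.
        const hint = (link.getAttribute("sizes") ?? "").split("x").map(Number);
        return { url: link.href, hintMeters: hint };
      }
      return null;
    }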

alexturn: it might be good if the explainer normalizes the icon size
... and draws the distinction between the selection size and the actual size

bajones: it actually does show that and calls this out
... presumably the ua is picking

alexturn: if there isn't a scenario where the size is actually used, we should mention that

<alexturn> https://docs.microsoft.com/en-us/windows/mixed-reality/3d-app-launcher-design-guidance

Ravi: I wasn't thinking of using these favicons to place something in the real world

alexturn: one additional use case would be to have it as a shortcut to a launcher
... so are the sizes relevant or not?

trevorfsmith: ok, let's create an issue to talk about the size
... in the favicons issue

Leonard: is lighting an issue?

trevorfsmith: we can turn that as an issue in the repo

<Ravi> will talk about lighting too

bajones: the explainer talks about pbr and the ua should have enough information to use that for lighting

Ravi: agreed

<Leonard> PBR should include environment maps. Those aren't part of glTF (as far as I can tell)

<Ravi> I will open an issue around size.

<trevorfsmith> open the queue

Discuss timing and next steps for navigation: #517

trevorfsmith: is this something that has to be addressed in the api right away
... or is it ok to have it in the second revision

<trevorfsmith> https://github.com/immersive-web/webxr/issues/517

blair: my concern here is that we will end up with a bunch of websites that don't work
... aside from that, we don't need navigation support

<bajones> +q

blair: there's a lot of confusion about needed permissions
... if we're convinced the session creation scheme lets a page that's already in immersive view create a session, this is not essential

bajones: if it doesn't get in, a bunch of pages will be broken

<adarose> qi future stuff

<alberto> +q

bajones: but it's been my mental model that the navigation feature would always be opt-in

<adarose> +q

<blair> I absolutely disagree. :)

bajones: so this lessens the urgency
... so we shouldn't be forcing this on them

<blair> We should be focusing on end-users', not developers', preferences and experience

bajones: if the author didn't choose to enable that functionality, that should be ok

alberto: it's a hard problem to solve now to make sure that it would continue to work in the future
... in a future where webxr is standardized, this could just work

<bajones> +q

alberto: I do think it's a 1.0 future
... because people wouldn't understand
... that you'd go back to a 2D site when navigating to another AR site
... I'm not a vendor so I don't know how hard it is to implement
... we already have an API proposed

<blair> brandon and blair, "steel cage death match"

adarose: old websites have gotten better as the web platform has evolved
... so even if this feature doesn't make it

<trevorfsmith> close the queue

adarose: it would be nice if things were architected such that UAs on old websites could do something smart to stay

blair: to me, on traversal the browser initiates an immersive session
... and the API that was proposed is adequate
... it might be useful to list out the use cases
... some of us are making assumptions
... for handheld AR, it would be weird to stay immersive
... for VR/AR headsets, it would be weird to exit the session

trevorfsmith: yes, let's go over the use cases so we're all agreed

<trevorfsmith> close the queue

bajones: forcing a website to go full screen video is not what most sites want
... a single site wants to do some setup or might offer multiple experiences
... so it's not good to always jump into immersive

<blair> Brandon: would it be sufficient for a page to be able to reject an event?

dmarcos: supermedium doesn't even have a 2d mode

<blair> or otherwise say "hey, cool, bugger off?"

dmarcos: we don't have strong opinions but we want to have a cue
... in Exokit, all websites are expected to present
... if there's a standard mechanism to go into immersive right away
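
A purely hypothetical sketch of the opt-in model bajones describes: a page that wants to be navigated to while the user is immersive listens for an event and requests a session, while a page that does not listen simply loads as a normal 2D page. The event name and shape below are invented and are not the API proposed in #517:

    // Sketch (TypeScript). "immersive-navigation" is a made-up event name.
    interface ImmersiveNavigationEvent extends Event {
      readonly mode: "immersive-vr" | "immersive-ar";
    }

    window.addEventListener("immersive-navigation", (ev) => {
      const nav = ev as ImmersiveNavigationEvent;
      // An opted-in page could request a session right away and keep the user immersive:
      //   navigator.xr?.requestSession(nav.mode).then(startRendering);
      console.log("navigated into while immersive, requested mode:", nav.mode);
    });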

<trevorfsmith> rssagent, please set logs world-visible

<trevorfsmith> rssagent, please create minutes

<trevorfsmith> rssagent, publish minutes

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/02/26 19:05:24 $
