W3C

- DRAFT -

immersive-web 2019/June Face-to-Face 1st day

04 Jun 2019

Attendees

Present
NellWaliczek, ada, cabanier, trevorfsmith, alexturn, cwilso, Leonard, kearwood, Manishearth, Leonardo
Regrets
Chair
Ada, Chris
Scribe
kearwood, samdrazin_, samdrazin, Alex Turner, Trevor F. Smith, Leonard, ada

Contents


<Manishearth> scribe: kearwood

(Discussing how to use IRC bot)

<cwilso> https://github.com/immersive-web/administrivia/blob/master/IRC.md

Join irc channel at #immersive-web

Use q+ to speak in the IRC channel

Say "q-" to drop off the queue. Chairs will do the rest

<ada> "q+ to ...." to add description (the "to" is important)

Welcome

On to "framing the roadmap" discussion

Chris:

Tomorrow we have a session starting at 10am

Charter is running out

Charter is malleable and changeable

Chairs are responsible for figuring it out before it runs out in February

Slightly different deliverables as they change over time

Really start a charter discussion before the end of the year

Rationalizing the drive for a replacement for WebVR

concerned that landing everything by end of the year is not going to happen in that way

Nell says we can ship now / soon or have a complete solution.

Maybe pick one and balance our needs and what we want to get out of this

see changes in what we see as basic AR

<NellWaliczek> I said you can ship asap, ship correct, or ship complete. pick two

When we started really focusing on this a year and a half ago, we were really focused on the AR hit tests for smartphone ar case

This turned out not to be the short term goal

So we moved away from that a little

We need to re-think what we will focus on

John will discuss

With this in mind, we should choose what our goals are

Give Nell credit on raising this

Nell's suggestion is to follow css working group

They use modules that are tied together, driven by subcultures and own editors

Don't want to become too large / complex, but perhaps breaking into modules can help turn into deliverables and products

Turn community group incubation ideas into working group

Talk about before we dive into chunks like real world geometry

Refactor road map based on discussion tomorrow about how monolithic / modular

Nell:

If you are like me and didn't spend much time with the CSS working group, you might ask what is the difference between a spec and a module

Chris may be able to describe more clearly

Looked at the css working group charter. Quote that inspired:

Under the deliverables section...

"This list of modules is not exclusive: The WG may also create new CSS modules, within its scope. Also, it may split or merge CSS modules......"

w3.org/Style/2016/css-2016

The purpose of a charter is to define what the deliverables are

As we go through things today, think about this mental model and how the lines would divide this into modules

Modules may have levels / versions

Ex:

An AR module could have a level 1 that is well defined at this point with a tight scope

Less baked parts could be in a level 2.

Don't have to cut off work

Allows us to compartmentalize related chunks of work without saying all other parts have to wait

No timeline statements around this at this point

A way of thinking to help us strike a balance between Shipping ASAP, Shipping Correct, and Shipping Complete

There is one more thought

In order for us to satisfy the "Shipping Correct" criteria, we must think about issues more than superficially

There are a couple of areas where there are opportunities to have upfront discussions to set direction without final design

Core itself could have multiple levels also

Ada:

A good way to visualize this is to look at CSS working group drafts github

github.com/w3c/csswg-drafts

Broken features up into individual folders

We don't have to do it this particular way

Shows how they have different levels for each module

work on things in parallel by different editors and people

Nell: Some are level 1 and level 2 within a particular feature. Some are edited at the same time

Can get a sense of if things will land in time

If it looks granular, it's because CSS is huge.

We don't have to do it this granularly, but it could help to think about how we can break it down

Slot tomorrow to talk about this

<johnpallett> +q to ask how CSS manages dependencies

Chris:

John:

<Zakim> johnpallett, you wanted to ask how CSS manages dependencies

How does CSS manage interdependencies between work

Some of the webxr work is less module

Chris: CSS still has whole conferences. Core members participate heavily across features. No complete decentralization

It's not a completely separate modular model. There is bleedover

Goal is to minimize it

Ada: It's helpful that there are not 250 of us and that we have regular teleconference calls

If someone is taking charge of a particular module. We can set up a process to add items to agenda

Trevor:

It's helpful to set the context and have the larger discussion tomorrow

Why AR is in a separate module and VR is not.

Ada:

Nell+Chris: Not choosing to do that yet. Talking about what would be the structure

Nell:

ADa+Chris -> Nell+Chris

Nell: Problematic to pull the immersive concept out of the spec. Best to think generically about immersive

Not a lot of divergent text there

Not a philosophical statement. More about what would be in the document.

Trevor:

Nell:

Dividing into modules does not mean taking foot off the gas

There is a very strong drive to make sure we land toehold of VR we are talking about

Pulling AR into a separate module does not mean we are not landing it ASAP.

Gives a few months of runway without holding back rest of ecosystem

Conversation will be what we can deliver by slicing different ways

Chris:

Founding member of CSS working group 20+ years ago, is not the same working group now

The CSS working group developed this way to move faster. Could not get to long list of things without this

Manish:

Want to mention that to address John's question. CSS specs link to one another and to multiple versions.

We can do the same thing and won't be as complicated for us

Ada:

If you have a pet feature that you feel is not getting love and attention it deserves, it can be worked on in parallel without waiting for other features to be delivered

Get your pet feature delivered faster

Chris:

Keep that in mind as we talk through the next days' worth of stuff.

Subtext for what we will talk about tomorrow

Moving along to "milestone retrospective"

Brandon:

Milestones are relatively recent organizational strategy

We have interested parties outside leadership team that want to have a better idea of what is going on in the spec

Wish to break out what is going on and our velocity

Broke into monthly milestones

<cwilso> https://github.com/immersive-web/webxr/milestones

Monthly cadence chosen arbitrarily but is working out okay.

Recent monthly not quite closed yet

Weekly triage meetings lay out the work

Keep work generally in a theme

eg, this month we focus on this section and this section

Around cleaning up gamepad functionality, privacy work, refactoring

Good progress made on privacy, but falling to next milestone

A lot of refactoring around reference spaces

Massive amount of work around formalizing concepts there

Making sure that language is more precise about communicating concepts, how math works out, spaces relating to each other

Cleanup of reference spaces, easier to understand

Previously had identity and viewer space. Was confusing. Math hard to describe. Merged into single viewer reference space

"eye-level" is now "local", "floor-level" is now "local-floor"

Outsider seeing for first time will be more grokkable

Clarifying when reset events fire off

Cleanup for gamepad. answered questions like how live gamepads are

How to communicate input sources. Change in place or copies (Answer: change in place)

Arbitrary controller devices mapping

More to come for gamepad

How we identify devices specifically.

Not just string

Clarity of functionality of gamepad and input sources

Cleanup and issues get dumped into milestone, looks like more work done :-)

Some things not on roadmap was easy fix to make, and added to milestone such as Manish's work

There was a bunch of work talked about. Simplifying inline mode didn't make it in

On the cusp of making progress there and making a pull request

Nell:

What we haven't done for june.

"Fixed by pending PR" tag. GitHub not smart about connecting the dots for fixes before PRs land

Take a look at what is left. Ada wrote script to take issues referenced by PR

Can use filter to see what is open without a PR attached to them

Tried using project tool, didn't work well for many reasons

Consequences mentioned by Brandon. Some things taking longer than expected pushed out from one milestone to another

The stuff that transferred from previous month to current is still what we will look at first

Prioritized early, continue doing that

Need to close out.

Walk through spec text. Unstable CSS styling on sub-elements has been fixed, but spec changes did not remove it

Late-breaking additions need to be cleaned up

Remove slow path for inline

three items related

Related to tracking behavior. Hold on merging until spec text updated.

John working on privacy design doc

Many topics there to talk with broader group before landing

Looking at June:

4 closed things already

Happened to clean up before we finished May

Can drag and re-order. Try to group. May not be groomed.

Hold overs from previous milestone:

Document the formatting of gamepad ids

- It is too late, so the "ship correct" option. Rolled it over and now prioritized

Some gamepad things are close to done, need gamepad spec folks to sanity check before closing out

Up next:

Anything that is potential breaking changes for VR or unified path stuff

Aiming to hit VR-complete (not VR finished). Is the VR spec shape complete

Anything in this milestone is to close out this stuff

Rest is privacy, permission, user consent for review

Detached typed arrays could be a breaking change. Close out tomorrow hopefully

Focus + Blur has been open over many months. Need community feedback on behavior

Bikeshedding of name

Two categories for next:

Spec breaking changes

beginnings of AR feature work

Milestone defined for "Spec complete"

Things in spec complete milestone are additive features. Decide if goes into a module or if needs urgent closure and would block spec from landing

Get through breaking changes and what would be the "WebVR spec" replacement

June is about spec complete

Stuff in July can be parallelized

If you want to pick up spec text work, can help with July

Chris:

Anyone want to jump in before bio break?

Choosing next scribe

<ada> scribe: samdrazin_

Input: update on Nell's library

<Barbara> Future topic - Immersive Media - Add Immersive Web API to Media roadmap? https://w3c.github.io/web-roadmaps/media/

Ada: welcome back. next topic: Gamepad. opening up w/ library port to Gamepad, models, over to Nell

Nell: last f2f, I presented an idea for addressing common challenges re: building VR experiences with motion controllers; describing motion controllers in generic terms

-- we decided it was wrong time to standardize, but good time to frame our thinking in a standard fashion

-- since then, the xr gamepad mappings repo has been created, and populated over last 2 months. lots of content

-- data sources for gamepad, mappings for each controller/button, and method for indicating stop points (ex: thumbstick left/right extreme) per model

-- long and involved readme, plus fully fledged schema. compiles down to something you can validate

-- different subschemas have been defined as well. handedness, components defining how everything glues together, all here

-- an attempt at mappings has been defined for both WebXR and WebVR (WebVR has what Brandon first thought of, and everyone copied)

-- lots of issues filed. Mock is for tests that have been written.

-- given a json file (load mappings), parse into obj model, you can query its state to find out where thumbstick is, use it as desired

-- all done in open source, should be as generic as possible. benefit of open: mappings > WebXR, in any mfgr folder, mapping file will be accompanied by glb or gltf with named nodes and references to mapping doc

-- ex: "for this thumbstick, right/left most pos will be here", and clear references from mapping file should provide a comprehensive guide of controller inputs

-- WMR is a good example. was used a lot during the dev of this repo. includes trigger, thumbstick, handedness.
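Nell's description above (load a mapping JSON, parse it into an object model, query component state) can be sketched roughly as below. This is illustrative only, not the actual xr-gamepad-mappings API; the mapping shape, field names, and function are all invented for the example:

```javascript
// Hypothetical mapping JSON shape: a named component points at the
// gamepad axes/buttons that back it (indices are the "data source").
const exampleMapping = {
  handedness: "right",
  components: {
    thumbstick: { dataSource: { xAxisIndex: 2, yAxisIndex: 3 } }
  }
};

// Query a component's current state from a live (here: mocked) Gamepad.
function readComponent(mapping, componentName, gamepad) {
  const src = mapping.components[componentName].dataSource;
  return {
    x: gamepad.axes[src.xAxisIndex], // -1 = left extreme, +1 = right extreme
    y: gamepad.axes[src.yAxisIndex]
  };
}

// Mock standing in for a real XRInputSource.gamepad
const mockGamepad = { axes: [0, 0, 0.5, -1.0] };
const state = readComponent(exampleMapping, "thumbstick", mockGamepad);
console.log(state); // { x: 0.5, y: -1 }
```

The same idea extends to buttons and to the "visual response" lookups described below, where the returned state drives a transform on a named node in the glTF model.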

<johnpallett> +q to ask whether glb/gltf dependencies depend on extensions that are still being incubated within Khronos or are part of the core spec.

-- each component has its own root, label transform (points to node inside model file) can be used to annotate (like a legend) for each component

-- and data source id. which index you get inputs from.

-- labelTransform is the name of the controller, root is the actual element,

-- visual response indicates how the extreme positions of a given controller can respond/should be rendered. much more detail and examples in readme

-- number of issues open relative to this, some can be closed. please file more if you have any feedback.

-- some feedback already. no asset files yet. big thanks to MS and Oculus for MIT licensed models. Updates coming to add these. HW owners get to pick what their models look like (just file PR)

-- in process of documenting the process of specifying your controller's model

-- brief pause from progress for 2 months, but this work should pick back up shortly

<Zakim> johnpallett, you wanted to ask whether glb/gltf dependencies depend on extensions that are still being incubated within Khronos or are part of the core spec.

johnpallett: dependency on GLB and glTF - core spec, or extensions for glTF still being incubated?

Nell: still being investigated/defined. Why GLB/glTF? Needed to pick something so that format was standardized across mfgrs.

-- #14 provides instructions for how to provide a model for your controller.

brandon: lots of glTF extensions can be used optionally with sensible fallback options (ex: Draco)

-- You can offer Draco compressed buffer, but have a more standard defined buffer, so folks who understand Draco can follow that path, and others can proceed as usual

johnpallett: spec will have baseline experience on glTF or KHR with ability to extend?

Nell: that is correct

Trevor: Potassium project - A11y, ex: folks using MS game input systems (big buttons, breath controller). We could support MS suite as an input

-- what about custom setups, could they be prompted to add their inputs to this library?

Nell: AI (Nell): to log 2 issues

-- Wanted to call out how to add I18n support, and a way to feather in A11y meta data

-- Back to question: Are you asking: at runtime, browser at runtime can override, or at build time, consumer can override?

-- Another issue here is: how to handle merge requests (for feature additions)

-- Maybe amazon could host in CDN on S3... but different platforms may want to do this differently. Library can be included, but can also be forked and spin offs can be made

-- Source will be available under MIT license

-- Trevor: could hooks be added to core to add new(ly discovered) controllers to this library?

Nell: great idea, please log this

Leonard: GLTF has standard where 1 = 1m. Can we standardize this to ensure scale is appropriate?

Nell: something that needs to happen: named nodes can have units specified, though might be overkill. perhaps some tests to be added to view controller next to human (ex) to gauge rough accuracy of scale

-- for some sort of model validation

Leonard: some requirements/guidelines for users to adhere to recommended practices?

Nell: this is a missing section in the readme (submitting models) which needs to be filled in. This is a TODO.

-- Goals are for this to be as easy and automated as possible without over-engineering the problem

Rick - visual responses: how to display when a button is pressed?

Nell: this lib does not modify asset. it has function - give me id for the transform, and then dev applies the transform

-- this process is fairly well documented in repo, perhaps better to review offline

<Zakim> alexturn, you wanted to talk about CDNs

-- some bugs already logged against this, but plz review offline and ask questions if there are any

Alex: whether to use CDN, forward compat as a goal. If users take snap w/ set of models, they have a static implementation. Folks will likely want to fork and host those forked versions, but we should encourage that the CDN should be living

Nell: Not sure how big the bytes will be (for hosting). Lots of thought on versioning semantics (what major, minor, patch means). this is documented in readme.

-- Patch: things like: new model conformant with spec, or mapping bug fixed, model bug fixed. unique to specific model

-- Minor: fix or minor bug, no major new features

-- Major: major new features

Manish: Not sure if this fits in this convo or other, but what about Gesture support?

Nell: that should be something different. this is specifically for gamepad controllers

John: 5 trackers (enterprise VR book). Someone might hook this up to device (ex: chainsaw). Company should probably not upload model to chainsaw. How to associate/render these new controls?

Nell: this is not a spec, it's just a library. your chainsaw would be a gamepad obj, the company could fork the lib, add their own model, and remove others they don't care about

-- That would probably be a silly choice - whereas this lib is targeted towards things that look like XR motion controllers

-- This could be used to animate, for ex: Xbox controller. for things we consider like motion controllers (with thumbstick, etc)

-- animations that correspond with this are also included, but do not necessarily map to custom controls (like a chainsaw)

<kearwood> https://videogamecritic.com/images/cool2/chainhandsthumb.jpg

Input: gamepad ids

<samdrazin> 7m outage IRC :(

<samdrazin> Brandon: Gamepad IDs

<samdrazin> there are hierarchical needs to conquer:

<atsushi> scribe: samdrazin

-- shape of controller is another challenge to solve (Ex paint brush controller in tilt brush)

<Leonardo> Issue: https://github.com/immersive-web/webxr/issues/550

<atsushi> scribe: samdrazin

-- not as necessary as button layout, but still an important part of experience

-- 2 proposals to cover these needs.

-- last several comments on #550. previous proposal: gamepad ID has string on it. no formatting restrictions from basic gamepad id.

<scribe> -- new proposal: restrictions would align with hierarchy of needs. <buttonlayout> would be captured in first string portion

-- for oculus, string could refer to "Oculus Touch", which covers all controllers that conform to same button layout

-- really a mapping of inputs to buttons for a given controller

-- A common delimiter (ex: colon ":") , then next portion of string to cover controller model IF we know it

-- underlying APIs that will not tell us, but perhaps it can be inferred. (OpenXR does not currently support this)

-- may need to be omitted, TBD. if included, for ex: oculus touch, could indicate "2019 model" or "2016 model", etc. Individual implementors of API could pick strings around their hardware

-- at the end, you'd get something like "oculus-touch" or "oculus-touch:2019"

-- registry could point to specific model/id
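The two-tier string proposal above could be consumed roughly like this. The colon delimiter and the example ids come from the discussion; the function itself is invented for illustration and is not part of any spec:

```javascript
// Parse the proposed "<buttonlayout>[:<model>]" gamepad id string.
function parseGamepadId(id) {
  const [buttonLayout, model] = id.split(":");
  // The model portion may be omitted when the underlying API can't tell us.
  return { buttonLayout, model: model ?? null };
}

console.log(parseGamepadId("oculus-touch"));
// { buttonLayout: 'oculus-touch', model: null }
console.log(parseGamepadId("oculus-touch:2019"));
// { buttonLayout: 'oculus-touch', model: '2019' }
```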

-- alternative proposal (David Dorwin): provide fields using an interface. splitting things out more explicitly prevents abuse.

-- as well as mis-parsing

-- makes things more extensible (say if we wanted to add more fields in the future)

-- Fields would still need to be parsed, but this likely reduces chances for confusion.

-- String based version would only function for Gamepad objects. Fields/interface would work for things beyond that.

-- Might not be a great thing (ex: apply model numbers to hands, in case of optical tracking)

-- Summarizing, quick discussion. 1) Do you feel like 2-tier identification system is appropriate for your ecosystem & use cases? 2) straw poll: string vs interface.
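For contrast, the fields/interface alternative might look roughly like this. The attribute names are hypothetical, not a spec'd shape; the point is that consumers read fields directly instead of parsing a delimited string, and new fields can be added later without breaking parsers:

```javascript
// Hypothetical fields-style shape for the same two-tier identification.
const inputSourceProfile = {
  buttonLayout: "oculus-touch", // tier 1: layout shared by a controller family
  model: "2019"                 // tier 2: specific model; null when unknown
};

// No string parsing needed to branch on the layout.
const usesTouchLayout = inputSourceProfile.buttonLayout === "oculus-touch";
console.log(usesTouchLayout); // true
```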

Leonard: Do string IDs allow for fingerprinting or only available once session starts?

Brandon: only available once session starts. prob not available to inline session without hoop-jumping

-- basic consent procedure to get to immersive session will likely be required

-- some obfuscation: we'd never report serial number or something uniquely identifiable

-- may expose something like "HTC hardware", but only generic info. goal is to reduce fingerprint-able info as much as possible

-- that said, devs need some data. consent procedures would be in front of this

<samdrazin_> Nell: this is similar to localization

<samdrazin_> -- {Language: Region}

<samdrazin_> -- What's appealing about this design: it could encourage this kind of behavior. No strong opinion as of now as to which approach will encourage this behavior most closely from devs

<samdrazin_> -- we should inspect how l10n tools are written to encourage this behavior

<samdrazin_> -- David's point: there may be more than these 2 pieces of info that could be valuable, more scalable solution could be nice, but string version could be useful in conjunction

<samdrazin_> -- whatever we do, we want to encourage a pit of success

<samdrazin_> Alex: I like multi-part design. How to deal with explosion of controllers (and layout) is tough.

<samdrazin_> -- Mapping and negotiation being handled separately (by Nell's lib) is a nice solution. semantic lib on top to do layout negotiation (how to support oculus/vive controller), and lib knows how to handle 3rd arbitrary obj (if similar to another controller) could be a nice fallback

<samdrazin_> -- well known token layouts is a great way to handle myriad of controllers

<samdrazin_> -- for MS, we intend for our APIs to fill gaps with extensions. MSFT extension to specify their models and IDs

<samdrazin_> -- Trying to standardize some of this (at least to get model IDs standardized across vendors). Give my vote towards separate fields. Strings can be crazy

<samdrazin_> -- L10n analogy: mostly fixed set of langs (and grows slowly). General space of these is known. Controllers is more volatile (and more likely to grow over time)

<samdrazin_> Leonard: In Button ID ref, does this imply button ID layout or physical layout? if physical, what are the tolerances?

<samdrazin_> Brandon: Functional layout, potentially room for both, but we assume that semantically a touch pad gets mapped to the same index across controllers (no connection to where that control sits on the controller)

<samdrazin_> -- this may be addressed by model, but you'd need familiarity with model

<samdrazin_> -- Not sure how we'd programmatically describe physical button layout

<samdrazin_> John: Strings vs. Fields: after VR dipped, I went into ERP. Data conversions are nasty (brilliant, but nasty).

<atsushi> scribe: samdrazin_

-- Flexibility with extension possibilities is ideal for future proofing

Straw Poll: Strings vs Interface

<ada> +field

<kearwood> +field

<alexturn> +multiplefieldswhicharestringsbutnotonestring

<alexis_menard> +field

<NellWaliczek> +field

<cabanier> +field

<daoshengmu> +field

<Artem> +field

<trevorfsmith> +field

<bajones> +fields

<Leonard> +field

<atsushi> scribe: samdrazin

Brandon: Fields is the reasonably overwhelming consensus

Ada: multiple fields (strings), but not 1 string. and the branch has a new name :)

-- Different topic: Layers

<JGwinner> Fields+

-- just kidding, input registry.

Input: registry for input devices

Chris: Either we can do Input registry and focus blur, or push this to later

Brandon: related to previous topics: with Nell's lib (processing and normalizing output of Gamepad input), goal is to get UAs to agree to same set of fields

-- W3C mechanism for this is a registry

-- issue for this is #578. outline from David Dorwin: what registry would do for us, how to start one

-- No huge concerns heard thus far. Unless there are large concerns, we will likely start engaging with this soon.

-- Not for VR Complete Milestone likely

-- but soon after, we can establish this registry with W3C. Process TBD, but as fields are selected, vendors will submit to registry, provide layout, etc

-- If other UAs expose same Devices, best effort to expose same strings and layout for same controllers.

-- will not be 100% possible all the time, might not know at runtime, but we should seek maximum consensus

Alex: hopefully we can all commit to keeping this healthy. David pointed out: we'll all have secret stuff and it works on day 1. At least do the PR when things launch to promote health of this

Brandon: this doesn't obligate you to leak new hw

-- Good faith effort

Layers

<cwilso> scribe: Alex Turner

<cwilso> scribenick: alexturn

NellWaliczek: Going to switch gears now - so far, this has been logistical/tactical for how we replace WebVR.

-- One topic that sits on that line is how we handle layers.

-- First step is to agree that we're in business to do layers in WebXR at all

-- Want at least a consistent way for UAs to experiment with layers

-- Goal is NOT to define a bunch of specific layers to introduce now

-- These are set of 5-6 documents now that are hot off the presses - some are more early than others

-- Not talking about compositor-backed texture sources today

-- This can be a key case for using spec modules

NellWaliczek: Key motivation for layers is to help with reprojection and stability
... If you move the quads yourself, you end up with aliasing when your quad is reprojected
... For maximum legibility, compositors can sample from quads directly
... For video-backed textures, compositor can pick the right frame of video to pull from and which pixels to sample
... Some concepts to explore include getting layer capabilities, if we need it
... Render state today has a baseLayer attribute for a single layer

<bajones> +q

NellWaliczek: Proposes providing a sequence of layers instead
... Defines the order in which layers are composed and how to handle occlusion

<joshmarinacci> +q

NellWaliczek: Can decide how much is in the beachhead and how much goes in secondary modules
... Key thing to figure out in short-term is the former - e.g. do we keep baseLayer in core spec
... Proposal replaces baseLayer with layers
... Specifies painters algorithm with layer 0 being "drawn first"

Artem: One of the biggest benefits of layers beyond power consumption and latency is avoiding double-sampling
... This avoids downsampling to the eye-buffer first (1000x1000 for Oculus Go per eye)
... Instead, samples at full resolution, giving great text or video
... Other example is to support hi-res video, which can be impossible to do through WebGL directly
... Have a demo that shows the difference in quality with or without quad layers

NellWaliczek: Yea, it's REALLY different

Artem: Even MICROSOFT likes quad layers!!11!1

NellWaliczek: If you have concerns about supporting multiple layers, please stand up now - we believe there is consensus here

<johnpallett> +q to ask what aspects of the current WebXR revision might break in the future due to layers. Also see: https://github.com/immersive-web/webxr/issues/670#issuecomment-497863931

bajones: Multiple layers do seem important - many native libraries today do support it
... Not every system is going to have the same level of support for all layers
... Maybe some system can support it, but it would be composited by software, so you don't get the benefits
... Need to decide how fallbacks work - can be mostly perf/clarity, but maybe with DRM layers, you can't get there from here
... "Can't do it" vs. "I can do it but with caveats"
... Could call this updateRenderState, but may be difficult to know why a set of layers failed
... Developers could have a few sets of layers and try to submit them
... How does this work in OpenXR?

Artem: Extra types of layers beyond projection/quad are extensions
... Can have capabilities, but that would cause fingerprinting
... Are there 32 supported or whatever?

bajones: What if you can support a certain number of quad layers but more projection layers or vice versa?

Artem: Stereo layers also needs some mechanism for filtering to left vs right

joshmarinacci: Speaking as an app developer, I understand why layers need to exist for perf/power
... But why do they exist as something accessible by the developer?

NellWaliczek: Because the developer needs to populate them
... This is a 2D thing that I want you to put into space with these pixels to be composed there
... Put here in space and shifted this way

joshmarinacci: So I'm providing hints about what it's going to do with the layer?

NellWaliczek: You're just giving quad pixels, whether it's text or video

bajones: A perhaps poor analogy to 2D web pages is frames
... However, sometimes you need to pull in content from other sources
... The goal is not to pull in specifics into the core spec - we want to be sure there's an entry point for pulling this in

joshmarinacci: That's helpful - want to be sure that by default you don't need to think about this unless you have advanced needs

alexis_menard: Would be helpful to see JS pseudo-code that demonstrates how this works

NellWaliczek: The reason we pulled the details out is because it was mostly IDL - explainers are generally built from sample code to help give examples

alexis_menard: High-level sample code helps people get oriented

cwilso: Alex Russell pushed explainers pretty hard and generally wanted IDL out of explainers, so folks don't go too deep on the exact types in question

NellWaliczek: The actual spec changes are something we'll get to - let's align on the basics first on what toeholds we need

<Zakim> johnpallett, you wanted to ask what aspects of the current WebXR revision might break in the future due to layers. Also see:

johnpallett: Let the record show there are two Johns in the room
... Agree that layers are important and that we want to avoid breaking changes
... If we change nothing, we may need breaking changes - if we add complexity now, that may also cause breaking changes
... How confident are we about which approach will minimize the risk of changes

NellWaliczek: Are we convicted first that we want to do layers - then we can figure that out?

johnpallett: Single baseLayer seems very polyfillable
... Do we know what else needs changes?
... Sequences are tricky to know we got right if we don't have the layers worked out yet
... Concern on our side if we don't test out the additions

NellWaliczek: Two breaking changes being proposed:
... Switching from baseLayer to layers sequence
... Second thing isn't here yet: how do you query what's possible
... Best practices around composition are hopefully less controversial, even if worked out outside this group
... What's missing is about how the web enables interrogating what types are supported
... To be clear, not advocating for one approach or another

johnpallett: Issue 670 in webxr

NellWaliczek: Comment is whether we can just stick with baseLayer for now
... Sequence later could be polyfilled
... Thinking of things as modules could take the pressure off now
... Question is whether to stick with baseLayer for now

johnpallett: Are there other aspects of the current spec design that we'd want to fix for now?

NellWaliczek: One other thing is if we want to change the current WebGL layer to use the compositor-backed layer stuff
... As we move to support WebGL/WebGL2/WebGPU as other texture sources
... But could be polyfilled

johnpallett: Concern is if we add more complexity now, ability to polyfill back could be a concern

cabanier: We only support one layer - would be good for this to be optional or to know if it would happen in software

NellWaliczek: Yea, that's a part that's missing right now

bajones: While I'm personally in support of multiple layers, Chrome when we ship will almost certainly just support one layer, just as a matter of timing

Artem: Not for Oculus!

bajones: May need to have the browser communicate that it just supports one layer, if only just for old browsers
... Could also work for browsers that are layer-anemic

Nick-8thWall: May be naive, but when I think of web development, it already has a great layer system in HTML/CSS
... Can put various canvases on top of one another
... Could this be handled at application layer

Artem: It's critical to have the texture allocated by the platform underneath the browser
... If you want maximum performance, you need to render directly into the texture created by the VR compositor
... Otherwise, you get multiple copies with multi-process nature of the browser
... That copy would take 0.6/0.7ms on Oculus Go, for example
... Could spend all your frame budget on copying if not allocated by UA/platform
... API-wise, WebXR is much closer to OpenXR in terms of API in terms of how it should be treated than CSS/Canvas

NellWaliczek: Two things to think about there too - sometimes the system controls the size of the buffers
... WebVR was very canvas-dependent, but if you rotated your phone, everything could break
... When you think about canvases, the browser does composition - but here, it's an entirely different compositor that does the work here
... Usually a separate process, sometimes even a separate chip, so definitely not the same compositor

Manishearth: May be useful to have a baseLayer even separate from the layer list, so you can easily talk about the background layer
... When looking into speccing out the compositor for multiple layers work, need to reason through how to handle devices like Magic Leap where black is generally transparent

<kearwood> Random idea... baseLayer -> environmentLayer ?

alexturn: Layer composition is separate from environment composition

<johnpallett> +q to say that we're super-supportive of implementers experimenting with things that are being incubated. Our primary concern is adding complexity to the spec now that might cause non-polyfillable breaking changes later; we're not sure how we'd know until after the explorations and incubation of a layer type happens.

alexturn: Layers would be alpha blended even if environment is additive

Artem: Sometimes a fixed layer like equirect is underneath the core eye buffers - makes baseLayer odd

John: Concern is two things:
... If I have a layer at high-resolution that displays text while I have a phone in front of things
... But I see newbies hack into React ecosystem to put 3D objects on top of other 3D objects, but in immersive context, you get terrible cognitive dissonance
... People can do very wrong things with this and can do very right things with this - need to be really careful

NellWaliczek: Yea, often called depth inversion - can look horrible
... If the thing painted in front is further away in depth, can cause artifacts
... A way around that would be a native system having depth buffer access, but could slow down composition
... One challenge with WebXR is that ultimately we don't own the native compositor
... We can say "wouldn't it be nice if" and then work with the groups that define native XR APIs, such as OpenXR
... One catch is that you have to manually manage what your composition order is, which is ugly
... Can get there if you use WebGL and turn off depth stenciling

Artem: Or use incorrect projection matrices

bajones: What we really need to establish here are best practices

John: The document specifically says the ordering is specified by the web site - could that be based on the layer itself?

NellWaliczek: The catch is that we have to build on top of what native XR APIs can do to get the efficiency here

bajones: On desktop, depth can be used to help float UI in front of app content

ada: Before lunch, let's take a straw poll here

<cwilso> johnpallett: we're super-supportive of implementers experimenting with things that are being incubated. Our primary concern is adding complexity to the spec now that might cause non-polyfillable breaking changes later; we're not sure how we'd know until after the explorations and incubation of a layer type happens.

johnpallett: Want to ensure that we get a design right before we add too much complexity now, to avoid risk of harder polyfill to today's API

bajones: May be a design pattern here that could help us out, options going in as a dictionary, which are quite permissive
... Could take a single layer now and then later a sequence of layers later, without a break
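A rough sketch of the dictionary pattern bajones describes: the init dictionary accepts a single `baseLayer` today and could later also accept a `layers` sequence without a breaking change. The helper name, the both-present error, and the normalization rule are illustrative only, not spec text.

```javascript
// Hypothetical helper, not spec text: normalize an init dictionary that
// may carry either a single baseLayer (today) or a layers sequence (later).
function normalizeRenderState(init = {}) {
  const { baseLayer = null, layers = null } = init;
  if (baseLayer !== null && layers !== null) {
    // One possible rule: forbid supplying both forms at once.
    throw new TypeError('Specify either baseLayer or layers, not both');
  }
  // A lone baseLayer is treated as a one-element layer list internally.
  return { layers: layers ?? (baseLayer !== null ? [baseLayer] : []) };
}
```

Because dictionaries are permissive, old content passing only `baseLayer` would keep working if a `layers` member were added later.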

NellWaliczek: May be not necessary since it's a dictionary, can just spec that if you give baseLayer, can't give layers and vice versa
... 1) No breaking change

<cabanier> 1

NellWaliczek: 2) Let's talk about this again in 2 weeks
... 3) Let's go for it

<bajones_> 3

<Artem> 3

<trevorfsmith> 3

<ada> Note 1. doesn't preclude it going in a future module or spec

NellWaliczek: Would we have pseudo-code in 2 weeks?

<johnpallett> 1

<kearwood> 3

<Leonard> 2

<ada> 3

Artem: Seems reasonable to provide pseudo-code and new explainer doc

<JGwinner> 3

<johnpallett> (clarification: "1" because I'm not sure whether "2" would affect the schedule for the spec.)

<joshmarinacci> 1

<Manishearth> 3

<alexis_menard> 2

<NellWaliczek> 1

cwilso: I'd like to suggest a 1.5: Let's go with no, but still revisit this after investigations during the next 2 weeks to see if we need to tweak anything
... Have to see if it's modularizable

<bajones_> Changing my vote to 0.6

<ada> 1x4 2x2 3x7

<ada> Not a significant majority

<ada> Chairs will discuss.

Lunch break

<joshmarinacci> hola

Polyfill and browser API support

<trevorfsmith> scribe: Trevor F. Smith

<trevorfsmith> Nell: This is about two things: we have a polyfill that has been worked on by Jordan Santell from Google. There are other folks interested in collaborating on that, perhaps Blair from Mozilla. There's a fair amount of work that needs to be done as the polyfill has fallen behind.

<atsushi> scribenick: trevorfsmith

There are two questions: Are there other folks who want to help? There are Issues in the repo. The second question is what folks' plans are for what version of the spec they'll ship in what timeframe. That impacts the plan for the polyfill.

Nell: For example, Chrome 73 WebXR is out of date from 76. So what are our responsibilities to devs? Are people's plans to work from top of tree in their implementations?
... This is a discussion, so I want to hear from you.

Manish: For servo we are trying to follow spec master. We don't support inline. We are lagging a bit but we are working off of top of the tree.

Manish: Servo will be in Firefox Reality on some browsers.

Brandon: On the Chrome side, we just branched for Chrome 76. At that time we were up to date with top of tree with an additional bit of inline API.

Nell: BOOOOOO

Brandon: We are closely synced with top of tree with minor exceptions like event order. We hope to sort that out during the origin trial period.

Nell: How long will the origin trial be?

Sam: One or two milestones.
... The goal is to make no breaking changes during the origin trial.

Nell: So, that will be true for 76 and 77?

Sam: Yes.
... Our intent is to have something stable over the course of the trial. We may go dark for some time and then give updates for future trials.

John Pallett: The origin trial ends at 78, and then we'll have the new features. So we can have an active trial on stable and have Canary support the newer version of the spec.

Chris: Do people understand what an origin trial is for Chrome?

Ada: Please explain it.

cwilso: When we implement new features we put them behind a flag. So, canary users can use them but they need to turn on a flag on each machine. The problem is that if you as a developer want to ship it to a bunch of people you have no way of testing it out with flags.
... That's what origin trials allow: testing in a public deployment. Users can use a feature that would be behind a flag. Just for that origin, with a valid key, and they expire.
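For context, a page typically opts into a Chrome origin trial by embedding the token issued for its origin, commonly via a meta tag (or an `Origin-Trial` response header). A minimal sketch with a placeholder token:

```javascript
// Builds the <meta> tag a page would embed to enable an origin trial.
// The token value is a placeholder, not a real issued token.
function originTrialMetaTag(token) {
  return `<meta http-equiv="origin-trial" content="${token}">`;
}
```

Tokens are scoped to one origin and expire, which is what makes the trial time-limited as described above.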

Chris: We kind of violated that with WebVR. Origin trials will time bomb, but also they explode if they're overused. YouTube could never use the origin trial because it is too much traffic. The code name for origin trials was "phosphorus"

John Pallett: It's for responsible experimentation for things like A/B testing with real-world results.

cwilso: To file origin trials we have to say what feedback we're looking to get from developers. It's not a soft ship.

Brandon: Putting -webkit- in front of everything didn't prevent features from being baked into the web. Origin trials do.

cwilso: We break stuff in an origin trial and we tell people up front that it will break.

Josh: When will the trial start?

Sam: 31st of July through end of October.

Josh: Is it OS bound?

<johnpallett> (with error bars on dates)

Brandon: No, but there are only xr bindings on Android Daydream and Windows. We don't have browsers on other platforms.
... Inline functionality will be on all platforms. Origin trials are just in Chrome.

<ddorwin> Timeline: Roughly 76 beta through 77 stable.

Kip: For Gecko, the goal is to land something this month behind a pref so people can start porting and testing. We haven't landed the full spec, but the goal is to land the VR-specific parts. Firefox Desktop and Firefox Reality on Android.

Brandon: Do you know how the W3C considers them?
... Are those two implementations?

Manish: Servo doesn't ship anywhere yet.

Kip: We'll be sharing tests between engines.

Alex: What backends?

Kip: Gecko on Oculus Go, HTC Vive, Quest, GearVR.
... All will support both WebVR and WebXR. Whichever presents to a session first will hold it.
... We'll let this ride the train. We will enable by default once we're at the VR complete stage (with conversation) with the group. That will be in the Firefox 69 schedule (hopefully) for end users seeing it in September.

Rik: MagicLeap released XR behind a pref last week but it prompts people to change the pref. We work with a-frame and three.js (which is working with Chrome 76) so we provided a small polyfill to make those work.

Brandon: You're providing JS to make it work with 73?

Rik: Yes.

Ada: Are there plans to rebase?

Rik: Yes, we rebase when Chrome releases. Then we can get rid of the polyfill.

Artem: Oculus Browser has WebXR behind a flag using first public draft.
... We backported 76 WebXR parts to our browser. We don't really care about polyfilling so much.

Nell: You're targeting 73?

Artem: Yes, the first public draft.

Ada: That's post removal of device but not much more.

Artem: Next version will be m74 based or more up to date depending on timing.

<Zakim> alexturn, you wanted to ask if we plan to do a public draft after the "VR complete" milestone

Alex Turner: It sounds like we're converging on VR complete inflection point. Things not locked but on ice. Choose your analogy. Is that the point at which we'd snap another public draft? Would it benefit us to agree to align on that?

Nell: Yes, another public draft after the VR complete milestone. The question is how large is the delta between implementations.

Alex: It might depend on timing of that release whether we hit it in our development cycle.

Nell: 76 goes out at the end of July. 77 in mid-September? So the origin trial would end at the end of October. So we're looking at this fall, sometime between September and October, when Gecko-based browsers will have VR complete. Chrome will be about 6 weeks behind that. Rik will be keeping that polyfill up to date. So it sounds like we should target top of tree for the polyfill and keep it up to date. Maybe with a Chrome-specific version with that delta.

Nell: Does it seem reasonable to target top of tree for the polyfill?

Chris: The challenge with the polyfill is that what we're really going to do is tell devs to "build to this", and if it's top of tree that's a moving target.

Nell: So we should track the polyfill to the VR complete snap. There will be last-minute changes in there for VR features, and anyone who is not targeting their user agent to ship the VR complete spec text carries the burden of supplying a polyfill to their implementation.
... We'll have an opportunity for Rik's polyfill to be in the repo.

Rik: Our polyfill is only for Three.js.

Nell: Ok. So, anyone who wants to have code authored toward the VR complete can include it to target their implementations.
... Ideally the implementation polyfill should... if it's based on Chrome 73.. never mind.

Brandon: People using the polyfill use it for compatibility. Do we want these version-delta fixups in the polyfill? I'm in favor of it because it represents wider compatibility.

Nell: The crux is that we're talking like it's one thing. It's from A to B. B should always be the draft spec but A can be any number of things, each implementation.

Ada: That's what the polyfill service does. Polyfill.io does it for all of ES2015 and newer features. We could do something polyfill-service-like: a script that pulls down what the experience needs for a specific browser.
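A sketch of the polyfill-service idea Ada mentions: decide which shims a given UA needs from detected capabilities, and serve only those. The capability flags and shim names below are made up for illustration; they are not real polyfill modules.

```javascript
// Hypothetical capability flags and shim names, for illustration only.
function selectShims(caps) {
  const shims = [];
  if (!caps.hasNavigatorXR) {
    shims.push('webxr-core'); // UA has no navigator.xr at all
  }
  if (caps.hasNavigatorXR && !caps.hasInlineSessions) {
    shims.push('webxr-inline'); // VR-only UA lacking inline sessions
  }
  if (caps.specLevel !== undefined && caps.specLevel < 76) {
    shims.push('webxr-draft-fixups'); // older draft of the API surface
  }
  return shims;
}
```

This matches Nell's framing of "A to B": B is always the draft spec, while A varies per implementation.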

Nell: That seems reasonable. Another question: we're all within spitting distance, and if that's the case do we do like WebAssembly and turn them all on at the same time?

Ada: We'll be more concerted than CSS grid which was a remarkable coordination.

Chris: The polyfill needs to be an authoring target (with documentation) and the reasonable choice is the VR complete milestone. Maybe we should track top of tree after that because not much should change. The fixup would be fixing up support for implementations underneath that.

Nell: It gets complicated if we aren't decided about modules.
... For the VR side of things, this makes sense. If we're all quietly not saying anything and we don't coordinate it seems like, yikes, can we not?
... It seems like we're not far off from coordinating. We can't ask people to make a commitment in this room, but people could go talk to people on their teams and maybe talk in the next WG call. Building on WebXR right now is hard.

Alex T: Do we need to pick a date for the draft?

Nell: Are folks interested in coordinating on a polyfill? Are folks interested in coordinating on release dates?

Chris: The level of transparency we've had in the last half hour is great and we should continue that. My suspicion is it will be very hard to coordinate other than that. I'm concerned about the developer story.

Nell: Me, too.

Ada: We need the polyfill as the safety stop. It's great to hear about implementations being ready by September, but accidents happen and stuff does slip. Even a delay of one or two months puts it at November, and then you're losing November and December, so it would release in February or March.

Nell: We'll be at TPAC in September.

Ada: It would be cool to have this conversation again in September, to see what's ready to ship.

Chris: I think you'll find that we (Google) have the transparency because our release system means we have to plan a few months in advance. Looking ahead, we're talking about what's in 76, which hits stable at the end of July or beginning of August. In September we'll know about October / November.

Nell: Perhaps in the call after next we can make a call about the polyfill to see if browser vendors are willing to commit to their implementation being polyfillable to the VR complete draft.
... Ideally, browsers will update the polyfill themselves. I'm hearing that my product should target the VR complete milestone.

Ada: I'm making a note to add it to the agenda for the next call or the next call after that.

<johnpallett> +q to ask whether we can document that request in an issue or somewhere so we all are agreeing to the same thing

Nell: Thank you for that topic. I feel it was productive.

<Zakim> johnpallett, you wanted to ask whether we can document that request in an issue or somewhere so we all are agreeing to the same thing

John: If there's a one-liner in the agenda, I can commit to that.

Nell: Let's file an issue in the polyfill repo and figure out how to tag that for the agenda.
... I'm not sure who owns that repo, CG or WG.
... We'll update the issue inline and we can discuss it there.

XRTest and web-platform-tests

<Manishearth> https://paper.dropbox.com/doc/XRTest-API--AeZNuSf9gBF3pKIPcc35rrYqAg-ButPPh6NtDPj59JPw2wbx

Manish: This should be short. I and others want to write shared tests, and this involves mocking devices. There is a WebXR test repo under immersive-web that has an API design for testing. The way it works is that you simulate a device connection and a mocking controller. It allows you to change the frame of reference, end sessions, set bounds, etc. You should read the doc and leave comments. I wanted to see if people had thoughts or comments on how this should work, and any help.
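A rough, non-normative sketch of the shape of such a test API: simulate a device connection, then drive its pose from the test. Names loosely follow the discussion; the pose representation (position plus quaternion arrays) is an assumption, not the actual interface.

```javascript
// Non-normative sketch of a WebXR test/mocking API; names are illustrative.
class FakeXRDevice {
  constructor(supportedModes) {
    this.supportedModes = supportedModes;
    this.viewerOrigin = { position: [0, 0, 0], orientation: [0, 0, 0, 1] };
    this.connected = true;
  }
  // Tests drive the simulated headset pose through calls like this.
  setViewerOrigin(position, orientation) {
    this.viewerOrigin = { position, orientation };
  }
  disconnect() {
    this.connected = false;
  }
}

class XRTest {
  // Simulates plugging in a device the page can then request sessions on.
  simulateDeviceConnection({ supportedModes = ['inline'] } = {}) {
    return new FakeXRDevice(supportedModes);
  }
}
```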

<bajones> +q

Brandon: I wanted to mention that you had updated the interface, and there's some old API like setFrameOfReference and supportsImmersive because the genesis of the testing interface was at Chrome.

Manish: Yes, it's not updated completely.

Brandon: It conforms to Chrome internal data which in turns was designed around WebVR. So it represents a fair amount of technical debt so I don't want people to look at what was there and think it shouldn't be changed. Let's be willing to take a hatchet to it and make it something that suits the need of the current API.

Manish: Yes, I wasn't sure about the setFrameOfReference.
... Did change it to use native origins, it does the fundamentals of the spec so you're actually testing code that derives from the fundamentals. I'm happy to make bigger changes.
... There are already tests in web-platform-tests that use this, but they're ages old.

Brandon: We have been adding web platform tests on the older version of this. For the most part it's my hope that they're platform tests but it's possible that there are Chrome-isms in there. I'd love people to start using them against your own implementations when it's ready. Again, if you find things that are Chrome-isms or are overtly purpose fit to what Chrome is doing it's accidental and you should point it out.

Manish: Before others start using those tests everyone needs to be on the same API. Servo and Gecko are using the new version of the API. We have a little local wpt dir that doesn't sync up and we keep them there until everything syncs up.

Brandon: Could you make those public even if they're not synced?

Manish: Sure.

Ada: Ok, next topic.

Chris: Snack!

Privacy

<cwilso> https://www.irccloud.com/pastebin/5Q9THNZe/

<Leonard> scribe: Leonard

<Manishearth> avadacatavra: ^^

<cwilso> To join the video meeting, click this link: https://meet.google.com/igb-hoah-swr

<cwilso> 9:02 AM Otherwise, to join by phone, dial +1 929-266-2229 and enter this PIN: 288 324 932#

jp == JohnPallett

jp: Privacy Design Doc

<johnpallett> https://github.com/immersive-web/webxr/pull/638/files?short_path=472fbcc#diff-472fbcc4786b1b90047b02fd8e7bdc17

jp: Self-read the privacy design doc

Nell: Use buttons to the right to switch to the "rich-diff" version of the document

<alexturn> This link to the doc skips the green bars on the left: https://github.com/immersive-web/webxr/blob/67b0c0992e7d82c383d619900ef110a36c3bfade/designdocs/privacy-design.md

<Manishearth> whoops

<Manishearth> a

jp: Typo stuff later... Keep focused on concepts
... Design doc, not PR, not Spec.

Kip: Quantization accuracy/tolerance. Specifically, height with the floor moving up/down may cause problems

Kip: Prefers to make people shorter or the floor higher than actual

Alex: How much is allowed?

jp: A few centimeters should be sufficient

Ada: In headsets accuracy is order of millimeters

Brandon: Actual sensor-driven data reports height above the 0-plane. Expectation is that this is sufficient.
... Critical: emulated floor-level space based on user-configured height.

JP: Covered in the doc by emulated height set by user

alex: Device reports actual height, and provided height may provide a lot of info for fingerprinting.

Nick: In the first table, there are ~9 different types of data. Having problems understanding what would be presented to users

jp: User Understanding: Existing table in document that discusses the issue
... Overlapping situations may cause user confusion. Certain conditions may be mutually exclusive. Others may be present at the same time.

nell: Question about XRSpace and XRReferenceSpace?

jp: Both may be relevant.

Nell: Doesn't agree.

Brandon: Set 6dof controller and can't determine user or space motion

Nell: What does that have to do with geographic measurements?

jp: ...lost it.

Brandon: two scenarios track camera and something else

jp: Realistic scenario: relative position data over a large enough physical space can pinpoint the user's geographic location

Nell & Brandon: Concerns about wording in the document table. JohnPallett will address this summer.

Nick: Still concerned about User consent

<Zakim> kearwood, you wanted to suggest that inactive controllers (eg not moving for a long time) no longer return position without user consent.

Kip: For controllers: a vulnerability with multiple 6DoF controllers, when one or more controllers are put on a stable surface.
... Single or multiple sites may be able to track.
... Is quantization sufficient to prevent fingerprinting?

jp: Threat vector associated with low-end bits because of manufacturing differences
... Addressing Kip's question: For angles, 6 degrees successfully prevented fingerprinting without impacting user experience.
... Thinks 6 degrees is too much for an immersive session

Kip: For a stationary device, dynamic quantization may work. The longer it is stationary, the bigger the quantization.
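A sketch of the two mitigations just discussed: snapping reported values to a grid, and (per Kip's suggestion) widening the grid the longer the device sits still. The step sizes and doubling interval are illustrative only, not the thresholds johnpallett cited as sufficient.

```javascript
// Snap a value to the nearest multiple of `step`.
function quantize(value, step) {
  return Math.round(value / step) * step;
}

// Illustrative policy: height snaps to a 2 cm grid, and the grid doubles
// for every 30 s the device has been stationary (dynamic quantization).
function quantizedHeight(meters, stationarySeconds = 0) {
  const step = 0.02 * 2 ** Math.floor(stationarySeconds / 30);
  return quantize(meters, step);
}
```

The idea is that a controller resting on a table leaks progressively fewer low-order bits, which is where the manufacturing-difference fingerprint lives.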

josh: Actual case of using just 1 controller

jp: Clarification done...
... Trusted UX: consent happens at session connection because no clear mechanism to ensure a trusted interface
... See #424. Trusted interface may be display device dependent for multiple reasons

Manish: Single request causes a pit of failure by requesting all permissions up front. Especially true of frameworks.

jp: Some platforms have mechanisms to do trusted interfaces, but this does not apply to everything

nell: can only think of cardboard

Alex: Same problem applies in reverse. How does the user know which mode is running.

Trever: Shared secret between browser and user, but not application that indicates what happens.

nell: Problem with iOS when a website pops up a dialogue that requests the phone PIN

jp: Research indicates browsers leave full-screen mode for all browser prompts

manish: Desktop has an easy mechanism

Nell: Edge have a long-term dialogue that indicates full-screen with bailout instructions

jp: Can a trusted interface be spoofed? Believes that all research (listed in #424) indicates that there is a significant safety issue

Alex: Still sees a problem with headsets because there is no base level to return to

jp: All of this is only a problem with immersive. Inline does not suffer from this.

Alex: Needs to prove to the user that the immersive session has ended.

jp: Would be looking for solution for trusted info .

nell: Does there need to be a requirement for a guaranteed mechanism to "return home" in the spec?
... What happens when a user needs another permission features (e.g., mic)

jp: If the design was relaxed so that permissions do not all need to be requested up front, and there turns out to be a major hole, what happens?

Ada: Up-front requests are different than the currently recommended means for doing permissions. Really important to
... handle permissions during a session

jp: X-Origin navigation requires session termination and handling of new session/permission

johnGwinner: Do we need to answer the question: "What is a secure interface?"

jg: Concerned that the wording in the 'User Communication' section is not clear

jp: Informed consent is required. Believes that there is no cross-platform, cross-language universal informed consent interface.

Manish: Concerned that the on-demand session permission dialogue forces the user to become too familiar with approving

Alex: More discussions about how to ensure trusted interface

nell: Starts being the same as focus/blur -- next topic

<Manishearth> alexturn: One way to handle backcompat is to dump the user back into the outer UI if we end up, in the future, realizing that in-immersive permissions prompts are broken

jp: Slide on Consent... not clear how to present trusted i/f

Alex: Equivalent of ending session, present prompt, restart from previous point

<Manishearth> Manish: Oculus Go has a "nice" version of such a UI where the outer session has a window into the suspended session

Floor: move to break-out session(s)

jp: What happens with partial consent -- the user only agrees to some of the stuff
... what should the UA do? Design does not currently have an opinion. It can prevent session creation OR disable features
... NEVER give access to data that was not consented to.

Brandon: Internal data indicates which data is supported/handled

jp: Allows user to have fine-grain control. Concerns about either choice

Manish: Major improvement compared to over-permission. Should try to spec it. May be useful to have
... required and optional permissions and return a list of enabled permissions

nell: Required and desired permissions might expose unsupported device features; that might make it possible to fingerprint the device/user
... Return "not available" instead of "declined".

Brandon: Timing may indicate availability of features. Would not always work

nell: Using controller would provide all of that info anyway

Ada: Progressive permission enhancement?

Brandon: API & apps should be designed around reasonable fallbacks
... Need to start a session before controller info is available. Does not prevent session termination and restart with additional info
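A sketch of the required/optional permission split Manish proposes, with declined optional features simply absent from the result, so a site cannot distinguish "declined" from "not available" (Nell's fingerprinting point). Function and feature names are hypothetical, not spec text.

```javascript
// Hypothetical negotiation: required features gate session creation;
// optional features are enabled only if granted, and declined ones are
// simply absent (indistinguishable from "not available").
function negotiateFeatures(required, optional, granted) {
  for (const feature of required) {
    if (!granted.has(feature)) {
      return null; // a missing required feature fails session creation
    }
  }
  const enabledFeatures = [
    ...required,
    ...optional.filter((feature) => granted.has(feature)),
  ];
  return { enabledFeatures };
}
```

This mirrors the "return a list of enabled permissions" shape Manish suggests, while collapsing the decline/unsupported cases as Nell recommends.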

jp: Last item: Thresholds for user consent. Some can be avoided, some are required, some are in-between.
... Fingerprinting (info across sessions) requires consent. Profiling (info during a session) requires consent if it is personally identifiable information
... IPD, height, gait, etc.

Chris: Suggests quitting early.
... but it didn't happen

<cwilso> I tried SO HARD.

Josh: Presenting "Hypercard for XR". See link from Josh

Lightning talks and unconference / Hypercard for XR: Josh Marinacci

<joshmarinacci> https://docs.google.com/presentation/d/1ByC7N1NJy9P_uFi8V2YPkk_vuClBlBu-beDmsfbUBJM/edit?usp=sharing

<ada> scribe: ada

Josh:

What would hypercard for XR be like?

Hypercard is a "programming" system for multimedia for non software engineers.

GUI editor

Can be run from the Archive using WASM

Single consistent metaphor "a stack of cards"

Remixing encouraged :D

<kearwood> https://en.wikipedia.org/wiki/Myst

<kearwood> ".... The original Macintosh version of Myst was constructed in HyperCard. "

People made some really cool stuff!

Metaphor Scenes, Remixing (built on glitch), simple scripting

Built in support for image anchors and geo anchors

Targeted at middle schoolers.

MrEd (Mixed Reality Editor)

Lists of scenes, behaviours, and assets which can be edited with a WYSIWYG editor

<atsushi> s/topic: Hypercard/Josh: /

x-device editing and viewing

Simple scripting (JS underneath) for changing scenes

Share via urls! :fireworks:

Based on React, PubNub for shared editing

Success because it was fun, failure because the software needed improvement

Edit on desktop and VR

AR is view-only

What's next?

Good for non-programmers, prototyping, having fun!

How should they continue it??

*round of applause*

It's done!! Buh-bye til tomorrow!!

:dance.gif:

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/06/04 23:27:26 $
