Minutes - Day 1/2
Minutes taken by Karen Myers, Dominique Hazaël-Massieux, Ted Guild, Xiaoqian Wu and Atsushi Shimono. Many thanks to them!
See also minutes of day 2
#Topics
- Keynote by David Catuhe (Microsoft / Babylon.js)
- Setting the context: Web games development
- The future of the web runtime for games
- 3D rendering roadmap
- Mapping games users & designers needs to technologies
- Packaging / Asset loading & storage
- Towards accessible games
#Keynote by David Catuhe (Microsoft / Babylon.js)
David: I ♥ the Web - I created
babylon.js, which is used by many to build other projects, an
idea I ♥
... We used to have Flash games, and Flash died, for the
better
... Games on the Web have improved a lot since then
... I've been able to play Minecraft in my browser
... we did wonders!
... I ♥ the Web, but it also drives me crazy
... it has so many limitations
... lots of my friends use native tools, such as Unity,
Unreal
... they have access to so many tools and toys, I'm jealous of
them
... The agenda will cover many of the problems faced by game
developers on the Web
... one of them is Asset Management - in particular the extent
to which they get made available without protection
... Performance is core to the business of my tool
... Cloud support is another important direction the industry is going in
... IP protection, file size, compression, WebGPU, threading, WebAssembly, new codecs, gamepad support
... A lot of interesting topics to discuss - let's change the
Web together
... We can make it better
... I'm impressed by the many organizations in the room,
including many that are otherwise competitors
... We're all friends today, with a shared goal
#Setting the context: Web games development
Francois: this session is about feedback from developers of building games on the Web, from different perspectives, starting with Chris from Facebook Instant Games
#Instant Games developer feedback
Chris: I work as an engineer at
Facebook around Instant Games, helping developers to build
games for our platform
... Instant Games is where we are at now
... I'm here to listen and learn but wanted to highlight some of
the perspectives we've heard
... first, introducing Instant Games
... Instant Games is an HTML5 games platform running in a
WebView
... The WebView is responsible for the game, where the Facebook
app deals with platform integration (e.g. authentication, ads,
etc)
... WebView handles all the graphics, input, audio
... the games get hosted on our CDNs
... there are 2 sets of problems that we have: some are purely on us, e.g. monetization, social gaming (for which we're very well positioned as a social network), and keeping the platform trustworthy
... but some problems we need help with: getting high fidelity
games is challenging due to restrictions,
... battery life and temperature are problematic
... esp. in lower end devices
... and finally, the need to protect game assets
... It's great that the Web is such an open platform, but it
isn't great when it means developers get ripped off of their
assets
... Some specific issues from Instant Games developers: device capabilities detection; running out of memory is particularly problematic since it crashes not only the game, but also the Facebook app
... we see a higher rate of crashes when Instant Games are in use
... Another issue is having webviews at feature parity with
browsers
... and IP protection for Games developers
... Caching and loading - delay in getting a game starting has
a strong impact on engagement
... And WebAssembly is an important element of changes
Bernard: about asset protection - what are the key thing you're looking to protect?
Chris: people basically clone
some games, re-upload them with ads
... the first level of protection would be to avoid cloning whole games
... next level might be protecting specific 3D assets, but
getting the first level done would already go a long way
#Report from the trenches of an HTML5 game provider
Christian: Christian Rudnick, CTO,
Softgames
... We want to
share our challenges as a game developer and publisher
... We all know challenges of loading and storage
... the platform should deliver the first bits within a second
... that's challenging; it's quite hard to get there
... casual game developers tend to load everything up
front
... don't think about background loading, so it's hard to
achieve 3 seconds
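The background-loading pattern Christian alludes to can be sketched roughly as follows — a hypothetical asset queue that blocks startup only on critical assets and fetches the rest while the player sits on the menu (all names and the loader shape are illustrative, not Softgames' actual code):

```typescript
// Hypothetical background loader: start the game with critical assets only,
// then pull the rest while the player is on the menu. Names are illustrative.
type Loader = (url: string) => Promise<unknown>;

async function loadCritical(loader: Loader, urls: string[]): Promise<unknown[]> {
  // Block the first playable frame only on what the menu really needs.
  return Promise.all(urls.map(loader));
}

async function loadInBackground(
  loader: Loader,
  urls: string[],
  onDone: (url: string) => void
): Promise<void> {
  // Fetch sequentially to keep bandwidth free for gameplay traffic.
  for (const url of urls) {
    await loader(url);
    onDone(url);
  }
}
```

The split itself is the point: the "3 seconds" target only concerns the critical set, while the rest arrives opportunistically.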
... when games are more valuable, meta, graphics, etc.
... one of the issues we have
... The loading and storage leads to issue of offline
play
... native games
... once loaded on your mobile phone, you can play it
everywhere
... web game not so much
... conflict between fast loading a good game and @
preparation
... what about games with monetization?
... how to present for offline?
... I very much like what Google did for its search engine, providing a mini-game
... in afternoon there will be a session on loading and
storage
... I will attend and see what we can do
... we also have issue of piracy
... we make it easy to steal the game; there is no way to
protect our games on the web
... there are tools to beautify and de-obfuscate the code
... when it comes to older games
... the code is all we have left
... one of ideas was 'runtime code'
... deliver a web service
... we have another problem
... for now we gave up; not really a chance
... once you get your user playing a game
... maybe stealing as well, there is an issue of multiple
devices
... we played with finger printing the user
... mobile devices...no way for 100% hit rate
... and that is insufficient
... you get user, we want to get millions of users
... then it comes to problem of discovery
... my colleague will talk about this now
Gili: Gili Zeevi. I will talk about
discovery
... in past had choice of talking about discovery in web
games
... there are tons of games on the web
... it is a challenge to push traffic on these games
... this would continue to be challenging for us
... also a challenge to get a player to play again, the
retention
... some effects in native
... push notifications
... opt-in rate
... push notification and ability to engage users and get back
to game
... like to have opt-in when user accepts notification
... this is not solved in the best way for the user
experience
... in some cases users may not be aware they can opt in
... Chrome solved it better; maybe direction to go in where
user acknowledges it
... Maybe look at shortcuts and bookmarks
... users might not be aware they can add to the home
screen
... having opportunity to show something like the notification
would be a good thing
... enter the game consistently
... on FB instant games we can create these shortcuts
... and we see users doing it and their retention is
significantly higher
... Monetization
... we create games for fun but also to make money
... the revenue is 60 billion on native, like to see same on web
games
... standardizing the API would help users trust this kind of feature
... do it the same way as native
... but have a solid solution for rewarded ads
... huge opp for monetization
... that is a summary of our major issues and challenges
... not everything as Christian said
... and looking forward to tackling these things during this
workshop
Francois: What are rewarded ads?
Gili: A way for a user to watch an ad and get rewarded, e.g. replaying a level by watching the ad
Francois: Why is game stealing specific to the web?
... what makes piracy more problematic on the Web?
Christian: you can crack the
code
... store on local browser
Francois: Thank you
... we will go into details later; we are currently setting the
context during these talks
... we will dig into them during the main sessions today and
tomorrow
#Indie Perspective on Web Games
Andrzej: Andrzej Mazur. I am excited to be
here
... this is my own experience
... you can call me Ender
... I am founder of Enclave Games
... huge team of me and my wife
... also curator of JS13K Games competition
... we do meet-ups
... and also with Gamesdev.js
... thank you W3C for inviting me, and Mozilla for sending me
here
... progressive web apps
... I want to start by going back to 2011
... wayback machine
... there was a W3C workshop in Warsaw that sparked cool
conversations
... it made me prepare a presentation for a conference in
2012
... and I wondered if it was possible to create HTML
games
... it was a struggle
... it was really hard, very challenging
... the performance was not the best, to say the least
... most of the games
... just did not work
... we could not fight with all the features
... working offline was only in theory; did not work too well
in practice
... there were not many APIs, just drafts
... cutting edge, but not working well
... very experimental; play with a gamepad
... hard to have pointer lock and fullscreen
... hard to build with that technology
... it was also hard because there were not many tools
... I remember the first games I was building used jQuery
... many front-end developers used tools they already
knew
... Monetization was non-existent
... at the time, we were building games for fun and seeing if it was technically possible
... not many developers were thinking about having a business
around that
... Good news is that most of the stuff in 2019 now works
... you don't have to be a developer to create games
... it's not about struggling with the tech; I can build a
game
... but main question is what is next
... I built the game, so what now?
... We still have discussions comparing web to native
... you can choose native, even though you are building with
open web technologies
... but you are competing with developers using native
tools
... they have more years of experience
... and forgetting about important features; the URL
... you can stay with web
... instead of going native
... work with publishers and platforms
... work with publishers
... development focus is on the mobile web right now more
... that is where most of traffic is going
... developers are finding progressive web apps to be
useful
... you can load assets after the initial load, etc.
... you can also distinguish single devs vs small and bigger studios
... small indies usually create hyper-casual experiences that
you can play on the toilet or commuting to work
... bigger studios push the boundaries
... IndieDevs are looking into the new technologies
... it is easy to build hyper-casuals
... but maybe it will be easy to use WebXR in future
... and is there a way to monetize it
... how do you show the games to the public?
... these are things I see in the community
... but I also decided to ask the community and get their
feedback; what are their struggles?
... Interesting to see that we don't have a specific big issue
any more
... rather it's a whole lot of small things
... things like: we have too many packages, should I package it with Grunt or @
... too many things to choose from
... we need tools to debug the games
... be nice to debug Canvas
... with devtools or something
... we need more materials
... there is information on the web
... video tutorials
... about the different technologies, but we need more
... also situation with new platforms
... you can build games quickly
... but developers are not getting support from the platform or
the publishers
... reaching to send games to the platform
... when there are so many games submitted, it is hard to get
feedback, hard to get updates
... wait a month or more
... something we could actively work on
... Also more technical questions in this survey
... can we have native 2D and 3D math modules?
... they could be built into the browser
... what about shared array buffers
... can we get back to SAB; is it safe?
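To illustrate the "native 2D and 3D math modules" request above: every engine currently ships hand-written vector math like the sketch below, which a built-in (and potentially engine-optimized) module could replace. This is purely illustrative of the kind of API being asked for, not a concrete proposal:

```typescript
// The kind of boilerplate a built-in math module would replace — most
// HTML5 engines re-implement something like this today (illustrative only).
type Vec3 = [number, number, number];

const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const length = (a: Vec3): number => Math.sqrt(dot(a, a)); // Euclidean norm
```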
... two more important things mentioned that you also saw on
the other two presentations is discoverability and
monetization
... Going back to 2011
... I built a game
... about shooting at IE6
... it was interesting to see that the game was built only for
the conference; I published it on my blog
... then it started to appear on other sites
[reads funny headlines]
Andrzej: there were 50 articles
and they just copy/pasted
... make a game and someone would copy it and it would be
everywhere
... a few years later
... some developers are making 500 games
... and there are hundreds or thousands of developers like these
... super easy to create an HTML5 game today
... in a week for a hyper-casual game
... easy to look at game and make the same one in just a few
days
... securing the source code is important
... but working with publishers and platforms to track down
stolen games and not wait three months to take it down is
important
... So discoverability is one things and also
monetization
... We had a thing where developers were selling licenses to publishers
... fixed fee
... worked nice
... and over the years it evolved into the subscription
model
... paid a small fee to publish game every month
... Now most popular approach is revenue share
... and split earnings out of the advertisements
... it is hard to predict this
... not as good for developers
... fixed fee I knew how many licenses I could sell
... with revenue share
... it was like, 'ok, we have tens of billions of users' but
game earned $1 per month
... and you cannot live off of that
... developers were free-lancing
... revenue share is really tricky
... of course every publisher and platform will show a specific
success story of developer X
... but most of the developers are struggling to make a living
making games
... hard to focus only on the games and make money
... I mentioned that protecting the source code and assets is
important
... but having an ordinary clone of your game taken down is
also important
... from the developer point of view
... to fight with such things
... To summarize
... the technology is good enough, compared especially to a few
years ago
... no big issues, but a lot of small issues we can work
on
... Discoverability is definitely a problem
... with so many games, it's hard to stand out, and be visible
with your games
... harder and harder to have a business around making hyper-casual games
... have to find other opportunities to make money
... those are the most important things from the IndieDev
perspective
... really hope we will have more discussions and talk with
many of you over the next two days
... thank you
[applause]
Bernard Aboba: If W3C could solve one problem for you, what would it be?
Andrzej: I have to think as there
are a few
... many small things to be improved
... how to advance with WebXR
... is the Gamepad API working?
... WebAssembly...can we have some tools to work on WebAssembly
... and have the communication between JS and WebAssembly be usable
... many small things to be discussed
... Fortunately or unfortunately, not one major issue
Edgar Simson: Playing devil's advocate here, if markets are already saturated... why should there be more small games which can be hidden somewhere for someone to discover?
Andrzej: I love to make games and
I love to make money out of it
... I like my games; and I want to show you they are the best
games
... the market is over-saturated
... but there are more and more new opportunities opening every
month
... not that it is done
... it is not possible to make more games because there are
more players
... more about quickly adjusting to the situation
... from safely betting on licensing to subscriptions; some
publishers offer online purchases, but you need millions of
users to make it work
... still needs to be some research
... if you have ten million users, why is it not working?
Tom Greenaway: You mentioned at one
point that PWA is interesting tech
... have you tested what metrics improve...and then did or did
not implement a game with PWA standards
Andrzej: I know there is research;
I did not do it myself
... from my POV, it was awesome to experiment with game; call
it lazy loading
... while player is looking at main menu; initial load is
pretty quick so publishers and platform are happy
... we are still able to load in the background
... I did not do research, but it feels like it should add a
lot of value
... I researched PWA for selling stuff and web site support
... and it was improving the sales by hundreds of percent
... that worked for applications
... and I think it should work for games, but I have not
checked that myself
... and I have not heard about research
Tom Greenaway: I also hear about new
platforms
... did you mean generally there is over-saturation?
Andrzej: I think both
... usually players have @; now there are other options
... starting to look similar
... FB instant games are most popular
... so many other things
... @ started experiments allowing a streamer to let others watch
... and see if you can earn money on them playing
... or they pay the streamer
... still something that needs research
... so many new platforms that can appear
... already getting feedback
... when reaching to developers it was smooth, but now it is
taking much more time to get the feedback
... of course, it's a problem
... of having a successful platform
... have to handle if people say the game was stolen
Tom Greenaway: an earlier question was if
there were one thing W3C could do to help
... were you restricting to tech answers or are monetization
and discoverability still the biggest problems?
Andrzej: yes, those are the
challenges
... WebAssembly sounds like an easier question to answer than monetization, which sounds like a harder question to solve
Francois: Thank you. We are right
on time. We will be discussing monetization tomorrow, breakout
session that Tom will be leading
... we will break for 30 minutes now
... and then start on first technical session
... Also want to apologize for the badges; name shuffle
... we'll regroup in 30 minutes; coffee outside
#The future of the web runtime for games
#Debugging tools, by Bjorn Ritzl, King
Bjorn: from the Stockholm office at King, will talk about debugging
... our game engine and how to debug html5 apps
... King has released 200 games over the years, starting from Flash to Facebook and mobile games
... in the app stores
... the most popular one is Candy Crush
... 268,000,000 users
... 2 game engines at King - one of them is Defold
... Defold is a game engine and editor, cross-platform
... published to native on mobile, desktop apps and HTML5
... it's a C engine, with Lua for game logic
... 3D with 2D focus
... component based, small and modular
... used in Google Play Instant, Facebook Instant Games, playable ads
... Defold was released for free in GDC 2016, available to anyone
... with an organically growing active community of developers
... couple of King games released with it
... and a few major indie games built on it as well
... Our C++ engine is completed by a bunch of platform specific code to adapt to each target
... JS for HTML5 - the ratio between shared and platform-specific is around 20:1
... debugging is a must; usually enough to debug on host platform
... but what about HTML5-specific problems? these require a different set of tools
... the game engine is transpiled to JS with emscripten
... we use Web Assembly with source maps that help with debugging
... that's nice, but coming from our background of C developers, we would prefer to use our existing debugging tools in this context too
... How do you debug problems specific with Web Assembly?
... Debugging graphics is also important for us
... we would like to be able to inspect the textures uploaded to the GPU, shaders, draw calls, composition
... we use WebGL and OpenGL
... on desktop and mobile we get powerful tools (renderdoc)
... but what about WebGL? we have spector.js which is great, but can we get better integrated support in browser dev tools?
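As a rough illustration of the toolchain Bjorn describes (C engine transpiled with emscripten, WebAssembly with source maps for debugging), a debug build might look like the command below. Flag names follow the emscripten documentation; the file names and URL are illustrative:

```shell
# Debug build: emit WebAssembly plus a source map so browser dev tools
# can step through the original C sources.
emcc engine.c -O1 -gsource-map \
     --source-map-base http://localhost:8000/ \
     -o game.html
```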
#Enable fast and efficient threading on the web, by David Catuhe, Microsoft / babylon.js
David: with my babylon.js hat on, I'm going to write my letter to Santa Claus.
... today, multithreading on the Web is done in a very process-oriented way
... it's tough to compete with native when you can't use multi-threading across multiple cores
... Why aren't web workers enough? they're not easy to work with
... in a 3D engine, we have lots of things to do in parallel
... Frustum culling can be done in parallel - but in a Web worker, this requires copying and passing with serialization
... Lacking a shared memory structure is a big gap for the Web
... this would apply likewise for particles
... many of my colleagues (e.g. at SharePoint) would love this as well
... a better threading support would go a long way
... Web workers can't run in the same context - a worker has to be in a separate file, communicating with string-based messages
... the only thing they can work with is ArrayBuffers - they work more like processes than threads
... here is my Santa Claus wishlist
... Could we have a PromiseTask that would run on a different thread?
... Feedback from the MS browser team was that it wasn't easy or possible
... including some friction from TC39
... JS engines are optimized for thread isolation
... this means changing this would be complicated
... Apple experimented with this two years ago - not sure where this has gone
... Assuming my PromiseTask can't work, can we at least improve Web workers?
... allowing structured objects to be shared would go a long way
... let's extend the idea of SharedArrayBuffer with a SharedObjectGraph that could only be spawned from Web workers
... that would allow 3D engines to run on a specific thread that would spawn new workers without touching the main thread
... with OffscreenCanvas, this would work well
... this would leave DOM & WebGL access out of scope, but still removes a lot of the serialization cost
... Again, it makes no sense to have this requirement to keep Web workers defined in a separate file
... Let's discuss - these needs matter for games, but not just for games
... any kind of expensive experience on the Web will need that
... audio, input, etc
... my Word colleagues would love to use separate thread for spell checking
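The serialization cost David describes comes from the structured clone algorithm that `postMessage` applies to worker messages: by default the buffer is deep-copied on every message, while a transfer list moves ownership instead. A minimal sketch using the `structuredClone` global (which exposes the same algorithm; available in modern browsers and Node 17+):

```typescript
// postMessage() structured-clones its argument by default; a transfer list
// moves ownership instead. structuredClone exposes the same algorithm.
const positions = new Float32Array(1024).buffer; // 4 KiB of vertex data

// Copy: both sides hold 4096 bytes - this is the per-message cost.
const copied = structuredClone(positions);

// Transfer: zero-copy, but the sender's buffer is detached (byteLength 0),
// so the main thread can no longer touch the data it just sent.
const moved = structuredClone(positions, { transfer: [positions] });
```

Transfers avoid the copy but not the loss of structure: only raw buffers move, which is exactly the gap a SharedObjectGraph-style proposal would address.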
Nell: would this PromiseTask be callable only from Workers, or also main threads?
... how would the timing work?
David: this idea is inspired from C#
... Promises have already synchronization semantics
... the main thread would wait till the resolution of the PromiseTask, stalling the main thread
Nell: so a PromiseTask would be different from a Promise
... the main thread would be blocked, and only the PromiseTasks would be executed during that
David: correct
Nell: my worry is what happens with queued operations (events, promises)
David: I don't want to block the main thread
... the continuation of the PromiseTask would happen in the main thread
... the continuation would be queued in the task queue of the main thread
Ricardo_ThreeJS: I'm not a browser person - I'm on your side :)
... what about having a threaded requestAnimationFrame
... promises are complicated, but the animation frame loop may be simpler
David: our 3D engines use rAF for our rendering
... some of that work can be parallelized
... having a parallelized rAF wouldn't help here
... [clarification]
Nell: this relates to my point about returning control to the rAF
Will_Google: one model that might help overcoming some of the negative pushback from browser devs might be to use read-only memory [?]
David: the problem is the cost of serialization
... I want to be able to run things in parallel, wait until all the threads are done, and then render
... and to avoid blocking the main thread, this would happen in a dedicated Web workers
David_Playcanvas: sharedarraybuffer in Web workers has been disabled due to Spectre
... if that existed or some equivalent, or had the ability to have Web workers not in a separate file - would that resolve the problem?
David: with SharedArrayBuffers, I lose the structure of my objects
... that's not much an issue in Web Assembly, but for JS libraries, this require serialization/deserialization
... we could use indexes in a SharedArrayBuffer, but then our API has to be slower, mapping between that index and the property exposed to the developer
... keeping a sharedarraybuffer synchronized will cost a lot in performance
... Transferable objects allow moving objects across workers - could we extend this mechanism to fit this need?
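The index-based workaround David mentions looks roughly like this: entity fields flattened into a SharedArrayBuffer, with the engine API translating every property access into index arithmetic — the per-access overhead he is objecting to. Layout and names are illustrative:

```typescript
// Hypothetical flat layout: object fields packed into a SharedArrayBuffer
// so workers can read them without serialization. Layout is made up.
const SLOTS = 3; // x, y, z per entity (assumed schema)
const sab = new SharedArrayBuffer(4 * SLOTS * 100); // room for 100 entities
const view = new Float32Array(sab);

// The engine API must translate property access into index lookups —
// this mapping is the cost David describes.
function setPosition(entity: number, x: number, y: number, z: number): void {
  const base = entity * SLOTS;
  view[base] = x;
  view[base + 1] = y;
  view[base + 2] = z;
}

function getX(entity: number): number {
  return view[entity * SLOTS];
}
```

A worker given the same SharedArrayBuffer sees writes without any message passing, but the developer-facing API loses its natural object shape.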
David_Playcanvas: I see how that would help
Andrew_Mozilla: my understanding is that we've reached consensus with other browsers to let pages opt-in to SharedArrayBuffer by disabling cors
... so sharedarraybuffer should be coming back soon
... the problem with multiple threads is that it changes the execution model and creates reentrancy issues
... I agree the problem of serialization is real
... but having shared JS objects would go against JIT magic and wouldn't work with multiple globals
David: I agree these are hard problems
... but I can't imagine the Web staying with working on a single core in the future
@@@: will there be breakout sessions to discuss web workers / multithreading in more details?
Francois: we can plan one
@@@: I think quite a few of us think this a major problem
Francois: we probably won't solve the problem during the workshop, but even highlighting this as a priority goal would already be useful
... can definitely have a breakout session
Liu: how about worklet as defined in CSS Animation?
David: I've never used it
Liu: I wonder if using a Worklet could help with your problem space
David: I'll check it out and will report in the break out session
#WebAssembly: status, Web IDL bindings, and roadmap, by Luke Wagner, Mozilla
Luke: I started at Mozilla 10 years ago during the Javascript performance wars
... I got involved in the Games platform project, which was aimed at helping both games and Web apps in general
... at the time, emscripten had emerged as a platform to compile C codebases to JS
... asm.js emerged out of the desire to make the resulting code run smoothly
... it started gaining adoption in lots of game platforms
... this made it clear there was a need that could be addressed with more collaboration
... which led to the emergence of WebAssembly
... in 2017, we shipped the initial version of Web Assembly in 4 browsers, just 2 years after the creation of the Web Assembly CG
... right after that, we started looking at what we should add
... we took inspiration from the TC39 stages process (from early idea to shipped feature)
... We have a roadmap that my colleague Lin Clark detailed in a blog post accompanied by a useful cartoon illustration
... lots of great objectives, both in and outside of the browsers - check out the blog post to learn more about it
WebAssembly’s post-MVP future: A cartoon skill tree
Luke: [reviewing Web Assembly proposals]
... I'll spend most of my time talking today about what I feel is the most relevant to the workshop
... how do we make all of our Web APIs available in WebAssembly?
... this is about Bindings
... in the Web Assembly MVP, there are basically 4 types: 2 integer types, and 2 float types
... how would we map this to the much richer set of types in Web APIs?
... our first take in 2017 on this was the notion of "Host bindings"
... the notion was to use wasm tables (typedarrays) to convert between JS Types and WASM code, via indexes map
... but it turned out very awkward as we went in more depth
Luke: also, not clear how it would impact performance
... in 2018, we came up with this idea of Reference Types
... using a type hierarchy - that's what has been implemented in Chrome and Firefox, available experimentally
Luke: the generic type is anyref that can map to any JS value; we're now looking at more specific types e.g. funcref
... this gives wasm first-class host values
... this is leading to our current (2019) effort towards Web IDL bindings
... basically refining reference types to make binding to WebIDL APIs more efficient
... since we can focus on a well-defined subsets of types
... [in the mvp slide]
... Currently, in the MVP, calling a Web API requires lots of conversions and glue (WASM -> JS, JS -> WebIDL, WebIDL -> C++), creating costs and complexity
... With WebIDL bindings, we will simplify this greatly, making it possible to program and extend how the conversions happen in and out of the Web IDL Bindings wrapper using a set of defined operators
... We have an explainer that details this proposal
... [Binding Types/Values sketch]
... Starting from reftype and signed numbers (numtype), we build up mappings to WebIDL defined types
... [Binding Expressions sampler]
... there are multiple ways to represent strings in that model - we define different operators that enable these different representations (e.g. utf8-mem-str vs utf8-gc-str vs an opaque ref type)
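What an operator like utf8-mem-str would replace can be seen in the glue JS currently has to run on every call: decoding a (pointer, length) pair out of wasm linear memory. A minimal sketch — the Uint8Array stands in for the wasm memory, and the offset is made up:

```typescript
// Stand-in for wasm linear memory; in real code this would be a view over
// WebAssembly.Memory. The offset is illustrative.
const memory = new Uint8Array(64);
const bytes = new TextEncoder().encode("glTF");
memory.set(bytes, 8); // pretend the wasm module wrote a string at offset 8

// The glue code every call pays for today; with WebIDL bindings the engine
// could perform this conversion internally, without a JS trampoline.
function readUtf8(mem: Uint8Array, ptr: number, len: number): string {
  return new TextDecoder("utf-8").decode(mem.subarray(ptr, ptr + len));
}
```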
... likewise, a record (Dictionary in WebIDL) can be represented in multiple ways
... these operators can be composed together
... We want direct access to WebIDL from WASM - what do we mean by that?
... this means both when importing WASM code, and when calling it
... currently, you need some JS code to expose Web APIs to WASM
... likewise, lots of glue code needed when calling code
... WebIDL binding would help when calling code
... the importing side will be managed by separate proposals: WebAssembly ESM-integration, built-in modules and import-maps
... and get-originals which expose Web APIs as built-in modules
... [C++ Prototype]
... we've done some prototyping toward that
... wasm-bindgen in Rust
... A Google engineer has developed a cool benchmark to demonstrate the value of this binding approach
... around the use of WebGL
... and this is pointing at very promising results
... Debugging was mentioned earlier
... This was discussed recently at the Web Assembly CG
... Everybody agrees this is an important problem
... there is a renewed interest in exposing a common interface for WASM to be included in Web browser dev tools
... We have a debugging subgroup in the CG which is resuming its activities
David: do you plan to somehow let TypeScript compile to WASM?
Luke: I've had that conversation with the TypeScript team a number of times
... it's a real difficult problem
... because the type system cannot be sound, the types can't be relied on for compilation
... we've looked at using stronger type annotations "this-is-the-type-and-I-mean-it", which would allow marshalling in an efficient manner
... but this is a long term plan
Matthew_PacielloGroup: in terms of accessibility, one of the problems with Game Accessibility is that you may have a bitmap of a UI without exposed semantics
... the challenge is to bridge between the world of semantics and map
... I wonder if what you show with the rust example, you could help bring semantics into the original code
Luke: very good thoughts
... one thing we want to avoid is people moving away from the DOM and its semantics
... WebIDL bindings aim at avoiding this, by making the DOM available to WASM developers
... this helps with accessibility and all the other benefits from the DOM
Matthew: interesting to imagine bringing the Accessibility Object Model, a new proposal under consideration, in this picture
Will: one gap in WASM in my understanding is a non-linear control flow
... is the exception proposal going to help with that?
Luke: this has been an idea for a while
... but there is a new approach, algebraic effects, that encompasses exceptions and co-routine, which, if constrained properly, would likely work better
... but this still needs more research
... and will need more time
Will: Inferno, a successor of Plan 9, @@@
Luke: at the moment, we have an abstract CPU - but you also need an abstract system interface
... there is a proposal in this space - WASI
... then you need the notion of programs, processes
... this requires revisiting some of the POSIX design decisions
fernandojsg_ (through IRC): What is the status of weakref, I believe it will be very useful for working on WASM API with bindings on JS, like having your data on WASM heap and object references on JS but you can deallocate it when the object is removed. That was one of my main pain points when trying to integrate wasm piece of code on an already existing js framework without adding overhead marshaling back and forth
Luke: we're looking into capabilities-based design
... weakref would be definitely useful
... they've been discussed in JS for a decade
Luke: we considered doing them for WASM, but this triggered renewed interest in TC39
... with proposals with momentum
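The pattern fernandojsg describes can be sketched with the WeakRef and FinalizationRegistry proposals mentioned here: a JS wrapper holds a pointer into the WASM linear heap, and a finalizer frees the WASM-side allocation when the wrapper is collected. `wasmFree` and the wrapper class below are hypothetical stand-ins, not a real toolchain API:

```javascript
// `wasmFree` stands in for a hypothetical exported WASM allocator function.
const wasmFree = (ptr) => console.log(`freeing wasm allocation at ${ptr}`);

// When a registered wrapper becomes unreachable, the callback runs with
// the pointer it was registered with, without an explicit dispose() call.
const registry = new FinalizationRegistry((ptr) => wasmFree(ptr));

class WasmBackedObject {
  constructor(ptr) {
    this.ptr = ptr;
    registry.register(this, ptr);
  }
}

const obj = new WasmBackedObject(1024);

// WeakRef lets e.g. a cache refer to the wrapper without keeping it alive:
const weak = new WeakRef(obj);
console.log(weak.deref() === obj); // true while `obj` is still reachable
```

Finalization timing is up to the garbage collector, so this pattern is a safety net for reclaiming WASM memory, not a substitute for deterministic cleanup.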
Dave_Evans: WASM usually goes along with emscripten - is that a hard dependency? are there other toolchains?
Luke: emscripten goes back even before asm.js
... when we started the WASM CG, we looked at how getting it brought to more toolchains
... there is now a WASM branch for LLVM
... WASI (system interfaces for WASM) will enable a portable libc, that would work both on and off the Web
... Rust can use the LLVM backend to target the WASI ecosystem on top of that
... the static linker for WASM objects is also part of the shared toolchain
... with gc languages, you have to also compile your GC manager in WASM
... we're looking at improving that, but this will require a whole new set of toolchains
Francois: when we get back to WASM tomorrow, it would be good to have a list of priorities to bring to the WASM CG from a games perspective
#3D rendering roadmap
#glTF roadmap: CTTF Universal Textures and 2nd generation PBR
Francois: we will be going into topics on 3D rendering
Neil: I work for NVidia and by
night I work for Khronos group
... Khronos has a good working relationship with W3C, we are
similar in creating open, RF standards
... looking to bring silicon acceleration to graphics
... there is potential for further collaboration between the
two orgs
... I want to highlight a few of our standards; 3D is what we are known most for. We took over OpenGL, and there are Vulkan and WebGL
... SPIR-V is a graphics shader intermediate representation and could be useful for WebGPU
... we have OpenXR which is around VR and AR, need to
coordinate with Nell on WebXR
... reason we are here is to discuss glTF
... all the other media types had an equivalent of JPEG: small, compact and supported by most platforms; glTF aims to be that for 3D
... we make it independent from the runtime, it is open and extensible. WebGL was the first use case
... glTF has support from Microsoft, it is more than glsl for
shaders
... this is a glTF model running within PowerPoint
... we have had astounding levels of adoption (slide of
implementer logos)
... wide number of authoring tools
... quite a few game engines are using it. all the AR/VR
engines. you can post a glTF picture on Facebook
... Uber just did a blog post on how they are using it to
visualize driver data
... if you haven't come across glTF yet it is rather simple,
descriptor in JSON and multiple payloads
... PBR textures
... supporting binary for main payload helps keep it
compact
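The "JSON descriptor plus binary payloads" shape can be seen in a minimal glTF 2.0 asset: one scene, one node, one triangle mesh whose vertex positions live in an external binary file. On disk this is plain JSON; it is shown as a JS object here so it can be inspected, and the buffer URI is illustrative:

```javascript
// Minimal glTF 2.0 descriptor for a single triangle.
const gltf = {
  asset: { version: '2.0' },
  scene: 0,
  scenes: [{ nodes: [0] }],
  nodes: [{ mesh: 0 }],
  meshes: [{ primitives: [{ attributes: { POSITION: 0 } }] }],
  accessors: [{
    bufferView: 0,
    componentType: 5126,   // FLOAT
    count: 3,              // three vertices
    type: 'VEC3',
    min: [0, 0, 0],
    max: [1, 1, 0],
  }],
  bufferViews: [{ buffer: 0, byteOffset: 0, byteLength: 36 }],
  // 3 vertices * 3 floats * 4 bytes = 36 bytes in the binary payload
  buffers: [{ uri: 'triangle.bin', byteLength: 36 }],
};

console.log(JSON.stringify(gltf.asset)); // {"version":"2.0"}
```

The descriptor stays human-readable while the heavy mesh and texture data ride in compact binary payloads, which is what keeps the format small at runtime.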
... as we are driving this ecosystem there are three main
things we focus on, tools, consistency and functionality
... for tools we started with Blender since it is open
source
... there are a bunch of other tools now following
... we had painful learning lessons from COLLADA
... we have a number of models we encourage people to try out,
a validator and unit tests
... people always want more functionality. we were constantly balancing functionality with the desire to keep the spec simple
... we are being very careful before we add mandatory
functionality to glTF in the core. it is extensible
... we want people to create extensions first and if it is
widely used consider it for inclusion in the core spec
... one of the key extensions probably soon destined for that
is mesh compression
... we have a mesh compression engine using Draco that gives considerable size savings
... what I want to get feedback on is about texture compression
as a large asset
... we are still confounded by the myriad of GPU-accelerated texture formats
... game vendors as a result are shipping multiple solutions
which is not desirable on the web
... we think we can achieve this with CTTF spec
... it is based on a contribution by Binomial based on their
open sourced Basis Universal
... CTTF is cunningly designed so you can transcode easily from CTTF to any GPU-accelerated format on the fly as needed
... it has less overhead when processing textures
... we are looking at ways to stream textures
... KTX2 is a container format that includes super compression
technology
... Binomial and Google have a deal on transcoders
... you can try this out in a browser now
... from 3D commerce folks but unsure if it is relevant to
gaming community, please let us know on github if this is
useful for you
... we are learning of verticals wanting to use glTF. VRM was just incorporated: humanoid avatars in VR environments
... Khronos just established a 3D Commerce Exploratory Group.
we want to be able to allow merchants to represent their
products in glTF
... we are working on complex animation beyond keyframes and
skinning
... Draco has already offered their point cloud
technologies
Will_Google: have you considered
something like Blender, it can make mesh easier. mirroring is
something that is nice, as are subdivision surfaces
... how do you decide where to draw the line on what is
separable
Neil: we are trying to get tool
vendors to define what makes sense. let us know what isn't
fitting
... we do not want to become an authoring interchange
format
... we need to keep glTF as simple and compact as possible and
resist that temptation
#WebGPU
Myles: we are trying to let
developers use full capability of the hardware
... Microsoft created DirectX, we have Metal and Vulkan. they
all have similar concepts
... because these three platform-level frameworks share similar concepts, we should be able to take this functionality and expose it to the web
... WebGPU sits on top of these three frameworks
... there is only one Web and having this common abstraction on
top of them is very exciting
... you can use the GPU card for other things beyond rendering
graphics, AI computing etc
... when designing this API we could not expose all the card's
capability and let them leverage it all without a security
model
... we have to do things in a safe manner when uploading and
downloading content
... we cannot have any facilities that are not portable. we
have an extensions model but focused on core API implemented
everywhere
... last design principle of ours is performance. if our new
API is not faster than what you are doing already, there is no
reason to use it
... what you are looking at is Safari and Chrome showing a
textured cube
... we are still in early days, settling on the various nouns and verbs of the API
... when we designed this API there are a few things different
in WebGPU
... looking at a textured cube like this one, there is a surprising amount of information needed to represent it
... all of those things (colors, corners...) can be batched up into an opaque object rather than sending each piece of configuration separately
... this is powerful as there may be hardware-specific op codes
... it is very fast to switch between different sets of configurations, something WebGPU is very good at
... in this scene we have enough complex vertices that would
make it onerous to describe via a JS API
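The "batched into an opaque object" idea amounts to gathering everything needed for a draw into a single pipeline descriptor that the browser can compile once. The field names below follow the shape WebGPU later standardized; at the time of this discussion the exact nouns were still in flux, so read this as a sketch rather than the final API:

```javascript
// All the state for drawing a textured mesh, gathered into one descriptor
// instead of many separate state-setting calls.
const pipelineDescriptor = {
  vertex: {
    entryPoint: 'vertexMain',   // shader entry point (illustrative name)
    buffers: [{
      arrayStride: 20,          // 3 floats position + 2 floats UV per vertex
      attributes: [
        { shaderLocation: 0, offset: 0,  format: 'float32x3' },
        { shaderLocation: 1, offset: 12, format: 'float32x2' },
      ],
    }],
  },
  fragment: {
    entryPoint: 'fragmentMain',
    targets: [{ format: 'bgra8unorm' }],
  },
  primitive: { topology: 'triangle-list' },
};

// In a browser this whole object is handed to the GPU once (e.g. via
// device.createRenderPipeline), and switching between precompiled
// pipeline objects is then cheap.
console.log(pipelineDescriptor.vertex.buffers[0].arrayStride); // 20
```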
... we are currently trying to decide what shading language should be used
... there are two competing proposals, a dialect of SPIR-V and a web dialect of HLSL
... the direction the community group is going is to select one instead of expecting browser engines to support both
... we are on github, anyone and everyone can come by and
provide use cases, input on what we should be doing
... we have many companies involved and several big
contributors
Francois: anything the audience can do to influence this decision on choosing shading?
Myles: we do not want to impose our beliefs on the community and would welcome feedback
Francois: what might influence this, you mentioned pros and cons but maybe some use cases could influence it
Myles: the languages are pretty
similar, difference is in the workflow in how you create
them
... human writes these in the beginning
... SPIR-V gets compiled, whereas with a web high-level shading language your source code can be used out of the box
<fernandojsg_> currently we must include the wasm for the shader compiler and it's pretty big already, is the plan to include that directly on the browser?
Myles: SPIR-V has a head start compared to HLSL
DavidCatuhe: we keep the same shader as the base and can compile down; do you see the same option with Web HLSL?
Myles: we call it whistle (WHLSL) for short
... it should be easy to create a conversion compiler
DavidCatuhe: we are trying to reduce the download tax
Myles: HLSL is the most popular and we are trying to solve [many of] the various concerns
NeilT: in the old days you chose an API and were stuck with its shading language. SPIR-V can offer choice
... we do not force them to choose. the compilation cost could be lower and the tools aren't perfect
NeilT: we have had quite a few developers prefer the abstraction
Myles: you can obfuscate whistle as you can js
Francois: the choice of the shading language is a prerequisite, then, for advancing the spec
... unless you support multiple
@@4: does it have target source mapping?
scribe: how do we enable browser developer tools to unpack
Myles: that hasn't been discussed much in the WG
Francois describes upcoming breakout session
#Mapping games users & designers needs to technologies
Indira: [talking about previous
user research done]
... Looking at how users and designers needs can be
implemented.
... I have been interacting with people working on touch and
haptics, which seems relevant for games.
... They talked about: Body position, environment, smart
clothing is quite a long way off. Bio sensors.
... Area where they are looking at new controllers and how they
can be used in games.
... I also talked with a UX designer. All about the
transactional experience. Getting the information on a Web site
as quickly as possible.
... Going from 2D content to 3D content, what are the
implications of doing VR experiences?
... Then I talked with 3 gamers. None of them had used Web
games before.
... Age group between late 20s and early 30s.
... Asked them how they felt about the games I pointed them to.
... Games are about narrative.
... Really important part for them.
... Hardware developer: interaction with AR. They're not using
Web technologies, but they'd like to.
... Access to phone sensors is what they needed, motion
tracking as well.
... They also felt that the Web was evolving too slowly.
... This gets us to what I've been doing around serious
games.
... I thought that we should really have been using Web
technologies. But we knew how to find material for Unity, so we
just used that instead.
... The workshop today is to split into 3 groups, look at users
needs. What would you need to be in place to fulfill these
needs.
... How could things work in 5 years from now and then walk
back to see what we need to do now.
... Dom and Diego will help me facilitate the groups.
[audience splits into 3 groups]
#Packaging / Asset loading & storage
#Size matters: How loading is losing you players, Kasper Mol, Poki
Kasper: I guess most of us understand that long loading times are bad for user experience
... I work at Poki, a web gaming platform aggregating games, bringing things like Flash games to the web
... help developers make things on the web
... size matters, two specific things, exploring loading data,
opportunities
... on exploring loading data, two concepts: conversion to play and initial download size
... conversion is how many players remain after the loading screen
... initial download size is just the amount of data to be downloaded before play
... mobile games have small assets, so this analysis is on desktop games
... [graph] conversion to play vs. initial download size in the US
... the first impression from this chart is that the conversion rate drops quickly as initial size increases
... at 10MB, conversion is only 80%, even in the US
... the problem gets bigger in other places: around 70% in Brazil, or 60% in Egypt, for 10MB
... two main things we can do: one is to educate developers
... secondly, teaching people to check local environments
... improving the web for small asset sizes and high conversion
... so, size matters on the web
Bernard: do you have any data for improvements by HTTP/2 or HTTP/3
Kasper: we don't have specific data on that; do you have any?
Bernard: loading speed should improve with these protocols
Francois: what could we do, practically speaking, to help with asset loading?
Kasper: would be great if shared libraries (e.g. game engines) could be shared across sessions & games
Francois: W3C has published best practices documents; we could produce such a document on this topic for education
Dave_Evans_PlayCanvas: better compression would get conversion rate up?
Kasper: not sure how much improvement would come from low-level compression
#Browser Storage, Andrew Sutherland, Mozilla
Andrew: I am Andrew Sutherland, I work at Mozilla on the DOM Team, specifically the Workers and Storage sub team that is dealing with service workers as well as the workers that do not do what people want right now and all of the related storage APIs.
... I wanna go over what our current Cross-Browser Storage API story is. There obviously exists Web SQL Database, which we will not speak of further.
... We have number one, cookies. It's a synchronous storage API, it stores only strings and there's a Hard Storage Cap. The cap is in fact so hard that if you try and use too many cookies on your domain, we'll just start evicting them. So you really don't want to be using this for game storage at all, only for network authentication and such.
... Next, we have LocalStorage which is a synchronous API as well. Also only stores strings, also Hard Storage Cap. Five megabytes, I think is pretty much the universal cap and it has the interesting wrinkle that at least for legacy purposes in Firefox, it's shared among your entire eTLD+1, e.g. for example.com if you have foo.example.com and bar.example.com, you're all sharing a five megabyte limit which has caused problems for sites like Wikipedia that tried to cache their JavaScript there and because Wikipedia shards by language, they ended up running out of quota so that on some languages, you wouldn't have any storage accessible at all.
... LocalStorage is also interesting because if you go to access LocalStorage when your site is loading and we don't already have the data available, we are effectively compelled to block the main thread until we have the data available. Which means that we've also optimized it a lot. I would recommend, if you're really concerned about time to interactive, LocalStorage might not be a bad thing to use but you need to be careful and coordinate with other people on your origin...
... I think a very common thing we see is sites will have a auto-appending log, like it's fun to gather telemetry and such but people will just constantly write to LocalStorage with additional, like just all of the JSON that they wanted to post to the server. Generates very high I/O writes for the user and it actually delays your startup next time because when we go to load your origin, we have to potentially load four and a half megabytes of data for you when you really only want a 100k of it.
... IndexedDB, which gets a lot of bad press. It is fully promise compatible now across all browsers. That was a problem for a while. It is no longer a problem. You can use promises and I recommend Jake Archibald's promised IndexedDB wrapper, it's a good one. It is fully asynchronous, it stores pretty much anything you want because we use the structured serialization API that postMessage uses. That means, like objects graphs, even if you have cycles in them or something like that, they will go to disk and they will come out of disk. It's also aware of blobs and files with some magic and I will get to that a little later. IndexedDB uses the quota storage cap which I'll also get to later as well.
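Jake Archibald's wrapper promisifies the whole API; the core trick of lifting an IndexedDB-style request into a promise is small enough to sketch here. In a page the request would come from a real call such as `store.get(key)`; the stand-in object at the end is purely illustrative so the snippet runs anywhere:

```javascript
// Lift an IndexedDB-style request (an object that fires onsuccess/onerror)
// into a Promise; this is the essence of promise-based IDB wrappers.
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Outside a browser there is no real IDBRequest, so a tiny stand-in
// demonstrates the shape:
const fakeRequest = {};
const pending = promisifyRequest(fakeRequest);
fakeRequest.result = { score: 9000 };
fakeRequest.onsuccess();                            // simulate the success event
pending.then((value) => console.log(value.score)); // → 9000
```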
... Last, we have the Cache API, which is technically part of these service workers spec. It is asynchronous and it's basically like a map for Fetch request and Response objects. Interestingly, it does implement the cache-vary header abstraction, so you can put in something that's JSON and you can put in something that's HTML and based on what you match for your request, we'll give you the correct thing based on all of the headers.
... It does not store POST requests at this point, like if you wanna cache your POST requests for submission later, we do not support doing that yet. There's actually some talk of augmenting IndexedDB to be able to store these requests and response objects so that you could do something like that or if you had a very clever eviction scheme in mind for your cache data. Right now, with the cache API, if you want to perform eviction, you either do something like, I think Google's Workbox team does which is, they use indexedDB to actually track how recent something is or there's a single API right now, Keys which lets you enumerate the entire contents of your cache and then you can manually evict things. It's not super efficient and in some cases, like I think in Chrome for a while, if you put too much data in your cache and then you called Keys, Chrome would crash or rather, your origin would crash.
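A manual eviction scheme of the kind described (Workbox tracks recency in IndexedDB) can be sketched with a plain Map, which in JS preserves insertion order. The cache URLs and the entry cap below are made up for illustration:

```javascript
// Least-recently-used bookkeeping for manually evicting Cache API entries.
// Re-inserting an entry on each access moves it to the "most recent" end
// of the Map; eviction pops from the other end.
class LruTracker {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.entries = new Map(); // url -> last-access timestamp
  }
  touch(url) {
    this.entries.delete(url);          // drop the old position, if any
    this.entries.set(url, Date.now()); // re-insert at the recent end
    const evicted = [];
    while (this.entries.size > this.maxEntries) {
      const oldest = this.entries.keys().next().value;
      this.entries.delete(oldest);
      evicted.push(oldest); // the caller would also call cache.delete(oldest)
    }
    return evicted;
  }
}

const lru = new LruTracker(2);
lru.touch('/assets/level1.pak');
lru.touch('/assets/level2.pak');
lru.touch('/assets/level1.pak');              // level1 is now most recent
console.log(lru.touch('/assets/level3.pak')); // → [ '/assets/level2.pak' ]
```

Keeping this index separate avoids enumerating the whole cache with `keys()`, which is the expensive (and historically crash-prone) path mentioned above.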
... What's extra probably of interest to people here using Wasm, if you're dealing with responses, this is where the browser is able to do things like cache, the JS, AST caching, if we've derived something from it or for Wasm, if we've done additional compilation steps, we are able to tie it to the response. At least in Firefox, there's still ongoing optimization work here but it is like you're making it easier for the browser to help you if you're dealing with response objects.
... Okay so, I think structured serialization is not free, it's something we captured earlier today. Like postMessaging to a worker, it's not, like if you have a complex object graph, you're gonna pay for that because, we write it to a byte stream and then when it gets to the worker, we read it back out of the byte stream and create the objects. An issue we've had internally in Mozilla is, we actually have had people try and move something that was on the main thread API to a worker and they were surprised when it still took the exact same amount of time as it did before, 'cause what they were actually doing was doing IndexedDB from the main thread. The cost of using IndexedDB is the same on the worker and the main thread and it's also pretty much the same cost you're paying when you postMessage it.
... And this is something where it's hard for us and to improve on this, via the spec, especially because we actually need to invoke getters on any objects you might have for side effects and your getters are allowed to do crazy stuff like, mutate the object that we're serializing as you do it. Please avoid doing that because, we will possibly be able to optimize that more going forward if your object looks like a plain object.
... Related to this, IndexedDB allows you to create indices on your data and these indices are based on traversing your object graph for the same reasons that we need to, it's hard to optimize put an add in the first place for the structured serialization of your actual payload. It's expensive for us or it's at least, the cost is proportional to how many indices you have and how much of the your objects structure we need to traverse. It would be very interesting if there are ways we can help optimize this going forward, but again, it's a little bit hard per spec.
... But, there are things you can do to make postMessage less expensive and to make IndexedDB storage less expensive. Hopefully everyone is familiar with the fact that, on your Global's in JavaScript, we have blob and file. File is just a blob with a name attached to it. Blob can be constructed from other blobs, from strings, from ArrayBuffer stuff and basically, it creates an immutable handle on data that is now because it's a handle, we're dealing with things by reference instead of by value.
... So if you're postMessaging a four megabyte blob from the main thread to the worker, the cost for us is just to say, this pointer address and it's a blob. If you were to postMessage a four mega byte ArrayBuffer, if you're not transferring it, we have to actually write out all four megabytes of payload. So is a very handy thing to do, if you don't need to synchronous access to the data. With a blob, you still need to asynchronously get the data via either the FileReader API or the Fetch API or some other upcoming APIs.
... And also interestingly in a multi-process world, it's even better, because if you're dealing with blobs and they're going between processes and it's possible they will be further messaged around, we can lazily stream the data when it's actually needed. For example, if you have a service worker and you're responding with a blob, we don't actually consume the blob until the data get streamed, if it gets streamed.
... Also, if you put these into IndexedDB and you get them back out again at least on Firefox and I think on all the browsers, it ends up being disk-backed.
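The by-value versus by-reference distinction above can be seen directly with `structuredClone`, which uses the same serialization as postMessage and is available as a global in modern browsers and Node 17+. A plain ArrayBuffer is copied byte-for-byte, while a transferred one moves and leaves the sender's copy detached:

```javascript
// Copying: the clone duplicates all 4 MB of payload.
const copied = new ArrayBuffer(4 * 1024 * 1024);
const clone1 = structuredClone(copied);
console.log(copied.byteLength, clone1.byteLength); // 4194304 4194304

// Transferring: the buffer moves by reference and the original is
// detached (zero-length), which is the cheap path.
const moved = new ArrayBuffer(4 * 1024 * 1024);
const clone2 = structuredClone(moved, { transfer: [moved] });
console.log(moved.byteLength, clone2.byteLength);  // 0 4194304
```

Blobs get the cheap behavior automatically because they are immutable handles, without the sender losing access the way a transferred ArrayBuffer does.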
... One quick question I have about the previous presentation is, how much user losses are just due to the browser crashing? At some point, you can cause an out of memory event for your tab, or the browser or just the process you're in and it'll go away and that is especially possible on 32-bit platforms. Like on Windows, Firefox has seen this be a serious issue especially because, we have to allocate so much address space to video drivers or that, the video drivers need it. So if you put a blob in IndexedDB, pull it out, it's disk-backed and it's magically reference counted in the sense that, that blob is still valid even if you delete the object that's holding the blob from IndexedDB. You can keep using it and the browser will, assuming no bugs, cleanup all the disk usage for you when the time comes.
... All right, quota. There's basically two real APIs that expose the realities of how browsers manage quota for service workers, IndexedDB and the cache API. The first, is navigator.storage.estimate, and it tells you how much you're using and it tells you how much you're allowed to use before we'll throw exceptions. How much you're using can actually be somewhat of a lie. For privacy reasons, service workers are allowed to request data from third-party sites and because we don't want to allow side-channel attacks on extracting the actual size of the resources, we will lie about how big the resources are to you.
... So, if you use third-party resources and store them in the cache API, you can actually burn through your quota a little bit faster than you would expect. The big problem with quota is that, browsers don't really know what's the right amount of storage space for your site. So, speaking strictly for Firefox, we just guess, okay well, as the browser and we're probably the only browser the user uses, we're gonna take half of your free disk space and say that, that's for quota storage and we count any existing usage of browser storage as part of that and then we'll say, okay your group, which again is eTLD+1, so foo.example.com, bar.example.com, are all limited by the same group storage. We will give you 1/5th of 1/2 of the users free disk space up to two gigabytes.
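The Firefox heuristic described here can be written out as arithmetic: the browser claims half of the user's free disk space, each eTLD+1 group gets a fifth of that, and the group limit is capped at two gigabytes. The function name is ours, for illustration:

```javascript
// Firefox's group quota heuristic as described in the talk.
const GROUP_CAP = 2 * 1024 ** 3; // hard cap of 2 GB per eTLD+1 group

function groupQuotaBytes(freeDiskBytes) {
  const browserShare = freeDiskBytes / 2; // half the user's free disk
  return Math.min(browserShare / 5, GROUP_CAP); // a fifth of that, capped
}

// With 100 GB free, 1/5 of 1/2 is 10 GB, so the 2 GB cap kicks in:
console.log(groupQuotaBytes(100 * 1024 ** 3) === GROUP_CAP); // true
// With 10 GB free, the group gets 1 GB:
console.log(groupQuotaBytes(10 * 1024 ** 3) === 1024 ** 3);  // true
```

Since foo.example.com and bar.example.com share one group, this whole budget is split across every subdomain of the site.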
... Just the fact that we're willing to let you use that storage, doesn't mean that it's necessarily a good idea because, we have eviction all right. At some point, Firefox says oh, I went over my limit for all of the storage for all of the origins and I should go evict that data and then, we just currently do a least recently used eviction.
... If you call navigator.storage.persist, we basically exempt you from that. We prompt the user and then your site is protected, but it still doesn't fix the problem. The spec standard for eviction is that if we wipe you out, we wipe out all of your site's data. So, the interesting stuff going forward spec-wise is, we would like to have multiple buckets: we would like you to be able to say, these are different pieces of data; the emails the user has composed are something you would probably want to keep, like Gmail would want to keep drafts, but things that are just cache could possibly be evicted.
... And there's really the big issue of how do we make this something that the user can understand? The background-fetch spec is a proposal that lets your app say: all right, I'm gonna download a 10 megabyte video, it's made up of these individual fetches, and the browser will do that for you without your service worker needing to be active.
... It also interestingly expect to be exposed by the download manager and this is like the best hope I've seen for being able to explain to the user what's happening with storage. It's to say okay, well this is level one of this game, this is level two of this game and it would be my hope that we could also then, if we have these buckets, start with a more reasonable quota. So instead of saying, you get two gigabytes but spend it wisely, we could say, you get 100 megabytes and if you want more than that, you're gonna need to be able to explain to the user what these hunks of data are and then we will also be allowed to nuke them based on how recently you've used them.
... And then I guess, I'm out of time but there is a lot of interesting stuff happening now with third-party origins and iframes and I'll certainly be interested in discussing that with people because, at least, one of the things relative to the previous talk as well is that, if you're downloading resources from another site, as a third party origin, it's very likely that we will partition that data and any caching of the Wasm or something like that will not actually be cached and reused because, privacy reasons.
Kasper Mol: if we are running a game in an iframe, is there no way for storage to be optimized?
Andrew: the top-level key is the origin; if the iframe is from another origin, the Storage Access API will give it a separate storage location
Francois: any issues to raise on quota or background-fetch?
Andrew: I'm hearing a need to have multiple buckets in background-fetch; if the user wants to play offline, the game needs to know how much space can be used
#Towards accessible games
#Turning on "accessible mode" for users with motor impairments
luis: nice to see growing consideration for these users
... the first part is about cerebral palsy and neural-tube defects
... [introduces himself]
... 13% of people in the US have a disability
... our goal is to make games as accessible as possible for
more people
... by turning on "accessible" mode
... use more than a keyboard
... especially people with limited movements
... targeting at special keys on the keyboard
... the Xbox Adaptive Controller enables those with cerebral palsy to enjoy games, which is awesome
... these are many millions of people
... giving human touch for the community
... ability switches
... for those with limited movement
... voice enabled
... for conversation design
... a really personal UX; be really sensitive
... a few different kinds of motor impairments
... imagine pain in movement
... these are also users
... Anxiety
... stresses from interaction with the community, family and
friends
... [3D demo for those with anxiety with space or
darkness]
... sounds that cause confusion
... American Psychological Association is working on a few
experiments to feel and to see from the user's
perspectives
... small targets create difficulty in interactions while playing games
... flick-on examples
... rethink about the input and output for those with
difficulties
... rethink about movement -- selecting text requires better
UIs
... rethink about engagement
... in the accessibility mode, consider easy interactions
... rethink impact, think about users with disabilities, think
about the limitations of the platform
... do early testing for accessibility
... building voice capability
... there are lots of opportunities to help someone with a disability
... when people want to talk to the game
... say yes
... use voice command to help complete tasks
... we need to understand accessibility through the guidelines and standards, to turn them into solutions
... with help of AI
... focusing on users research
... future of meeting the user's need in the web games
#Adaptive Accessibility
Matthew: please read the paper
for more information
... a couple of acknowledgement
... Active Game Accessibility by the APA group
... game accessibility is important because of human benefits
from culture, recreation, social, and education
... games have to be challenging
... content accessibility
... content is important, with colours, with specialised
audio
... Assassin's Creed Odyssey provided subtitles and 95% of people kept them on
... in The Division, 75% of people are using subtitles
... if you struggle to hear sound, you can benefit
... the Game Accessibility Guidelines provide very useful suggestions for these use cases
... FAST and MAUR are also guidelines worth reading
... people with disabilities have their own preferences for tools or tech they are comfortable with
... we need to provide assistive tech integrating with these
preferences
... provide personalisation assistance with different tools,
languages
... UI Accessibility
... already good technology provided by the OS
... please use the right elements
... provide semantics information if not
... this needs support from DOM
... we need a bridge for web games
... another example, authoring tools
... XML file
... figure out how to build the bridge
... find adaptations at different levels, OS, users,
engine
... how to improve the existing specs
... how to make authoring more accessible
... perhaps make it machine level
... UI to help testing
... more open questions in the paper
NellWaliczek: is there any work being done at the glTF level?
... anyone investigating an extension to glTF for accessibility metadata?
... to make the engine more friendly to accessibility
NeilT: if user can determine metadata, that may help
Francois: provide semantics information for the items
dom: when you talk about switches, are they binary? adapted to the user's need?
luis: it's not 100%
... some of them are conditional
... no perfect answer for this question yet
Matthew: how many of you know about accessibility?
[most of the people in the room]
Matthew: how many of you are working to fix the accessibility problems?
[some of the people in the room]
scribe: games can teach us about the balance
Geoff: will music with semantics help?
Matthew: user needs to
personalise the voice, speed
... semantics is important
IanHamilton: a11y specialist, CVAA
... [introduces different sections of the guidelines]
Luis: what is the goal when thinking about accessibility?
... complying with laws, business needs?
... what would help to find the balance?
Brannon: A11y manager in MS
... it's a huge difference
... it's easy to mark a title as accessible
... the difficult part is to make it enjoyable, to design the features
... sit down with people as early as possible during the design
process
Matthew: agree
... the paper covers a few UI requirements
... UI with the game style needs balance
@@4: how many options are there for small developers?
... are there any dev tools to help with testing?
... how does it work?
Brannon: Ian may have other suggestions beyond Microsoft's
... there is a tool to help test the websites
... it's more for productivity websites
... we are working hard to make it more useful for games
... for foreground colours
... there is a lot of space to grow
Ian: there is tools and guideline
to help
... CVAA provides recommendations
... most of them are cheap and easy
Brannon: the #a11y hashtag will help you find a lot of useful information in this area
... highly recommend to reach out to that community
Luis: accessibility people are working on more than a checklist
Matthew: there are a few things
that are easy but with great help
... button in the mobile
... many such good examples
Binh_Bui: from BBC
... web games often come with iframes
... pop-ups
... how to make these things accessible?
Matthew: there are a lot of
suggestions for pop-ups in the Web
... notifications, text in a big map can be difficult
... bridge the gap with the UI
... handle the customised keys carefully
... audiogames.net provides lots of good examples
Ian: make it broader
... testing with your users is important
Bjorn: simulating the interactions with dev tools is interesting
Matthew: happy to provide more
suggestions or examples with Luis or Ian, or anyone in the
community
... there are lots of information in the websites of the
community
[Ian & Luis provide more information]
Brannon: make sure you document your accessibility features in detail in your products
Francois: there are lots of
supporting APIs
... AOM is one example
[End of minutes]