Breaking Silos: A Collaborative Discussion on Use Cases for Linked Web Storage
Proposer
Pierre-Antoine Champin, Aaron Coburn, Laurens Debackere, Eric Prud'hommeaux
Description
The mission of the newly created Linked Web Storage Working Group is to enable the development of web applications where data storage, entity authentication, access control, and application providers are all loosely coupled. This stands in contrast to the web of today, where these are typically all tightly coupled, so changing one requires changing all, sometimes at the price of losing all past data.
A number of initiatives — such as IndieWeb, Unhosted, and Solid — have emerged to propose alternative models for web applications. They are largely based on existing open web standards, which they extend to give priority to individual and group autonomy.
The goal of the breakout session is to bring together current and potential participants of the Linked Web Storage Working Group, as well as members of other communities with shared interests (such as the ones cited above) to seed a set of use-cases that the working group can later adopt to guide its work.
Live minutes will be taken at https://docs.google.com/document/d/1sFZ0BozoLA3U7iROtu10tR7S05_OEOprS8Um9NZqZYw/edit
Goals
Gather use cases and experiences from a wide range of people to initiate the work of the Linked Web Storage WG.
Millions of websites rely on country and address formatting data that is needlessly downloaded on every one of those websites.
For example, on a regular Shopify online store, this leads to 193 kB of uncompressed (17.7 kB compressed) data being downloaded for every store.
To allow developers to build simpler and more performant websites, we want to discuss our proposal for a browser API for country data.
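To make the proposal concrete, here is a purely illustrative sketch of how a page might consume such an API. Neither the name `navigator.countryData` nor its shape is standardized; they are assumptions for illustration, with a fallback to the bundled dataset sites ship today.

```javascript
// Hypothetical API sketch: "countryData" is an illustrative name, not a
// proposed standard. The `nav` parameter is injectable so the fallback
// logic can be exercised outside a browser.
async function getAddressFormat(countryCode, bundledData, nav = globalThis.navigator) {
  if (nav && "countryData" in nav) {
    // Browser-provided metadata: nothing to download per site.
    return nav.countryData.get(countryCode);
  }
  // Fallback: the ~193 kB dataset every store currently bundles.
  return bundledData[countryCode];
}
```

The design question the session raises is exactly this split: whether the browser can own the dataset so the fallback branch becomes unnecessary.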
Goals
Discuss proposal, get feedback, and discuss next steps
A fundamental capability missing from the web is the ability to complement Canvas with HTML elements. Adding this capability enables Canvas surfaces to benefit from all of the styling, layout and behaviors of HTML, including interactive elements and built-in accessibility.
Goals
Present high level proposal and open questions, gather developer and implementor feedback
Agenda
Use-cases targeted by this capability.
Rendering model and basic syntax: 2d canvas, WebGL, WebGPU.
Deprecation is Hard to Do, and We Can Do it Better
Tracks
Standards
Proposer
Benjamin VanderSloot
Description
Deprecating behavior on the web has to be done sparingly. Removing a behavior from the platform means that some websites that once worked will no longer do so. For some sites and behaviors that may be a good thing, e.g., if it improves security or privacy protections provided to the user. However, this needs to be weighed against the impact on existing website deployments that don’t need or merit that protection and the impact on the web ecosystem of removing that behavior. Failing to sufficiently incorporate those website deployments' needs leaves the deprecation paternalistic at best.
One place this tension arises is in how similar authentication and tracking look to the browser. Privacy protections that rely upon deprecating behavior, like third-party cookies, have had to work around this tension.
In this session we will discuss principles for deciding:
what behaviors are candidates for deprecation,
when a deprecation should proceed,
how to mitigate harm from those deprecations.
Participants are encouraged to bring their own examples that reveal challenges to provide concreteness. The chair will use third-party cookie deprecation, storage access, FedCM, navigational tracking, OpenID Connect, and SAML as starting points and examples that they are familiar with.
Goals
Improve consensus around deprecation of web platform behaviors
Agenda
5-10 minutes of stage setting, followed by discussion.
After more than two years of work and over a year of development, Italy has launched the first phase of the IT-Wallet pilot, a digital wallet solution that balances the experience gained from existing national digital identity systems with the requirements introduced by the new paradigm of digital wallets, as outlined by the EUDI Wallet regulation. In this presentation, we will summarize how the project began, how it continues to seek harmony with the EUDI Wallet, and the implementation choices that make the IT-Wallet an interesting platform for a critical analysis of Wallet Architectures.
Goals
The purpose of this session is to provide guidance on following the development of the IT-Wallet and to highlight some unique aspects of the Italian model. These include the trust infrastructure, revocation control mechanisms, and the integration model for Authentic Sources through the national interoperability platform.
Expanding Verifiable Credentials: Future Standards and Innovations
Tracks
Identity
Proposer
Mandy Venables, Wesley Smith
Description
This breakout session for Verifiable Credentials will explore key areas for future development and standardization. It will cover the work on renderMethod and confidenceMethod, which provide flexibility for enhancing credential presentation and verifier trust. It will delve into the integration of the unlinkable ECDSA cryptosuite, an advancement for privacy-preserving credentials. It will cover the work on VC barcodes and the ongoing work on the VC API, supporting VC lifecycle management. The group will explore innovative methods for VC transmission over wireless technologies, including NFC and Bluetooth, to support in-person interactions. Finally, there will be discussion on vocabulary development for specific domains such as driver's licenses, electronic ID cards (EADs), citizenship documents, vehicle titles, and more, to ensure broad applicability and interoperability of Verifiable Credentials across various market verticals.
Goals
Inform members on upcoming Verifiable Credentials work as well as future rechartered work.
Ada Rose Cannon, Marcos Caceres, Brandel Zachernuk
Description
Discussions about the model element with some wider groups
Goals
Establish whether Model elements are Media Elements
Agenda
The HTML model element is an in-progress proposal for a new HTML element that can display 3D models on a website, with capabilities for displaying in stereo on compatible display hardware such as VR and AR devices.
Is the element a Media Element?
The model element has many capabilities that overlap with media elements. What does it mean to be a media element, and is it appropriate here?
For example, a model could have a duration for its animation, but it might not be animated at all. It could potentially have multiple independent tracks.
Should models be allowed to be transparent? Can they always be opaque? Are there any always-opaque elements?
Transparency can lead to uncomfortable visual conflicts if the 3D model punches through text/content that is behind it in the document but visually in front.
One Year Update: Using LinkML in Web of Things Specifications
Proposer
Ege Korkan, Mahda Noura
Description
As a follow-up to https://github.com/w3c/breakouts-day-2024/issues/15 and its predecessor https://github.com/w3c/tpac2023-breakouts/issues/8, in this breakout the Thing Description Task Force of the Web of Things WG would like to talk about their experience using LinkML to change how they generate various resources and specification text for their technical reports.
Goals
Experience Sharing, Discussion
Agenda
Wrap-up of Schemata Discussions from TPAC2023 and Breakout Day 2024
Presenting LinkML Usage in WoT Thing Description Specification Work
Script-based shadow DOM can share styles with adoptedStyleSheets, but declarative shadow DOM must either initiate network requests or use scripting to share styles. This session will discuss various proposals for sharing styles with declarative shadow DOM.
Goals
Come to a consensus on which approach is preferred for sharing stylesheets in declarative shadow DOM.
Agenda
Discuss requirements from developers
a. Fully declarative (no network requests)
b. Able to opt-in for specific shadow roots
c. Able to export styles out of shadow roots
Discuss various proposals
a. https://github.com/WICG/webcomponents/issues/909
b. https://github.com/w3c/csswg-drafts/issues/10176
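For context, the script-based sharing that already works today can be sketched as below. `adoptedStyleSheets` and constructable `CSSStyleSheet` are real APIs; the `SheetCtor` parameter is an assumption added here so the logic can be exercised outside a browser. Declarative shadow DOM currently has no equivalent, which is the gap the proposals above address.

```javascript
// Script-based style sharing: one constructed stylesheet adopted by many
// shadow roots, with no extra network requests. In a browser you would
// call shareStyles(roots, css) and let SheetCtor default to CSSStyleSheet.
function shareStyles(shadowRoots, cssText, SheetCtor = CSSStyleSheet) {
  const sheet = new SheetCtor();
  sheet.replaceSync(cssText); // parse the CSS once
  for (const root of shadowRoots) {
    // Append rather than overwrite, preserving any existing adopted sheets.
    root.adoptedStyleSheets = [...root.adoptedStyleSheets, sheet];
  }
  return sheet;
}
```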
Trust the Origin, Trust the Content - Originator Profile
Proposer
Michiko Kuriyama, Shigeya Suzuki
Description
This breakout session will introduce Originator Profile, which aims to tackle misinformation using verified originators' profiles. Discussions will also explore possible collaboration with similar projects, such as C2PA, that depend on the originator.
Goals
Provide an overview of Originator Profile
Discuss potential standardization
Find potential collaborators
Agenda
Introduction to Originator Profile (20 min, includes short intro video)
Demonstration (5 min)
Discussions (30 min)
Wrap-up (5 min)
This breakout provides an opportunity to share results from user research on permission prompts, discuss methods and findings, and ideate on additional research that could help the community.
Permissions are notoriously difficult to study, given that they happen in very brief moments while users try to accomplish a primary task. Yet, they are an important security and privacy mechanism that can have substantial positive or negative impact on users' experiences on the web. The Chrome team conducted two studies to (1) understand the general experience of permission prompts on the web as well as (2) how users perceive a mechanism to make prompts less interruptive. We will share a brief overview of results from the two studies and discuss implications.
Beyond that, this forum provides an opportunity to exchange ideas and experiences when conducting user research on security and privacy mechanisms in browsers as well as to identify new opportunities to better understand users' experiences.
Please leave comments and suggestions, for example if you also have user research in the permissions space that you would like to share or other ideas that would fit well into the scope of this breakout.
Q: When accessing or requesting permission websites should explain why. Any testing and research whether the page has made an attempt to explain to the user why it is asking?
A: Permission rationale study. Tricky to draw firm conclusions, data is very noisy. Will follow up with a study replicating the main flows we observed.
Q: why do websites ask for permission on arrival? E.g. google.com asks like that without context.
A: It’s tricky to get hold of non-Google website owners that would provide rationale. Those that wanted to talk about it were likely to have thought about providing rationale anyway.
Q: if you require user interaction to trigger the prompt what would happen? Do we care if we break pages because of that?
A: This is more a business question than a research question. Browsers would need to answer that for themselves. Developers can actually query the permission state. So there’s a way for developers to check before prompting.
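The permission-state query mentioned in the answer is a real, standard API (`navigator.permissions.query`). A minimal sketch of checking state before prompting; the `permissions` parameter is injectable here only so the logic can run outside a browser:

```javascript
// Only prompt when the user has not yet decided: "granted" needs no
// prompt, "denied" makes prompting pointless, "prompt" means undecided.
async function shouldPrompt(name, permissions = navigator.permissions) {
  const status = await permissions.query({ name });
  return status.state === "prompt";
}
```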
Q: Prompt fatigue: is that influencing behavior when dealing with the prompt?
A: At the median, individuals on Chrome see approx. 1 prompt per 2 weeks. People make fast decisions based on habituation. Fatigue comes through qualitatively though and we do try to avoid additional prompts.
Q: Cookie banners: users don’t differentiate between actual permission prompts and cookie banners. So they add to fatigue. Any research related to that?
A: No specific research on our end, but we see study participants confuse cookie banners for permission prompts.
Q: How many prompts users see is influenced by Chrome remembering the permission state, this is not the same behavior for Safari. So it feels like assumed user fatigue is more of a Chrome thing than generic across browsers.
A: Agreed. We are rolling out one time permissions which will change that, but no concrete data to be shared yet.
Comment: If we think about the people who commission websites, they want to know where the customers are and send them stuff. Not at all surprising that we have so much more notifications and location prompts.
Q: Is there a way for websites to know they’re getting into the quieting zone of Chrome’s intervention?
A: You can look in CrUX to see your rank, but there’s not a direct way to know beyond that.
Q: Safari requires user activation for Notifications. It’s great that Chrome and Firefox also do this. We should start doing something about geolocation as well.
A: Next session has more on notification spam and proposes some solutions.
Q: Is there a presentation issue around users being told that the prompt is quieted? Can we reframe it to sound more positive, e.g. this is a new way of doing prompting?
A: That sounds interesting and worth exploring. We did attempt to change the string in the chip to say “Get Notifications?” and that did not have substantial effects, though.
Q: Did website devs complain about Chrome changing behavior?
A: Some, but we tend to answer with “If you do the right thing you shouldn’t need to worry about it.” But yes, developers want predictability.
Goals
Share user research findings on permissions, discuss best practices and ideas
Generative Artificial Intelligence (AI) is a type of AI based on techniques and generative models that aim to generate new content; its performance in knowledge learning, inductive summarization, content creation, perception, and cognition is distinctly different from previous AI technologies. The Web is now awash in AI-generated information.
Generative AI has several new features, such as new content generation and long context windows. At the same time, it brings new risks, such as hallucination, privacy leakage, and copyright infringement.
This session will introduce the new features and new risks brought by generative AI compared with traditional AI. The session will then hold an open discussion about the risks related to the Web and what W3C can do in Web standards to address them.
Goals
Discuss the risks of Generative AI and what W3C can do to address the risks.
Agenda
Introduction of generative AI and its potential risks
This breakout session aims to bring together communities and working groups that focus on web standards beyond the traditional browser environment. While much of the web's development has centered around browsers, there are many other environments, such as WebViews, JavaScript runtimes, and EPUB readers, where web standards play a critical role. This session will explore how these different groups can collaborate to enhance user perception and promote their environments in broader web standards development.
Goals
Collaboration between groups to discuss challenges and identify areas where these environments can more effectively connect with web standards development efforts.
Agenda
Introduction & Admin (5 min)
Roll call: Who is here and what environments are we looking at? (15 min)
Open discussion on discussion points (30 min)
Close up with minutes, follow up items, results etc. (10 min)
Session Objectives:
Foster collaboration between groups working on standards for non-traditional web environments.
Discuss challenges and opportunities in promoting these environments to both users and developers.
Explore potential synergies and shared goals across different projects like WebViews, JavaScript runtimes, and EPUB.
Identify areas where these environments can more effectively align with mainstream web standards development efforts.
Discussion Points:
What unique challenges do environments like WebViews, JavaScript runtimes, and EPUB face in relation to web standards?
How can we better communicate the role of these environments in the web ecosystem to users and developers?
What areas of collaboration between these groups can enhance user experiences and drive further adoption of web technologies?
How can we promote these non-traditional environments within W3C and other standards bodies?
We'll step through and demo the purpose-built software we use to collect AT output, build consensus on verdicts, and run those verdicts through automated regression tests with real ATs.
Goals
Share our work, tools, and progress toward AT Interop
After 12 years of Community Group operations and many discussions about their role in Specification incubation, the Staff are working on greater alignment of the Community Group program with the W3C mission. We are planning enhancements to address a small number of situations of concern for those CGs that publish Specifications. (As a reminder, CG Specifications are the subset of CG Reports that are subject to the CLA and FSA.) CG Specifications that gain traction outside of the guarantees provided by standardization processes risk weakening the W3C mission; through these CG program enhancements we seek to reduce that risk.
During this breakout, we will discuss the situations of concern that are motivating this work, share initial ideas to address them, and recruit early testers and adopters of these enhancements.
Benjamin Ackerman, Kristian Monsen, Arnar Birgisson, Aleksandr Tokarev, Sameera Gajjarapu
Description
Device Bound Session Credentials (DBSC) aims to enhance protection against web session theft by binding a secure session between the browser and the web application to the device. This session will break down the cookie-theft attack vector DBSC aims to disrupt, give an overview of the proposed DBSC web standard, and host an open discussion to gather feedback and suggestions from the community. The session also covers an opt-in addition layered on DBSC, called DBSC(E), which aims to protect enterprise sessions against theft by malware.
Goals
Present the DBSC and DBSC(E) API and protocol proposed for standardization and have an open discussion about any of the various components that are of interest.
Mandy Venables, Manu Sporny, Gabe Cohen, Kim Duffy
Description
Standardizing Decentralized Identifier (DID) Methods is important to progress W3C's foundational work on global identifiers that are not leased, but owned by individuals. This session will discuss a proposed W3C charter for web-based DID methods. We will also cover key categories of DID methods that have proven broadly useful, including self-resolvable, web-based, and decentralized methods. The goal is to standardize a few methods that are in production within these categories, ensuring they meet the needs of diverse use cases. To achieve this, the group will establish strong collaboration and liaison agreements among key organizations outside of W3C, minimizing barriers to participation and fostering a cooperative environment. Additionally, the effort will engage the broader community of DID builders, innovators, and adopters, inviting them to contribute to a detailed roadmap that guides the future of DID standardization.
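As one example of the web-based category, the did:web method resolves a DID to an HTTPS URL hosting the DID document. A simplified sketch of that mapping, following the did:web method specification (port handling via %3A escaping and other details are omitted here):

```javascript
// did:web:example.com            -> https://example.com/.well-known/did.json
// did:web:example.com:user:alice -> https://example.com/user/alice/did.json
function didWebToUrl(did) {
  const prefix = "did:web:";
  if (!did.startsWith(prefix)) throw new Error("not a did:web DID");
  const parts = did.slice(prefix.length).split(":").map(decodeURIComponent);
  const [domain, ...path] = parts;
  return path.length === 0
    ? `https://${domain}/.well-known/did.json`
    : `https://${domain}/${path.join("/")}/did.json`;
}
```

Self-resolvable and decentralized methods replace this HTTPS lookup with other resolution mechanisms, which is part of what makes standardizing a few methods per category worthwhile.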
Goals
Inform members on the new DID Methods Working Group charter.
Lessons learned by an independent implementation of the Digital Identity Wallet in Europe
Tracks
Wallets
Proposer
Denis Roio, Andrea D'Intino
Description
I'll introduce challenges and pitfalls emerging while implementing the digital identity wallet in Europe (EUDI ARF).
Our team has independently implemented the EUDI ARF specification and released all code as free and open source; a beta demo is also available as an online application at didroom.
This was a journey through some shocking revelations and frustrations shared by other large-scale pilots in Europe, so I'll do my best to extract critical insights that can guide other digital identity initiatives and avoid similar setbacks in the future.
Goals
Share a critical perspective on EUDI ARF based on facts
Page Embedded Permission Control (PEPC): Safely embedding permission entry points in web content
Tracks
Permissions
Proposer
Penelope McLachlan, Andy Paicu, Serena Chen, Marian Harbach, Balazs Engedy
Description
This breakout will continue past discussions of the Page Embedded Permission Control (PEPC). We will discuss safe, consistent mechanisms for web developers to link into browser UI surfaces, starting with permissions. Other examples of browser controls which could be embedded include content settings, a PWA install trigger, an installed app management surface, federated login, autofill or other browser settings. To date discussion has focused on the permissions use case, and while we would like to continue this discussion we believe the concept could be applicable to other use cases.
As web apps grow more sophisticated, rivaling native apps in capability and complexity, users can become confused as to how to access important settings that affect their ability to use apps. For example, in addition to origin-scoped permissions, PWAs can have application settings scoped to the application.
Websites can try to help users by providing guided instructions into browser UI surfaces but (1) this normalizes a safety anti-pattern and should not be encouraged even in legitimate sites as malicious websites are excellent at deceiving users into making unsafe changes to their settings, (2) instructions are inconvenient for the user, difficult to maintain for developers and frequently fail to help and (3) these types of instructions present extra challenges for accessibility.
This session will continue the dialog on providing in page access to permission settings, including implications for the underlying browser permission model, while expanding the discussion to include problem spaces beyond permissions. We will present preliminary usage data and developer feedback from the PEPC prototype for permissions as context for conversation.
Goals
Gather community feedback on the use cases and requirements for a general solution to providing safe entry points into browser UI surfaces from web content while laying out an incremental roadmap. Discuss whether (1) the problem space warrants solutions, (2) the requirements of a solution, (3) how the PEPC as prototyped stacks up against requirements, (4) alternative ways the requirements could be addressed.
While WebCodecs provides low-level access to the browser's native encoders and decoders, today there is no API to transport media peer-to-peer using the Real-time Transport Protocol (RTP), defined in RFC 3550.
The goal of the RtpTransport API is to enable applications to utilize peer-to-peer RTP transport, as well as to send and respond to feedback using the Real-time Control Protocol (RTCP).
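Whatever shape RtpTransport takes, it will deal in the RTP wire format defined by RFC 3550. As a sketch of that format (not of the RtpTransport API itself, which is still being designed), here is a minimal parser for the 12-byte fixed header, with the CSRC list and header extensions omitted:

```javascript
// RFC 3550 §5.1 fixed header: V/P/X/CC, M/PT, sequence number,
// timestamp, SSRC. All multi-byte fields are network (big-endian) order.
function parseRtpHeader(bytes) {
  if (bytes.length < 12) throw new Error("RTP fixed header is 12 bytes");
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  return {
    version: bytes[0] >> 6,          // always 2 for RTP
    padding: !!(bytes[0] & 0x20),
    extension: !!(bytes[0] & 0x10),
    csrcCount: bytes[0] & 0x0f,
    marker: !!(bytes[1] & 0x80),
    payloadType: bytes[1] & 0x7f,
    sequenceNumber: view.getUint16(2),
    timestamp: view.getUint32(4),
    ssrc: view.getUint32(8),
  };
}
```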
Sourcemaps, Security, devices, & more: What you may have missed from TC39 & Ecma
Proposer
Aki Braun
Description
Lots of people who are tuned into internet-relevant standards know about TC39, the technical committee within Ecma International that specifies ECMAScript® (JavaScript™). What you may not know is that TC39 is broken up into 5 Task Groups, only one of which is responsible for the core standard, ECMA-262. Ecma is also home to TC53, another technical committee focussed on ECMAScript® in embedded systems. Join me to hear about the rest of Ecma’s work on ECMAScript: Internationalization, Security, Sourcemaps, Experiments in programming language standardization, and ECMAScript® modules for embedded systems.
Goals
Spread knowledge of JavaScript outside of the Web APIs, identify areas where Ecma & W3C could be working together.
In this session we want to discuss the scope of having an installed PWA close to a system tray icon. What does this behavior mean on different platforms, and what is the minimum functionality that needs to be supported to bridge the gap with platform-specific (Windows/macOS/Linux) counterparts?
Goals
To scope the feature based on feedback from participants.
Popups are a (mostly) desktop-only UI concept critical to many existing flows (e.g., payments and login). If popups had to be invented today, what would they look like? What privacy and security concerns could be addressed? What UI would exist on mobile? Consider this and other questions as we examine the Partitioned Popins proposal!
Goals
Consider the Partitioned Popins proposal, and re-think popups as we know them!
Since 2013, the TAG has been systematically reviewing new web features as they are being designed and specified. We’ve called this process “Design Review”. Right now, the majority of the TAG’s time and effort is spent on design reviews, with the remainder of our time focusing on "other" outputs, such as updates to the Design Principles, Security & Privacy Self-Review Questionnaire, newer documents such as the Privacy Principles, and BF Cache Guide, as well as one-off "findings."
The design review issue queue is what drives most of our meeting agendas. Insights that come out of design reviews also inform and prioritize the work we do on these other documents that the TAG publishes. The TAG has a backlog of design reviews, and its feedback in those reviews has sometimes been too late to have an effect on the development of the reviewed spec. This has caused some frustration both on the part of the people filing these reviews and in the TAG. We need to find a way to focus and prioritize our work on the issues where it's most needed - where it can have the greatest benefit to the web. This session is intended to gather community ideas on the best ways to do that.
This session is to gather feedback from the W3C community. Is it clear how the TAG design review process works? Is it working for you? Have you filed TAG reviews and been frustrated with the results? Have you filed TAG reviews and been delighted with the results? Are we balancing correctly between design reviews and other output? We’d like to “check the temperature” to ensure what we’re doing is useful to the community, and we’d also love your feedback about what we could be doing better. Help us help you.
Goals
Refine the plan in https://github.com/w3ctag/process/issues/36.
Agenda
The format will be a short overview of the Design Review process followed by open discussion. Please note, we don’t want to use this as a forum to discuss the technical details of currently open reviews.
Web apps are increasingly expected to gain access to a language model. We are proposing Web APIs that allow web developers to directly access both on-device and cloud-based language models, and securely share user data between multiple apps when using these models.
The following are the APIs goals:
Provide web developers with a connection strategy for accessing both on-device and cloud-based models. For example, if no on-device models are available, attempt to access cloud-based models. Conversely, if cloud-based models are unavailable, try accessing on-device models.
Provide web developers with a storage strategy for sharing users' private data. For example, one web app saves users' private data into a local vector database. Another web app, when accessing an on-device language model, can leverage this data through a local RAG system.
The following are not within our scope of concern:
Design a uniform JavaScript API for accessing browser-provided language models, known as the Prompt API, which is currently being explored by Chrome's built-in AI team.
Issues faced by hybrid AI, such as model management, elasticity through hybrid AI, and user experience, as this topic has already been discussed in Hybrid AI Presentations in the WebML IG, and will be covered in the sessions on AI Model Management.
This session will showcase how to use our proposed API for booking flights and hotels. It will also provide specific implementation details and references for these APIs. Example source code and implementation references can be found on GitHub web-hybrid-ai.
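The connection strategy described above can be sketched as a simple fallback loop. All names here (`connectModel`, `probeOnDevice`, `probeCloud`) are illustrative assumptions, not the proposed API; the point is only the try-preferred-then-fall-back logic:

```javascript
// Try the preferred backend first; if it is unavailable or fails,
// fall back to the other one. Probes are caller-supplied async
// functions that resolve to a model handle or throw.
async function connectModel({ preferOnDevice = true, probeOnDevice, probeCloud }) {
  const order = preferOnDevice
    ? [["on-device", probeOnDevice], ["cloud", probeCloud]]
    : [["cloud", probeCloud], ["on-device", probeOnDevice]];
  for (const [kind, probe] of order) {
    const model = await probe().catch(() => null);
    if (model) return { kind, model };
  }
  throw new Error("no language model available");
}
```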
Goals
Explore the potential of our proposed Web API for accessing hybrid AI through use case demonstrations and API implementations. Additionally, discuss concrete steps for moving forward.
Agenda
Introduce the goals of the Web APIs we propose for hybrid AI (2m)
Introduce Connection API (5m)
Introduce Storage API (4m)
A Showcase of Hybrid AI App (3m)
Considerations for Connection Strategy (2m)
Considerations for Storage Strategy (2m)
Considerations for Native OS APIs (2m)
Discuss possible resolutions, followup actions and collaborations (20m)
Web Features: Building an index for the web platform
Tracks
Feature lifecycle
Proposer
James Graham
Description
Web features is an initiative of the WebDX Community Group, to build a list of features on the web platform, organised in a way that's useful to developers. It is currently used for providing "baseline" statuses, indicating feature availability, to be presented on documentation sites such as MDN.
So far the work has mainly been focused on grouping the already-existing platform APIs into features that can be assigned a baseline status. However, the purpose of this session is to understand the opportunities and requirements to use web-features earlier in the proposal lifecycle, as part of the ongoing standardisation process. For example, in a world where standards-positions and intent emails consistently use well-defined feature names, it may be possible to build tooling that informs developers, and other interested parties, about which proposals are just waiting for implementation, or where there are more fundamental challenges at the standards level. This data could then be used to inform other projects such as Interop.
Goals
Understand the requirements / challenges of integrating web-features early into the standards development process
🐞Ladybird: A new, independent browser engine — written from scratch
Proposer
Michael[tm] Smith (sideshowbarker), Andrew Kaster, Jelle Raaijmakers, Tim Ledbetter
Description
An intro+Q&A for the Ladybird browser engine: a completely new engine written from scratch (using no code from Blink/Chromium, WebKit/Safari, Gecko/Firefox, or any other browser), and backed by the non-profit Ladybird Browser Initiative — with a policy to never take funding from default search deals or any other forms of user monetization, ever.
Goals
The session goal is to give a detailed intro to Ladybird and answer attendees’ questions about it.
Agenda
Some of the details planned to be covered include:
Goal of the session: Understand user needs around provenance and authenticity of content on the Web and their intersections with W3C work, towards a possible W3C Workshop in 2025.
One of the major threats the Web is facing is its use as a large-scale vector of mis- and disinformation, made even more prominent by the rise of generative AI, which allows the production of synthetic, superficially credible content.
To understand what role W3C might play in mitigating that threat, discussions around a possible Authentic Web workshop sometime in 2025 have emerged in the community.
This breakout will discuss how we can better allow end users to determine the authenticity of the information they see when they use the web, as a first step towards identifying a relevant scope for such a workshop.
The Ethical Web Principles state: “The web makes it possible to verify information”. There are technologies being developed such as C2PA or OP (Originator Profile) which allow for stronger binding of metadata to content of various kinds. What can the web do better to surface this kind of metadata to end users and ensure that this metadata is maintained across various methods of content transfer?
The breakout will focus on the user needs - specifically thinking of web users (both as content creators and consumers), informed by current best practice thinking in the relevant industries (for example, journalism, fact checking).
Goals
Refine plan for a 2025 workshop on web authenticity.
Agenda
Format: Short presentations followed by discussion
Curating the web platform's data and documentation
Tracks
Feature lifecycle
Proposer
Florian Scholz, Will Bamberg, Estelle Weyl, Vinyl Da.i'gyu-Kazotetsu
Description
Open Web Docs is a nonprofit open collective that curates web platform documentation and data for the benefit of web developers and the ecosystem's tooling. In this session we want to talk about how we work and how you (as a spec editor or implementer) can collaborate with us.
We want to talk about how specs and web platform features go through a "pipeline" of repositories and why that is important for us as documentation and data curators.
A new spec gets authored and picked up by: https://github.com/w3c/browser-specs
Spec definitions and in particular IDL definitions get parsed by reffy and https://github.com/w3c/webref
Web platform features get identified from webref and continuously tested by https://github.com/openwebdocs/mdn-bcd-collector
If a browser ships an identified feature, https://github.com/openwebdocs/mdn-bcd-collector signals that to https://github.com/mdn/browser-compat-data
https://github.com/mdn/browser-compat-data gets released and spreads the information to MDN, caniuse, caniwebview, linters, specs, other tooling and reporting.
At this stage, the feature is also marked in need of technical documentation via https://openwebdocs.github.io/web-docs-backlog/
https://github.com/web-platform-dx/web-features defines a feature id for the new feature and calculates a "baseline" status allowing it to be talked about in a consistent way in even more places.
Web developers start using the feature more broadly and adopt it as the feature hopefully transitions to "baseline high" over the course of the next few months.
(If the feature gets deprecated, we take steps to signal to users to move off of it again.)
There is probably more to it, but this should give you an idea of our "pipeline". The above is sort of the "golden path". Of course, things can go sideways in several ways and, in the worst case, that will lead to non-existent docs and compat data:
Specs don't make it into the browser-specs repo or don't have good standing
Browsers implement things without a spec and we have to maintain custom IDL files and document features as non-standard
Features are specced but never see implementation (we shy away from curating these, "spec fiction").
Features sit behind a flag for a long time (and change drastically, so we don't curate docs and data until we have been promised some stability)
Bugs in any of the above repositories
Shortage of maintainers, technical writers, developers, curators, in any of the repositories.
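To make the data half of this pipeline concrete, here is a minimal sketch of how downstream tools read compatibility data. The entry below is a trimmed, handwritten example of BCD's shape (the real data comes from the @mdn/browser-compat-data npm package), and the helper function is hypothetical, not part of any of the tools above.

```javascript
// Trimmed, handwritten example of the shape of browser-compat-data (BCD).
// Real data: const bcd = require('@mdn/browser-compat-data');
const bcd = {
  css: {
    properties: {
      gap: {
        __compat: {
          support: {
            chrome: { version_added: '57' },
            firefox: { version_added: '52' },
          },
          status: { experimental: false, standard_track: true, deprecated: false },
        },
      },
    },
  },
};

// Hypothetical helper: does every listed browser report a shipped version?
function isSupportedEverywhere(compat, browsers) {
  return browsers.every((b) => Boolean(compat.support[b]?.version_added));
}
```

Tools like caniuse or a linter walk entries of this shape, per feature, to decide what is safe to suggest to developers.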
Goals
Discussing curation of documentation, compatibility data. Exchanging feedback among repo maintainers
Agenda
Short presentation about OWD's web platform data and documentation curation pipeline
One of the major pain points (and most requested feature) is painless and automated migration, export and import of social web profiles and data. Join us for a discussion of the LOLA specification (Live Online Account portability), the Data Portability report and related specifications.
We'll also discuss how Data Portability intersects with authentication (OAuth 2 and more), decentralized identity, signatures, and so on.
a non-embedded resource obtained from a single URI using HTTP plus any other resources that are used in the rendering or intended to be rendered together with it by a user agent
Views include all content visually and programmatically available without a substantive change. Conceptually, views correspond to the definition of a web page as used in WCAG 2, but are not restricted to content meeting that definition. For example, a view could be considered a “screen” in a mobile app or a layer of web content – such as a modal.
A small number of success criteria are written to apply to “a set of web pages” or “multiple web pages” and depend upon all pages in the set to share some characteristic or behavior. Since the unit of conformance in WCAG 2 is a single web page, the task force agreed that the equivalent unit of conformance for non-web documents is a single document. It follows that an equivalent unit of evaluation for a “set of web pages” would be a “set of documents”. Since it isn't possible to unambiguously carve up non-web software into discrete pieces, a single “web page” was equated to a “software program” and a “set of web pages” was equated to a “set of software programs”.
Demonstration of AI Powered Accessibility Auditing
Tracks
AI
Proposer
David Fazio
Description
This session will demonstrate how Artificial Intelligence can be used for a number of accessibility analysis tasks in natural language processing, image understanding and design evaluation.
Goals
Demonstrate the power of AI-driven rule engines to enhance accessibility auditing capabilities and reduce the need for manual auditing efforts.
Electronic Transferable Records: Implemented Using Transferable Verifiable Credentials
Tracks
Wallets
Proposer
Rachel Yager
Description
The presentation will share how Verifiable Credentials can be coupled with other decentralised technologies to implement the transferability feature of Electronic Transferable Records (ETRs). ETRs represent a digital evolution of traditional paper-based records, enabling instant, secure and easily verifiable electronic transactions. By leveraging transferable verifiable credentials, ETRs ensure the authenticity, integrity, and traceability of information throughout its lifecycle. Attendees will learn how TradeTrust (a freely available digital public good) uses these technologies to achieve trusted interoperability, with examples of use cases and the impact on various industries.
Goals
We aim to achieve:
Awareness: To raise awareness among W3C members about the benefits of digitalising transferable instruments and applicability of ETRs;
Alignment: To share the latest information on related discussions happening at other international forums as the global trade community marshals around a freely available and neutral standardized framework for ETRs. This would help ensure interoperability across different systems and platforms, making it easier for organisations to advance digital trade practices.
Collaboration: To foster collaboration and discussions among industry experts and organizations, leading to collective efforts in spearheading innovation efforts.
Feedback: To gather feedback from the W3C community on the importance of the transferability feature to them.
WebCodecs provides a low-level API to do encoding and decoding of video with control over settings on a per-frame basis. As a relatively young API it currently lacks some more advanced features, such as temporal/spatial scalability that are important for real-time use cases like video conferencing.
This session is intended to discuss a number of potential next steps, to find which features are highest priority, and what benefits or problems we face with each of those.
Some of the topics for discussion:
Explicit reference frame control
By allowing the user to specify which reference buffers to reference and which to update on a per-frame basis, it is possible to implement a number of important reference structures and coding features including temporal/spatial/quality layers, long-term references, low-latency 2-pass rate control, etc.
In short, any of the scalability modes listed in Scalable Video Coding (SVC) Extension for WebRTC, and many more, can be implemented with a small set of tools. If done right, this could even be done in a manner that is codec- and implementation-agnostic.
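To make the temporal-layer case concrete, here is a small illustrative sketch (the function is hypothetical, not a proposed API) of how frames in a dyadic temporal structure like the SVC extension's L1T3 mode are assigned to layers; explicit reference control would let an application build exactly this kind of structure by hand:

```javascript
// Hypothetical helper (not part of WebCodecs): assign each frame in a dyadic
// temporal structure to a layer. With 3 layers the pattern repeats every
// 4 frames as T0, T2, T1, T2 - dropping T2 halves the frame rate, and
// keeping only T0 leaves a quarter-rate base layer.
function temporalLayerId(frameIndex, numLayers) {
  const period = 1 << (numLayers - 1); // group length: 4 frames for 3 layers
  const pos = frameIndex % period;
  if (pos === 0) return 0; // base layer
  // The lowest set bit of pos determines the layer.
  let layer = numLayers - 1;
  let p = pos;
  while ((p & 1) === 0) {
    p >>= 1;
    layer -= 1;
  }
  return layer;
}
```

With explicit reference control, a frame on layer L would be encoded referencing only buffers holding frames from layers at or below L, which is what makes the upper layers droppable.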
This way of modeling an encoder also presents some issues. The user needs to be able to determine how many reference buffers are available, how many can be referenced per frame, and which references are allowed or disallowed under various circumstances. How do we expose such data in a way that is user friendly, compatible with the current API, and avoids unnecessary fingerprinting surfaces?
There are also tradeoffs when it comes to integrating with existing encoder implementations, a small subset of which may not fit well into this model.
Spatial/Quality Scalability
Spatial scalability can be achieved by changing the encode call to take a sequence of encoding options, instead of a single option, per input frame. Each option would then represent a different layer and would include a desired encoded resolution. With reference frame scaling, a user may reference a buffer containing a different resolution.
Again, this comes with some challenges. Different codec types might have different bounds on the scaling factors, and certain implementations have their own limitations in this regard - if scaling is supported at all. Some codecs allow reference frame scaling only within the same temporal unit, while others support any reference at any time. How do we handle encoders with special optimized modes such as "multi-res" or "S-mode aware" encoding?
Rate Control
When dealing with layered encoding, rate control becomes much more involved. The easiest way is to just support CQP, leaving all of the rate control with the user. If CBR is desired, the encoder needs to understand the bitrate target and expected frame rate for each spatio-temporal layer; this means it suddenly needs to be SVC-aware even if the user is doing all of the reference frame control.
Auxiliary
There are many other knobs that could potentially be added: speed/quality control, segmentation/ROI mapping, etc.
What's on the wish-list of the community?
Further, there is a proposed breakout session on RtpTransport, an API that allows users to send custom-encoded frames over the RTP channel of a PeerConnection and is intended to go hand-in-hand with WebCodecs.
Goals
Find the highest-priority features in the community, and which aspects need more consideration
Agenda
The agenda is to discuss the proposal to add reference frame control to WebCodecs, and gather feedback and comments on the path forward. The session consists of a few parts:
General goals
Our initial proposal, a minimum viable useful implementation of reference control
Smooth sign-up experiences on the web are essential for our users’ experience, and the fastest account creation experience is one where the user doesn’t have to create an account at all!
While such experiences were powered by federated identity protocols using low level primitives in yesteryear’s web (e.g. third-party cookies, iframes and redirects), today’s privacy requirements lead to a new standard proposal to provide them with a more deliberate, safer and private binding - Federated Credential Management (or FedCM for short).
In this session we will briefly cover FedCM and how it does its magic, how it interacts with other efforts in the identity space, such as WebAuthn, and demonstrate real-life UX improvements that FedCM provides over the alternatives and discuss what can be improved in this space.
Co-hosts: @samuelgoto @gioele-antoci
Goals
Clarify the role of FedCM in the identity ecosystem, demonstrate its advantages and gather feedback on the feature and its future direction.
This is an interactive session to understand how to mitigate a number of specific threats identified during the Federated Identity Working Group's recharter review for the addition of the Digital Credentials API:
a. Perpetuates sharing of personal data by making it more available via a browser API
b. Increased centralization through subtle tradeoffs
c. Content will be moved from the deep web to the “attributed deep web”
d. Exchanges user agency for greater compliance and convenience
This breakout is intended to be a collaborative, working session. The focus will be on gaining consensus on the mitigations.
Goals
This breakout is intended to be a collaborative, working session. The focus will be on gaining consensus on the mitigations.
The session is going to be a presentation part and discussion part.
In the TPAC 2023 breakout, the Google Chrome browser team shared recent efforts on ServiceWorker and discussed potential opportunities for making ServiceWorker even faster. This year, we’d like to present a couple of updates since last year, including a newly shipped feature called the Static Routing API, a new proposal extending the Resource Timing API for the Static Routing API, and some future ideas like ServiceWorkerAutoPreload.
For the first half of the session, we’re going to present performance problems around ServiceWorker and introduce the new APIs and how they work. For the second half, we’d like to spend time on discussion.
Goals
This session aims to discuss ServiceWorker performance issues and how we can mitigate them. For the Static Routing API (now part of the ServiceWorker spec), we’d like to hear browser vendor status. For the other APIs that we’re proposing, we’d like to discuss and gather feedback from the community to move the standardization process forward.
Purposeful Permissions - Adding data use information to permission prompts
Tracks
Permissions
Proposer
Alexandra Reimers, Nick Doty, Serge Egelman
Description
Today’s web users often encounter permission prompts that lack context about why a website needs specific permissions and how the data will be used. While developers strive to provide context, the current approach lacks structure and consistency across websites. This session will explore options to add purpose declarations or other trustworthy, explainable contextual information to a permission request to bridge this gap and bring users greater transparency on how their data is used.
The discussion will focus on:
Use cases that might benefit or provide particular requirements for declarative contextual information, including access to information from government-issued credentials.
The potential roles of different stakeholders, including browser vendors, developers, and standardization bodies, in driving such an initiative forward.
Possible declaration options with various granularity levels spanning from links to the privacy policy to a fully fledged label system for data types and purposes of use.
Key challenges and opportunities in implementing purpose declarations for permission-gated capabilities.
Goals
Foster discussion and gather insights from various stakeholders on the development and implementation of permission purpose declarations.
Describe promising methods to provide purpose declarations and context for permission requests and identify volunteers who want to explore them, for future incubation and research, in credentials and other APIs.
Responsible Integration and New Use Cases of DNS Domain Names
Proposer
Swapneel Sheth
Description
Domain names have long been used as identifiers in applications. In the early days of the Domain Name
System (DNS), domain names were associated with Teletype Network hosts, File Transfer Protocol servers,
and email services. Later, they were adopted for web browsing.
Over the last several years, many novel use cases have emerged that utilize domain names. One such use
case is allowing a user to verify control of a domain name, e.g., to show a verified badge on a profile as is
seen with GitHub organizations. Another use case is as a social media handle, e.g., as performed in Bluesky.
Blockchains and other decentralized applications are yet another use case, e.g., where a domain name may
serve as a reference to a digital wallet address as seen in the Ethereum Name Service (ENS) or in various
proposed Decentralized Identifiers (DID) methods.
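As a sketch of the domain-control verification use case mentioned above (the "myapp-verify=" token format and the helper are illustrative, not from any standard): an application asks the user to publish a TXT record carrying a verification token, then looks the record up and checks for a match. In Node the lookup itself could use dns.promises.resolveTxt(domain); only the matching step is shown here so the example stays self-contained.

```javascript
// Illustrative only: the "myapp-verify=" token format and this helper are
// hypothetical. dns.promises.resolveTxt() resolves to string[][] because a
// single TXT record may be split into multiple character-string chunks.
function isDomainVerified(txtRecords, expectedToken) {
  return txtRecords
    .map((chunks) => chunks.join('')) // re-join chunked records
    .some((record) => record === `myapp-verify=${expectedToken}`);
}
```

The portability benefit follows directly: the user revokes or moves the integration simply by changing which TXT records their domain publishes.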
We propose naming the process of integrating and maintaining a domain name into an application a DNS
integration. These integrations have benefits, such as allowing users to keep a consistent identifier across
their website, email, and new application use cases. Another benefit is portability, as users can opt into or
out of integrations, e.g., by changing what DNS records are associated with their domain name.
This session will raise awareness of DNS integrations, the challenges they face, and facilitate discussions
around how such challenges may be addressed. We will also highlight our IETF draft for DNS integrations and
seek feedback from the community on additional topics to consider in this or future standards related work.
Goals
Raise awareness of DNS integrations and seek feedback for active standards work on providing guidance to applications that want to provide a DNS integration
Agenda
The agenda will be two parts.
Part one will be a presentation to provide background context that:
Walks through examples of DNS integrations and their use cases (social, digital wallets, identity, etc.)
Explores why applications have chosen to provide DNS integrations
Provides measurement results showing challenges that select DNS integrations face
Promotes awareness of our IETF draft for DNS integrations and the topics it currently covers
Part two will be a discussion among participants about DNS integrations and what additional topics or
concepts should be covered in the current IETF draft or future standards work, including at the W3C.
Join us at the W3C TPAC 2024 Tech Plenary to address the challenges and potential solutions for the updatable REC process. We'll focus on the issues outlined in issue #866 and collaborate on making the process more efficient.
AI models can be executed on the client web platform and can add significant functionality to web applications. However, they can also be quite large, requiring significant resources to download and store. Download and compilation latencies can potentially impact the user experience.
This breakout will discuss ways in which these issues can be mitigated. Possible topics include the following.
Background model download and compilation.
Caching strategies, including potential cross-site caching mechanisms with privacy-preserving mitigations
Model naming and versioning, allowing for model substitution when useful
Access to both downloadable and pre-installed models with a common interface
Storage deduplication
Model representation independence
API independence (e.g. sharing models between WebNN and WebGPU implementations)
Offline usage, including interaction with PWAs.
Common models pose lower privacy risks
Note: this is both an AI topic and a Storage topic. Input from both communities would be useful and is encouraged!
An Individual Differential Privacy Framework for Rigorous and High-Utility Privacy Accounting in Web Measurement
Proposer
Roxana Geambasu, Benjamin Case
Description
@bmcase and I, along with several differential privacy researchers, have developed a compelling privacy framework where each device tracks and controls the privacy loss incurred by the user’s participation in various measurements, such as advertising, engagement, or mobility analytics. Currently, these measurements require collecting sensitive user activity traces (e.g., visited sites, purchases), which raises privacy concerns. Our framework proposes a privacy-preserving alternative: the device tracks activity locally and generates encrypted reports, which can be aggregated by a trusted execution engine (TEE) or secure multi-party computation system.
We formalize our framework using individual differential privacy, allowing each device to account for and constrain their own user’s privacy loss toward each measurement party. This approach offers significant privacy-utility benefits over traditional models and improves transparency by letting users monitor their privacy on each device. However, it also introduces potential biases in measurement results, which we are working to address, but for whose design we require the community’s input.
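A toy sketch of the per-device accounting idea (the class name and budget semantics are illustrative, not taken from the paper): each device tracks the epsilon consumed per measurement party and declines to contribute a report once its budget would be exceeded - which is where the bias noted above can creep in, since exhausted devices silently drop out of the aggregate.

```javascript
// Toy sketch of individual privacy-budget accounting (illustrative names):
// the device tracks epsilon consumed per measurement party and skips any
// report that would exceed its total budget.
class PrivacyBudget {
  constructor(totalEpsilon) {
    this.total = totalEpsilon;
    this.spent = new Map(); // measurement party -> epsilon consumed
  }
  // Returns true and records the charge if the report fits in the budget.
  tryCharge(party, epsilon) {
    const used = this.spent.get(party) ?? 0;
    if (used + epsilon > this.total) return false; // exhausted: no report
    this.spent.set(party, used + epsilon);
    return true;
  }
}
```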
At the breakout, we thus plan to:
Present our privacy framework, which we developed initially for advertising measurement use cases.
Seek community feedback on applying the framework to other domains, as we believe our framework is much more general.
Discuss strategies to mitigate bias introduced by individual privacy tracking.
An academic paper describing our privacy framework can be found here.
Goals
To present our individual differential privacy framework for web measurements, gather community feedback on extending its application beyond advertising, and explore strategies for addressing challenges like bias in measurement results.
Agenda
Outline:
Background on ad measurements and emerging APIs
Our privacy framework: Cookie Monster
Discussion on broader applications and bias mitigation
Real-time effects applied to camera input such as background blur, face framing, and lighting correction are becoming more available to users through their operating systems, browsers, and other software components. These effects manipulate camera input before it reaches Web applications via getUserMedia().
Web applications that want to use camera input (and possibly apply their own effects) need to know when effects are applied and respond accordingly.
The Ministry of Digital Affairs (moda) of the Taiwan government has initiated a four-year (2024-2027) project aiming to build a permissionless infrastructure that secures digital identity.
The digital wallet project will build digital civic infrastructure of issuer, wallet, and verifier based on the standards of W3C Decentralized Identifiers (DID) and Verifiable Credentials (VC). It features:
Public Money, Public Code: The software parts will be licensed as open source software to the public.
Open Ecosystem: To expand use cases, we welcome everyone to become an issuer, wallet, or verifier provider to meet any market needs.
Sandbox Environment: We aim to maintain a testing playground for various needs starting in 2025. If you’re interested in joining, please send an email to our contacts.
We will also hold a bi-monthly Technical Advisory Meeting for discussing related technical issues, trusted registry and its trust model.
Goals
For anyone contributing to the W3C DID and VC ecosystem, we welcome cooperation opportunities ranging from software interoperability and DID/VC use cases to international standards development and collaboration initiatives.
Agenda
The Ministry of Digital Affairs (moda) of the Taiwan government has started building a permissionless infrastructure that secures digital identity.
Discuss: How to build a new feature for the web platform — and make it a success with developers
Tracks
Feature lifecycle
Proposer
Michael[tm] Smith (sideshowbarker)
Description
The goal of this session is to explore and discuss how to best get new features for the platform successfully created, implemented, tested, documented, and adopted — likely with active participation in the session from some people who’ve had experience in each of those areas and who can offer particular “lessons learned” insights and tips. Also:
documenting the process/advice (brainstorming, with the goal of writing something up from the session notes)
discussing ways in which the existing process might be improved
answering questions about parts of the process that may be especially mysterious to most people
Goals
The session goal is to discuss and help each other understand how to successfully get new features into the platform.
So when you come to the session, please be prepared to discuss — with the discussion we all have together being guided by the following “How to build a new feature for the web platform” outline:
11-step process (zero-indexed, in hex):
0. Describing the problem: What specific problem are you trying to solve? Who are you trying to solve it for?
1. Proposing a solution: writing a good explainer with a problem description + proposed solution (optional/TODO step)
2. Initiating and leading a focused discussion in a spec issue tracker about the problem and possible solutions.
3. Putting together a spec or spec PR for a problem solution (and learning spec-publishing tools and their quirks).
4. Writing good WPTs and getting attention for them from reviewers.
5. Using browser-project bug/issue trackers to raise compelling implementation requests.
6. Contributing an implementation patch to a browser project (and learning the project’s patch-contribution process).
7. Getting documentation written for your feature in MDN (working with MDN writers/editors and technical reviewers).
8. Driving web-developer adoption through outreach in places where web developers pay attention.
9. Monitoring web-developer experience/success and identifying web-developer pain points/frustrations.
A. Iterating over each step as needed (including going back to step #0 and repeating the whole cycle)
* Common off-by-one error many folks make: starting at step #1 (proposing solutions without first describing problems).
In each of these work streams, there have been some common discussion items, including flexible but privacy-preserving error codes, selector experience and query syntax, conditionally mediated flows, and credential and user account identifiers, along with discussion of how these APIs should interact in the future (ex: requesting a VDC -or- an OIDC ID Token in one call). NOTE: Payments will be out of scope for this discussion.
This breakout is intended to be a collaborative, working session. The focus will be on gaining consensus on the more tactical items like error codes and selectors, with some time at the end reserved for forward-looking ideas.
Goals
This breakout is intended to be a collaborative, working session. The focus will be on gaining consensus on the more tactical items like error codes and selectors, with some time at the end reserved for forward-looking ideas.
It is not possible to get a publicly trusted CA to sign a certificate for a local domain (i.e. a non-publicly resolvable domain name such as router.local, printer.home, 192.168.1.1, etc), so currently router configuration pages, IoT devices, media servers, etc. have to either: not use TLS, rely on complicated workarounds, or use self-signed certificates and ask users to click through security warnings.
This session's goal is to explore potential solutions to this problem, such as PAKE (Password-authenticated key exchange) and TOFU (trust on first use).
There was previously a Community Group dedicated to this problem, but discussions seem to have stalled, and the group was closed in 2023.
Goals
Discuss potential ways HTTPS can be supported in local networks
More than 10 years have passed since EPUB-based digital manga and comics were first delivered, and some issues remain.
This session aims to discuss and collect issues, such as the current situation of the EPUB-based file format, accessibility of manga content, formats for scrolled comics, and so on.
Goals
Sharing the current situation and gathering issues
Sync on Web, now and next of realtime media services on web
Tracks
Real-time Web
Proposer
Kensaku KOMATSU
Description
The real-time Web keeps improving: at this moment we have the ability to build real-time media services on the web with WebRTC, WebCodecs, Web Audio, etc.
IMO, one interesting movement here is synchronization, meaning "time alignment". The general case of "time alignment" is lip-sync, but with Media over QUIC, mainly leveraging WebCodecs and WebTransport, we can develop more synchronization use cases combining media streaming with arbitrary time-related data. Examples include synchronization of real-time text, haptics, or MIDI events in co-watching or stream-media play-out.
With this capability, we believe the potential of the web will keep growing, since we can build real/virtual orchestrated services. Use cases include the metaverse, live viewing with MIDI data, remote robot operation, etc.
For these real-time use cases, clock accuracy and frequency are quite important, and we have several questions here. For example, is requestAnimationFrame() sufficient for future use cases? In this breakout session, we will discuss these topics around clock and sync on the Web.
This is an opportunity to share the status of two future-looking enhancements that are in the long term pipeline for view transitions, and are in their early design phase:
gesture-based view transitions: using a preemptive navigation gesture (swipe) to trigger a view-transition
cross-origin view transitions: finding a secure way to allow a subset of what's possible with view transitions to a navigation between two origins that don't necessarily trust each other.
Goals
Introduce the thinking behind the two future enhancements, and gather early-stage feedback, pushback, and hopefully enthusiasm!
Since these features touch on UX, navigations, performance and security, they could be of interest to people outside the CSS working group.
The Sustainability Community Group (CG) identified a number of projects and work areas in its first meeting. Since then, two key things have happened: First, the Sustainable Web Design CG has been forked off into its own in-progress Interest Group charter (on w3c-ac-members, member-only link) to focus on the Web Sustainability Guidelines, so this Sustainability meeting will focus on the other areas listed. Second, the Ethical Web Principles (EWP) has been voted on by the W3C Advisory Committee, and there were no objections to the section on environmental sustainability, which provides an excellent forward-looking focus for a Sustainability CG meeting.
Goals
The goal of this session is to discuss and pick a few of the Sustainability CG work areas that are most directly and actionably aligned with the EWP encouragement to “endeavor not to do further harm to the environment when we introduce new technologies to the web”, and identify goals and next steps towards those goals. For example, expanding on the Principles identified by the EWP, and how to do a sustainability (s12y) assessment of new and proposed technologies towards establishing a practice of Sustainability Horizontal Reviews to build on W3C’s existing accessibility (a11y), internationalization (i18n), security, and privacy horizontal reviews.
Agenda
To be added to https://www.w3.org/wiki/Sustainability if this session is approved.
(Originally published at: https://tantek.com/2024/260/b1/w3c-sustainability-meeting)
What security guidance should we give web developers?
Proposer
Will Bamberg, Daniel Appelquist
Description
There are a lot of web platform features that relate to security, and they generally have pretty comprehensive documentation on MDN. But there's not a lot of normative guidance: which features should people use (and which should they avoid), why should they use them, and how should they use them?
In the Security Web Application Guidelines Community Group (SWAG CG) we've been trying to understand these questions, partly so we can update MDN with this sort of normative guidance for developers with deadlines. So this very open-ended session is proposed to gather input on security documentation requirements.
Why is it like this? Installed Web Apps - how they are built, function, and struggle today.
Tracks
Web Apps
Proposer
Daniel Murphy, Reilly Grant
Description
Installing web applications has been around for a while now, and common patterns, requirements, and problems have emerged. This presentation will go over these things with some case studies, and hopes to inform future development of web platform functionality to provide a stable, non-flaky, and functional platform for developers to create competitive and rich user experiences.
Goals
Understanding web app functionality, structures, problems, and common gotchas today.
Capturing the screen, a window, a tab, and even web elements has made it very convenient for users to share information on the web. There have also been efforts to prevent or restrict capturing to reduce data exfiltration, e.g. in an enterprise environment, or for content protection. We'd like to review capture-prevention use cases, and discuss new perspectives like user privacy, e.g. to avoid accidental information leaks.
Goals
Community discussion to gather interests, ideas and feedback
Enabling workload orchestration among central cloud, edge cloud, and clients has many useful use cases, such as accelerating AI applications, streaming services, and cloud gaming.
However, there is no standardized mechanism for workload coordination and orchestration between central cloud, edge cloud, and clients, which may hinder interoperability between the workload user, cloud provider, client-side OS, and applications.
This session will discuss the emerging new use cases and standards gaps of cloud-edge-client coordination.
Goals
Get feedback and discuss gaps in standards today to realise the emerging use-cases.
Agenda
Discuss emerging use cases (AI model use case, Home AI/IoT use case, Cloud gaming use case; 10m)
Session to discuss CSS modules and how they work with declarative shadow DOM:
https://github.com/WICG/webcomponents/blob/gh-pages/proposals/css-modules-v1-explainer.md
Carousels are an often-used design pattern on the web, appearing in a variety of contexts from product listing pages to slideshow-like content. OpenUI has explored a range of carousel designs, showing that the specific layout and appearance can vary dramatically. Many frameworks also provide carousels as components; however, implementing a carousel correctly is complicated and often results in inconsistent and sometimes inaccessible implementations on the web today.
There are a set of problems being solved by carousels, which we believe could be provided by a set of simple incremental CSS features, allowing developers to combine these CSS features to create the various designs in a completely customizable fashion. CSS-only component libraries could be built to further simplify this process with an eventual built-in style similar to customizable select.
E2EE is useful in several key aspects of the Social Web (such as private Direct Messages), as well as for backend storage in general. Join us for a discussion of the Social Web E2EE roadmap and related specifications.
Goals
Community Discussion of E2EE Issues in the context of the Social Web
The vision of authors being able to drop components and UI libraries into their pages and have things Just Work™ and look right (at least to a first-order approximation) is still largely unrealized. Integrating any UI component is incredibly laborious, as it requires manually communicating fine-grained design tokens to every single component.
This Web Awesome tutorial illustrates the problem perfectly:
And this is just for assigning a certain hue as the primary color of a whole page — the effort is duplicated for every third-party UI component that uses colors, fonts, measurements, font sizes, etc. Same-party components can use the same naming convention to reduce effort, but that doesn't work for components from different entities.
To reduce integration effort, we need to reduce the amount of information the host page needs to communicate about its design to each individual component. Some avenues are:
standardized ways to set design tokens (e.g. the page’s primary color or serif font) in a way that can be read by other components
ways to derive tokens from core tokens (e.g. a light tint from a primary color, or the next smaller font size in a scale) to minimize how many tokens need to be communicated.
ways for components to adopt and repurpose page styles of existing (standard) elements
???
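As a concrete illustration of the second avenue, deriving tokens from core tokens can be as simple as mixing a primary color toward white to produce a light tint. The sketch below is illustrative only; the token names and the mixing formula are assumptions, not part of any proposal.

```python
# Illustrative sketch: deriving a "light tint" token from a core primary-color
# token by mixing toward white. Token names here are hypothetical.

def hex_to_rgb(color: str) -> tuple:
    color = color.lstrip("#")
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb) -> str:
    return "#" + "".join(f"{c:02x}" for c in rgb)

def tint(color: str, amount: float) -> str:
    """Mix `color` toward white; amount=0 keeps it as-is, amount=1 gives white."""
    rgb = hex_to_rgb(color)
    mixed = tuple(round(c + (255 - c) * amount) for c in rgb)
    return rgb_to_hex(mixed)

# The host page communicates one core token; the derived token is computed.
tokens = {"--color-primary": "#3366cc"}
tokens["--color-primary-tint"] = tint(tokens["--color-primary"], 0.8)
```

If derivations like this were standardized, a host page would only need to communicate the core token, and every third-party component could compute the rest.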
Goals
Flesh out ideas for potential directions, see which are most viable in terms of I/E
Web Notifications are often used as a channel for unwanted spam. Even when notifications are initially wanted, it can be difficult for people to work out how to unsubscribe from them. Notification permission grants never expire. It is also a commonly observed pattern that some websites socially engineer people into accepting the notification permission, for example by withholding site content or functionality until the permission is granted.
In this session we would like to propose and discuss potential alternatives for structurally solving the aforementioned issues. These include:
1: Make notifications not-promptable
Instead of showing an interruptive pop-up when websites request notification permission, browsers could silently add a settings row available to the user in secondary browser UI.
Then, if the user wishes to subscribe to notifications from a website, they can proactively navigate to the browser UI to turn this toggle on. The benefit of this solution is (even if websites socially engineer users to grant them notifications permission), through the act of navigating to browser UI to turn on notifications, people inherently learn where to go to turn it off.
The drawbacks here are centered on discoverability, i.e., whether users will be able to find how to grant notifications when they actually want them. However, given the current prevalence of unwanted notification prompts, this might be a tradeoff worth making.
2: Make default notifications behaviour require an open and active tab
Most other permission types existing today require the website to be in an open and active browser tab for the capability to work. The notifications permission notably differs: it allows websites to message people even after they close all tabs from that website. In some ways, this is the point of the Web Push API.
Making the default notification behaviour tied to whether tabs from that origin are open gives people an intuitive way to sample notifications from a site while it’s still open, and to silence these notifications if they are unwanted. For sites that the user trusts highly, they can “upgrade” the permission to include the ability to notify them even when tabs from that site are closed. For installed web-apps, the "upgraded" behaviour could be the default.
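The delivery decision this proposal implies can be sketched as a small policy function. Everything here is hypothetical: the set names and the idea that the browser tracks "upgraded" and "installed" origins are illustrative assumptions, not spec text.

```python
# Hypothetical sketch of proposal 2: deliver a notification only while a tab
# from the sending origin is open, unless the user has "upgraded" the grant,
# or the origin is an installed web app (where upgraded behaviour is default).

def should_deliver(origin, open_tab_origins, upgraded_origins, installed_origins):
    if origin in upgraded_origins or origin in installed_origins:
        return True
    return origin in open_tab_origins
```

Under this model, closing all tabs from a site silences it by default, which gives users an implicit, low-effort way to mute unwanted senders.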
3: Expire stale notification permissions
Current decisions on notification permission prompts are permanent. The browser does not do any automatic clean-up of stale notification subscriptions, even if they are from a site that the user has not interacted with for many years.
Clarifying the purpose of the Web Push and Web Notification APIs on a philosophical level – amongst browser vendors as well as web developers – would allow user agents to enable helpful notifications while curbing unwanted notifications. For example, if we can agree that the purpose of notifications is to inform users about content being changed on a site, and in response, users are expected to actually visit the sender origin within a reasonable time window (such as within a few months), user agents could expire permission grants outside that reasonable time window. It is an open question if there are legitimate in-the-wild use cases for notifications that users only read, never interact with, nor do they ever visit the sending website again.
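The expiry policy described above reduces to a simple staleness check. The 90-day window below is an illustrative assumption; the text only says "a reasonable time window", and the real signal would presumably be richer than a single last-visit timestamp.

```python
from datetime import datetime, timedelta

# Sketch of proposal 3: a notification grant lapses if the user has not
# visited the sending origin within some window. The window length is an
# assumption for illustration only.

STALE_WINDOW = timedelta(days=90)

def grant_is_expired(last_visit: datetime, now: datetime) -> bool:
    return now - last_visit > STALE_WINDOW
```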
Goals
Share and discuss solutions to curb spammy and abusive notifications on the web, and to help users exert better control over their notifications. Identify points of consensus and points of contention. Identify open questions to be answered to sort out points of contention. Identify practical next steps to improve Web Notifications.
Agenda
Intro: Web Push Notifications problem space
Potential solutions + discussion
i. Make notifications not-promptable
ii. Notifications to require open tab
iii. Expiry for notification grants
iv. Any other potential solutions from the group
Identify points of consensus and points of contention
i. What are the open questions to answer to sort out the points of contention?
At TPAC 2023, we introduced the topic of digital identity wallets and their use on the web, asked the community for use cases and concerns, and invited browser engine maintainers to express interest in this work.
This evolved into a work item in the WICG, the Digital Credentials API, where folks from multiple communities involved in the digitally verifiable credentials space (including other W3C CGs and WGs, the OpenID Foundation, national standards organizations, privacy experts, issuers, verifiers, wallet makers, and more) have been working together to provide a safe, secure, privacy preserving method of sharing digital credentials on the web.
This session will provide an update on the work, the threat models being evaluated, upcoming work items, and open discussion.
Goals
Provide an update on the work stream to the wider community, hear new issues or concerns, receive feedback on core work items
In the related session on the 2024 breakout day, it emerged that the wider W3C community needs a place to discuss registry-mechanism topics such as best practices, experiences, and tooling. With the Web of Things WG advancing in defining its registry, we want to use this session to present the analysis done so far and collect opinions from others.
Goals
Experience Sharing, Discussion
Agenda
Wrap up of Breakout Day 2024, Registry Session
Experience from the development of the WoT Binding Registry
When a standard is written, it is required to include Security and Privacy Considerations and, if the technology is particularly disruptive, to assess its human rights impact.
One of the processes that can be used to produce these considerations in a practical and structured way is Threat Modeling: a repeatable process with several techniques to understand what we're building, what can go wrong, and what we can do about it.
In this session, we will explore how to initiate Threat Modeling from the early stages of a specification, using practical examples. This approach helps ensure that the result is secure, respects privacy, and is properly documented.
The Advisory Board (AB) published the W3C Vision as a Note earlier this year. The Vision Task Force (VisionTF) has processed most issues and a small number of Statement Blockers remain. This breakout session is an open session for working through the remaining Statement Blocker issues.
Goals
The goal of this session is to reach consensus resolutions on the remaining Statement Blocker issues for the W3C Vision, so that the Vision Task Force can prepare an updated W3C Vision Note for publication as a proposed Statement for an Advisory Committee vote.
This breakout will serve as an update to the broader community about the Web Install API. We will be discussing the shape of the API, decisions behind it, and show the progress around the API.
As the platform continues to evolve with richer and more powerful capabilities, more developers lean on PWAs to reach their customers. Yet distribution is still tied to native app stores or proprietary protocol workarounds, leaving one big gap related to application distribution.
Goals
Showcase the Web Install API and give an update on development. We want to gather community feedback on the API.
The Artemis plan calls for human habitation of the Moon from 2028, currently under international consideration from the perspectives of habitation, mineral resource collection, food, and medicine, and is considered the first step towards sending humans to Mars in the future. Communication with family and others is essential when people are away from Earth for long periods. Astronauts on the ISS are already guaranteed a private conversation with their families on the ground at least once a day, and can even watch football games using web and remote-conferencing technology. But the ISS is only 400 km from the ground; its latency is far less than that of a geostationary-orbit satellite (located 35,786 km above the ground), whose latency is said to be about 500 ms.
The distance to the Moon is 380,000 km, which theoretically translates into a latency of 2,600 ms.
Under these conditions, it is expected to be difficult for people living on the Moon to explore the Web, hold remote meetings with their families or watch football games, as they do on Earth, due to time-outs and other problems with current technology.
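The quoted figures can be checked with a back-of-the-envelope calculation using the speed of light; the distances are the approximate values given above.

```python
# Back-of-the-envelope check of the latency figures quoted above.

C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_ms(distance_km: float) -> float:
    return distance_km / C_KM_S * 1000

# A round trip via a geostationary relay crosses the ground-satellite gap
# four times (up, down, and back again):
geo_rtt_ms = 4 * one_way_ms(35_786)    # ~477 ms, near the quoted ~500 ms

# An Earth-Moon round trip is simply there and back:
moon_rtt_ms = 2 * one_way_ms(380_000)  # ~2535 ms, matching the ~2,600 ms figure
```

These are propagation delays only; processing, queuing, and protocol round trips would only add to them.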
In this breakout session, we would like to discuss the possibility of using web technology on the Moon in the same way as on Earth, although the speed of light cannot be exceeded.
Goals
Participants will decide on the direction of future discussions.
Chrome is proposing new improvements to the Web Speech API, including:
Offline speech recognition: Allow speech recognition without an internet connection. [1]
MediaStreamTrack support: Enable seamless integration of speech recognition with audio and video streams. [2]
Spoken punctuation parameter: Add a new parameter to the speech recognition API that allows developers to control how spoken punctuation marks are handled in the transcription. [3]
Blending Realities: Building An Open Web Wide World
Proposer
Harry Wang, Rocman Zhao
Description
The building blocks for constructing a tangible virtual world have begun to take shape. Technologies from various fields are becoming increasingly mature, such as gaming, social platforms, digital twins, and virtual reality.
This session aims to trigger discussion on how each stakeholder may collaborate to co-create a tangible virtual world that is interconnected, mutually beneficial, supportive, and belongs to all. We need a set of standards that enable the integration of individual efforts to build an open platform on the web, fostering prosperity through cooperation rather than competition.
This session will delve into our recent endeavors in crafting virtual world experiences on Tencent products. We will share insights from our development process, highlight the innovative solutions we implemented, and discuss the hurdles we encountered along the way.
The following aspects will be addressed:
Types and Corresponding Roles: Sites as Service Integrators and Programs as Service Providers
Scoping: Defining the Origins of Assets and Elements
Asset Deployments: Planning Asset URIs and Deploying according to Scoping
Site Hierarchy: Inter-Site Integration through Contracts and Administration according to Scoping
Site-Program Integration: Cooperating between Programs and Sites through Contracts
Permission Controls: Managing Authorization for Assets and Elements across Sites/Programs
Version Management: Independent Releases by Sites/Programs with Unified Synchronization across the Virtual World
Multi-user Interaction: Distributed across Sites and Seamlessly Integrated through On-Demand Pub/Sub for Users
Goals
1) Call for participation: co-create one tangible virtual world that is globally shared, co-constructed, and interconnected on the web. 2) Share ideas on user experience, economic models, and technological challenges. 3) Discuss standards needs for asset, logic, user-interaction, UI, and business integrations.
When used in Editors' Drafts, ReSpec and Bikeshed provide very powerful and useful JS controls that bring a lot of value to the community. For example, both ReSpec and Bikeshed allow clicking on variables in algorithms, which get highlighted to show where they are used. ReSpec adds a copy button to WebIDL, letting readers copy WebIDL with a single click, etc.
Other useful things are the definition boxes, MDN annotations, etc.
Unfortunately, these are inconsistently presented across specs/tools, providing different experiences and capabilities. As a community, we should look to align on the design of the above things, and expose these JS tools to all specs that need them.
It would be great to have a session where we look at the great features both Bikeshed and ReSpec provide, see where we have overlapping features, and start looking at how we can harmonize the JS libraries into a single codebase.
It would be great if we could find a UI designer to help us make these tools even better... and, as a bonus, it would be cool if we could start using popover and other modern web features to achieve these things.
Hi @tabatkins, @sideshowbarker, @sidvishnoi, @fantasai, @dontcallmedom, @tidoust, @deniak 👋
Goals
Shared JS libraries for spec goodies
Agenda
Demo what Bikeshed has
Demo what Respec has
Compare differences
Talk about convergence
Talk about other neat things we could develop that would help users
Talk about what we can bring to TR and maybe let TR/ design handle
The upcoming JSON-LD recharter includes two new compatible formats based on the JSON-LD data model: CBOR-LD and YAML-LD. CBOR-LD provides a highly compressed format, keeping data small enough to be represented in QR codes, distributed in short-lived, low-bandwidth scenarios, or used on smaller devices when coupled with W3C Web of Things descriptions. YAML-LD aims to provide simpler, human-friendly formatting while staying fully compatible with the JSON-LD data model, allowing easy round-tripping between the two formats. The WG is also exploring a handful of other documents related to internationalization and enhanced processing modes.
Session to discuss DOM parts proposal:
https://github.com/WICG/webcomponents/blob/gh-pages/proposals/DOM-Parts-Imperative.md
https://github.com/WICG/webcomponents/blob/gh-pages/proposals/DOM-Parts-Declarative-Template.md
This session is meant to touch base on how horizontal reviews are conducted at W3C and for the Web
In scope:
Scope of horizontal reviews: W3C Technical Reports, W3C Community Group reports, Web specifications from other organizations (WHATWG, IETF, ECMA, etc.)
Prompt spam and reputation attacks associated with requestStorageAccessFor
Proposer
Aaron Selya, Chris Fredrickson
Description
Discussion on how to expand the requestStorageAccessFor API to reduce the potential for it to be used as a vector for reputation attacks and prompt spam.
These are issues because embedded sites cannot control who embeds them, which means that the top-level site can prompt on behalf of the embedded site. This could potentially damage the embedder's reputation and/or spam the user with a large number of prompts.
Goals
Gather input from the community and gain consensus on how to address the problems
Agenda
Introduce the problem
Review how the browsers have addressed it so far
Discuss more potential solutions
Identify where definitions exist in the DOM spec that can be used by UIEvents, with the goal of removing all duplication and enabling a proper algorithmic description of all UIEvents.
Goals
Identify where UIEvents needs to hook into the DOM spec
Join us as we discuss the SocialWeb CG activities and developments in the areas of Social Web, Fediverse, distributed social software, over this past year.
Goals
Apprise the SocialWeb developer community of relevant specs and developments
Based on the discussion during the W3C Workshop on Smart Cities in 2021 and two follow-up breakout sessions with invited key stakeholders from related SDOs (one during TPAC 2022 and another during TPAC 2023), we identified the stakeholders around Smart City standardization and reasonable applications for Smart City technologies. In addition, we got thoughtful input on how to improve the draft Charter of the proposed Web-based Digital Twins for Smart Cities Interest Group for further discussion.
Some of the key input included:
"Digital Twins" should be the key concept for Smart Cities standardization.
Web standards, e.g., Web of Things (WoT), Decentralized Identifiers (DID) and Verifiable Credentials (VC), should be considered as possible key modules for the possible Web-based Digital Twins framework.
Standard vocabulary for semantic interoperability is also required for the platform.
Collaboration by related SDOs is necessary for further discussion, and W3C should become the central hub for the discussion given Web standards play very important roles.
Now that the Web-based Digital Twins for Smart Cities Interest Group has been launched, we'd like to start actual discussion with the key stakeholders from related SDOs, industries, and countries/cities to work on the following deliverables of the Interest Group:
Survey on the existing technologies and standards for Smart Cities, e.g., possible building blocks of Digital Twin Framework and standardized Vocabularies for Smart Cities
Best Practices on what kinds of technologies to apply to what kinds of Smart Cities applications, e.g., WoT, Automotive, Geospatial, VR/AR, Speech, and Semantic Web applied to improved accessibility, visitor guidance, and energy management
W3C wants to know how to improve member engagement and member retention. What would you like to see W3C do for you and your organization? Are you interested in regional meetups? Information about charters and recommendations that are relevant to your organization? Creating relationships with other standards organizations? Tell us what you would like to see W3C do for you. This session will have a short introduction and then be an open conversation.
Web platform APIs are typically specified having web browsers as their main target. However, there is a large set of JavaScript runtimes that benefit from having a common API surface with the web platform.
WinterCG is a W3C Community Group focused on the needs of such runtimes, particularly on the server side. The main focus of this CG has been to work on a minimum subset of web platform APIs that all such runtimes would support (the WinterCG Minimum Common API), but we also work on identifying areas where such runtimes would benefit from changes in the web platform specs.
Goals
Exploring the collaboration between the web platform and server-side runtimes.
Legend:
Feature lifecycle,
Permissions,
UX,
Identity,
Real-time Web,
AI,
Web Components,
Wallets,
Standards,
Web Apps.
A time like ‘02:00+1’ means 2 a.m. the next day.