Lazy loading
zcorpan: want to discuss the different approaches browsers take with regard to when they load lazy images. Specifically, the rootMargin on the IntersectionObserver. E.g. Firefox uses a 0 rootMargin; Chromium uses a network-dependent rootMargin, 1250px to 8000px.
zcorpan: Open questions: 1. Are people happy with the Chromium behavior? 2. There are suggestions in the HTML Standard about what information to consider.
emilio: Firefox update: currently shipping a 0 margin default (but user-configurable). Actively looking into updated strategies with the performance team for better defaults. Developer feedback is that they like the control JS lazy-loading gives them. Maybe a different topic, but worth discussing… would the value be global, per image, or something else?
vmpstr: IntersectionObserver doesn't apply to nested scrollers. Any way to deal with that?
zcorpan: yes, this is an open issue with the IntersectionObserver spec. There's no non-hacky way to ground this on IntersectionObserver. It'd be ideal for an IO to be able to opt in to a rootMargin that applies to all scrollable containers. #431
domenic: a bit surprised we're using IO as the basis given this and many other mismatches.
emilio: agreed, but it's getting better.
zcorpan: browsers do use IO to implement lazy loading, so probably worth keeping this layering and resolving the IO issues.
fantasai: Authors might want to adjust the rootMargin based on their guesses as to the user’s scrolling behavior, but it seems more likely that they need to adjust the timing due to differences in resource sizes. So maybe providing hints as to the size of each resource would be more useful more of the time (and would avoid interfering with user prefs or UA smarts as to scrolling behavior and network speed/latency).
emilio: need more data on what authors need, what they’re doing now
zcorpan: some JS libraries allow per-image customization. Most seem to have small rootMargin values (but that often results in the images not being loaded by the time they're seen).
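For context, the per-image customization zcorpan mentions typically looks like an IntersectionObserver-based lazy-loading pattern. A minimal sketch follows; the network-type-to-margin mapping is a hypothetical illustration of a Chromium-style network-dependent margin (only the 1250px/8000px endpoints come from the discussion above), not any browser's actual table:

```javascript
// Hypothetical mapping from effective connection type to a preload margin,
// loosely inspired by Chromium's network-dependent rootMargin (reported
// above as 1250px on fast networks up to 8000px on slow ones). The
// intermediate values are invented for illustration.
function rootMarginForEffectiveType(effectiveType) {
  switch (effectiveType) {
    case "4g": return "1250px";
    case "3g": return "2500px";
    case "2g": return "4000px";
    case "slow-2g": return "8000px";
    default: return "1250px"; // assume a fast network when unknown
  }
}

// Typical JS lazy-loading: observe placeholder <img data-src> elements and
// swap in the real URL once they come within rootMargin of the viewport.
function lazyLoadImages(doc, connection) {
  const margin = rootMarginForEffectiveType(connection && connection.effectiveType);
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.src = entry.target.dataset.src;
      obs.unobserve(entry.target);
    }
  }, { rootMargin: margin });
  for (const img of doc.querySelectorAll("img[data-src]")) observer.observe(img);
  return observer;
}
```

Note that, per the discussion above, the rootMargin here only applies relative to the top-level viewport, not to nested scrollers.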
Reusable image fetching logic
zcorpan: old issue. Browsers are probably reusing image-loading logic for many things, but the HTML Standard only specifies special logic for <img>. This makes adding new image-loading features, e.g. to CSS, hard. No work in the last 7 years to my knowledge. Any progress on solving this, or new information on how to solve it?
emilio: interop issues with CSS images having different caching policies. But usually all browsers have a centralized image loader. Shared caches, and so on.
zcorpan: also an issue with the image cache. The spec probably doesn't match implementations.
emilio: changed Gecko to match the spec a bit better for the image cache. This can also affect preloading.
Domenic: there's interest from Chromium in working on the memory cache, image cache, etc.
emilio: the architecture in Gecko is different from Chromium's. If you have an uncacheable image loaded from a stylesheet, (something something)
Domenic: the interop issue is important. emilio, you seem to have knowledge about this.
emilio: happy to work on this
<annevk> And with EXIF there are decoding considerations too that need centralization. (And I guess in general we want uniform image decoders across features that consume images.)
Domenic: I'll take the action item to start the discussion to collate the concrete interop issues.
Specify HTML preload scanner / speculative parser
<annevk> (tests then spec is a pretty good order for legacy stuff, as Simon knows)
zcorpan: I've been working on this. Tests demonstrate interop issues.
zcorpan: document.write() of external scripts causes performance problems, so the speculative parser tries to help. Gecko vs. Chromium/WebKit: Gecko more faithfully uses the real HTML parser to build a speculative tree. Chromium/WebKit only use the tokenizer, with some tree-building knowledge (e.g. they know about script elements and style elements; they don't know about the SVG namespace). This allows generating test cases to show interop differences. Don't have data on the web-developer pain these interop differences cause, but it'd be useful to reach interop. The question is: how much detail do we want in the spec, and which behavior do we want to specify? Also, we need to speculatively parse the document.write() string if you're writing an external script.
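To illustrate the tokenizer-only approach described above, here is a toy preload scanner that walks start tags and collects candidate URLs without building a tree. Everything in it (the regexes, the handled tags) is a simplification for illustration, not any engine's actual logic; real scanners also deal with character encodings, <base>, srcset, templates, foreign content, and more:

```javascript
// Toy "tokenizer-only" preload scanner: no tree is built, so it cannot know
// e.g. whether a tag is inside <template> or an SVG subtree. This is exactly
// the kind of gap that lets a full tree-building scanner (Gecko-style) and a
// tokenizer-only scanner (Chromium/WebKit-style) disagree.
const TAG_RE = /<(\w+)([^>]*)>/g;
const ATTR_RE = /(\w+)\s*=\s*("([^"]*)"|'([^']*)'|([^\s>]+))/g;

function scanForPreloads(html) {
  const urls = [];
  let m;
  TAG_RE.lastIndex = 0;
  while ((m = TAG_RE.exec(html)) !== null) {
    const tag = m[1].toLowerCase();
    const attrs = {};
    let a;
    ATTR_RE.lastIndex = 0;
    while ((a = ATTR_RE.exec(m[2])) !== null) {
      attrs[a[1].toLowerCase()] = a[3] ?? a[4] ?? a[5];
    }
    if ((tag === "img" || tag === "script") && attrs.src) urls.push(attrs.src);
    if (tag === "link" && attrs.rel === "stylesheet" && attrs.href) urls.push(attrs.href);
  }
  return urls;
}
```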
hsivonen: have you found any cases where "the right thing" differs from "what Firefox does"?
zcorpan: document.write("<meta charset>") plus a later document.write("<meta charset>") which invalidates that does not work.
smfr: reiterate othermaciej's comments in the issue that this is just an optimization done by UAs, and maybe it's not necessary to fully specify. If there are interop issues then maybe the spec should say enough to address those, but we're not convinced that it needs to be fully specified.
emilio: if we know of authors relying on particular bits, it'd be useful to document them; it's not clear whether it needs to be normative or formal.
zcorpan: so like a bullet point set of requirements or expectations
hsivonen: my main concern is not speccing things so that the less-accurate version is correct and Firefox's more-accurate version is incorrect. So either spec the correct thing or leave it hand-wavey enough that the correct version is allowed.
zcorpan: Another Firefox-is-not-totally-correct issue: it doesn't respect CSP. In Chromium the presence of a CSP meta entirely disables the speculative parser. (WebKit aligns with Firefox and ignores CSP.)
emilio: that seems like a bug we could probably fix. Is there a bug on file?
zcorpan: I haven't filed one. I do envision listing this as a requirement in the spec and filing a bug.
zcorpan: subtopic: the speculative cache. Ideally the spec would say something about that.
emilio: Gecko reuses the existing caches. Per-document stylesheet cache, image cache.
zcorpan: annevk brought this up in #5624.
(some discussion about stylesheet cache, subtleties, how it doesn't exist in the spec)
hsivonen: we have a hash table of URLs that have been speculatively fetched. So e.g. <img> and <script> pointing to the same URL won't speculatively fetch the same URL twice.
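The deduplication hsivonen describes can be sketched as a set of already-speculated URLs. This is a hypothetical sketch of the idea, not Gecko's actual data structure:

```javascript
// Hypothetical sketch of speculative-fetch deduplication: remember every URL
// a speculative fetch has already been started for, so e.g. an <img> and a
// <script> pointing at the same URL only trigger one speculative fetch.
class SpeculativeFetches {
  constructor(fetchFn) {
    this.fetchFn = fetchFn; // injected so the sketch stays testable
    this.seen = new Set();
  }

  // Returns true if a new speculative fetch was started, false if the URL
  // was already fetched speculatively.
  maybeFetch(url) {
    if (this.seen.has(url)) return false;
    this.seen.add(url);
    this.fetchFn(url);
    return true;
  }
}
```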
Render blocking stylesheets
(skipped)
beforematch event
josepharhar: helps pages reveal hidden content in response to find-in-page and scroll-to-text-fragment. Event happens before the browser scrolls to it. Integrates with CSS content-visibility: hidden-matchable. Example websites: mobile Wikipedia with collapsed sections; anything that looks like a <details> element (maybe we should also make it work for <details>)
smfr: does the beforematch event only fire for content-visibility: hidden-matchable?
josepharhar: yes. Originally we had it fire everywhere, but an internal privacy review showed a problem. This version makes it have similar privacy properties to scroll events.
smfr: it seems weird to restrict this to this small case.
emilio: FWIW I agree with smfr. Also wondering about find-in-page happening before the script that registers beforematch has loaded. Doing that will break the page's ability to use beforematch.
josepharhar: agreed. This mitigation isn't perfect, and locking the page out of beforematch is not great. Might be able to make the event non-bubbling and determine whether there are any event listeners?
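The reveal pattern discussed in this section could be sketched as follows, assuming the proposal's shape as described above: collapsed sections are styled with `content-visibility: hidden-matchable`, and the UA fires `beforematch` at such an element before scrolling to a find-in-page or scroll-to-text-fragment match inside it. The ids, class name, and helper here are illustrative, not part of the proposal:

```javascript
// Pure state transition: given the set of collapsed section ids and the id
// of the section a match landed in, return the new collapsed set.
function revealSection(collapsedIds, matchedId) {
  const next = new Set(collapsedIds);
  next.delete(matchedId);
  return next;
}

// DOM wiring (illustrative; requires a browser implementing the proposal).
// Assumes collapsed sections carry a "collapsed" class whose stylesheet rule
// applies `content-visibility: hidden-matchable`.
function wireBeforematch(doc, collapsedIds) {
  for (const id of collapsedIds) {
    const section = doc.getElementById(id);
    section.addEventListener("beforematch", () => {
      collapsedIds = revealSection(collapsedIds, id);
      section.classList.remove("collapsed"); // drops hidden-matchable styling
    });
  }
}
```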