It's important to be able to define components such that the definitions are relocatable. A component may use relative URLs for resources in its shadow DOM (e.g. stylesheets, images in stylesheets, src or href attributes). In this case, the URLs should be relative to the URL from which the component definition was loaded, not to the document hosting the component. This way resources stay portable with the component definition.
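For illustration, a minimal sketch (using made-up host and CDN URLs) of the resolution behavior being asked for, expressed with the URL API:

---
// Hypothetical URLs: the component is imported from a CDN directory,
// while the hosting page lives somewhere else entirely.
var hostDocumentURL = 'http://example.com/app/index.html';
var importURL = 'http://cdn.example.org/widgets/fancy-button/import.html';

// Desired: relative resources resolve against the import's URL,
// so the component definition stays relocatable.
new URL('styles/button.css', importURL).href;
// -> "http://cdn.example.org/widgets/fancy-button/styles/button.css"

// Not desired: resolution against the hosting document's URL.
new URL('styles/button.css', hostDocumentURL).href;
// -> "http://example.com/app/styles/button.css"
---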
Done. Each sub-component is resolved relative to its component, and a component is resolved relative to the document.
https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/components/index.html#link-type-component
When I switched to DocumentFragment instead of Document per import, I killed this ability. I need to figure out how to bring this back.
(In reply to comment #3)
> When I switched to DocumentFragment instead of Document per import, I killed
> this ability. I need to figure out how to bring this back.

Probably we should have multiple master documents, each of which represents something like a "base" URL. Then we could let each component be owned by the corresponding master document. Conceptually, a set of components from one framework (e.g. Polymer) should share one master. I'm not sure if this works, though.
Bug 22077 also has useful info.
(In reply to comment #5)
> Bug 22077 also has useful info.

Doh. I meant bug 21407.
*** Bug 21407 has been marked as a duplicate of this bug. ***
*** Bug 21399 has been marked as a duplicate of this bug. ***
Let's try to nail down some specifics.

http://example.com/index.html:

<link rel="import" href="http://randomurl.com/import.html">

http://randomurl.com/import.html:

<html>
<link rel="stylesheet" href="styles/1.css">
<link rel="import" href="2.html">
<script src="scripts/3.js"></script>
<img src="images/4.png">
<template>
  <link rel="stylesheet" href="styles/5.css">
  <img src="images/6.png">
</template>
</html>

What happens to each resource?
* Is it fetched and/or executed/processed?
* When accessed via the DOM (e.g. document.querySelector('link[rel=import]').import), what is the resolved URL?

1.css is not fetched, URL is http://randomurl.com/styles/1.css
2.html is fetched and processed, URL is http://randomurl.com/2.html
3.js is fetched and executed, URL is http://randomurl.com/scripts/3.js
4.png is not fetched, URL is http://randomurl.com/images/4.png
5.css is not fetched, URL is the empty string
6.png is not fetched, URL is the empty string

In other words:
* Only scripts and link[rel=import]s fetch resources in an import.
* URLs of all elements with hyperlinks are resolved relative to the imported document.
* At least as spec'd now, template documents don't have a base URL to resolve against. This means that using the templates from an import will lose the knowledge of the URL of the imported document in which they arrived.

Did I miss any other cases? Does anything seem wrong? Factually incorrect?
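As a sketch (not part of the original comment) of the "accessed via the DOM" half of the question, here is how the resolved URLs above could be inspected from script once the import has loaded, assuming the markup shown:

---
var importDoc = document.querySelector('link[rel=import]').import;

// Resolved against the import document, per the answers above:
importDoc.querySelector('link[rel=stylesheet]').href; // "http://randomurl.com/styles/1.css" (not fetched)
importDoc.querySelector('script').src;                // "http://randomurl.com/scripts/3.js" (fetched and executed)
importDoc.querySelector('img').src;                   // "http://randomurl.com/images/4.png" (not fetched)

// Template content lives in a document without a base URL (as spec'd then),
// so relative URLs inside it resolve to the empty string:
var tpl = importDoc.querySelector('template');
tpl.content.querySelector('img').src; // ""
---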
(In reply to comment #9)
> In other words:
> * Only scripts and link[rel=import]s fetch resources in an import.
> * URLs of all elements with hyperlinks are resolved relative to the imported
> document.
> * At least as spec'd now, template documents don't have a base URL to
> resolve against. This means that using the templates from an import will
> lose the knowledge of the URL of the imported document in which they arrived.
>
> Did I miss any other cases? Does anything seem wrong? Factually incorrect?

This is the same as my understanding (and implementation).
> * Only scripts and link[rel=import]s fetch resources in an import.

Makes sense.

> * URLs of all elements with hyperlinks are resolved relative to the imported document.
> * At least as spec'd now, template documents don't have a base URL to resolve against. This means that using the templates from an import will lose the knowledge of the URL of the imported document in which they arrived.

IMO, these 2 bullets should be handled the same. There should be nothing special about the contents of templates.

Given:

index.html
imports/import.html
imports/images/foo.png

index.html is a page which has <link rel="import" href="imports/import.html"> and import.html contains:

<img src="images/foo.png">

Right now the image's URL will resolve correctly, since the images folder is relative to the import's location as above. But the resource will not load while the image sits in the import, so this has little value on its own.

1. For the most part, and regardless of whether or not they are inside a <template>, elements in imports will eventually make their way to the main document, either by moving or by cloning. In our case, if the image is moved to the main document, the src will no longer resolve correctly.

2. There are use cases for resolving URLs correctly in imports: scripts and imports must do so, and custom elements may want to do so.

I think the ideal behavior would be that paths are resolved relative to the import while elements are in the import, and that when elements move to the main document, paths are still resolved relative to the import. This way our img would correctly resolve its URL whether it's in the import or in the main document.

That 'magical' behavior may be too tricky. If we have to pick one document against which these (non-script, non-import) element URLs should be resolved, then because of point #1 it needs to be the main document.
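A short sketch of the breakage described in point #1, assuming the file layout above with index.html served from http://example.com/:

---
var importDoc = document.querySelector('link[rel=import]').import;
var img = importDoc.querySelector('img'); // <img src="images/foo.png">

// While the image sits in the import, its URL resolves against the import
// (but the resource is not fetched):
img.src; // "http://example.com/imports/images/foo.png"

// Moving it into the main document adopts it, and the unchanged relative
// attribute now resolves against the main document instead:
document.body.appendChild(img);
img.src; // "http://example.com/images/foo.png" -- wrong path, likely a 404
---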
My hope here is to...

- Let shadow DOM have a base URL, and
- Let shadow DOM creation accept a template element (or an element in general) to be cloned.

In this way, we could inherit the base URL of the imported document into the clone-stamped shadow DOM, like:

host.createShadowRoot({ template: linkEl.import.querySelector("#theTemplate") });
I think I can describe the desired behavior more simply: (1) it needs to be easy to use elements from imports in the main document (one shouldn't have to do path fixup to make URLs work); (2) imports also need to be portable, so authors must be able to specify paths relative to imports.

My first reaction to tying the desired behavior described above to Shadow DOM is that it seems like an unfortunate coupling; what if I don't use Shadow DOM?

Perhaps a scoped <base> element would make Morrita-san's approach more general.

For background, here's how the Polymer HTML Imports polyfill (https://github.com/Polymer/HTMLImports) handles URLs in imports:

1. When an import is loaded, all attributes and CSS that contain relative URLs are re-written to be relative to the main document.
2. Imports are given a <base> element that has the main document's URL.
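A simplified sketch of the kind of rewriting described in point 1; this is not the polyfill's actual code (which also handles CSS url() values and more attributes), just an illustration of the idea:

---
function rewriteRelativeURLs(importDoc, importURL) {
  ['src', 'href'].forEach(function(attr) {
    var nodes = importDoc.querySelectorAll('[' + attr + ']');
    for (var i = 0; i < nodes.length; i++) {
      // Make the value absolute against the import's URL so it keeps
      // working after the element is moved into the main document.
      var value = nodes[i].getAttribute(attr);
      nodes[i].setAttribute(attr, new URL(value, importURL).href);
    }
  });
}
---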
(In reply to comment #13)
> ...
>
> My first reaction to tying the desired behavior described above to Shadow DOM
> is that it seems like an unfortunate coupling; what if I don't use Shadow DOM?
>
> Perhaps a scoped <base> element would make Morrita-san's approach more
> general.

Yeah, I don't think we should add magic arguments to ShadowRoot for templating; it smells like the <element> stuff we removed. I do like the idea of allowing <base> in a ShadowRoot; then Polymer could be smart and put a <base> for the import origin in the custom element <template>.

This is still going to be fraught with peril: XHR uses the <base> of the main document, and so do new Image().src, new Audio(), etc. That's going to be confusing. If you create a canvas and then do new Image().src to load an image to draw, it'll not be the same as if you had appended the image to the ShadowRoot and assigned a src. Making you append the node to the ShadowRoot is sad; now you're causing style recalcs just to draw into a canvas.
I agree that <base> could be a good solution here. Let's talk about the details a bit more.

If we just allow <base> for shadow DOM as Elliot mentioned, the imported-templates-without-shadow scenario isn't resolved, even though it doesn't complicate the ShadowRoot API.

In my understanding, a scoped <base> or <base scoped> is what Steve is suggesting. This is more general and shadow-agnostic. It seems reasonable to me, even though it might be more powerful than what we want.

I feel XHR and the like are orthogonal points here. Their lack of URL scoping is surely a problem, but it's more of a JavaScript-side thing than a markup thing. With my slightlyoff hat on, we could invent some 'lower-level' primitive which handles URL resolution in the browser, thus resolving both problems. But that feels like slight overkill.
(Just a reminder for myself): Also, <base scoped> implies that the resolved URL can change after mutating the tree, even if the node stays in the same document or tree scope. We should look into Blink to see whether any element relies on the fact that the URL won't change while it stays in the same document.
I now think using a <base> element is not desirable because we'd be introducing a new element to achieve what should be basic, standard behavior.

Conceptually, we could resolve this problem if elements were able to maintain a persistent connection to the import document's URL so that they could resolve URLs against it. One way to achieve this would be to give elements a `baseDocument` pointer which would be set by imports and used to resolve URLs.
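A sketch of how the hypothetical `baseDocument` pointer might behave (none of this exists in any engine; the file names follow the earlier example):

---
var importDoc = document.querySelector('link[rel=import]').import;
var img = importDoc.querySelector('img'); // <img src="images/foo.png">

// The import machinery would set the pointer on elements it creates:
img.baseDocument === importDoc; // true, hypothetically

// URL resolution would consult baseDocument first, so the src keeps
// resolving against the import even after the element is moved:
document.body.appendChild(img);
img.src; // still "http://example.com/imports/images/foo.png" under this proposal
---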
Node.baseDocument seems too powerful to me, but I agree that it'd be great if there were a straightforward way to retain the origin (or base) of imported fragments.

For me, DocumentFragment.baseURL, and thus ShadowRoot.baseURL, feels right. A DocumentFragment or ShadowRoot represents a kind of scope and boundary, which could have its own "scoped" base URL. The default value of baseURL is that of the owner document.

The challenge here is that there is no way to "pass" the baseURL from one to another. If you use appendChild(), the ownerDocuments of the new children are overwritten. We want to retain it, or its URL, somehow.

I think we could add a new ShadowRoot (or DocumentFragment) API to do this "baseURI-preserving" appendChild() alternative, say, ShadowRoot::render(documentFragment).

This hypothetical render() API:

- removes existing ShadowRoot children,
- clones the document fragment's children and inserts them into the ShadowRoot, and
- replaces the ShadowRoot's baseURL with that of the DocumentFragment.

In an IDL form,

---
partial interface DocumentFragmentRenderable {
  readonly attribute String baseURL;
  void render(DocumentFragmentRenderable);
};

DocumentFragment implements DocumentFragmentRenderable;
// FIXME: Seems too aggressive?
// Element implements DocumentFragmentRenderable;
---

This approach has some pros:

- Component authors no longer need to take care of the base URL. It works automatically once they use render().
- baseURI can be read-only and thus less error-prone (in Blink!).
- The definition is orthogonal to other Shadow bits and could easily move to HTML or another standard. Even HTML Imports can host it :-) XHR users can benefit from it as well.

One con is that this is not useful without ShadowRoot. We could possibly let Element implement DocumentFragmentRenderable, but I'm not sure it's worth doing since the implementation will be harder.

What do you think? Any suggestions are welcome. Well, the name render() could be better, first of all :-)
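A usage sketch for the proposal above; render() and DocumentFragment.baseURL are hypothetical, and `host` stands for some element that gets a shadow root:

---
var importDoc = document.querySelector('link[rel=import]').import;
var template = importDoc.querySelector('#theTemplate');

var root = host.createShadowRoot();
// render() would clear the root, clone the fragment's children into it,
// and carry the fragment's base URL along, so relative URLs in the cloned
// content keep resolving against the import.
root.render(template.content);
root.baseURL; // would report the import document's URL
---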
(In reply to comment #18)
> Node.baseDocument seems too powerful to me,
> but I agree that it'd be great if there were a straightforward way to
> retain the origin (or base) of imported fragments.
>
> For me, DocumentFragment.baseURL, and thus ShadowRoot.baseURL, feels right.
> A DocumentFragment or ShadowRoot represents a kind of scope and boundary,
> which could have its own "scoped" base URL.
> The default value of baseURL is that of the owner document.
>
> The challenge here is that there is no way to "pass" the baseURL from one
> to another.
> If you use appendChild(), the ownerDocuments of the new children are
> overwritten. We want to retain it, or its URL, somehow.
>
> I think we could add a new ShadowRoot (or DocumentFragment) API to do this
> "baseURI-preserving" appendChild() alternative, say,
> ShadowRoot::render(documentFragment).
>
> This hypothetical render() API:
>
> - removes existing ShadowRoot children,
> - clones the document fragment's children and inserts them into the
> ShadowRoot, and
> - replaces the ShadowRoot's baseURL with that of the DocumentFragment.

This is the wrong abstraction level; we shouldn't be adding new DOM mutation methods to support this (you've just reinvented innerHTML = ''; appendChild(fragment) on ShadowRoot). We could expose something like document.currentImport, just like document.currentScript, and Import can have a resolveURL() method on it where you could go fix up your resource URLs when the component is stamped.

I don't like adding magical behavior here, even at the root level. It seems very confusing, because now:

var img = new Image();
img.src = 'foo.png'; // load actually started here.
appendChild(img);    // now the URL is different; we start loading a different thing.

Also, if you remove an img from one shadow root and stick it into another, all the URLs are going to change.

I think we should be explicit and just expose a URL resolver thing on imports and then let applications set up the necessary abstractions (e.g. MDV can do the URL replacing for you if needed).
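A sketch of the explicit-resolver alternative; document.currentImport and the resolveURL() method are hypothetical here, mirroring document.currentScript, and #myWidget is a made-up template id:

---
var imp = document.currentImport; // the import currently being processed
var template = imp.querySelector('#myWidget');

function stamp(host) {
  var clone = document.importNode(template.content, true);
  // Fix up resource URLs explicitly at stamping time.
  var imgs = clone.querySelectorAll('img');
  for (var i = 0; i < imgs.length; i++) {
    imgs[i].src = imp.resolveURL(imgs[i].getAttribute('src'));
  }
  host.createShadowRoot().appendChild(clone);
}
---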
What about hooking adoptNode? There's already a step in the DOM spec for adopting a node that says to run the appropriate "base URL change steps" (see https://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#concept-node-adopt). If there were a way to detect that the current ownerDocument is an import document, it seems like this could be an ideal time to do "something".

A strawman for "something" being:

1. Get the fully resolved URL for whatever attribute is in question (src, href).
2. Make it relative to the new document's base URL, if possible.
3. Set the attribute to the value derived in the previous step.
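A sketch of the strawman, written as a manual fix-up over src/href attributes; in the proposal itself the browser would run something like this as part of its adoption steps:

---
function fixupOnAdopt(el, importBaseURL) {
  ['src', 'href'].forEach(function(attr) {
    if (!el.hasAttribute(attr)) return;
    // Step 1: fully resolve the attribute against the import's base URL.
    var absolute = new URL(el.getAttribute(attr), importBaseURL).href;
    // Steps 2-3: written back as an absolute URL here; relativizing it
    // against the new document's base URL would be a cosmetic refinement.
    el.setAttribute(attr, absolute);
  });
}
---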
Sorry for the late response. I was on vacation.

---

I'd note that URLs live not only in the DOM, but also in stylesheets. Although I agree that "less magical" is desirable, we need to provide some help to developers, like what Adam suggested. In general, the browser needs to provide some way to enumerate URLs or hook URL resolution so that developers can resolve URLs in less error-prone ways. Otherwise, apps will easily break each time the platform introduces a new URL attribute.
In my understanding, there was a rough consensus that <base> would work, except that having an extra element is ugly. I'm starting to think that we can just add the baseURL to ShadowRoot. It could be set either by a <base> in the shadow tree, by an object property, or by a constructor parameter. My preference is to add a parameter to createShadowRoot() so that it can be immutable.

---

> var img = new Image();
> img.src = 'foo.png'; // load actually started here.
> appendChild(img);    // now the URL is different; we start loading a different thing.

I think developers can avoid reloading just by using absolute URLs or letting documents share the same base URL. Thanks to the inspector, it won't be hard to diagnose the possible problem. Also, off-tree loading is done only by <img>, not by <link> or <script>. I don't think this is a good case for taste-checking.
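A usage sketch of the constructor-parameter option; the baseURL option to createShadowRoot() is hypothetical, and `host` and #widgetTemplate are made-up names:

---
var importDoc = document.querySelector('link[rel=import]').import;
var root = host.createShadowRoot({ baseURL: importDoc.baseURI });

// Relative URLs in content appended to this shadow root would then
// resolve against the import rather than the main document.
root.appendChild(document.importNode(
    importDoc.querySelector('#widgetTemplate').content, true));
---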
We chatted a bit about this with the Polymer folks, just to get some inspiration. Here are the key insights we sussed out:

* Shadow trees definitely need to have a base URL. This is bug 22255.
* We could probably get away with cloning being the place where URLs are magically resolved. The nature of the magic is less clear (see next bullet point).
* Tying URLs from imports to shadow trees seems too limiting. For instance, a hypothetical developer Scott may want to have a library of images stored as <img> tags in an import, using relative URLs. Upon cloning such a node into a document, Scott fully expects the right image to load. Unfortunately, this is not how <img> works today, so either we need to rewrite the src attribute upon cloning or ...
Even though I think I may have proposed the idea, I now think that any sort of scoped URL resolution scheme is too complicated and unnecessary (agreeing with Elliot). I see two ways to do this:

1. We do what the Polymer HTML Imports polyfill does: (a) imports get a <base> with the href of the main document's URL, (b) all URL attributes of elements in imports are re-written to be relative to the main document.
2. We change the URL when the element is adopted from the import into the main document (Adam's idea).
Morrita-san and I worked on this today. Here's the outcome of our discussion:

1) The resources in HTML Imports are already loaded relative to the respective import document URLs.
2) If we are arguing for some new "retained" URL property that is passed along when cloning an HTMLImageElement et al. from one document to another, the HTML Imports spec is not where we should be solving this. In fact, this new property is likely never to be introduced into the web platform.
3) We'll try to tackle the remainder of this problem in bug 22255.