This document is also available in these non-normative formats: XML.
Copyright © 2006 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document contains requirements for the development of an XML Processing Model and Language, which are intended to describe and specify the processing relationships between XML resources.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This First Public Working Draft has been produced by the W3C XML Processing Model Working Group as part of the XML Activity, following the procedures set out for the W3C Process. The goals of the XML Processing Model Working Group are discussed in its charter.
Comments on this document should be sent to the W3C mailing list public-xml-processing-model-comments@w3.org (archive).
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. This document is informative only. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
1 Introduction
2 Terminology
3 Design Principles
4 Requirements
4.1 Standard Names for Component Inventory
4.2 Allow Defining New Components and Steps
4.3 Minimal Component Support for Interoperability
4.4 Allow Pipeline Composition
4.5 Iteration of Documents and Elements
4.6 Conditional Processing of Inputs
4.7 Error Handling and Fall-back
4.8 Support for the XPath 2.0 Data Model
4.9 Allow Optimization
4.10 Streaming XML Pipelines
5 Use cases
5.1 Apply a Sequence of Operations
5.2 XInclude Processing
5.3 Parse/Validate/Transform
5.4 Document Aggregation
5.5 Single-file Command-line Document Processing
5.6 Multiple-file Command-line Document Generation
5.7 Extracting MathML
5.8 Style an XML Document in a Browser
5.9 Run a Custom Program
5.10 XInclude and Sign
5.11 Make Absolute URLs
5.12 A Simple Transformation Service
5.13 Service Request/Response Handling on a Handheld
5.14 Interact with Web Service (Tide Information)
5.15 Parse and/or Serialize RSS descriptions
5.16 XQuery and XSLT 2.0 Collections
5.17 An AJAX Server
5.18 Dynamic XQuery
5.19 Read/Write Non-XML File
5.20 Update/Insert Document in Database
5.21 Content-Dependent Transformations
5.22 Configuration-Dependent Transformations
5.23 Response to XML-RPC Request
5.24 Database Import/Ingestion
5.25 Metadata Retrieval
5.26 Non-XML Document Production
5.27 Integrate Computation Components (MathML)
5.28 Document Schema Definition Languages (DSDL) - Part 10: Validation Management
5.29 Large-Document Subtree Iteration
5.30 Adding Navigation to an Arbitrarily Large Document
5.31 Fallback to Choice of XSLT Processor
5.32 No Fallback for XQuery Causes Error
1 Introduction

A large and growing set of specifications describe processes operating on XML documents. Many applications will depend on the use of more than one of these specifications. Considering how implementations of these specifications might interact raises many issues related to interoperability. This specification contains requirements on an XML Pipeline Language for the description of XML process interactions in order to address these issues. This specification is concerned with the conceptual model of XML process interactions, the language for the description of these interactions, and the inputs and outputs of the overall process. This specification is not generally concerned with the implementations of actual XML processes participating in these interactions.
2 Terminology

An XML Information Set or "Infoset" is the name we give to any implementation of a data model for XML which supports the vocabulary as defined by the XML Information Set recommendation [xml-infoset-rec].
An XML Pipeline is a conceptualization of a flow through a configuration of steps and their parameters. The XML Pipeline defines a process in terms of the order, dependencies, or iteration of steps over XML information sets.
A pipeline specification document is an XML document that describes an XML pipeline.
A step is a specification of how a component is used in a pipeline that includes inputs, outputs, and parameters.
A component is a particular XML technology (e.g. XInclude, XML Schema Validity Assessment, XSLT, XQuery, etc.).
An input document is an XML infoset that is an input to an XML Pipeline or Step.
An output document is the result of processing by an XML Pipeline or Step.
A parameter is input to a Step or an XML Pipeline in addition to the Input and Output Document(s) that it may access. Parameters are most often simple, scalar values such as integers, booleans, and URIs, and they are most often named, but neither of these conditions is mandatory. That is, we do not (at this time) constrain the range of values a parameter may hold, nor do we (at this time) forbid a Step from accepting anonymous parameters.
The XML Pipeline Environment is the technology or platform environment in which the XML Pipeline is used (e.g. command-line, web servers, editors, browsers, embedded applications, etc.).
Streaming is the ability to parse an XML document and pass information items between components without building a full document information set.
3 Design Principles

The design principles described in this document are requirements with which compliance is an overall goal for the specification. It is not necessarily the case that a specific feature meets a given requirement; rather, the whole set of specifications related to this requirements document should meet the overall goal stated in the design principle.
Applications should be free to implement XML processing using appropriate technologies such as SAX, DOM, or other infoset representations.
Application computing platforms should not be limited to any particular class of platforms such as clients, servers, distributed computing infrastructures, etc. In addition, the resulting specifications should not be swayed by the specifics of use on those platforms.
The language should be as small and simple as practical. It should be "small" in the sense that simple processing can be stated in a compact way, and "simple" in the sense that the specification of more complex processing does not require arduous steps in the XML Pipeline Specification Document.
At a minimum, an XML document is represented and manipulated as an XML Information Set. The use of supersets, augmented information sets, or data models that can be represented or conceptualized as information sets should be allowed, and in some instances, encouraged (e.g. for the XPath 2.0 Data Model).
It should be relatively easy to implement a conforming implementation of the language but it should also be possible to build a sophisticated implementation that implements its own optimizations and integrates with other technologies.
An XML Pipeline must be able to be exchanged between different software systems with the expectation of the same result for the pipeline, given that the XML Pipeline Environment is the same. Platform differences in the binding or serialization of resulting infosets should be addressed by this specification or by re-use of existing specifications.
The XML Pipeline Specification Document should be able to be validated by both W3C XML Schema and RelaxNG.
XML Pipelines need to support existing XML specifications and reuse common design patterns from within them. In addition, there must be support for the use of future specifications as much as possible.
The specification should allow the use of any component technology that can consume or produce XML Information Sets.
An XML Pipeline must allow control over specifying both the inputs and outputs of any process within the pipeline. This applies to the inputs and outputs of both the XML Pipeline and the steps it contains. It should also allow for the case where there might be multiple inputs and outputs.
An XML Pipeline must allow control over both the explicit and implicit handling of the flow of documents between steps. When errors occur, it must be possible to handle them explicitly in order to allow alternate courses of action within the XML Pipeline.
4 Requirements

4.1 Standard Names for Component Inventory

The XML Pipeline Specification Document must have standard names for components that correspond to, but are not limited to, the following specifications [xml-core-wg]:
XML Base
XInclude
XSLT 1.0/2.0
XSL FO
XML Schema
XQuery
RelaxNG
4.2 Allow Defining New Components and Steps

An XML Pipeline must allow applications to define and share new steps that use new or existing components. [xml-core-wg]
4.3 Minimal Component Support for Interoperability

The specification must define a minimal inventory of components that are required to be supported in order to facilitate interoperability of XML Pipelines.
4.4 Allow Pipeline Composition

Mechanisms for XML Pipeline composition for re-use or re-purposing must be provided within the XML Pipeline Specification Document.
4.5 Iteration of Documents and Elements

XML Pipelines should allow iteration of a specific set of steps over a collection of documents and/or elements within a document.
4.6 Conditional Processing of Inputs

To allow run-time selection of steps, XML Pipelines should provide mechanisms for conditional processing of documents or elements within documents based on expression evaluation. [xml-core-wg]
4.7 Error Handling and Fall-back

XML Pipelines must provide mechanisms for addressing error handling and fall-back behaviors. [xml-core-wg]
4.8 Support for the XPath 2.0 Data Model

XML Pipelines must support the XPath 2.0 Data Model to allow support for XPath 2.0, XSLT 2.0, and XQuery as steps.
Note:
At this point, there is no consensus in the working group that minimal conforming implementations are required to support the XPath 2.0 Data Model.
4.9 Allow Optimization

An XML Pipeline should not inhibit a sophisticated implementation from performing parallel operations, lazy or greedy processing, and other optimizations. [xml-core-wg]
4.10 Streaming XML Pipelines

An XML Pipeline should allow for the existence of streaming pipelines in certain instances as an optional optimization. [xml-core-wg]
5 Use cases

This section contains a set of use cases that support our requirements and will inform our design. While the intent is to address all of the use cases listed in this document, the first version of the resulting specifications may not solve every one of them. Unsolved use cases may be addressed in future versions of those specifications.
To aid navigation, the requirements can be mapped to the use cases of this section as follows:
Note:
The above table is known to be incomplete and will be completed in a later draft.
5.1 Apply a Sequence of Operations

Apply a sequence of operations such as XInclude, validation, and transformation to a document, aborting if the result or an intermediate stage is not valid.
(source: [xml-core-wg])
5.2 XInclude Processing

Retrieve a document containing XInclude instructions.
Locate documents to be included.
Perform XInclude inclusion.
Return a single XML document.
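As a non-normative illustration only, these steps can be sketched in Python with the lxml library; the file names book.xml and included.xml are hypothetical.

    from lxml import etree

    doc = etree.parse("book.xml")    # retrieve a document containing XInclude instructions
    doc.xinclude()                   # locate the included documents and perform inclusion
    doc.write("included.xml", xml_declaration=True, encoding="UTF-8")   # a single XML document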
5.3 Parse/Validate/Transform

Parse the XML.
Perform XInclude.
Validate with Relax NG, possibly aborting if not valid.
Validate with W3C XML Schema, possibly aborting if not valid.
Transform.
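A non-normative sketch of these steps, again assuming Python and lxml; the schema and stylesheet file names are hypothetical, and assertValid is used to abort on invalid input by raising an exception.

    from lxml import etree

    doc = etree.parse("input.xml")                                # parse the XML
    doc.xinclude()                                                # perform XInclude

    etree.RelaxNG(etree.parse("schema.rng")).assertValid(doc)     # abort if not valid
    etree.XMLSchema(etree.parse("schema.xsd")).assertValid(doc)   # abort if not valid

    transform = etree.XSLT(etree.parse("style.xsl"))
    print(str(transform(doc)))                                    # transform and emit the result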
5.4 Document Aggregation

Locate a collection of documents to aggregate.
Perform aggregation under a new document element.
Return a single XML document.
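One possible, non-normative rendering of document aggregation in Python with lxml; the source file names and the "collection" wrapper element are hypothetical.

    from lxml import etree

    sources = ["a.xml", "b.xml", "c.xml"]              # the collection of documents to aggregate
    aggregate = etree.Element("collection")             # new document element
    for name in sources:
        aggregate.append(etree.parse(name).getroot())   # aggregate under the new root
    etree.ElementTree(aggregate).write("aggregate.xml", xml_declaration=True, encoding="UTF-8")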
5.5 Single-file Command-line Document Processing

Read a DocBook document.
Validate the document.
Process it with XSLT.
Validate the resulting XHTML.
Save the HTML file using HTML serialization.
5.6 Multiple-file Command-line Document Generation

Read a list of source documents.
For each document in the list:
Read the document.
Perform a series of XSLT transformations.
Serialize each result.
Alternatively, aggregate the resulting documents and serialize a single result.
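A non-normative sketch of the per-document variant in Python with lxml; the source document names, stylesheet names, and output naming convention are hypothetical.

    from lxml import etree

    sources = ["chap1.xml", "chap2.xml", "chap3.xml"]                # list of source documents
    transforms = [etree.XSLT(etree.parse(s)) for s in ("stage1.xsl", "stage2.xsl")]

    for name in sources:
        doc = etree.parse(name)                                      # read the document
        for transform in transforms:                                 # series of XSLT transformations
            doc = transform(doc)
        with open(name.replace(".xml", ".out.xml"), "wb") as f:      # serialize each result
            f.write(etree.tostring(doc, xml_declaration=True, encoding="UTF-8"))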
5.7 Extracting MathML

Extract MathML fragments from an XHTML document and render them as images. Employ an SVG renderer for SVG glyphs embedded in the MathML.
(source: [xml-core-wg])
5.8 Style an XML Document in a Browser

Style an XML document in a browser with one of several different stylesheets without having multiple copies of the document containing different xml-stylesheet directives.
(source: [xml-core-wg])
5.9 Run a Custom Program

Run a program of your own, with some parameters, on an XML file and display the result in a browser.
(source: [xml-core-wg])
5.10 XInclude and Sign

Process an XML document through XInclude.
Transform the result with XSLT using a fixed transformation.
Digitally sign the result with XML Signatures.
5.11 Make Absolute URLs

Process an XML document through XInclude.
Remove any xml:base attributes anywhere in the resulting document.
Schema validate the document with a fixed schema.
For all elements or attributes whose type is xs:anyURI, resolve the value against the base URI to create an absolute URI. Replace the value in the document with the resulting absolute URI.
This example assumes preservation of infoset ([base URI]) and PSVI ([type definition]) properties from step to step. Also, there is no way to reorder these steps as the schema doesn't accept xml:base attributes but the expansion requires xs:anyURI typed values.
5.12 A Simple Transformation Service

Extract an XML document (XForms instance) from an HTTP request body.
Execute XSLT transformation on that document.
Call a persistence service with the resulting document.
Return the XML document from the persistence service (a new XForms instance) as the HTTP response body.
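A non-normative sketch of such a service using Python's wsgiref and lxml; the stylesheet name is hypothetical, and the persistence service is reduced to a placeholder function.

    from wsgiref.simple_server import make_server
    from lxml import etree

    transform = etree.XSLT(etree.parse("transform.xsl"))

    def persist(doc):
        # Placeholder for the persistence service; a real pipeline would call an
        # external service and return its XML response (the new XForms instance).
        return doc

    def app(environ, start_response):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        instance = etree.fromstring(environ["wsgi.input"].read(length))   # extract the XForms instance
        response = persist(transform(instance))                           # transform, then persist
        body = etree.tostring(response, xml_declaration=True, encoding="UTF-8")
        start_response("200 OK", [("Content-Type", "application/xml")])
        return [body]                                                      # XML response body

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()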
5.13 Service Request/Response Handling on a Handheld

Allow an application on a handheld device to construct a pipeline, send the pipeline and some data to the server, allow the server to process the pipeline and send the result back.
(source: [xml-core-wg])
5.14 Interact with Web Service (Tide Information)

Parse the incoming XML request.
Construct a URL to a REST-style web service at the NOAA (see website).
Parse the resulting invalid HTML document by translating and fixing the HTML to make it XHTML (e.g. use TagSoup or tidy).
Extract the tide information from a plain-text table of data in the document by applying a regular expression and creating markup from the matches.
Use XQuery to select the high and low tides.
Formulate an XML response from that tide information.
5.15 Parse and/or Serialize RSS descriptions

Parse descriptions:
Iterate over the RSS description elements and do the following:
Gather the text children of the 'description' element.
Parse the contents with a simulated document element in the XHTML namespace.
Send the resulting children as the children of the 'description' element.
Apply the rest of the pipeline steps.
Serialize descriptions:
Iterate over the RSS description elements and do the following:
Serialize the children elements.
Generate a new text child containing the contents (escaped text).
Apply the rest of the pipeline steps.
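The two directions might be sketched, non-normatively, as follows in Python with lxml, assuming the escaped description content is well-formed; the feed file names are hypothetical.

    from lxml import etree

    XHTML_NS = "http://www.w3.org/1999/xhtml"

    def parse_descriptions(feed):
        # Turn the escaped markup inside each 'description' element into real children.
        for desc in feed.iter("description"):
            if desc.text and desc.text.strip():
                wrapper = etree.fromstring('<div xmlns="%s">%s</div>' % (XHTML_NS, desc.text))
                desc.text = None
                for child in list(wrapper):
                    desc.append(child)

    def serialize_descriptions(feed):
        # The inverse: serialize the children back to escaped text content.
        for desc in feed.iter("description"):
            markup = "".join(etree.tostring(child, encoding="unicode") for child in desc)
            for child in list(desc):
                desc.remove(child)
            desc.text = markup

    feed = etree.parse("feed.rss")
    parse_descriptions(feed)
    # ... the rest of the pipeline steps would run here ...
    serialize_descriptions(feed)
    feed.write("out.rss")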
5.16 XQuery and XSLT 2.0 Collections

XQuery and XSLT 2.0 include the notion of input and output collections. A pipeline must be able to consume or produce collections of documents, both as the inputs or outputs of individual steps and as the inputs or outputs of whole pipelines.
For example, for input collections:
Accept a collection of documents.
Apply a single XSLT 2.0 transformation that processes the collection and produces another collection.
Serialize the collection to files or URIs.
For example, for output collections:
Accept a single document as input.
Apply an XQuery that produces a sequence of documents (a collection).
Serialize the collection to files or URIs.
5.17 An AJAX Server

Receive an XML request with a word to complete.
Call a sub-pipeline that retrieves a list of completions for that word.
Format resulting document with XSLT.
Serialize response to XML.
5.18 Dynamic XQuery

Dynamically create an XQuery query using XSLT, based on an input XML document.
Execute the XQuery against a database.
Construct an XHTML result page using XSLT from the result of the query.
Serialize response to HTML.
5.19 Read/Write Non-XML File

Read a CSV file and convert it to XML.
Process the document with XSLT.
Convert the result to a CSV format using text serialization.
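A non-normative sketch in Python, using the csv module and lxml; the file names, the row/cell element names, and the process.xsl stylesheet are hypothetical, and the stylesheet is assumed to preserve the row/cell structure.

    import csv
    from lxml import etree

    # Read a CSV file and convert it to XML.
    root = etree.Element("rows")
    with open("input.csv", newline="") as f:
        for record in csv.reader(f):
            row = etree.SubElement(root, "row")
            for value in record:
                etree.SubElement(row, "cell").text = value

    # Process the document with XSLT.
    transform = etree.XSLT(etree.parse("process.xsl"))
    result = transform(etree.ElementTree(root))

    # Convert the result back to CSV (text serialization).
    with open("output.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for row in result.getroot().iter("row"):
            writer.writerow([cell.text or "" for cell in row.iter("cell")])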
5.20 Update/Insert Document in Database

Receive an XML document to save.
Check the database to see if the document exists.
If the document exists, update the document.
If the document does not exist, add the document.
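A non-normative sketch using Python's sqlite3 module in place of a real XML database; the table layout and the document identifier are hypothetical.

    import sqlite3
    from lxml import etree

    conn = sqlite3.connect("docs.db")
    conn.execute("CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, body TEXT)")

    def store(doc_id, doc):
        body = etree.tostring(doc, encoding="unicode")
        # Check the database to see if the document exists.
        if conn.execute("SELECT 1 FROM docs WHERE id = ?", (doc_id,)).fetchone():
            conn.execute("UPDATE docs SET body = ? WHERE id = ?", (body, doc_id))      # update
        else:
            conn.execute("INSERT INTO docs (id, body) VALUES (?, ?)", (doc_id, body))  # add
        conn.commit()

    store("report-1", etree.fromstring("<report>received</report>"))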
5.21 Content-Dependent Transformations

Receive an XML document to format.
If the document is XHTML, apply a theme via XSLT and serialize as HTML.
If the document is XSL-FO, apply an XSL FO processor to produce PDF.
Otherwise, serialize the document as XML.
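A non-normative sketch of the dispatch in Python with lxml; the theme stylesheet, the output paths, and the external FO processor invocation (e.g. Apache FOP assumed to be on the PATH) are assumptions.

    import subprocess
    from lxml import etree

    XHTML_ROOT = "{http://www.w3.org/1999/xhtml}html"
    XSLFO_ROOT = "{http://www.w3.org/1999/XSL/Format}root"
    theme = etree.XSLT(etree.parse("theme.xsl"))

    def format_document(doc, out_path):
        root = doc.getroot()
        if root.tag == XHTML_ROOT:                       # XHTML: apply the theme, serialize as HTML
            with open(out_path, "wb") as f:
                f.write(etree.tostring(theme(doc), method="html"))
        elif root.tag == XSLFO_ROOT:                     # XSL-FO: hand off to an FO processor
            doc.write("tmp.fo")
            subprocess.run(["fop", "tmp.fo", out_path], check=True)
        else:                                            # otherwise: serialize as XML
            doc.write(out_path, xml_declaration=True, encoding="UTF-8")

    format_document(etree.parse("incoming.xml"), "formatted.out")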
5.22 Configuration-Dependent Transformations

Mobile example:
Receive an XML document to format.
If the configuration is "desktop browser", apply desktop XSLT and serialize as HTML.
If the configuration is "mobile browser", apply mobile XSLT and serialize as XHTML.
News feed example:
Receive an XML document in Atom format.
If the configuration is "RSS 1.0", apply "Atom to RSS 1.0" XSLT.
If the configuration is "RSS 2.0", apply "Atom to RSS 2.0" XSLT.
Serialize the document as XML.
5.23 Response to XML-RPC Request

Receive an XML-RPC request.
Validate the XML-RPC request with a RelaxNG schema.
Dispatch to different sub-pipelines depending on the content of /methodCall/methodName.
Format the sub-pipeline response to XML-RPC format via XSLT.
Validate the XML-RPC response with a W3C XML Schema.
Return the XML-RPC response.
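A non-normative sketch of the dispatch logic in Python with lxml; the schema and stylesheet file names, the method name, and the sub-pipeline stub are hypothetical.

    from lxml import etree

    request_schema = etree.RelaxNG(etree.parse("xmlrpc-request.rng"))
    response_schema = etree.XMLSchema(etree.parse("xmlrpc-response.xsd"))
    to_xmlrpc = etree.XSLT(etree.parse("to-xmlrpc.xsl"))

    def echo_pipeline(request):
        # Stand-in for a real sub-pipeline.
        return etree.fromstring("<result>ok</result>")

    DISPATCH = {"demo.echo": echo_pipeline}

    def handle(request_bytes):
        request = etree.fromstring(request_bytes)
        request_schema.assertValid(request)                 # validate the XML-RPC request
        method = request.findtext("methodName")             # /methodCall/methodName
        intermediate = DISPATCH[method](request)            # dispatch to a sub-pipeline
        response = to_xmlrpc(intermediate)                  # format as XML-RPC via XSLT
        response_schema.assertValid(response)               # validate the XML-RPC response
        return etree.tostring(response, xml_declaration=True, encoding="UTF-8")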
5.24 Database Import/Ingestion

Import example:
Read a list of source documents.
For each document in the list:
Validate the document.
Call a sub-pipeline to insert content into a relational or XML database.
Ingestion example:
Receive a directory name.
Produce a list of files in the directory as an XML document.
For each element representing a file:
Create an iTQL query using XSLT.
Query the repository to check if the file has been uploaded.
Upload if necessary.
Inspect the file to check the metadata type.
Transform the document with XSLT.
Make a SOAP call to ingest the document.
5.25 Metadata Retrieval

Call a SOAP service with metadata format as a parameter.
Create an iTQL query with XSLT.
Query a repository for the XML document.
Load a list of XSLT transformations from a configuration.
Iteratively execute the XSLT transformations.
Serialize the result to XML.
5.26 Non-XML Document Production

A non-XML document is fed into the process.
That input is converted into a well-formed XML document.
A table of contents is extracted.
Pagination is performed.
Each page is transformed into some output language.
5.27 Integrate Computation Components (MathML)

Read a non-XML document.
Transform.
Select a MathML content element.
For that element, apply a computation (e.g. compute the kernel of a matrix).
Replace the input MathML with the output of the computation.
5.28 Document Schema Definition Languages (DSDL) - Part 10: Validation Management

This document provides a test scenario that will be used to create validation management scripts using a range of existing techniques, including those used for program compilation, etc.
The steps required to validate our sample document are:
Use ISO 19757-4 Namespace-based Validation Dispatching Language (NVDL) to split out the parts of the document that are encoded using HTML, SVG and MathML from the bulk of the document, whose tags are defined using a user-defined set of markup tags.
Validate the HTML elements and attributes using the HTML 4.0 DTD (W3C XML DTD).
Use a set of Schematron rules stored in check-metadata.xml to ensure that the metadata of the HTML elements defined using Dublin Core semantics conform to the information in the document about the document's title and subtitle, author, encoding type, etc.
Validate the SVG components of the file using the standard W3C schema provided in the SVG 1.2 specification.
Use the Schematron rules defined in SVG-subset.xml to ensure that the SVG file only uses those features of SVG that are valid for the particular SVG viewer available to the system.
Validate the MathML components using the latest version of the MathML schema (defined in RELAX NG) to ensure that all maths fragments are valid. The schema will make use of the datatype definitions in check-maths.xml to validate the contents of specific elements.
Use MathML-SVG.xslt to transform the MathML segments to displayable SVG and replace each MathML fragment with its SVG equivalent.
Use the ISO 19757-8 Document Schema Renaming Language (DSRL) definitions in convert-mynames.xml to convert the tags in the local nameset to the form that can be used to validate the remaining part of the document using docbook.dtd.
Use the ISO 19757-7 Character Repertoire Definition Language (CRDL) rules defined in mycharacter-checks.xml to validate that the correct character sets have been used for text identified as being Greek and Cyrillic.
Convert the Docbook tags to HTML so that they can be displayed in a web browser using the docbook-html.xslt transformation rules.
Each validation script should allow the four streams produced by step 1 to be run in parallel without requiring the other validations to be carried out if there is an error in another stream. This means that steps 2 and 3 should be carried out in parallel to steps 4 and 5, and/or steps 6 and 7, and/or steps 8 and 9. After completion of step 10 the HTML (both streams) and SVG (both streams) should be recombined to produce a single stream that can be fed to a web browser. The flow is illustrated in the following diagram:
5.29 Large-Document Subtree Iteration

Running XSLT on a very large document isn't typically practical. In these cases, it is often a particular element, possibly repeated many times, that needs to be transformed. Conceptually, a pipeline could limit the transformation to a subtree by:
Limiting the transform to a subtree of the document identified by an XPath.
For each matching subtree, cache the subtree, build a whole document with the identified element as the document element, run the transform, and replace that subtree in the original document with the result.
For any non-matches, the document remains the same and "streams" around the transform.
This allows the transform and the tree building to be limited to a small subtree and the rest of the process to stream. As such, an arbitrarily large document can be processed in a bounded amount of memory.
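A non-normative sketch of the bounded-memory iteration in Python with lxml's iterparse; the input file, the 'record' element name, and the per-subtree stylesheet are hypothetical, and copying the surrounding, non-matching content to the output is omitted.

    import copy
    from lxml import etree

    transform = etree.XSLT(etree.parse("item.xsl"))

    with open("out.xml", "wb") as out:
        # Only the current 'record' subtree is ever held in memory.
        for _, elem in etree.iterparse("huge.xml", events=("end",), tag="record"):
            subdoc = etree.ElementTree(copy.deepcopy(elem))    # build a document around the subtree
            out.write(etree.tostring(transform(subdoc)))       # transform and emit the replacement
            out.write(b"\n")
            elem.clear()                                       # free the processed subtree
            while elem.getprevious() is not None:
                del elem.getparent()[0]                        # drop already-seen siblings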
5.30 Adding Navigation to an Arbitrarily Large Document

For a particular website, every XHTML document needs to have navigation elements added to the document. The navigation is static text that surrounds the body of the document. This navigation is added by:
Matching the head and body elements using an XPath expression that can be streamed.
Inserting a stub for a transformation for including the style and surrounding navigation of the site.
For each of the stubs, transformations insert the markup using a subtree expansion that allows the rest of the document to stream.
In the end, the pipeline allows arbitrarily large XHTML documents to be processed at a near-constant cost.
(source: Alex Milowski)
5.31 Fallback to Choice of XSLT Processor

A step in a pipeline produces multiple output documents. In XSLT 2.0, this is a standard feature of all XSLT 2.0 processors. In XSLT 1.0, this is not standard.
A pipeline author wants to write a pipeline for which, at compile time, the implementation chooses XSLT 2.0 when possible and degrades to XSLT 1.0 when XSLT 2.0 is not supported. In the XSLT 1.0 case, the step will use XSLT extensions to support the multiple output documents, which again may fail. Fortunately, the XSLT 1.0 transformation can be written to test for this.
(source: Alex Milowski)
5.32 No Fallback for XQuery Causes Error

As the final step in a pipeline, an XQuery step must be run. If the XQuery component is not available, compilation of the pipeline needs to fail. Here the pipeline author has chosen that the pipeline must not run if XQuery is not available.
(source: Alex Milowski)
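A non-normative sketch of the intended behaviour: pipeline compilation checks component availability and fails fast. The component names and the registry are hypothetical.

    AVAILABLE_COMPONENTS = {"xinclude", "xslt-1.0"}      # what this processor happens to support

    def compile_pipeline(steps):
        for step in steps:
            if step not in AVAILABLE_COMPONENTS:
                # The author has chosen no fallback: refuse to compile the pipeline.
                raise RuntimeError("cannot compile pipeline: component %r is not available" % step)
        return steps

    compile_pipeline(["xinclude", "xslt-1.0", "xquery-1.0"])   # raises RuntimeError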