W3C

XML Binary Characterization Measurement Methodologies

W3C Working Draft 24 February 2005

This version:
http://www.w3.org/TR/2005/WD-xbc-measurement-20050224/
Latest version:
http://www.w3.org/TR/xbc-measurement
Editors:
Stephen D. Williams, Invited Expert
Peter Haggar, IBM Corporation

Abstract

This document describes measurement aspects, methods, caveats, test data, and test scenarios for evaluating the potential benefits of an alternate serialization for XML.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This is the First Public Working Draft of the XML Binary Characterization Measurement Methodologies Document. It has been produced by the XML Binary Characterization Working Group, which is part of the XML Activity.

This document is part of the series of documents on Properties. It helps evaluate aspects of XML and alternate encodings with regard to the Properties defined in the previous document. The remaining work of the XML Binary Characterization Working Group is to focus upon a common set of Properties and to explain the conclusions of its work in a Characterization document.

This is a First Public Working Draft and is expected to change. The XML Binary Characterization Working Group does not expect this document to become a Recommendation. Rather, after review and refinement, it will be published and maintained as a Working Group Note.

Comments on this document should be sent to public-xml-binary-comments@w3.org (public archives). It is inappropriate to send discussion emails to this address.

Discussion of this document takes place on the public mailing list public-xml-binary@w3.org (public archives).

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

Table of Contents

1 Introduction
2 Relationship to Use Case and Characterization Documents
3 Abstract Scenarios and Property Profiles
4 Test Data
5 Property Measurement Methodology
6 Property Measurement - Basic
    6.1 Properties Present in XML 1.1
        6.1.1 Boolean Properties
        6.1.2 Trinary Properties
    6.2 Properties Not Present in XML 1.1
        6.2.1 Boolean Properties
7 Property Measurement - Detailed Analysis
    7.1 Compactness
        7.1.1 Description
        7.1.2 Type & range
        7.1.3 Methodology
        7.1.4 Dependencies
        7.1.5 Known Tradeoffs
    7.2 Processing Efficiency
        7.2.1 Description
            7.2.1.1 Processing phase definitions
            7.2.1.2 Standard APIs vs. abstract operations
            7.2.1.3 Incremental Overhead
            7.2.1.4 Complexity
            7.2.1.5 Measurement Considerations
        7.2.2 Type & range
        7.2.3 Methodology
        7.2.4 Dependencies
        7.2.5 Known Tradeoffs
    7.3 Accelerated Sequential Access
        7.3.1 Description
        7.3.2 Type & range
        7.3.3 Methodology
        7.3.4 Dependencies
        7.3.5 Known Tradeoffs
    7.4 Efficient Update
        7.4.1 Description
        7.4.2 Type & range
        7.4.3 Methodology
        7.4.4 Dependencies
        7.4.5 Known Tradeoffs
    7.5 Embedding Support
        7.5.1 Description
        7.5.2 Type & range
        7.5.3 Methodology
        7.5.4 Dependencies
        7.5.5 Known Tradeoffs
    7.6 No Arbitrary Limits
        7.6.1 Description
        7.6.2 Type & range
        7.6.3 Methodology
        7.6.4 Dependencies
        7.6.5 Known Tradeoffs
    7.7 Generality
        7.7.1 Description
        7.7.2 Type & range
        7.7.3 Methodology
        7.7.4 Dependencies
        7.7.5 Known Tradeoffs
    7.8 Human Readable and Editable
        7.8.1 Description
        7.8.2 Type & range
        7.8.3 Methodology
        7.8.4 Dependencies
        7.8.5 Known Tradeoffs
    7.9 Content Type Management
        7.9.1 Description
        7.9.2 Type & range
        7.9.3 Methodology
        7.9.4 Dependencies
        7.9.5 Known Tradeoffs
    7.10 Integratable into the XML Stack
        7.10.1 Description
        7.10.2 Type & range
        7.10.3 Methodology
        7.10.4 Dependencies
        7.10.5 Known Tradeoffs
    7.11 Platform Neutrality
        7.11.1 Description
        7.11.2 Type & range
        7.11.3 Methodology
        7.11.4 Dependencies
        7.11.5 Known Tradeoffs
    7.12 Random Access
        7.12.1 Description
        7.12.2 Type & range
        7.12.3 Methodology
        7.12.4 Dependencies
        7.12.5 Known Tradeoffs
    7.13 Round Trip Support
        7.13.1 Description
        7.13.2 Type & range
        7.13.3 Methodology
        7.13.4 Dependencies
        7.13.5 Known Tradeoffs
    7.14 Signable
        7.14.1 Description
        7.14.2 Type & range
        7.14.3 Methodology
        7.14.4 Dependencies
        7.14.5 Known Tradeoffs
    7.15 Small Footprint
        7.15.1 Description
        7.15.2 Type & range
        7.15.3 Methodology
        7.15.4 Dependencies
        7.15.5 Known Tradeoffs
    7.16 Space Efficiency
        7.16.1 Description
        7.16.2 Type & range
        7.16.3 Methodology
        7.16.4 Dependencies
        7.16.5 Known Tradeoffs
8 References
9 Conformance to this document
    9.1 Conformance Criteria
    9.2 Normative Parts
    9.3 Guidelines Extensibility
    9.4 Conformance Claims

Appendices

A Test Data
    A.1 ANT
        A.1.1 Description
        A.1.2 Data characteristics
        A.1.3 Example data
        A.1.4 Source of data
        A.1.5 Industries
        A.1.6 Discussion
    A.2 Financial products Markup Language (FpML)
        A.2.1 Description
        A.2.2 Data characteristics
        A.2.3 Example data
        A.2.4 Source of data
        A.2.5 Industries
        A.2.6 Discussion
    A.3 Invoices 5K-900K
        A.3.1 Description
        A.3.2 Data characteristics
        A.3.3 Example data
        A.3.4 Source of data
        A.3.5 Industries
        A.3.6 Discussion
    A.4 Large documents
        A.4.1 Description
        A.4.2 Data characteristics
        A.4.3 Example data
        A.4.4 Source of data
        A.4.5 Industries
        A.4.6 Discussion
    A.5 RDF
        A.5.1 Description
        A.5.2 Data characteristics
        A.5.3 Example data
        A.5.4 Source of data
        A.5.5 Industries
        A.5.6 Discussion
    A.6 Seismic
        A.6.1 Description
        A.6.2 Data characteristics
        A.6.3 Example data
        A.6.4 Source of data
        A.6.5 Industries
        A.6.6 Discussion
    A.7 SOAP
        A.7.1 Description
        A.7.2 Data characteristics
        A.7.3 Example data
        A.7.4 Source of data
        A.7.5 Industries
        A.7.6 Discussion
    A.8 UBL 1.0
        A.8.1 Description
        A.8.2 Data characteristics
        A.8.3 Example data
        A.8.4 Source of data
        A.8.5 Industries
        A.8.6 Discussion
B Acknowledgements (Non-Normative)
C XML Binary Characterization Measurement Changes (Non-Normative)


1 Introduction

This document describes measurement aspects, methods, caveats, test data, and test scenarios for evaluating the potential benefits of an alternate serialization for XML. It relies on the XML Binary Characterization Working Group (XBC WG) documents for Use Cases and Properties. The focus of this document is to provide a basis for later comparison rather than to report actual measurements of actual implementations. The examined and potential use cases represent existing uses that might benefit from the use of an XML-like format, if it had certain additional properties. This potential expansion of the XML community depends on the existence, identification, and evolution of solutions that cover the broadest problem footprint in the best fashion. The XBC WG Characterization document represents the working group's consensus on required and useful properties. This document discusses how fulfillment of those properties can be precisely evaluated and how combinations of properties are best compared.

A given format in a given application situation may need to incorporate design tradeoffs that lower support for a particular property. Unless otherwise noted, the properties are written as positive requirements that are at least desirable.

2 Relationship to Use Case and Characterization Documents

Measurement of properties relies directly on Use Case needs. These needs are expressed in application-specific terms and context. The definition of properties in the Properties document, unified by common needs among use cases, provides the identification of measurement points, but additional information remains to be captured from the use cases. The primary additional information areas are the operational scenarios, representative test data, and the thresholds at which an aggregate solution might be worth significant adoption. Representation of these areas must initially be approximated and abstracted. The Use Case document details and summarizes the relationship between properties and use cases. The Characterization document represents decisions about thresholds of acceptability and ranking of properties.

3 Abstract Scenarios and Property Profiles

An abstract scenario is an idealized and simplified but fundamentally representative form of a use case. The industry use cases identified in the Use Cases document may involve multiple user use cases, which lead to abstract scenarios. Ideally, abstract scenarios include not only the situational description of the user use case but also a characterization of data manipulation patterns. These manipulation patterns should include when and how data is created, read, modified, transferred, and disposed of.

The analysis of use cases and abstract scenarios leads to a unified set of property profiles. Testing every combination of property presence and weighting is not feasible with limited resources and is not very useful. There is a high degree of correlation in the need for certain pairings of properties. This correlation can be used to create a small set of property profiles that cluster around certain types of problems. A property profile defines an active set of properties that are essential or desirable for a particular abstract scenario or abstract scenario family. The goal of defining and unifying these property profiles is to reduce the number of cases to be optimized for and tested. This approach also makes clear the simultaneous need for certain sets of properties.

The goal of the Abstract Scenarios in the Measurement Methodologies document is to catalog and unify the variability that is needed in realistic test suites. A test suite that is representative of the use cases must exercise appropriate combinations of this variability. The variability ranges describe particular aspects of an application environment as they affect processing of data that could be externalized as binary XML. At least one use case exists that requires each option, although not all possible combinations of variability are indicated.

"Lifecycle: touring modifications" means any time a document is created + transmitted and modified + transmitted one or more times. An example would be a form that is routed from person to person with each person filling in data and signing their portion. This pattern is particularly important when speed and security are needed simultaneously.

Some combinations of properties may be contradictory, especially with respect to certain design strategies. Some solutions may not support certain properties or simultaneous combinations of properties. Certain properties or combinations are comparable, sometimes only in one direction, to other properties. For instance, a lossless encoder can be compared to lossy encoders in an evaluation of efficiency with the option of lossiness, but not vice versa. A non-schema solution can be compared to schema-based solutions in all modes, but schema-based methods might not be comparable in property combinations that contraindicate schema-based encoding. These property combinations and application scenario details must be considered when planning test scenarios and when performing valid and useful format comparisons.

4 Test Data

Appropriate test data is crucial to understanding performance for all considered uses and circumstances. Data can be structure heavy, with many large tags, or data heavy. Data can be more uniform or more random. Data may benefit from generalized or application-specific compression or coding. Good test data simulates a variety of applications and supports broad testing of solutions.

Most format candidates and implementations will have some tunable parameters that affect which options are enabled and to what degree. It is impractical to test every combination of every parameter in such complex systems. To solve this assessment challenge, suitable edge and midpoint values must be chosen and various combinations iterated. Reports based on testing should highlight average, typical, and worst case performance with explanations as needed.

The Test Data Appendix details some example test data.

5 Property Measurement Methodology

The methodology used by this document includes two levels of property support measurement. The first, basic level provides a succinct screening of formats by thresholding properties. The threshold type is either boolean or trinary. A boolean measurement indicates whether the property is supported and is expected to perform better than, or in some cases the same as, XML 1.1. A trinary measurement records whether the format supports the property, does not prevent (DNP) the property, or prevents the property from being implemented. These thresholds are used with property ranking in the Characterization document to determine the relative importance of properties, which supports candidate format decision making.

This listing of properties is in the Property Measurement - Basic section.

The second, detailed measurement level for some properties is useful for in-depth comparisons of candidate formats to each other. Valid and useful comparison of formats is difficult for binary XML candidates because a large array of properties constrains solutions, which must simultaneously operate well on a broad range of data. Detailed measurement of properties naturally falls into different types and ranges of values. Some properties have one or more boolean membership values; others have categorical levels of compliance, or relative or absolute values. The success of fulfilling a property may depend on the data and usage scenario. Certain measurements, such as expected or actual performance of implementations and size of instances, require careful analysis. In most cases, design or configuration tradeoffs for one property will affect many others. In some cases, that influence will be strongly correlated. Additionally, a format may be tunable in hinted or automatic ways to favor different property goals. An example of this would be optimizing for speed vs. compactness with various possible ratios of speed and compactness. It is important to note that both compactness and processing efficiency are affected by the method of support for most other properties. Many other properties are only beneficial when they are supported in ways that allow good compactness and processing efficiency.

This detailing of selected properties is in the Property Measurement - Detailed Analysis section.

6 Property Measurement - Basic

The basic property measurement lists indicate which properties are boolean or trinary and which are present in XML 1.1. Each property can be measured by indicating its presence, or degree of presence, in a format. All properties are listed as Boolean (True or False compliance) or Trinary. Trinary properties have three compliance values: Supports, Does Not Prevent (DNP), and Prevents.

The property measurements identified by the working group are documented below. The degree to which these properties are required, and their importance, is documented in the Characterization document.
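For illustration only, a basic-level screening result could be recorded as in the following non-normative Java sketch; the class and property names are hypothetical.

  import java.util.EnumMap;
  import java.util.Map;

  public class BasicMeasurement {
      // The three trinary compliance values defined above.
      enum Trinary { SUPPORTS, DOES_NOT_PREVENT, PREVENTS }

      // Hypothetical property identifiers, used only for this sketch.
      enum Property { COMPACTNESS, RANDOM_ACCESS, SIGNABLE }

      public static void main(String[] args) {
          // Boolean properties: supported and expected to perform at least as well as XML 1.1.
          Map<Property, Boolean> booleanResults = new EnumMap<>(Property.class);
          booleanResults.put(Property.COMPACTNESS, true);

          // Trinary properties: supports, does not prevent (DNP), or prevents.
          Map<Property, Trinary> trinaryResults = new EnumMap<>(Property.class);
          trinaryResults.put(Property.RANDOM_ACCESS, Trinary.DOES_NOT_PREVENT);
          trinaryResults.put(Property.SIGNABLE, Trinary.SUPPORTS);

          System.out.println(booleanResults);
          System.out.println(trinaryResults);
      }
  }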

6.1 Properties Present in XML 1.1

XML 1.1 could be said to exhibit these properties. To varying degrees, documented in the Characterization document, these properties are required by use cases that might benefit from a binary XML format. Many of these properties are trivially present in a text-based XML format. The properties need to remain available in a binary XML format even though the mechanism is likely to be completely different. In some cases, the property is especially useful with certain binary XML format solutions.

6.1.2 Trinary Properties

Editorial note: sw, 21 February 2005

Properties below marked with "*" have weak membership in the set of "Present in XML 1.1, Does Not Prevent". The working group seems to have consensus that these properties are technically possible in XML 1.1 but that there are various inefficiencies. In some cases, such as Embedding Support, the property describes a common practice. In others, such as Accelerated Sequential Access, the group participants know only of methods that use external, non-XML data rather than something like an encoded block of data in a processing instruction. Efficient Update is supported in some sense in XML 1.1, as data can be inserted, for instance, but all data after the change must move, an overhead that another format could feasibly reduce.

Support for Deltas is a special case. The functionality of a Delta can be obtained using a high-level, logical method which can be expressed in XML. Another approach, possibly more efficient, is a low-level representation of byte-level changes. The former can be layered on XML 1.1 while the latter seems to require new low-level format semantics.

As a result of these nuances, full group agreement was not attained. The categorization of these properties is interpreted by some as leaning more toward not present in XML 1.1 than present.

6.2 Properties Not Present in XML 1.1

XML 1.1 does not exhibit these properties. Properties such as Compactness and Processing Efficiency (i.e. speed) are key drivers in the desire for a binary XML solution.

7 Property Measurement - Detailed Analysis

The detailed property measurements identified by or submitted to the working group are documented below. Detailed property descriptions may have the following descriptive sections: Description, Type & range, Methodology, Dependencies, and Known Tradeoffs.

A number of properties can be measured independent of other properties. The key properties that are at the root of the need for a successful binary XML format are Compactness and Processing Efficiency. These two properties directly depend on nearly every other property in the sense that most of the other properties are interesting mainly when they are supported while also having good compactness and processing efficiency. For example, it is not useful to have a method of random access if it makes instances bigger and slower than just parsing an XML 1.1 document.

7.1 Compactness

7.1.1 Description

The Compactness property measurement represents the amount of compression a particular format achieves when encoding an infoset. The degree of compactness achieved with a particular format is highly dependent on the input infoset, strategies enabled, and application characteristics. These characteristics should vary considerably to emulate all important use cases in order to properly measure the compactness property of each competing format. To objectively compare formats for their ability to represent infosets in a compact manner, competing measurements of various formats must be taken using the same scenario.

The amount of compactness achievable for a given XML document is a function of the document's size, structure, schema, regularity, and associated application data needs. Because XML documents exist with a wide variety of sizes, structures, schemas, regularity, and applications, it is not possible to define a single size threshold or compactness percentage that a binary XML format must achieve on XML documents to be considered sufficiently compact. The amount of compactness a binary XML format achieves will vary from one XML document to the next. Therefore, we define sufficient compactness relative to well known techniques for achieving compactness on XML data. These techniques set user expectations regarding achievable compactness and can be used to measure the compactness of a particular binary format relative to user expectations.

Editorial note: sw, 21 February 2005

The working group has not reached consensus on specific thresholds for this comparison.

A possible disadvantage of any compact encoding might be the additional computation required to generate or interpret and use the encoding. There is a tendency, exhibited by many space minimization strategies, for space efficiency to be inversely proportional to processing efficiency. If space efficiency is absolutely maximized, processing efficiency will decrease in most cases. Note that for many Abstract Scenarios, it is possible to improve both compactness and processing efficiency relative to the use of XML 1.1. It is desirable for a format to support the ability to control its space efficiency based on the need for processing efficiency, available memory, or other properties. For example, if the format is processed on a high-end server, the algorithm should be able to be tuned to obtain maximum processing efficiency by sacrificing memory efficiency. On the other hand, if the format is processed on low-end mobile handsets, the algorithm should be able to obtain maximum compactness by sacrificing processing efficiency. A key need is the ability to balance compactness with processing efficiency in a tunable way. Certain strategies, principally frequency-based dynamic analysis such as gzip compression, are more appropriate when size is the overriding concern. Given the constraints of simultaneously minimal size and processing overhead, methods such as tokenization with dictionary tables might be more successful.

Size efficiency, or compactness, concerns the optimization of the storage or transmission resources needed to represent an infoset. Several categories of methods are known to be useful. This section reviews major categories of methods and related topics, providing background for format analysis.

A data object, which is the representation of an infoset, consists of three logical components that usually have a physical representation. These are the data, the structural information, and metadata (including typing). For XML 1.1, the structure and metadata are represented by tag syntax and naming while data is mostly present in attribute values and element text. Some strategies for data representation remove some or all structural and metadata representation and place it in external metadata or embedded in code.

There are three categories of methods to reduce the size of a data object or infoset: compression, decimation, and externalization. Competitive formats may make use of one or more methods from each category. Compression is the transformation of data into corresponding data that takes less storage space, through the removal or reuse of redundant information and more efficient coding of data. Compression is often paired with decimation, the process of eliminating some details that are not used or are of less importance than other components of the original data. This is called "lossy compression" as opposed to "lossless compression".

Externalization is the process of representing an original infoset as an external representation with varying degrees of reuse and a data object that relies on that external instance as a source of redundancy. Besides the trivial replacement of an object with a reference, there are two main externalization methods: schema-based and delta-based. A schema-based method relies on a specification for certain aggregate data types, structure, and/or values. Trivially, this could mean sending and receiving code that simply writes and reads values in a certain order with no explicit structure. In this case, the structure and data type metadata is implicitly present in the code. More sophisticated methods rely on interface definition languages (IDLs) or the reuse of validation schemas such as XML Schema for externalization purposes. The use of these structural and metadata specifications may result in code generation and/or the production of metadata for use by an interpretive engine. Schema-based externalization usually has long-term schema reuse characteristics; it relies on long-term redundancy. This is compatible with some programming and lifecycle models, but can conflict with some application needs.

When the externalization method relies in a generalized way on representing differences from a template, parent object, or earlier message, it is called a delta. Delta mechanisms can be implemented at a high, logical-operations level or as a low-level byte or slot difference representation. Deltas can be produced by a computational differencing operation or by recording the location of changes as they happen. Deltas take advantage of both long-term and short-term redundancy.
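As a non-normative sketch of the low-level approach, the following Java fragment produces a delta by computational differencing, recording each run of differing bytes between two equal-length serializations; a real delta encoding would also have to handle length-changing insertions and deletions.

  import java.util.ArrayList;
  import java.util.List;

  public class ByteDelta {
      // One low-level change: the bytes at [offset, offset + data.length) become 'data'.
      static class Change {
          final int offset;
          final byte[] data;
          Change(int offset, byte[] data) { this.offset = offset; this.data = data; }
      }

      // Naive computational differencing over two equal-length buffers: each run of
      // differing bytes becomes one Change.
      static List<Change> diff(byte[] base, byte[] revised) {
          List<Change> changes = new ArrayList<>();
          int i = 0;
          while (i < base.length) {
              if (base[i] != revised[i]) {
                  int start = i;
                  while (i < base.length && base[i] != revised[i]) i++;
                  byte[] run = new byte[i - start];
                  System.arraycopy(revised, start, run, 0, run.length);
                  changes.add(new Change(start, run));
              } else {
                  i++;
              }
          }
          return changes;
      }

      public static void main(String[] args) {
          byte[] base    = "<a>old value</a>".getBytes();
          byte[] revised = "<a>new value</a>".getBytes();
          for (Change c : diff(base, revised)) {
              System.out.println("offset " + c.offset + ": " + new String(c.data));
          }
      }
  }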

In many cases, compression benefits from processing as much data as possible at the same time rather than considering individual fragments in isolation. This leads to processing models where a bulk compression or decompression step is performed. Generally, this leads to the data being inaccessible to application logic until all of the data is decompressed.

There are numerous methods of compression which rely on different methods of detecting redundancy and representing data. These methods sometimes have data access pattern needs and are generally good at compressing some data while having limited use on other data. Some popular methods include:

  • Stream compression
  • Block sorting compression
  • Run length coding
  • Linear quantization
  • Dictionary coding
  • Key compression
  • Huffman coding
  • Token tables
  • Arithmetic coding
  • Lempel-Ziv variants
  • Quadtree and similar subdivision methods
  • Frequency domain
  • Wavelet coding
  • Fractal coding

See the Compression FAQ for more information on these methods.
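As a concrete, non-normative illustration of frequency-based compression, the following Java sketch measures the size reduction achieved by gzip (a Lempel-Ziv variant paired with Huffman coding) using the standard java.util.zip API; the sample data is a trivial stand-in for the documents in the Test Data appendix.

  import java.io.ByteArrayOutputStream;
  import java.nio.charset.StandardCharsets;
  import java.util.zip.GZIPOutputStream;

  public class GzipRatio {
      public static void main(String[] args) throws Exception {
          // Repeated content supplies the redundancy that gzip exploits; very small
          // inputs can actually grow because of the fixed gzip header overhead.
          String xml = "<invoice>" + "<item qty='2'>widget</item>".repeat(200) + "</invoice>";
          byte[] original = xml.getBytes(StandardCharsets.UTF_8);

          ByteArrayOutputStream buffer = new ByteArrayOutputStream();
          try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
              gz.write(original);
          }
          byte[] compressed = buffer.toByteArray();

          // Percent smaller than the original serialization, as used in 7.1.2 below.
          double percentSmaller =
                  100.0 * (original.length - compressed.length) / original.length;
          System.out.printf("original=%d bytes, gzip=%d bytes, %.1f%% smaller%n",
                  original.length, compressed.length, percentSmaller);
      }
  }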

7.1.2 Type & range

For a given input document, this property is measured as a set of values [X, Lossy, Schema, Delta]. X represents the percent smaller the encoded version of the document is than the original. Lossy is True or False, with True indicating that the format uses a lossy compression scheme and False indicating a lossless scheme. (Lossy and Lossless are defined by the Round Trip Support measurement.) Schema is also True or False, with True indicating that the format achieves its compression via a schema-based encoding and False indicating that a schema-based encoding is not used. Delta is likewise True or False, with True indicating that the format achieves its compression via a delta-based encoding and False indicating that a delta-based encoding is not used.

XML 1.1 would measure as follows: [0%, False, False, False].
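For illustration, a test harness might record this tuple as in the following non-normative Java sketch (the class name is hypothetical):

  public class CompactnessResult {
      final double percentSmaller; // X: percent smaller than the original document
      final boolean lossy;         // True if a lossy compression scheme is used
      final boolean schema;        // True if compression relies on a schema-based encoding
      final boolean delta;         // True if compression relies on a delta-based encoding

      CompactnessResult(double percentSmaller, boolean lossy, boolean schema, boolean delta) {
          this.percentSmaller = percentSmaller;
          this.lossy = lossy;
          this.schema = schema;
          this.delta = delta;
      }

      @Override
      public String toString() {
          return String.format("[%.0f%%, %b, %b, %b]", percentSmaller, lossy, schema, delta);
      }

      public static void main(String[] args) {
          // XML 1.1 itself measures as [0%, false, false, false].
          System.out.println(new CompactnessResult(0, false, false, false));
      }
  }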

Editorial note: sw, 21 February 2005

Further discussion is expected on the form of this measurement. An additional viewpoint is noted here:

What about when more than one method is available? What about space vs. space+time balance? Perhaps the measurement could be represented as [20%, 3%, 70%, 95%] for an abstract scenario that allows lossiness, schema-based encoding, or deltas, which would mean: 20% for non-schema, 85% with 3% loss for lossy encoding, 70% for schema-based, and 95% for delta-based. Measuring lossiness quality is an issue in many cases, although certain deterministic measures do exist.

7.1.3 Methodology

When measuring the Compactness property, the encoder is not permitted to use prior knowledge about the semantics of information items used in the input document. For example, the encoder is not permitted to use specialized codecs to encode the contents of a specific element or attribute in the given instance document based on the name or location of that element or attribute.

A format that achieves compression without losing data and without the requirement to process the schema or reference document would be represented as [x%, False, False, False] where x is an integer value representing the amount of compression achieved. This enables formats that have higher compression numbers to be compared objectively with other formats, assuming the remaining three values are the same.

7.1.4 Dependencies

Compactness tends to have an inverse dependency relationship with Processing Efficiency, Small Footprint, and Space Efficiency.

7.1.5 Known Tradeoffs

High scores for this property may be at odds with higher scores for other properties, notably Processing Efficiency, Small Footprint, and Space Efficiency (see Dependencies above), since highly compact encodings typically require more processing and working memory.

7.2 Processing Efficiency

7.2.1 Description

Processing Efficiency is a measure of the efficiency, and effectively the speed, of processing an instance of a format. Determining the relative speed of different formats in a complete and valid way is difficult. This is because there are many variables that affect actual speed, including processing library implementation details that are not fundamentally required by the format. Ideally, different formats could be compared based on determination of their best reachable performance levels in all needed situations. In practice, this cannot be done with absolute accuracy. As a result, comparative evaluation must be accomplished by a combination of complexity analysis, processing characterization estimation, format characteristic analysis, fitness for all needed abstract scenarios, and actual empirical testing. It is important to stress that while empirical testing provides proof of obtaining at least a certain level of performance, by itself it proves little about whether better performance can be obtained for a particular format and abstract scenario. Complexity analysis tends to be able to provide better proof of the theoretical limits of performance, although this is not infallible in the face of unexpected algorithms. Additionally, in some cases complexity for multiple candidates may be, for example, linear and relative performance differences may be dominated by format or method details that affect overhead such as an extra level of indirection. As an example of a subtle but possibly dominant detail, one format may tend to allow better locality of reference in processing than another. With cache memory in modern systems running 25 or more times faster than main memory, a large subset of processing scenarios could perform better for the former format.

7.2.1.1 Processing phase definitions

Applications use data formats to communicate information or to store data for later use. XML 1.1, and presumably any binary XML candidate, provides an external data representation that is rich and flexible, along with other benefits. The use of XML tends to be better, overall, than more simplistic approaches for many applications. While it was not feasible to solve all efficiency problems during the creation of XML, ever-advancing experience and research on the problem have provided new insight. A format that solves these problems while retaining the benefits of XML 1.1, and possibly adding new benefits, aids existing applications and offers to greatly expand the range of applications which can justify the use of XML technology.

A key observation about the information technology industry is that the macroscopic separation of concerns at the operating system, programming language, protocol, service, application framework, or application level often constrains problem solving and optimization. In the past it was rare, for instance, for an application developer outside of an operating system vendor to cause changes in an operating system to solve performance problems. (A notable exception to this is the addition of facilities for direct access to SCSI command queuing in operating systems for the largest database vendors.) With respect to formats and performance, it has usually been the case that programming languages have been optimized for in-memory operations on native variables while data formats have been designed without prime consideration of processing complexity. Because of the pervasive need for modularization and network distribution of application components, any overhead in crossing the boundary between external format and memory representation is amplified. An application exists to accomplish actual work of some kind; any operations outside of that work are overhead. While much of the overhead in existing systems exists for a logical reason in a particular environment, when considering candidate formats for binary XML those reasons are a temporary artifact and immaterial. This means that it is important to analyze the effect of candidate format design decisions on existing and best possible processing complexity. The first step in this analysis is to define the processing phases involved in typical applications and determine variability points.

An application logic step is an operation that finds, traverses, reads, or modifies actual payload data in an instance. Processing that is overhead may include decompression, parsing, implied or required memory allocation or reference attachment, data binding, index maintenance, and schema retrieval and processing. Some candidate methods may involve other operations related to the use of schemas. Parsing is the conversion of a serialized form of data into a more readily usable form or into events with arguments (SAX et al.). Data binding can imply several levels. The simplest usable level, "structure without conversion", converts parse events into a data structure that captures all usable data and the usable structure of that data with no conversions. SAX and other parse/event engines are pure parsing engines. A DOM library implementation, when reading an XML 1.1 object, parses and produces an application-generic, XML-specific DOM data structure. The use of DOM is, in a semantic sense, equivalent in most cases to an application using SAX or similar for parsing and building an application-specific data structure from parse events. An application-specific data structure may be interpretive, "structure with conversion", or it may include representation of data values directly in native, 3GL (third generation language) constructs such as objects or structs, "native structure binding". In the case of non-native structures, format details may create overhead in application processing, such as insertion and deletion, which might be a tradeoff for other advantages. It might be that candidate formats have no substantial differences in how they present to application phases, in which case this analysis would be moot. A survey of possible candidates indicates some methods that may be beneficial.

7.2.1.2 Standard APIs vs. abstract operations

Numerous official, unofficial, and experimental application programming interfaces exist to process XML data. These APIs have provided valuable experience and have been an asset to application development environments. It is expected that any new format would be able to support existing APIs.

It has become apparent that there are certain design flaws in existing APIs in addition to a desire for features that simplify and streamline development. One example of a fundamental flaw that potentially affects performance is the "create object, fill object, link into tree" paradigm of the DOM API. Even if a format exists that supports minimal copying and coherent data, this API forces multiple copies, fragmented representation, and/or data reordering. Additionally, the new industries, data, and application types made possible by a successful binary XML format may require processing that is beyond traditional XML operations. This indicates that new APIs will be experimentally proposed and that valid evaluation of candidate formats must involve an abstract representation of scenario operations that can be translated to the best available API.

7.2.1.3 Incremental Overhead

One aspect of a format is whether it allows and supports the ability to operate efficiently, so that processing is linear in the application logic steps rather than in the size or data complexity of the instance. It is often desirable for processing complexity to be related to the work needed rather than to the size or complexity of the data. Size refers to the number of bytes taken by the instance. Data complexity refers to the granularity of XML-visible objects such as elements and attributes. A format that supports incremental overhead is fast for a single operation on an instance of any combination of size or complexity. While many applications desire this characteristic, it is not an independent property of the format because it is a meta-property of other properties such as Random Access and Efficient Update. If a format supports incremental overhead in a partial or complete way, then certain properties operate incrementally.

While not measured as an independent property, this section provides some guidance when examining the presence of incremental overhead. The degrees of support for Incremental Overhead are expressed in terms of cost of use vs. size/complexity of an instance. The overhead of moving raw data as an efficient block copy is assumed. After parsing and data binding, data is accessed in an application through three main methods:

  • Data is in native data structures (object member variables, strings, scalars) that are accessed directly.
  • Data is navigated to through a standard structure (like DOM) by an intermediating library but final value access is through native data structures.
  • Data and structure are maintained in an application opaque manner by a library that fully intermediates access and modification.

The differences in these approaches can be large and are affected by specific choices in format, implementation constraint, and API. All of these choices can affect efficiency. Minor differences are frequently not useful, but algorithmic complexity measures and performance validation can be very indicative. One important tradeoff is native access + linear or worse to size/complexity at one extreme vs. fully intermediated access and little or no overhead relative to size/complexity. Fully valid comparisons of this spectrum of approaches must include algorithmic complexity, logical analysis, characteristic analysis such as modeling locality of reference, and end-to-end and end-to-middle/middle-to-end measurements of available implementations.

The measurement of Incremental Overhead includes a category classification and an indication of algorithmic complexity (in O(n) or relative to linear P^2 notation). An example might be: "linear/no cost, O(P)*4, O(S/1000)".

Incremental Overhead degree categories:

  • "linear factor to use, no cost for data size/complexity"
  • "linear factor to use, limited cost linear to size/complexity"
  • "linear to use, linear to size/complexity"

7.2.1.4 Complexity

Algorithmic complexity relates to the fundamental theoretical performance characteristics of an algorithm. Although particular measurements of different algorithms on the same data may be useful, without understanding the algorithmic complexity of the algorithms involved, the comparison is not known to be valid in all cases. Each algorithm has scaling characteristics that are related to various kinds of overhead, startup, and input/output data related operations. The relationship of the size and complexity of input/output data vs. the performance of the algorithm is represented as a formula that consists of linear and nonlinear factors plus constant factors. Typically, algorithmic complexity is expressed as operations on 'n' which represents the input size, count, or complexity. The following illustrates the value of considering algorithmic complexity with the example of random access support in a format.

Let's assume one wants to access a random element out of an XML document with one million elements. On average, the code will have to examine (parse, read, etc.) 500,000 elements. More generally, the time it takes to access any element out of an n-element document is proportional to n. It might be n/2, a slow implementation might be 2*n, and a fast one n/4, but fundamentally the complexity cost is tied to n.

A format which implements random access, however - in the sense that an index table is included in the format itself - can provide access to the nth element in time proportional to the log of n, or even in constant time, depending on how the index works. Again, there are various constant factors which may vary between implementations.

As n gets larger, it is always bigger than log(n) and bigger than 1, no matter what the constant factors are. Thus one can reason about the relative performance of the format for certain operations without ever measuring any implementations. On the other hand, if one is interested in improving only the constant factor, then one must measure implementations, with all the difficulties that topic involves.
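To make this reasoning concrete, the following non-normative Java sketch contrasts the two access patterns; the string list and hash-based index are hypothetical stand-ins for a parsed document and an in-format index table, scaled down to 100,000 elements.

  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  public class AccessComplexity {
      public static void main(String[] args) {
          int n = 100_000; // the discussion above uses one million; scaled down here
          List<String> elements = new ArrayList<>();
          Map<String, Integer> index = new HashMap<>();
          for (int i = 0; i < n; i++) {
              String name = "e" + i;
              elements.add(name);
              index.put(name, i); // built once, analogous to an index table in the format
          }

          String target = "e" + (n / 2);

          // Unindexed access: examines elements until the target is found -- O(n), ~n/2 on average.
          int scanned = 0;
          for (String e : elements) {
              scanned++;
              if (e.equals(target)) break;
          }

          // Indexed access: cost independent of n -- constant time here, O(log n) for tree indexes.
          int position = index.get(target);

          System.out.println("linear scan examined " + scanned + " elements");
          System.out.println("index lookup went straight to position " + position);
      }
  }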

For example, DOM defines an API which supports random access in the sense that nodes do not need to be accessed in order. However, because of how XML is defined, random access via a DOM API still takes time proportional to n - the size of the document. DOM over XML does not support random access in the sense which is used here, namely, better-than-linear access time.

That said, the DOM API most likely could be implemented over a different file format to provide true random access; such an implementation would make use of the index included in the file. This continues down the stack: if the file is stored on tape, which does not support random access, then the benefits of the file format will still not be achieved.

7.2.1.5 Measurement Considerations

The amount of increase in processing speed is dependent on the input documents used for testing. Therefore, to objectively compare formats for their inherent processing speed, competing measurements using the same set of input documents must be taken. These documents should vary in size and complexity to generate a set of results. In addition, normal performance profiling steps need to be followed. These include, but are not limited to, constructing a proper test environment with stable machines and software, utilizing a private network, and providing proper "warm-up" for adaptively compiled systems like Java. This requires the use of an appropriate set of Abstract Scenarios, Property Profiles, and Test Data.

Any algorithm used for this measurement should have a theoretical runtime of no more than O(n). However, this measurement alone cannot be used to effectively determine the speed of the algorithm. It is possible that two algorithms with O(n) runtimes could have vastly different performance characteristics if, for example, one algorithm used 100 cycles per byte processed, while the other used 500 cycles per byte processed. Both algorithms would be O(n), but result in vastly different performance measurements.
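For example, a minimal, non-normative timing harness for an adaptively compiled platform such as Java might look like the sketch below; parseOnce is a hypothetical placeholder for the candidate parser invocation.

  public class ParseBenchmark {
      // Hypothetical placeholder for parsing one test document in the format under test.
      // A real harness must consume the result so the optimizer cannot discard the work.
      static void parseOnce() {
          // ... invoke the candidate parser here ...
      }

      public static void main(String[] args) {
          // Warm-up: let the adaptive (JIT) compiler optimize the hot path before measuring.
          for (int i = 0; i < 10_000; i++) {
              parseOnce();
          }

          int iterations = 100_000;
          long start = System.nanoTime();
          for (int i = 0; i < iterations; i++) {
              parseOnce();
          }
          long elapsed = System.nanoTime() - start;
          System.out.printf("%.2f microseconds per parse%n",
                  elapsed / 1000.0 / iterations);
      }
  }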

7.2.2 Type & range

For a given Abstract Scenario, Property Profile, and Test Data test scenario, this property is measured in several different ways:

  1. Serialization - The time it takes to generate the alternate format.
  2. Format parsing - The time it takes to parse the alternate format.
  3. Data binding - The time it takes to create the application data model from the data contained in the alternate format.
  4. Abstract processing steps - application operations such as creation, access, read, insertion, deletion, etc. This measurement aspect will only be relevant for certain Abstract Scenarios. In some scenarios, this aspect controls the others and other properties. For instance, when testing random access with incremental overhead, it is important to determine the minimum incremental parsing and data binding required.

Each measurement is recorded as a percentage faster than a standard text-based alternative. Therefore, for a particular format, the measurement results might be: 73%, 250%, 48%, 5%, representing the speedup for serialization, format parsing, data binding, and processing steps, respectively.

7.2.3 Methodology

Measurements must be taken as follows:

  1. Use a specific set of input documents and a text-based XML implementation to generate a baseline.
  2. Using the same input documents, generate the alternate format and compare the results of the four tests when using an implementation of the alternate format.

This will allow a fair comparison between various alternate formats to determine their processing efficiency differences.
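As a non-normative illustration, the comparison reduces to computing the percentage speedup of each measurement against the text-based baseline. The hypothetical timings below reproduce the example figures of 73%, 250%, 48%, and 5% given in 7.2.2, assuming "X% faster" means the baseline time exceeds the candidate time by X% of the candidate time.

  public class SpeedupReport {
      // Percentage faster than the baseline, under the interpretation stated above.
      static double percentFaster(double baselineMillis, double candidateMillis) {
          return 100.0 * (baselineMillis - candidateMillis) / candidateMillis;
      }

      public static void main(String[] args) {
          // Hypothetical timings (ms) chosen to reproduce the example figures in 7.2.2.
          double[] baseline  = { 173, 350, 148, 105 }; // text-based XML implementation
          double[] candidate = { 100, 100, 100, 100 }; // alternate format implementation
          String[] labels = { "serialization", "format parsing", "data binding", "processing steps" };

          for (int i = 0; i < labels.length; i++) {
              System.out.printf("%s: %.0f%% faster%n",
                      labels[i], percentFaster(baseline[i], candidate[i]));
          }
      }
  }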

7.2.4 Dependencies

Processing Efficiency has a correlated relationship with Small Footprint and Space Efficiency and an inverse relationship with Compactness. Additionally, this property can be considered a measurement of the processing efficiency for most other properties.

7.2.5 Known Tradeoffs

High scores for this property may be at odds with higher scores in the following properties:

  • Compactness: Compact encodings can cost extra cycles to interpret and expand compared to simpler formats.
  • Human Readable and Editable: Precise structural information needed for efficiency, even if represented in text, tends to make a format less human readable and editable.
  • Support for Error Correction: Requires processor to potentially detect and correct errors, therefore reducing processing speed.

7.3 Accelerated Sequential Access

7.3.1 Description

The objective of Accelerated Sequential Access is to reduce the amount of time required to access XML infoset items in a document. The fundamental measurement is therefore the average time needed to access an XML infoset item. This time can be compared to a baseline measurement of the average time needed to access an XML infoset item using an unaccelerated sequential access method like that used to implement SAX.

T(ix) - time to create a sequential index, if used (fixed)
T(sk) - time to seek an infoset item (average)
T(am) - total time for all accesses over the document. This time amortizes the cost of T(ix) over the average number of total seeks (ns).

Not all accelerated sequential access methods use a sequential index and incur T(ix). In that case it is only necessary to compare the average T(sk) for the unaccelerated case against the accelerated one.

If accelerated sequential access supports update of the sequential index, this cost should also be taken into account.

T(up) - time to update the sequential index.

T(up) should also be added to T(am) for the average number of total updates (nu).

T(am) = T(ix) + ns ( T(sk) ) + nu ( T(up) )

For the baseline, unaccelerated sequential access case we consider only T(sk) for the average total number of seeks (ns).

T(am) = ns ( T(sk) )

Example:

For an implementation of accelerated sequential access to XML:

  T(ix)  5.00ms
  T(sk)  3.50ms
  T(up)  3.00ms
  ns  1000
  nu    50
  
  T(am) = 5 + 1000 ( 3.5 ) + 50 ( 3.0 ) = 3655

For unaccelerated sequential access:

  T(sk)  4.00ms
  
  T(am) = 1000( 4 ) = 4000
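The worked example can be reproduced with the following non-normative sketch of the amortized-cost formula:

  public class AmortizedAccessCost {
      // T(am) = T(ix) + ns * T(sk) + nu * T(up), with all times in milliseconds.
      static double amortized(double tIx, double tSk, double tUp, int ns, int nu) {
          return tIx + ns * tSk + nu * tUp;
      }

      public static void main(String[] args) {
          double accelerated   = amortized(5.0, 3.5, 3.0, 1000, 50); // 3655.0
          double unaccelerated = amortized(0.0, 4.0, 0.0, 1000, 0);  // 4000.0
          System.out.println("accelerated:   " + accelerated);
          System.out.println("unaccelerated: " + unaccelerated);
      }
  }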

Accelerated sequential access may have resource costs which can impact system performance. A more comprehensive model would be needed to take these into account in a full assessment of the comparative benefit of an accelerated sequential access implementation. As an approximation, an implementation which produces lower numbers for the following resource costs will perform better than an implementation with the same T(am) but with higher resource costs:

  1. Memory consumption for sequential index structure
  2. Cost in bandwidth utilization for I/O and transport of the sequential index if persisted
  3. Cost of persistent store, if the sequential index structure is persisted

7.3.2 Type & range

7.3.3 Methodology

7.3.4 Dependencies

7.3.5 Known Tradeoffs

7.4 Efficient Update

7.4.1 Description

The Efficient Update property is concerned with whether a format instance can be modified efficiently without being completely rebuilt. When a format is designed with efficient update as a constraint, it will tend to be apparent that this is possible. When this was not planned for, it is still possible that a processor could implement an efficient update capability. In the latter case, an evaluation of the format must determine if there are features that prevent or assist such implementation. As the property description notes, this property is somewhat related to support for Deltas.

There are three aspects under which this property should be evaluated:

  1. Efficiency of update: This is the time and complexity required to apply the changes, starting from the original serialization up until the updated serialization is produced.
  2. Efficiency of retrieval: This is the time required to retrieve a (possibly) modified value.
  3. Compactness: This is the additional space required for the application of an update or the typical overhead of supporting different kinds of changes to a format instance. In the existence-proof example, inserting a new element might be efficient because it might just result in an append to the file, while inserting characters in a large text might cause a new chunk to be allocated at the end of the file and the old chunk to become an unused block. While the block could be reused, just as with malloc, mitigating the cost, it is still a potential inefficiency.

7.4.2 Type & range

Evaluations of candidate formats that implement this property will produce three percentage values and a standard deviation. For update and retrieval, these are positive or negative percentages of improvement relative to a comparison XML 1.1 solution. For compactness, the percentage is the overhead over a linear creation of an instance with the same data in the candidate format, along with an estimated (for analytical) or actual (for empirical) standard deviation.
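For illustration, the following non-normative Java sketch derives an improvement percentage and a standard deviation from hypothetical per-trial timings:

  public class UpdateMeasurement {
      // Percentage improvement of the candidate over the comparison XML 1.1 solution.
      static double percentImprovement(double xmlMillis, double candidateMillis) {
          return 100.0 * (xmlMillis - candidateMillis) / xmlMillis;
      }

      // Population standard deviation of the per-trial percentages.
      static double stdDev(double[] values) {
          double mean = 0;
          for (double v : values) mean += v;
          mean /= values.length;
          double sumSq = 0;
          for (double v : values) sumSq += (v - mean) * (v - mean);
          return Math.sqrt(sumSq / values.length);
      }

      public static void main(String[] args) {
          // Hypothetical timings (ms) for the same update in XML 1.1 and the candidate.
          double[] xml       = { 100, 110, 95, 105 };
          double[] candidate = { 58, 60, 57, 61 };

          double[] improvement = new double[xml.length];
          for (int i = 0; i < xml.length; i++) {
              improvement[i] = percentImprovement(xml[i], candidate[i]);
              System.out.printf("trial %d: %.1f%% improvement%n", i + 1, improvement[i]);
          }
          System.out.printf("standard deviation: %.2f%%%n", stdDev(improvement));
      }
  }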

7.4.3 Methodology

The measurement for this property is by inspection of format specification, logical analysis, and empirical testing of test scenarios based on Abstract Scenarios that call for Efficient Update.

7.4.4 Dependencies

This property does not depend on other properties. It does have a weak relationship with Deltas based on solving similar problems.

7.4.5 Known Tradeoffs

The ability to support efficient updates in the direct, complete sense tends to imply compactness measures that are not monolithic and a mechanism for growing or shrinking data without requiring repositioning for all data following the change. Solutions for these tradeoffs will likely focus on differing granularity and may be tunable.

7.5 Embedding Support

7.5.1 Description

Measures the degree to which a format supports embedding of files of arbitrary type within serialized content.

7.5.2 Type & range

This property is measured along an integer scale from [0,6], where zero indicates no embedding support and six indicates the greatest possible degree of embedding support.

7.5.3 Methodology

This property is measured by counting which of the following statements are true of the format, based on that format's specification (a sketch applying this count follows the list):

  1. Provides structures or elements in which data of arbitrary type and reasonable size can be stored by virtue of the flexibility of the format.
  2. Provides well-known points at which data of arbitrary type can be embedded.
  3. Provides for the existence and management of metadata about embedded files.
  4. Provides the ability to include or exclude embedded files from signatures over the file.
  5. Provides the ability to include or exclude embedded files when (partially) encrypting the file.
  6. Provides the ability to compress the contents of the embedded file.
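
For illustration, the [0,6] score can be derived by counting the true statements, as in the following non-normative Java sketch; the evaluations shown are hypothetical.

  public class EmbeddingScore {
      public static void main(String[] args) {
          // Hypothetical evaluation of the six statements above for some format.
          boolean[] statements = {
              true,   // 1. arbitrary data storable by virtue of format flexibility
              true,   // 2. well-known embedding points
              false,  // 3. metadata about embedded files
              false,  // 4. include/exclude embedded files from signatures
              false,  // 5. include/exclude embedded files when encrypting
              true    // 6. compression of embedded file contents
          };
          int score = 0;
          for (boolean s : statements) {
              if (s) score++;
          }
          System.out.println("embedding support score: " + score + " of 6");
      }
  }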

The measurement levels resulting from this analysis are:

  1. Doesn't support
  2. Supports to some extent
  3. Supports well

7.5.4 Dependencies

Support for signing (statement 4) and encryption (statement 5) is dependent on an underlying format which supports the Signable and Encryptable properties.

7.5.5 Known Tradeoffs

A format which supports embedding must make weaker guarantees regarding the Human Readable and Editable property, since it forgoes control over the contents of the embedded files.

7.6 No Arbitrary Limits

7.6.1 Description

The degree to which the format avoids inherent limits is characterized as: no inherent limits; few limits (e.g., only on unreasonably large names); or many limits (e.g., fixed lengths, small tables).

Experience has shown that arbitrary limits in the design of reusable systems must be carefully scrutinized for the probability of future conflicts. As computing limitations have repeatedly been surpassed in short order and technology has been put to innovative uses, decisions that turned out to be short-sighted have led to painful migration. This property provides a rough measure with which to compare different approaches.

7.6.2 Type & range

The range of this measurement is membership in a category. These categories are: "no inherent limits", "few limits", and "many limits".

7.6.3 Methodology

The measurement for this property is by inspection of format specification and logical analysis.

7.6.4 Dependencies

This property does not depend on other properties.

7.6.5 Known Tradeoffs

Each type of flexibility in a data format can be a tradeoff between efficiency in the expected typical case and the ability to handle cases that are not expected to be encountered. In many cases in the past, seemingly sensible choices have not aged well with increases in computing capacity and new uses of technology.

7.7 Generality

7.7.1 Description

Measures the degree to which a format is competitive with alternatives across a diverse range of data, applications and use cases.

Generality is, in part, a function of the format's ability to optimize for application-specific criteria and use cases. For example, some applications need to maximize compactness and are willing to give up some speed and processing resources to achieve it, while others need to maximize speed and are willing to give up some compactness. Similarly, some applications require all the information contained in a document and are willing to give up some compactness to preserve it. Other applications are willing to discard certain information items in a document to achieve higher compactness.

Generality is also a function of the optimizations the format includes for efficiently representing documents of varying size and structure. For small, highly structured documents, a format informed by schema analysis will generally produce more compact encodings than a format informed solely by document analysis (e.g. generic compression software). For larger, more loosely structured documents, a format informed by document analysis techniques will generally produce more compact encodings than a format solely informed by schema analysis. A format informed by both schema analysis and document analysis will generally produce more compact encodings across a broader range of documents than a format that only includes one of these techniques.

7.7.2 Type & range

This property is measured along an integer scale in the range [0, 20], where a zero indicates a very specialized format that applies narrowly to a small set of data, applications, and use cases and 20 indicates a very general format that applies to a wide range of data, applications, and use cases.

7.7.3 Methodology

This property is measured by counting the number of statements below that are true of the format, based on inspection of the format specification and objective analysis of compactness results over a wide range of XML documents with varying size and structure. Statements designated as [optional] will broaden the applicability of a binary XML file format, but are not required for that format to be considered sufficiently general. The statements are organized into sections for readability.

Flexible schema analysis optimizations

  • Can represent documents without a schema
  • Can represent documents that include elements and attributes not defined in the associated schema (i.e., open content)
  • Can represent any schema-invalid document
  • Can leverage available schema information to improve compactness, processing speed, and resource utilization
  • Can leverage available schema information to improve compactness, processing speed, and resource utilization even when documents contain elements and attributes not defined in the schema
  • Can leverage available schema information to improve compactness, processing speed, and resource utilization for any schema-invalid document.

Flexible document analysis optimizations

  • Can leverage document analysis to improve compactness
  • Can suppress document analysis to increase speed and reduce resource utilization
  • [optional] Can adjust document analysis to meet application performance and resource utilization criteria
  • Can structure the binary XML stream to increase net compactness when off-the-shelf compression software is built into the communications infrastructure

Flexible fidelity optimizations

  • [optional] Supports high fidelity XML representations that preserve an exact copy of the original XML document, including all whitespace and formatting
  • Supports reduced fidelity XML representations that preserve all infoset items, but discard whitespace and formatting to improve compactness
  • Supports reduced fidelity XML representations that preserve all information needed by a particular application, but discard specified information items that are not needed (e.g., comments and processing instructions) to improve compactness
  • Supports reduced fidelity XML representations that preserve the logical structures and values of an XML document, but discard lexical and syntactic constructs to improve compactness

Competes with frequency based compression

  • Can consistently produce XML representations that are close to the same size or smaller than XML documents compressed using gzip
  • Can consistently produce more compact XML representations than XML documents compressed using gzip
  • Can consistently produce more compact XML representations than binary XML documents created with document analysis suppressed, then compressed using gzip

Competes with schema based encodings and hand optimized formats

  • Can consistently produce XML representations that are close to the same size or smaller than the equivalent ASN.1 PER encoding
  • Can consistently produce XML representations that are more compact than the equivalent ASN.1 PER encoding
  • [optional] Can consistently produce XML representations that are more compact than the equivalent ASN.1 PER encoding compressed using gzip
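
As an illustration only, the following minimal Python sketch (not part of the normative methodology) tallies a Generality score from an evaluator's per-statement judgments. The abbreviated statement names and the sample True/False values are hypothetical.

# Illustrative tally for the Generality score. Keys are abbreviated
# stand-ins for the twenty statements above; the judgments shown here
# are hypothetical sample values.
judgments = {
    "represents documents without a schema": True,
    "represents open content": True,
    "represents any schema-invalid document": False,
    "leverages schema information when available": True,
    # ... one entry for each remaining statement in the lists above
}

score = sum(judgments.values())  # each True statement adds one point
print(f"Generality: {score} on the [0, 20] scale")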

7.7.4 Dependencies

7.7.5 Known Tradeoffs

High scores for this property may be at odds with high scores for the Small Footprint property. Some implementation approaches for supporting a broad range of data, applications, and use cases may require larger amounts of code.

7.8 Human Readable and Editable

7.8.1 Description

Measures the degree to which a format is or must be humanly readable and editable.

7.8.2 Type & range

This measurement is a pair of integers <m,n>, each on the scale [0,5]. The first number indicates the degree to which a file in a format may be humanly readable and editable; the second number indicates the degree to which a file in a format must be so. Thus, the greater the difference between the two numbers the greater the degrees of freedom given to the file's creator with respect to this property.

7.8.3 Methodology

Each item in the following list of statements is evaluated to determine whether it is never true, may be true, or is always true of a file created according to the format's specification; a tally sketch follows the list of statements.

  1. If the statement is never true of this format, no points are assigned;
  2. If the statement may be true, then one point is added to the first number of the score;
  3. If the statement is always true, then one point is added to both numbers of the score.

(Note: the first number in the score is therefore always greater than or equal to the second number.)

  1. Uses a regular and explicit structure.
  2. Uses only text, avoiding the use of compression or magic numbers.
  3. For any given type of information (e.g., specifying a character encoding) uses a unique encoding mechanism.
  4. Is self-contained.
  5. Maintains the locality of items per their relative positions in the data model.
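
A minimal, non-normative Python sketch of the <m,n> scoring rules above; the sample judgments are hypothetical.

# Illustrative tally for the <m,n> Human Readable and Editable score.
# Each statement is judged "never", "may", or "always" true of files
# in the format; the judgments below are hypothetical sample values.
judgments = {
    "regular and explicit structure": "always",
    "text only, no compression or magic numbers": "may",
    "unique encoding mechanism per information type": "always",
    "self-contained": "never",
    "locality per the data model": "may",
}

m = sum(j in ("may", "always") for j in judgments.values())  # may be readable
n = sum(j == "always" for j in judgments.values())           # must be readable
print(f"Human Readable and Editable: <{m},{n}>")             # here, <4,2>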

7.8.4 Dependencies

Support for this property depends in part on how self-contained the format is.

7.8.5 Known Tradeoffs

High scores for this property may be at odds with higher scores in Compactness, Processing Efficiency, Efficient Update, Random Access, Accelerated Sequential Access, and Specialized Codecs, all of which typically use techniques at odds with the requirements of this property.

7.9 Content Type Management

7.9.1 Description

Measures the degree to which a format specifies usable Content Type Management information.

7.9.2 Type & range

This measurement uses a simple range of options from worst to best integration.

7.9.3 Methodology

Degrees of support:

  1. provides no media type or encoding specification
  2. provides a media type but not a content coding
  3. provides a media type suffix akin to "+xml"
  4. provides a content coding

7.9.4 Dependencies

None.

7.9.5 Known Tradeoffs

Note that there is currently dissent as to whether a binary XML format should be considered a content coding (like gzip) or not. Here are the options (hypothetical headers illustrating each appear after this list):

  • It's just a content coding. In this case it may have a media type (like application/gzip), but the proper way of using it is to keep the original media type of the XML content and simply change the content coding. The upside is that the current dispatch system is untouched, that the media type information is far more useful that way, and that the content coding infrastructure is put to good use. The downside is that there is dissent as to whether binary XML is an encoding in the way that gzip is, and that there can be friction with the charset parameter of XML media types. With this option, content negotiation is fully possible, and the behavior of fragment identifiers does not need to be re-specified.
  • It's not a content coding but a media type, two sub-options:
    • There's just the media type. Any content sent using the format must have the media type of the format. The upside is that it's simple. The downside is that all media type information is lost, so that one must then move to another system to provide that information (some Web systems - e.g. browsers - don't work without it) or define new media types for all content (application/binxhtml, image/binsvg, etc.). With this option, content negotiation is entirely impossible (or rather, totally useless) unless new media types are defined for all things XML. The behavior of fragment identifiers becomes impossible to specify, or has to be re-specified for all the new media types.
    • A new suffix, in the "+xml" style, is defined (say "+bix"). The upside is that it's simple and that the diversity of media types is maintained. The downside is that it requires more intrusive modifications to systems that rely on existing media types. The latter may be fine if there is one and only one binary XML encoding out there (or at least a fixed list, so that the intrusive modifications are performed only once), but given an open-ended set of binary XML formats it becomes quite impractical. With this option, content negotiation is possible, but with lesser power. The behavior of fragment identifiers has to be re-specified to map back to the one in +xml types.
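
To make these options concrete, the hypothetical HTTP headers below sketch how the same binary XML payload might be labeled under each approach. The "bix" and "binxml" tokens are invented for illustration and are not registered.

Option 1 (content coding; the XML media type is kept):

    Content-Type: application/soap+xml
    Content-Encoding: bix

Option 2a (single media type for the format):

    Content-Type: application/binxml

Option 2b (a suffix in the "+xml" style):

    Content-Type: application/soap+bix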

7.10 Integratable into the XML Stack

7.10.1 Description

Measures the ease with which a given format integrates with the rest of the XML Family of recommendations, based on its orthogonality in specification and the way in which it supports the core assumptions common to XML specifications. Many relevant considerations are presented in the Architecture of the World Wide Web.

7.10.2 Type & range

This property is measured using a scale that reuses the measurement performed on Data Model Versatility for its middle point.

7.10.3 Methodology

The following scale (from lowest to highest support) is used:

  1. optimized for a data model from outside the core XML Family
  2. scores well on Data Model Versatility
  3. uses the XML 1.x syntax

7.10.4 Dependencies

This measurement has a strong dependency on the Data Model Versatility measurement, even though it includes options outside the range of the latter.

7.10.5 Known Tradeoffs

The simplest way of integrating well into the XML Family is obviously to use an XML-compatible syntax. This does not, however, mean that the given format must be XML 1.x itself; for instance, it could be a subset allowing only certain tokens or requiring a certain form and encoding (for instance, a canonical version of the SOAP profile of XML). While this would enable optimizations of XML parsers that are normally impossible, it would also close the door to a great number of other possible optimizations.

It must also be noted that some core XML technologies such as signatures and encryption rely directly on the XML syntax. There is therefore a tradeoff in which a format could integrate perfectly well with the XML Family minus these two members.

7.11 Platform Neutrality

7.11.1 Description

Measures the degree to which a format is platform neutral as opposed to being optimized for a given platform.

7.11.2 Type & range

This property is measured along an enumerated scale that rates a format's platform neutrality from none to optimal, as defined in the methodology below.

7.11.3 Methodology

This property is measured along an axis of values that rate its platform neutrality from none to optimal:

  1. not platform-neutral at all (for instance, may be the native serialization of a given programming platform)
  2. defined in a platform-neutral manner, but with fixed values for certain parameters that may advantage a platform over another (for instance, only a single Unicode encoding is supported)
  3. defined in a platform-neutral manner, and multiple options (word-length, float format, etc.) can be set so that users may choose locally optimal encodings when the platforms involved in a given interchange are known.

7.11.4 Dependencies

This property has a weak link to Implementation Complexity: if Platform Neutrality is supported at its optimal level, it will require multiple encoding options that could be costly in implementation terms.

7.11.5 Known Tradeoffs

While allowing a format to support a large range of options enables optimal processing between similar platforms, the added complexity may in fact have a generally negative impact, as it complicates the format. An assessment of this tradeoff can only be made on a format-by-format basis, but it must be noted that allowing too many hooks for optimization may in fact prove to be a pessimization.

7.12 Random Access

7.12.1 Description

The objective of Random Access is to reduce the amount of time required to access XML infoset items in a document. The fundamental measurement is therefore the average time needed to access an XML infoset item. This time can be compared to a baseline measurement of the average time needed to access an XML infoset item using a sequential access method like that used to implement SAX.

This performance metric does not take into account what may be accessed with the random access method, nor what operations may be supported on what is looked up (for example, whether the looked-up item can be treated as a sub-document or fragment).

T(ra) - time to create an access table (fixed)
T(lu) - time to look up an infoset item (fixed)
T(sk) - time to seek an infoset item (average)
T(am) - total time for all accesses over the life of the document; this amortizes the cost of T(ra) over the average number of total seeks (ns)

T(am) = T(ra) + ns ( T(lu) + T(sk) )

If random update of the access table is supported we should also take into account this cost.

T(up) - time to update an access table (fixed)

T(up) should also be added to T(am) for the average number of total updates (nu).

T(am) = T(ra) + ns ( T(lu) + T(sk) ) + nu ( T(up) )

For the baseline sequential access case, we consider only T(sk) for the average total number of seeks (ns).

T(am) = ns ( T(sk) )

Example:

For an implementation of random access to XML:

T(ra) 10.00ms
T(lu)   .05ms
T(sk)  1.00ms
T(up)  1.00ms
ns  1000
nu    50
                        
T(am) = 10 + 1000( .05 + 1.00 ) + 50 ( 1.00 ) = 1110

For sequential access:

T(sk)  4.00ms
T(am) = 1000 ( 4 ) = 4000

In this example, random access is advantageous if the average total number of seeks is greater than three. For ns = 3, nu = 0, the random access T(am) is 13.15 and the sequential T(am) is 12, while at ns = 4, nu = 0 it is 14.20 versus 16.
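
As a minimal, non-normative sketch, the following Python code reproduces the arithmetic of this example and locates the break-even point; all figures are the illustrative ones given above.

# Amortized access-time model from the example above (times in ms).
def t_am_random(ns, nu, t_ra=10.00, t_lu=0.05, t_sk=1.00, t_up=1.00):
    """T(am) = T(ra) + ns*(T(lu) + T(sk)) + nu*T(up)"""
    return t_ra + ns * (t_lu + t_sk) + nu * t_up

def t_am_sequential(ns, t_sk=4.00):
    """T(am) = ns * T(sk)"""
    return ns * t_sk

print(t_am_random(1000, 50))   # 1110.0, as computed above
print(t_am_sequential(1000))   # 4000.0

for ns in (3, 4):              # break-even lies between 3 and 4 seeks
    print(ns, t_am_random(ns, 0), t_am_sequential(ns))
# 3 13.15 12.0  -- sequential access still wins
# 4 14.2 16.0   -- random access now wins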

Random access has resource costs which can impact system performance. A more comprehensive model would be needed to take these into account in a full assessment of the comparative benefit of a random access implementation. As an approximation, an implementation which produces lower numbers for the following resource costs will be better in performance than an implementation with the same T(am) but with higher resource costs.

  1. Memory consumption for access table
  2. Cost in bandwidth utilization for I/O and transport of the access table if persisted
  3. Cost of persistent store, if the access table is persisted

The random access implementation can be categorized by the embedding or non-embedding of the access table:

  1. No access table and not indexable - no random access
  2. No access table but indexable
  3. Access table defined part of format, but separate from XML document
  4. Access table optionally embedded in the document
  5. Access table always embedded in the document

Another simplification made in comparing T(am) for random access and sequential access is the assumption that the random access implementation is able to provide access to the infoset items the user wants. If this is not the case, either the random access implementation will not be useful to that user, its performance notwithstanding, or alternate methods of access would have to be provided and accounted for in T(am). The access coverage of the infoset provided by the random access implementation can be categorized as follows:

  1. Complete: addressing information for all infoset items
  2. Selective: for certain infoset items
  3. On-Demand: for infoset items which have been requested
  4. Heuristic: for infoset items which have been predicted to be needed

It should also be specified whether the implementation does or does not provide alternative access methods to obtain all infoset items.

The random access implementation can also be categorized by its support for fragmentation:

  1. Full Context (the random access implementation can provide full context information for the accessed infoset item's subtree)
  2. Complete Subdocument (namespaces are propagated so that the accessed infoset item's subtree can be handled as a complete document)
  3. No support for fragmentation

7.12.2 Type & range

7.12.3 Methodology

The measurement for this property is by inspection of format specification and logical analysis.

7.12.4 Dependencies

7.12.5 Known Tradeoffs

7.13 Round Trip Support

7.13.1 Description

Measures the degree to which a format supports round-tripping and round-tripping via XML.

7.13.2 Type & range

These two properties are measured along the same enumerated scale consisting of the following values:

  1. "Exact equivalence": If round-tripping produces a byte-per-byte duplicate of the original
  2. "Lossless equivalence": If exact equivalent is not achieved but round tripping produces a lossless equivalent to the original input
  3. "Does not round trip": If round tripping is not supported.

7.13.3 Methodology

This property is measured by comparing the set of data models which can be represented in XML with those that can be represented in the alternative format; a classification sketch follows the two lists below.

With regards to Roundtrip Support (XML to binary to XML):

  1. If the set of data models supported by XML is a proper superset of those supported by the format, the measurement is "Does not round trip".
  2. If the transformations to and from the other format are byte preserving, the measurement is "Exact equivalence".
  3. Otherwise, the measurement is "Lossless equivalence".

With regards to Roundtripping via XML (binary to XML to binary):

  1. If the set of data models supported by XML is a proper subset of those supported by the format, the measurement is "Does not round trip".
  2. If the transformations to and from the other format are byte preserving, the measurement is "Exact equivalence".
  3. Otherwise, the measurement is "Lossless equivalence".
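
The decision procedures above can be summarized in a small, non-normative Python sketch; the two inputs (the data model set relationship and whether the transformations are byte preserving) are judgments supplied by the evaluator, and the function names are hypothetical.

# Illustrative classification for Round Trip Support. The arguments
# encode the evaluator's judgments from inspecting the format.
def roundtrip_support(xml_models_are_proper_superset, byte_preserving):
    """XML -> binary -> XML (Roundtrip Support)."""
    if xml_models_are_proper_superset:  # the format represents strictly less
        return "Does not round trip"
    if byte_preserving:
        return "Exact equivalence"
    return "Lossless equivalence"

def roundtrip_via_xml(xml_models_are_proper_subset, byte_preserving):
    """binary -> XML -> binary (Roundtripping via XML)."""
    if xml_models_are_proper_subset:    # the format represents strictly more
        return "Does not round trip"
    if byte_preserving:
        return "Exact equivalence"
    return "Lossless equivalence"

print(roundtrip_support(False, True))   # Exact equivalence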

7.13.4 Dependencies

There are no known dependencies of this property on other properties.

7.13.5 Known Tradeoffs

Formats supporting both round-tripping and round-tripping via XML will tend to have the same Data Model Versatility measurement as XML, as that is a measure of the set of data models which they support. Formats with greater data model versatility are more likely to support round-tripping but less likely to support round-tripping via XML, and vice versa.

7.14 Signable

7.14.1 Description

Measures the degree to which a format supports the creation and inclusion of digital signatures.

7.14.2 Type & range

This property is measured along an integer scale in the range [0,6], where zero indicates no support for digital signatures and six indicates the greatest possible degree of support. (Note that a format with a score of zero is still signable, in that a file consists of a sequence of bytes and any sequence of bytes can be signed.)

7.14.3 Methodology

This property is measured by assigning the indicated number of points for each of the following statements that is true of the format, based on that format's specification. Statements 1 and 2 cannot both be true of a format, nor can statements 3 and 4, which is why the maximum score is six. A tally sketch follows the list:

  1. Defines unique serializations for each possible data model instance (avoids canonicalization): 2 points
  2. Permits multiple serializations for each data model instance, but defines one serialization as canonical: 1 point
  3. Always serializes subtrees in a contiguous manner: 2 points
  4. Permits, but does not require, the serialization of subtrees in a contiguous manner: 1 point
  5. Defines a syntax for signature (i.e., recording certificates, signed ranges, etc.): 2 points
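
A minimal, non-normative Python sketch of this tally, with hypothetical argument names and sample judgments:

# Illustrative tally for the Signable score.
#   serialization:    "unique" (statement 1), "canonical" (statement 2),
#                     or "neither"
#   subtrees:         "always" (statement 3), "permitted" (statement 4),
#                     or "never"
#   signature_syntax: True if the format defines a signature syntax
def signable_score(serialization, subtrees, signature_syntax):
    score = {"unique": 2, "canonical": 1, "neither": 0}[serialization]
    score += {"always": 2, "permitted": 1, "never": 0}[subtrees]
    score += 2 if signature_syntax else 0
    return score  # an integer in [0, 6]

print(signable_score("canonical", "permitted", True))  # prints 4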

7.14.4 Dependencies

There are no known dependencies of this property on other properties.

7.14.5 Known Tradeoffs

Implementation of this property may be at odds with Compactness, Random Access, and Efficient Update, as support for these properties may be at odds with maintaining contiguous subtrees.

7.15 Small Footprint

Editorial note: SW, 21 February 2005

More detail is planned for Small Footprint.

7.15.1 Description

A candidate format should be able to be processed by diverse platforms. Many of these platforms have very limited resources for program storage. A format that requires little actual code and few data tables (aka initialized or BSS data) is more widely attractive. Inspection of specifications can be a useful form of analysis. Analysis of actual implementations can also be enlightening when those implementations are optimized by skilled developers.

7.15.2 Type & range

The detailed measurement for this property will consist of code and initialized data measured, estimated, or projected to a series of platforms that represent key architectures, including 64K StrongARM, Java bytecode, and Intel/AMD Pentium/64-bit.

7.15.3 Methodology

The measurement for this property is by inspection of format specification, logical analysis, survey of implementations and implementers, and projections from one architecture to the others.

7.15.4 Dependencies

7.15.5 Known Tradeoffs

There is likely to be a tradeoff with Generality, Compactness, and general support of many features.

7.16 Space Efficiency

7.16.1 Description

Space Efficiency is the measurement of dynamic memory needed to decode, process, and encode a candidate format. In this case, processing doesn't include any application processing or needs, but may include any format-induced processing or bookkeeping that must be done to adhere to the format. Special consideration must be given to separate and discount overhead that a format requires that is accomplishing something that an application would likely need to perform anyway.

7.16.2 Type & range

This is a percentage measurement relative to the expected dynamic memory costs of popular and theoretical XML 1.1 processing systems. This may include both DOM and parser event (SAX et al) style processing. Due to the nature of applications in memory-constrained environments, it is the DOM-style measurement that is ranked for this property.
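
As a hypothetical illustration of this percentage measurement (the byte counts below are invented; real values would come from the empirical testing described in the methodology):

# Illustrative Space Efficiency computation. Both figures are
# hypothetical; real values would be measured empirically.
baseline_dom_bytes = 4_200_000   # assumed XML 1.1 DOM dynamic memory
candidate_dom_bytes = 1_050_000  # assumed candidate-format DOM memory

space_efficiency = 100.0 * candidate_dom_bytes / baseline_dom_bytes
print(f"Candidate uses {space_efficiency:.0f}% of the baseline DOM memory")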

7.16.3 Methodology

The measurement for this property is by inspection of format specification, logical analysis, and empirical testing on test scenarios.

7.16.4 Dependencies

This property is related to Compactness and Processing Efficiency and may be affected by Generality.

7.16.5 Known Tradeoffs

Some Compactness methods tend to increase memory usage, sometimes dramatically. Methods that improve Processing Efficiency may also increase the dynamic memory needed.

8 References

XBC Use Cases
XML Binary Characterization Use Cases (See http://www.w3.org/TR/xbc-use-cases.)
XBC Properties
XML Binary Characterization Properties (See http://www.w3.org/TR/xbc-properties.)
Compression FAQ
Usenet Compression FAQ (See http://www.faqs.org/faqs/compression-faq/.)
Architecture of the World Wide Web
Architecture of the World Wide Web (See http://www.w3.org/TR/webarch/.)
XML 1.0
Extensible Markup Language (XML) 1.0 (See http://www.w3.org/TR/REC-xml/.)
XML 1.1
Extensible Markup Language (XML) 1.1 (See http://www.w3.org/TR/xml11/.)
QA Specification Guidelines
QA Framework: Specification Guidelines (See http://www.w3.org/TR/2004/WD-qaframe-spec-20040830/.)
QA Handbook
The QA Handbook (See http://www.w3.org/TR/2004/WD-qa-handbook-20040830/.)

9 Conformance to this document

9.1 Conformance Criteria

This document is a definition of measurement methodology, not a specification. In the search for a binary XML format, it is important that a unified taxonomy, reporting format, and methodology be used. Additionally, it is important that participants understand the key considerations and background. This document relies on the informative Use Cases document and the normative and informative Properties document. The key normative information from the Properties document is the property definitions.

This document provides taxonomy, methodology, key background information, example test data, and detailed comments on the measurement of certain properties. It may be used by independent parties or a test suite working group to compile abstract scenarios, property profiles, and a full test data suite, and to perform basic or detailed candidate format evaluation. Both analytical and empirical evaluation are supported by this document.

To conform to this Measurement Methodologies guidelines document, a conforming evaluation would compile appropriate test data, abstract scenarios, and property profiles and perform analysis relative to the Characterizations ranking and classification. Any available analysis and test results can be valuable, but to be valid and useful for format choice decision making, the broad array of use cases, properties, and characterizations identified must be represented.

9.2 Normative Parts

The normative parts of this document include:

  • Abstract Scenarios variability definitions
  • Property Measurement - Basic
  • Property Measurement - Detailed Analysis
    • Type and Range
    • Methodology

9.3 Guidelines Extensibility

This document is expected to be built upon by consensus development of the key elements needed to validly evaluate candidate formats. This additional content will consist at least of the Abstract Scenarios, Property Profiles, and Test Data derived from existing and new use cases and related work. It is expected that multiple independent efforts will converge into this consensus.

The complex nature of many simultaneous requirements for a binary XML format, and the widely differing strategies and environments for possible solutions, provides many opportunities for ambiguity and imprecision. A key ongoing activity is the continuous identification of these issues and their disambiguation and sub-labeling by consensus.

9.4 Conformance Claims

Evaluation and test reports claiming conformance to this guideline must clearly document which Abstract Scenario and Property Profile combinations were performed on which test data, and for which phases of processing. Ideally, this information would be shared as a test suite of both code and data to directly support independent testing of multiple formats. Testing will usually be partial and should be referred to as a Partial test, with a characterization of what was tested. A party that believes it has performed a sufficiently broad test to be considered Complete shall refer to its evaluation as a "Complete Candidate". A test suite or similar working group may reach consensus to promote a "Complete Candidate" to a "Complete" evaluation. No other mechanism is defined to reach a "Complete" evaluation based on this guideline.

The discussion in the Property Measurement Methodology section about the dependence of Compactness and Processing Efficiency on other properties is a key factor in a valid and useful evaluation, and is therefore a key consideration in a valid conformance claim.

A Test Data

Editorial note: SW, 21 February 2005

Descriptive information will be added to these test data samples in a future version of this document. Additional test data samples may also be added.

The Test Data Appendix provides information on a number of important data formats that should be part of the test scenarios for a binary XML format. The example data shown below is an excerpt from data files submitted for use by the working group. These examples have been edited for space, often indicated by an ellipsis.

A.1 ANT

A.1.1 Description

A.1.2 Data characteristics

A.1.3 Example data

<project name="xtags" default="main">

  <property file="../build.properties"/>
  <property name="classpath" value="${servlet.jar}:${dom4j.jar}:${xalanj1compat.
jar}" />

  <property name="checkRequirements.pre" value="checkRequirements.pre"/>
  <property name="examples.pre" value="examples.pre"/>

  <target name="checkRequirements.pre">
    <antcall target="checkRequiredFile">
       <param name="file" value="${jaxp.jar}"/>
       <param name="fail.message" value="a jar file containing the JAXP classes
is required to compile the xtags taglib. please define the property jaxp.jar in
your build.properties file and ensure that the file exists"/>
    </antcall>
...

A.1.4 Source of data

A.1.5 Industries

A.1.6 Discussion

A.2 Financial products Markup Language (FpML)

A.2.1 Description

A.2.2 Data characteristics

A.2.3 Example data

...
  == Copyright (c) 2002-2003. All rights reserved.
  == Financial Products Markup Language is subject to the FpML public license.
  == A copy of this license is available at http://www.fpml.org/documents/license
-->
<FpML version="4-0" xmlns="http://www.fpml.org/2003/FpML-4-0"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.fpml.org/2003/FpML-4-0
 ../fpml-main-4-0.xsd" xsi:type="DataDocument">
    <trade>
        <!--This is a single stock execution swap, that also
        illustrates the case of multiple interim valuation dates-->
        <tradeHeader>
            <partyTradeIdentifier>
                <partyReference href="PartyA"/>
                <tradeId tradeIdScheme="http://www.partyA.com/eqs-trade-id">6234</tradeId>
            </partyTradeIdentifier>
            <partyTradeIdentifier>
                <partyReference href="PartyB"/>
                <tradeId tradeIdScheme="http://www.partyB.com/eqs-trade-id">6569</tradeId>
            </partyTradeIdentifier>
            <tradeDate id="TradeDate">2001-09-24</tradeDate>
        </tradeHeader>
        <equitySwap>
            <productType>SingleStockExecutionSwap</productType>
            <equityLeg>
                <payerPartyReference href="PartyA"/>
                <receiverPartyReference href="PartyB"/>
                <effectiveDate id="EffectiveDate">
                    <relativeDate>
                        <periodMultiplier>3</periodMultiplier>
                        <period>D</period>
                        <dayType>ExchangeBusiness</dayType>
                        <businessDayConvention>NotApplicable</businessDayConvention>
                        <dateRelativeTo href="TradeDate"/>
                    </relativeDate>
                </effectiveDate>
...

A.2.4 Source of data

Authorities for data: http://www.fpml.org

Source for example data: FpML-4.0 eqs/eqs40_ex01_single_underlyer_execution.xml

A.2.5 Industries

A.2.6 Discussion

A.3 Invoices 5K-900K

A.3.1 Description

A.3.2 Data characteristics

A.3.3 Example data

inv1000.xml:
<ns1:invoice xmlns:ns1="http://www.sun.com/schema/spidermarkexpress/sm-inv">
  <Header>
    <IssueDateTime>2003-03-13T13:13:32-08:00</IssueDateTime>
    <Identifier schemeAgencyName="ISO" schemeName="Invoice">15570720</Identifier>
    <POIdentifier schemeName="Generic" schemeAgencyName="ISO">691</POIdentifier>
    <BuyerParty>
      <PartyID schemeName="SpiderMarkExpress" schemeAgencyName="SUNW">1</PartyID>
      <Name>IDES Retail INC US</Name>
      <Address>
        <Street>Hill St.</Street>
        <HouseID schemeName="HouseID" schemeAgencyName="house">5555</HouseID>
        <RoomID schemeName="RoomID" schemeAgencyName="room">Suite 3</RoomID>
        <CityName>Boston</CityName>
        <PostalZoneID schemeName="Zipcode" schemeAgencyName="USPS">01234</PostalZoneID>
        <StateName>MA</StateName>
        <CountryIdentificationCode listAgencyId="ISO" listId="3166">US</CountryIdentificationCode>
      </Address>
      <Contact>
        <Name>Joe Buyer</Name>
        <Communication>
          <Value>313-555-1212</Value>
          <ChannelID schemeName="SpiderMarkExpress" schemeAgencyName="SUNW">phone</ChannelID>
        </Communication>
        <Communication>
          <Value>313-555-1213</Value>
          <ChannelID schemeName="SpiderMarkExpress" schemeAgencyName="SUNW">fax</ChannelID>
        </Communication>
      </Contact>
    </BuyerParty>
...
  <LineItem>
    <LineID schemeName="Generic" schemeAgencyName="ISO">0</LineID>
    <Item>
      <StandardItemIdentifier schemeName="Generic" schemeAgencyName="ISO">20</StandardItemIdentifier>
      <Description>vZCwLwz1AGtbQT7t0diKccyB0rm0DXS5JFUWZyFcDFW7t</Description>
      <Quantity unitCode="number">10</Quantity>
    </Item>
    <OrderStatus listId="OrderStatus" listAgencyId="Sun">FULFILLED</OrderStatus>
    <Pricing>
      <GrossUnitPriceAmount currencyId="USD">437.00</GrossUnitPriceAmount>
      <NetUnitPriceAmount currencyId="USD">367.08</NetUnitPriceAmount>
    </Pricing>
    <PricingVariation>
      <ServiceID schemeName="Generic" schemeAgencyName="ISO">discount</ServiceID>
      <ConditionID schemeName="Generic" schemeAgencyName="ISO">allowance</ConditionID>
      <Rate>16.00</Rate>
    </PricingVariation>
    <TotalAmount currencyId="USD">3670.80</TotalAmount>
  </LineItem>
...

A.3.4 Source of data

A.3.5 Industries

A.3.6 Discussion

A.4 Large documents

A.4.1 Description

A.4.2 Data characteristics

A.4.3 Example data

factbook.xml:
...
<record>
  <country>American Samoa</country>
  <introduction>
    <background>Settled as early as 1000 B. C., Samoa was "discovered" by European explorers in the
    18th century. International rivalries in the latter half of the 19th century were settled by an
    1899 treaty in which Germany and the US divided the Samoan archipelago. The US formally occupied
    its portion - a smaller group of eastern islands with the excellent harbor of Pago Pago - the
    following year.</background>
  </introduction>
  <geography>
    <location>Oceania, group of islands in the South Pacific
    Ocean, about one-half of the way from Hawaii to New Zealand</location>
    <geographic_coordinates>14 20S, 170 00 W</geographic_coordinates>
    <map_references>Oceania</map_references>
    <area><total>199 sq km</total><land>199 sq km</land><water>0 sq km note- includes Rose Island and Swains
    Island</water><area_comparison>slightly larger than Washington, DC</area_comparison></area>
    <land_boundaries>0 km</land_boundaries><border_countries/>
    <coastline>116 km</coastline>
    <maritime_claims><note/><contiguous_zone/><continental_shelf/><exclusive_economic_zone>200
    NM</exclusive_economic_zone><territorial_sea>12 NM</territorial_sea></maritime_claims>
    <climate>tropical marine, moderated by southeast trade winds; annual rainfall averages about 3
    m; rainy season from November to April, dry season from May to October; little seasonal
    temperature variation</climate>
    <terrain>five volcanic islands with rugged peaks and limited coastal plains, two coral atolls
    (Rose Island, Swains Island)</terrain>
    <elevation_extremes><lowest_point>Pacific Ocean 0 m</lowest_point>
      <highest_point>Lata 966 m</highest_point></elevation_extremes>
    <natural_resources>pumice, pumicite</natural_resources>
    <land_use><arable_land>5%</arable_land><permanent_crops>10%</permanent_crops>
      <permanent_pastures>0%</permanent_pastures><forests_and_woodlands>70%</forests_and_woodlands>
      <other_land_uses>15% (1993 est.)</other_land_uses></land_use>
...

A.4.4 Source of data

A.4.5 Industries

A.4.6 Discussion

A.5 RDF

A.5.1 Description

A.5.2 Data characteristics

A.5.3 Example data

fm-releases-103001-2150.rdf:
  <channel rdf:about="http://freshmeat.net/">
    <title>freshmeat.net</title>
    <link>http://freshmeat.net/</link>
    <description>freshmeat.net maintains the Web's largest index of Unix and cross-platform open source software. Thousands of applications are meticulously cataloged in the freshmeat.net database, and links to new code are added daily.</description>
    <dc:language>en-us</dc:language>
    <dc:subject>Technology</dc:subject>
    <dc:publisher>freshmeat.net</dc:publisher>
    <dc:creator>freshmeat.net contributors</dc:creator>
    <dc:rights>Copyright (c) 1997-2001 OSDN</dc:rights>
    <dc:date>2001-10-31T02:50+00:00</dc:date>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://freshmeat.net/projects/exb/" />
        <rdf:li rdf:resource="http://freshmeat.net/projects/toscanaj/" />
...
      </rdf:Seq>
    </items>
    <image rdf:resource="http://freshmeat.net/img/fmII-button.gif" />
    <textinput rdf:resource="http://freshmeat.net/search/" />
  </channel>
  <image rdf:about="http://freshmeat.net/img/fmII-button.gif">
    <title>freshmeat.net</title>
    <url>http://freshmeat.net/img/fmII-button.gif</url>
    <link>http://freshmeat.net/</link>
  </image>
  <item rdf:about="http://freshmeat.net/projects/exb/">
    <title>ExB Xenophobic Bot 0.1.20011029 (Development)</title>
    <link>http://freshmeat.net/projects/exb/</link>
    <description>An extensible IRC Bot written in Java.</description>
    <dc:date>2001-10-30T20:08-06:00</dc:date>
  </item>
...

A.5.4 Source of data

A.5.5 Industries

A.5.6 Discussion

A.6 Seismic

A.6.1 Description

A.6.2 Data characteristics

A.6.3 Example data

seis.xml:
<?xml version="1.0"?>
<seisdata>
<head>
<name>line 101</name>
<area>midland</area>
<ntrace>1000</ntrace>
<nsamp>1501</nsamp>
<precision>4</precision>
<zstart>0.0</zstart>
<zinc>4.0</zinc>
<num_xyzs_fields>4</num_xyzs_fields>
<xyzs_field_names>xcoord,ycoord,elevation,common depth point</xyzs_field_names>
<xyzs_field_precisions>8,8,4,4</xyzs_field_precisions>
</head>
<xcrd>
123456.712346 123556.712346 123656.712346 123756.712346 123856.712346 ... [ 1000 floating point numbers ] </xcrd>
<ycrd>
1234567.812346 1234667.812346 1234767.812346 1234867.812346 ... [ 1000 floating point numbers ] </ycrd>
...

A.6.4 Source of data

A.6.5 Industries

A.6.6 Discussion

A.7 SOAP

A.7.1 Description

A.7.2 Data characteristics

A.7.3 Example data

req15.xml:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 <SOAP-ENV:Body>
  <ns3:echoMapArray xmlns:ns3="http://soapinterop.org/">
   <inputMapArray href="#id0"/>
  </ns3:echoMapArray>
  <multiRef id="id0" xsi:type="SOAP-ENC:Array" SOAP-ENC:arrayType="ns6:Map[2]" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:ns6="http://xml.apache.org/xml-soap">
   <item href="#id1"/>
   <item href="#id2"/>
  </multiRef>
  <multiRef id="id1" xsi:type="ns11:Map" xmlns:ns11="http://xml.apache.org/xml-soap">
   <item>
    <key xsi:type="xsd:dateTime">2001-10-05T22:07:04.629Z</key>
    <value xsi:type="xsd:string">string value</value>
   </item>
   <item>
    <key xsi:type="xsd:string">stringKey</key>
    <value xsi:type="xsd:int">5</value>
   </item>
  </multiRef>
  <multiRef id="id2" xsi:type="ns23:Map" xmlns:ns23="http://xml.apache.org/xml-soap">
   <item>
    <key xsi:type="xsd:string">this is the second map</key>
    <value xsi:type="xsd:boolean">true</value>
   </item>
   <item>
    <key xsi:type="xsd:string">test</key>
    <value xsi:type="xsd:float">411.0</value>
   </item>
  </multiRef>
 </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

rsp15.xml:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 <SOAP-ENV:Body>
  <ns3:echoMapArrayResponse xmlns:ns3="http://soapinterop.org/">
   <echoMapArrayResult href="#id0"/>
  </ns3:echoMapArrayResponse>
  <multiRef id="id0" xsi:type="SOAP-ENC:Array" SOAP-ENC:arrayType="ns6:Map[2]" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:ns6="http://xml.apache.org/xml-soap">
   <item href="#id1"/>
   <item href="#id2"/>
  </multiRef>
  <multiRef id="id2" xsi:type="ns11:Map" xmlns:ns11="http://xml.apache.org/xml-soap">
   <item>
    <key xsi:type="xsd:string">this is the second map</key>
    <value xsi:type="xsd:boolean">true</value>
   </item>
   <item>
    <key xsi:type="xsd:string">test</key>
    <value xsi:type="xsd:float">411.0</value>
   </item>
  </multiRef>
  <multiRef id="id1" xsi:type="ns23:Map" xmlns:ns23="http://xml.apache.org/xml-soap">
   <item>
    <key xsi:type="xsd:dateTime">2001-10-05T22:07:18.890Z</key>
    <value xsi:type="xsd:string">string value</value>
   </item>
   <item>
    <key xsi:type="xsd:string">stringKey</key>
    <value xsi:type="xsd:int">5</value>
   </item>
  </multiRef>
 </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

A.7.4 Source of data

A.7.5 Industries

A.7.6 Discussion

A.8 UBL 1.0

A.8.1 Description

A.8.2 Data characteristics

A.8.3 Example data

UBL-Order-1.0-Office-Example.xml:
<Order xmlns:res="urn:oasis:names:specification:ubl:schema:xsd:AcknowledgementResponseCode-1.0"
 xmlns:cbc="urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-1.0" 
 xmlns:cac="urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-1.0" 
 xmlns:cur="urn:oasis:names:specification:ubl:schema:xsd:CurrencyCode-1.0" 
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:oasis:names:specification:ubl:schema:xsd:Order-1.0" 
 xsi:schemaLocation="urn:oasis:names:specification:ubl:schema:xsd:Order-1.0 ../../xsd/maindoc/UBL-Order-1.0.xsd">
	<BuyersID>20031234-1</BuyersID>
	<cbc:IssueDate>2003-01-23</cbc:IssueDate>
	<cbc:LineExtensionTotalAmount amountCurrencyCodeListVersionID="0.3" amountCurrencyID="USD">438.50</cbc:LineExtensionTotalAmount>
	<cac:BuyerParty>
		<cac:Party>
			<cac:PartyName>
				<cbc:Name>Bills Microdevices</cbc:Name>
			</cac:PartyName>
			<cac:Address>
				<cbc:StreetName>Spring St</cbc:StreetName>
				<cbc:BuildingNumber>413</cbc:BuildingNumber>
				<cbc:CityName>Elgin</cbc:CityName>
				<cbc:PostalZone>60123</cbc:PostalZone>
				<cac:CountrySubentityCode>IL</cac:CountrySubentityCode>
			</cac:Address>
			<cac:Contact>
				<cbc:Name>George Tirebiter</cbc:Name>
			</cac:Contact>
		</cac:Party>
	</cac:BuyerParty>
	<cac:SellerParty>
		<cac:Party>
			<cac:PartyName>
				<cbc:Name>Joes Office Supply</cbc:Name>
			</cac:PartyName>
			<cac:Address>
				<cbc:StreetName>Lakeshore Dr</cbc:StreetName>
				<cbc:BuildingNumber>32 W.</cbc:BuildingNumber>
				<cbc:CityName>Chicago</cbc:CityName>
				<cbc:PostalZone>60022</cbc:PostalZone>
				<cac:CountrySubentityCode>IL</cac:CountrySubentityCode>
			</cac:Address>
		</cac:Party>
	</cac:SellerParty>
	<cac:Delivery>
		<cbc:RequestedDeliveryDateTime>2003-02-14T14:00:00</cbc:RequestedDeliveryDateTime>
		<cac:DeliveryAddress>
			<cbc:StreetName>Spring St</cbc:StreetName>
			<cbc:BuildingNumber>413 N</cbc:BuildingNumber>
			<cbc:CityName>Elgin</cbc:CityName>
			<cbc:PostalZone>60123</cbc:PostalZone>
			<cac:CountrySubentityCode>IL</cac:CountrySubentityCode>
		</cac:DeliveryAddress>
	</cac:Delivery>
	<cac:DeliveryTerms>
		<cbc:SpecialTerms>Signature Required</cbc:SpecialTerms>
	</cac:DeliveryTerms>
	<cac:OrderLine>
		<cac:LineItem>
			<cac:BuyersID>1</cac:BuyersID>
			<cbc:Quantity quantityUnitCode="PKG">5</cbc:Quantity>
			<cbc:LineExtensionAmount amountCurrencyCodeListVersionID="0.3" amountCurrencyID="USD">12.50</cbc:LineExtensionAmount>
			<cac:Item>
				<cbc:Description>Pencils, box #2 red</cbc:Description>
				<cac:SellersItemIdentification>
					<cac:ID>32145-12</cac:ID>
				</cac:SellersItemIdentification>
				<cac:BasePrice>
					<cbc:PriceAmount amountCurrencyCodeListVersionID="0.3" amountCurrencyID="USD">2.50</cbc:PriceAmount>
				</cac:BasePrice>
			</cac:Item>
		</cac:LineItem>
	</cac:OrderLine>
....

UBL-OrderResponseSimple-1.0-Office-Example.xml:
...
	<ID>1</ID>
	<cbc:IssueDate>2003-02-03</cbc:IssueDate>
	<AcceptedIndicator>true</AcceptedIndicator>
	<cac:OrderReference>
		<cac:BuyersID>20031234-1</cac:BuyersID>
		<cac:SellersID>154135798</cac:SellersID>
		<cbc:IssueDate>2003-01-23</cbc:IssueDate>
	</cac:OrderReference>
	<cac:BuyerParty>
		<cac:Party>
			<cac:PartyName>
				<cbc:Name>Bills Microdevices</cbc:Name>
			</cac:PartyName>
		</cac:Party>
	</cac:BuyerParty>
	<cac:SellerParty>
		<cac:Party>
			<cac:PartyName>
				<cbc:Name>Joes Office Supply</cbc:Name>
			</cac:PartyName>
		</cac:Party>
		<cac:OrderContact>
			<cbc:Name>Betty Jo Beoloski</cbc:Name>
		</cac:OrderContact>
	</cac:SellerParty>
</OrderResponseSimple>

A.8.4 Source of data

A.8.5 Industries

A.8.6 Discussion

B Acknowledgements (Non-Normative)

The editors would like to thank the many contributors from the working group. Special acknowledgement of assistance should include Michael Leventhal and Tarari, and Don Brutzman.

C XML Binary Characterization Measurement Changes (Non-Normative)

2004-09-01  SW      Document Created.
2004-09-30  SW      Document updated, released for initial group comments.
2004-11-02  SW      Document updated.
2004-11-17  SW      Restructured document, numerous typos corrected, some language updated, measurements added, discussion entry added to measurements.
2005-02-15  SW, PH  Mostly complete pre-publication group comment version. Restructured document, numerous typos corrected, major rewrites, additional content.
2005-02-21  SW, PH  Completed changes for publication.