Copyright © 2010-2022 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and permissive document license rules apply.
RDF [RDF11-CONCEPTS] describes a graph-based data model for making claims about the world and provides the foundation for reasoning upon that graph of information. At times, it becomes necessary to compare the differences between sets of graphs, digitally sign them, or generate short identifiers for graphs via hashing algorithms. This document outlines an algorithm for normalizing RDF datasets such that these operations can be performed.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This document describes the URDNA2015 algorithm for canonicalizing RDF datasets, which was the input from the W3C Credentials Community Group published as [CCG-RDC-FINAL]. There are other canonicalization algorithms actively being considered by the Working Group – notably [Hogan-Canonical-RDF]; future versions of this document may change accordingly. See Issue 6: Compare the two algorithms, and decide on basis for our work and Issue 10: C14N choice criteria for further discussion.
This document was published by the RDF Dataset Canonicalization and Hash Working Group as a Working Draft using the Recommendation track.
Publication as a Working Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 2 November 2021 W3C Process Document.
When data scientists discuss canonicalization, they do so in the context of achieving a particular set of goals. Since the same information may sometimes be expressed in a variety of different ways, it often becomes necessary to transform each of these different ways into a single, standard representation. With a standard representation, the differences between two different sets of data can be easily determined, a cryptographically strong hash identifier can be generated for a particular set of data, and a particular set of data may be digitally signed for later verification.
In particular, this specification is about normalizing RDF datasets, which are collections of graphs. Since a directed graph can express the same information in more than one way, it requires canonicalization to achieve the aforementioned goals and any others that may arise via serendipity.
Most RDF datasets can be normalized fairly quickly, in terms of algorithmic time complexity. However, those that contain nodes that do not have globally unique identifiers pose a greater challenge. Normalizing these datasets presents the graph isomorphism problem, a problem that is believed to be difficult to solve quickly. More formally, it is believed to be an NP-intermediate problem, that is, neither known to be solvable in polynomial time nor NP-complete. Fortunately, existing real world data is rarely modeled in a way that manifests this problem, and new data can be modeled to avoid it. In fact, software systems can detect a problematic dataset and may choose to treat it as an attempted denial of service attack, rather than a real input, and abort.
This document outlines an algorithm for generating a canonical serialization of an RDF dataset given an RDF dataset as input. The algorithm is called the Universal RDF Dataset Canonicalization Algorithm 2015 or URDNA2015.
There are different use cases where graph or dataset canonicalization is important:
A canonicalization algorithm is necessary, but not necessarily sufficient, to handle many of these use cases. The use of blank nodes in RDF graphs and datasets has a long history and creates inevitable complexities. Blank nodes are used for different purposes:
Furthermore, RDF semantics dictate that deserializing an RDF document results in the creation of unique blank nodes, unless it can be determined that on each occasion, the blank node identifies the same resource. This is due to the fact that blank node identifiers are an aspect of a concrete RDF syntax and are not intended to be persistent or portable. Within the abstract RDF model, blank nodes do not have identifiers (although some RDF store implementations may use stable identifiers and may choose to make them portable). See Blank Nodes in [RDF11-CONCEPTS] for more information.
RDF does have a provision for allowing blank nodes to be published in an externally identifiable way through the use of Skolem IRIs, which allow a given RDF store to replace the use of blank nodes in a concrete syntax with IRIs, which then serve to repeatably identify that blank node within that particular RDF store. However, this is not generally useful for talking about the same graph in different RDF stores, or in other concrete representations. In any case, a stable blank node identifier defined for one RDF store or serialization is arbitrary, and typically not relatable to the context within which it is used.
This specification defines an algorithm for creating stable blank node identifiers for the same RDF graph (dataset), repeatable across different serializations that may use different individual blank node identifiers, by grounding each blank node through the nodes to which it is connected, essentially creating Skolem blank node identifiers. As a result, a graph signature can be obtained by hashing a canonical serialization of the resulting normalized dataset, allowing for the isomorphism and digital signing use cases. Because blank node identifiers can remain stable even as other changes are made to a graph (dataset), in some cases it is possible to compute the difference between two graphs (datasets), for example if changes are made only to ground triples, or if new blank nodes are introduced which do not create an automorphic confusion with other existing blank nodes. If any information changes that would alter the generated blank node identifiers, a resulting diff might indicate a greater set of changes than actually exists.
Add descriptions for relevant historical discussions and prior art:
This document is a detailed specification for an RDF dataset canonicalization algorithm. The document is primarily intended for the following audiences:
To understand the basics in this specification you must be familiar with basic RDF concepts [RDF11-CONCEPTS]. A working knowledge of graph theory and graph isomorphism is also recommended.
This section is non-normative.
The following typographic conventions are used in this specification:
markup: markup (elements, attributes, properties), machine-processable values (strings, characters, media types), property names, or file names, shown in monospace font.
markup definition reference: a reference to a definition in this document, where the reference itself is also markup.
markup external definition reference: a reference to a definition in another document, where the reference itself is also markup.
Notes are in light green boxes with a green left border and with a "Note" header in green. Notes are always informative.
Examples are in light khaki boxes, with khaki left border, and with a numbered "Example" header in khaki. Examples are always informative. The content of the example is in monospace font and may be syntax colored. Examples may have tabbed navigation buttons to show the results of transforming an example into other representations. Code examples are generally given in Turtle or TriG format for brevity, where each line represents a single triple or quad. Additionally, examples assume the following implied directives:

BASE <http://example.com/>
PREFIX : <#>

Following the Turtle/TriG syntax rules, blank nodes always appear in the _:xyz format.
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key word MUST in this document is to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, it appears in all capitals, as shown here.
true and false: the boolean values.
blank node identifier: a string that begins with _: that is used as an identifier for a blank node. Blank node identifiers are typically implementation-specific local identifiers; this document specifies an algorithm for deterministically specifying them.
Blank nodes are serialized in N-Quads using the _: string to differentiate them from other nodes in the graph. This affects the canonicalization algorithm, which is based on calculating a hash over the representations of quads in this format.
Canonicalization is the process of transforming an input dataset to a normalized dataset. That is, any two input datasets that contain the same information, regardless of their arrangement, will be transformed into an identical normalized dataset. The problem requires directed graphs to be deterministically ordered into sets of nodes and edges. This is easy to do when all of the nodes have globally unique identifiers, but can be difficult to do when some of the nodes do not. Any nodes without globally unique identifiers must be issued deterministic identifiers.
Strictly speaking, the normalized dataset must be serialized to be stable, as within a dataset, blank node identifiers have no meaning. This specification defines a normalized dataset to include stable identifiers for blank nodes, but practical uses of this will always generate a canonical serialization of such a dataset.
In time, there may be more than one canonicalization algorithm and, therefore, for identification purposes, this algorithm is named the "Universal RDF Dataset Canonicalization Algorithm 2015" (URDNA2015).
This statement is overly prescriptive and does not include normative language. This spec should describe the theoretical basis for graph canonicalization and describe behavior using normative statements. The explicit algorithms should follow as an informative appendix.
This section is non-normative.
To determine a canonical labeling, URDNA2015 considers the information connected to each blank node. Nodes with unique first degree information can immediately be issued a canonical identifier via the Issue Identifier algorithm. When a node has non-unique first degree information, it is necessary to determine all information that is transitively connected to it throughout the entire dataset. 4.7 Hash First Degree Quads defines a node’s first degree information via its first degree hash.
Hashes are computed from the information of each blank node. These hashes encode the mentions incident to each blank node. The hash of a string s is the lower-case, hexadecimal representation of the result of passing s through a cryptographic hash function. URDNA2015 uses the SHA-256 hash algorithm.
Serialized quads are joined using a new line (U+000A). When comparing two strings (A and B), the comparison is performed using Unicode Codepoint Collation, as defined in [XPATH-FUNCTIONS], which defines a total ordering of strings comparing code points.
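As an illustration of these two primitives, here is a minimal sketch in Python, assuming UTF-8 encoded strings (the function names are ours, not part of this specification):

```python
import hashlib

def hash_string(s: str) -> str:
    # Lower-case hexadecimal SHA-256 digest, per URDNA2015.
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

def codepoint_sorted(strings):
    # Python compares strings code point by code point, which matches
    # Unicode Codepoint Collation as used here.
    return sorted(strings)
```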
When performing the steps required by the canonicalization algorithm, it is helpful to track state in a data structure called the canonicalization state. The information contained in the canonicalization state is described below.
blank node to quads map: a map that relates a blank node identifier to the quads in which it appears in the input dataset.
hash to blank nodes map: a map that relates a hash to a list of blank node identifiers.
canonical issuer: an identifier issuer, initialized with the prefix c14n, for issuing canonical blank node identifiers.
During the canonicalization algorithm, it is sometimes necessary to issue new identifiers to blank nodes. The Issue Identifier algorithm uses an identifier issuer to accomplish this task. The information an identifier issuer needs to keep track of is described below.
identifier prefix: the string used at the beginning of an issued blank node identifier. For example, c14n is a proper initial value for the identifier prefix that would produce blank node identifiers like c14n1.
identifier counter: a counter appended to the identifier prefix when issuing a new identifier; it is initialized to 0.
issued identifiers map: a map relating existing blank node identifiers to their newly issued identifiers, in order of issuance.

At the time of writing, there are several open issues that will determine important details of the canonicalization algorithm.
Following the discussion that just happened at the TPAC joint meeting with the VC.
An option is to return one particular serialization.
Another option is to return a mapping from bnodes to labels.
I prefer the second solution as it is more generic. It does not preclude which serialization to use downstream. It actually does not impose that you serialize the dataset (imagine storing a dataset in a triple store with canonical blank node labels).
At TPAC 2022, the RCH WG had a joint meeting with the VC WG, see minutes.
During that meeting, we had two presentations about canonicalization algorithms:
The first algorithm has some track record of incubation in the W3C CCG, and it has an existing specification.
On today's RCH WG meeting, there was a proposal to use the W3C CCG specification as a basis, but OTOH there was also a desire to compare the two algorithms to better understand how they differ, and why one might be used over the other.
So this issue is an invitation to anyone who can tell us more about the differences between the two algorithms and their potential implications.
Generalized RDF is described in RDF 1.1 Concepts and Abstract Syntax.
It removes restrictions on the type of RDF term that can occur in any slot in a quad tuple: literals as subjects or predicates, blank nodes as predicates, etc. By implication, that would include RDF-star quoted triples.
RDF 1.1 separately changed "RDF dataset" to allow blank nodes in the graph slot.
Generalized RDF does arise - for example, in some rules systems.
Covering generalized RDF gives some future proofing.
I completely agree with the importance of the "herd-privacy" canonicalization proposed in #4 (comment) by @dlongley when we use c14n with selective disclosure. However, if I understand it correctly, we would still have to improve the above algorithm; it seems to me that the following normalized datasets CX1 and CX2 are not modified via the above transformation, i.e., CX1==CY1 and CX2==CY2.
CX1 (obtained from JSON-LD Playground) (==CY1)
_:c14n0 <http://schema.org/name> "Alice" .
_:c14n0 <http://schema.org/spouse> _:c14n1 .
_:c14n1 <http://schema.org/name> "Bob" .
CX2 (obtained from JSON-LD Playground) (==CY2)
_:c14n0 <http://schema.org/name> "Carl" .
_:c14n1 <http://schema.org/name> "Alice" .
_:c14n1 <http://schema.org/spouse> _:c14n0 .
Therefore, even if Alice selectively hides the statement about her spouse, anyone can easily guess whether Bob or Carl is Alice's spouse, based on the canonicalized identifiers or the position of the unrevealed statement:
CY1 with selective disclosure
_:c14n0 <http://schema.org/name> "Alice" .
_:c14n0 <http://schema.org/spouse> _:c14n1 .
### 3rd statement is unrevealed ####
CY2 with selective disclosure
### 1st statement is unrevealed ####
_:c14n1 <http://schema.org/name> "Alice" .
_:c14n1 <http://schema.org/spouse> _:c14n0 .
What we actually wanted seemed like the following result:
CY1'
_:c14n0 <http://schema.org/name> "Alice" .
_:c14n1 <http://schema.org/name> "Bob" .
_:c14n0 <http://schema.org/spouse> _:c14n1 .
CY2'
_:c14n0 <http://schema.org/name> "Alice" .
_:c14n1 <http://schema.org/name> "Carl" .
_:c14n0 <http://schema.org/spouse> _:c14n1 .
I am trying to figure out a solution, but haven't found one yet so just submitting this issue at the moment...
In the meeting on 2022-10-12, we discussed criteria that can be used to choose between alternatives in specific steps of the c14n algorithm. The initial list of suggestions is below. We need to formalize this and, IMO, include it in the explainer doc.
In the meeting on 2022-10-12, there was mention of needing to say which choices were made in generating the hash.
We already have the existing URDNA2015 as the canonicalization algorithm. Anything this working group does that makes any change to that will need to have a way to declare which algorithm was used to produce the resultant canonical form.
There may be good reasons for different hashing and signing algorithms.
We need a naming scheme of choices made, together with a way to transmit that information.
See #4 about the output of canonicalization.
From the CG spec:
An additional input to this algorithm should be added that allows it to be optionally skipped and throw an error if any equivalent related hashes were produced that must be permuted during step 5.4.4. For practical uses of the algorithm, this step should never be encountered and could be turned off, disabling canonizing datasets that include a need to run it as a security measure.
In §4.5.2, point 5.4.2 defines, as part of the sentence, the identifier variable. This value is used and important in later steps, so I think it would be better to call out its creation (as the first, and in this case only, value of the identifier list) in a separate step.
The main algorithm creates, in (3), a non-normalized identifier list. However, apart from the fact that entries may be removed from it in 5.3, this list is not used. The problematic cases, in step (6) are retrieved based on the hash->bnode[] map, which is also pruned in step 5.4 to remove the simple cases.
Is this a bug somewhere or just an oversight and that list is not necessary?
The Hash N-degree Quads algorithm is defined with a set of specific input parameters (the canonicalization state, the identifier for the blank node, and an identifier issuer). However, there is no clear statement in 6.2.4 of the main algorithm on exactly what parameter is used, especially for the identifier, which simply refers to the temporary issuer. The fact that the canonicalization state should be given (the one created in step 1) is fairly obvious but, for example, it is not clear whether the input to the algorithm should be b_n or n. (I presume it is n.)
In general, if we define the various algorithms in a functional style (which is fine), then we have to check whether the calls to the algorithms follow the same convention, or whether we rely on some "global" values like the canonicalization state.
Actually... the value of b_n in the same step (i.e., in 6.2.3 of the main algorithm) isn't used anywhere. I presume that only the "side effect" is used, i.e., that the temporary issuer will have a value for n (and may be reused later). If so, we should not assign the value to a variable at this point.
As a more general remark, the explanation/overview in the Hash N-Degree Quads section uses different terms, names, etc., than the actual algorithmic part. As a consequence, it really does not help in understanding what is happening.
I believe we should have real examples that show, step by step, what is happening in the algorithm itself to help understanding this. To be honest, I converted the (semi-)English text into code but I would not say I understand what is really happening…
The canonicalization algorithm converts an input dataset into a normalized dataset. This algorithm will assign deterministic identifiers to any blank nodes in the input dataset.
This section is non-normative.
URDNA2015 canonically labels an RDF dataset by assigning each blank node a canonical identifier. In URDNA2015, an RDF dataset D is represented as a set of quads of the form < s, p, o, g > where the graph component g is empty if and only if the triple < s, p, o > is in the default graph. This algorithm considers an RDF dataset to be a set of quads. Two RDF datasets are considered to be isomorphic (i.e., the same modulo blank nodes) if and only if they return the same canonically labeled list of quads via URDNA2015.
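As a concrete illustration of this representation, the sketch below encodes a small dataset as a set of < s, p, o, g > tuples, with None standing in for the empty graph component. This tuple spelling is an assumption of ours, carried through the later sketches.

```python
# A small dataset as a set of quads <s, p, o, g>; the graph component
# is None if and only if the triple is in the default graph.
quads = {
    ("_:b0", "<http://schema.org/name>", '"Alice"', None),
    ("_:b0", "<http://schema.org/spouse>", "_:b1", None),
    ("_:b1", "<http://schema.org/name>", '"Bob"', None),
}
```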
URDNA2015 consists of several sub-algorithms. These sub-algorithms are introduced in the following sub-sections. First, we give a high level summary of URDNA2015.
This section is non-normative.
This has the effect of initializing the blank node to quads map, and the hash to blank nodes map, as well as instantiating a new canonical issuer.
This establishes the blank node to quads map, relating each blank node with the set of quads of which it is a component.
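Using the quad representation sketched earlier, the blank node to quads map might be built as in the following sketch (the helper names are ours):

```python
from collections import defaultdict

def is_blank(term) -> bool:
    # Blank nodes are represented here as strings starting with "_:".
    return isinstance(term, str) and term.startswith("_:")

def blank_node_to_quads_map(quads):
    # Relate each blank node to the set of quads of which it is a component.
    bnode_to_quads = defaultdict(list)
    for quad in quads:
        for component in set(quad):
            if is_blank(component):
                bnode_to_quads[component].append(quad)
    return bnode_to_quads
```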
Literal components of quads are not subject to any normalization. As noted in Section 3.3 of [RDF11-CONCEPTS], literal term equality is based on the lexical form, rather than the literal value, so the two literals "01"^^xs:integer and "1"^^xs:integer are treated as distinct resources.
This step creates a hash for every blank node in the input document. Some blank nodes will lead to a unique hash, while other blank nodes may share a common hash.
This step establishes the canonical identifier for blank nodes having a unique hash, which are recorded in the canonical issuer.
This step establishes the canonical identifier for blank nodes having a shared hash. This is done by creating unique blank node identifiers for all blank nodes traversed by the Hash N-Degree Quads algorithm, running through each blank node without a canonical identifier in the order of the hashes established in the previous step.
This list establishes an order for those blank nodes sharing a common first-degree hash.
Temporary identifiers are issued using an identifier prefix of b.

The previous step created temporary identifiers for the blank nodes sharing a common first degree hash, which are now used to generate their canonical identifiers.
In Step 5.2, hash path list was created with an ordered set of results. Each result contained a temporary issuer which recorded temporary identifiers associated with a particular blank node identifier in identifier list. This step processes each returned temporary issuer, in order, and allocates canonical identifiers to the temporary identifier mappings contained within each temporary issuer, creating a full order on the remaining blank nodes with unissued canonical identifiers.
This step populates the normalized dataset with quads substituting the original blank node identifiers with the newly established canonical blank node identifiers.
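This substitution step might be sketched as follows, assuming canonical_ids maps each original blank node identifier (e.g., _:b0) to its canonical counterpart (e.g., _:c14n0):

```python
def relabel_quads(quads, canonical_ids):
    # Substitute canonical identifiers for the original blank node
    # identifiers; all other components pass through unchanged.
    def canon(component):
        return canonical_ids.get(component, component)
    return {tuple(canon(c) for c in quad) for quad in quads}
```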
This algorithm issues a new blank node identifier for a given existing blank node identifier. It also updates state information that tracks the order in which new blank node identifiers were issued. The order of issuance is important for canonically labeling blank nodes that are isomorphic to others in the dataset.
The algorithm maintains an issued identifiers map to relate an existing blank node identifier from the input dataset to a new blank node identifier using a given identifier prefix (c14n), with new identifiers issued by appending an incrementing number. For example, when called for a blank node identifier such as e3, it might result in an issued identifier of c14n1.
The algorithm takes an identifier issuer I and an existing identifier as inputs. The output is a new issued identifier. The steps of the algorithm are:

1. If there is already an issued identifier for existing identifier in the issued identifiers map of I, return it.
2. Generate issued identifier by concatenating identifier prefix with the string value of identifier counter.
3. Add an entry mapping existing identifier to issued identifier to the issued identifiers map of I.
4. Increment identifier counter.
5. Return issued identifier.
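Realized as code, the issuance behavior described above might look like the following minimal sketch (the class name is ours):

```python
class IdentifierIssuer:
    def __init__(self, prefix: str = "c14n"):
        self.prefix = prefix   # identifier prefix
        self.counter = 0       # identifier counter
        self.issued = {}       # issued identifiers map, in issuance order

    def issue(self, existing: str) -> str:
        # Reuse the identifier previously issued for this blank node, if any.
        if existing in self.issued:
            return self.issued[existing]
        issued = f"{self.prefix}{self.counter}"
        self.issued[existing] = issued
        self.counter += 1
        return issued
```

Since Python dictionaries preserve insertion order, the issued identifiers map here also records the order of issuance, which later steps depend on.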
This algorithm calculates a hash for a given blank node across the quads in a dataset in which that blank node is a component. If the hash uniquely identifies that blank node, no further examination is necessary. Otherwise, a hash will be created for the blank node using the algorithm in 4.9 Hash N-Degree Quads invoked via 4.5 Canonicalization Algorithm.
This section is non-normative.
To determine whether the first degree information of a node n is unique, a hash is assigned to its mention set, Qn. The first degree hash of a blank node n, denoted hf(n), is the hash that results from 4.7 Hash First Degree Quads when passing n. Nodes with unique first degree hashes have unique first degree information.
For consistency, blank node identifiers used in Qn are replaced with placeholders in a canonical n-quads serialization of that quad. Every blank node component is replaced with either a or z, depending on whether or not that component is n.
The resulting serialized quads are then code point ordered, concatenated, and hashed. This hash is the first degree hash of n, hf(n).
This section is non-normative.
This algorithm takes the canonicalization state and a reference blank node identifier as inputs.
If the blank node component's identifier matches the reference blank node identifier, use the blank node identifier a; otherwise, use the blank node identifier z.
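A sketch of the first degree hash, building on the earlier quad tuples, might read as follows; serialize_quad is a hypothetical stand-in for canonical N-Quads serialization:

```python
import hashlib

def serialize_quad(quad) -> str:
    # Hypothetical stand-in: assumes each component is already a valid
    # N-Quads term string.
    s, p, o, g = quad
    return f"{s} {p} {o} {g} .\n" if g else f"{s} {p} {o} .\n"

def hash_first_degree_quads(bnode_to_quads, n):
    nquads = []
    for quad in bnode_to_quads[n]:
        # Replace the reference blank node with _:a and every other
        # blank node with _:z before serializing.
        substituted = tuple(
            ("_:a" if c == n else "_:z")
            if isinstance(c, str) and c.startswith("_:") else c
            for c in quad
        )
        nquads.append(serialize_quad(substituted))
    # Code point order the serialized quads, concatenate, and hash.
    return hashlib.sha256("".join(sorted(nquads)).encode("utf-8")).hexdigest()
```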
This algorithm calculates a hash for a given blank node across the quads in a dataset in which that blank node is a component, for which the hash does not uniquely identify that blank node. This is done by expanding the search from quads directly referencing that blank node (the mention set) to those quads which contain nodes which are also components of quads in the mention set, called the gossip path. This process proceeds in ever greater degrees of indirection until a unique hash is obtained.
The 'path' terminology could also be changed to better indicate what a path is (a particular deterministic serialization for a subgraph/subdataset of nodes without globally-unique identifiers).
This section is non-normative.
Usually, when trying to determine if two nodes in a graph are equivalent, you simply compare their identifiers. However, what if the nodes don't have identifiers? Then you must determine if the two nodes have equivalent connections to equivalent nodes all throughout the whole graph. This is called the graph isomorphism problem. This algorithm approaches this problem by considering how one might draw a graph on paper. You can test to see if two nodes are equivalent by drawing the graph twice. The first time you draw the graph the first node is drawn in the center of the page. If you can draw the graph a second time such that it looks just like the first, except the second node is in the center of the page, then the nodes are equivalent. This algorithm essentially defines a deterministic way to draw a graph where, if you begin with a particular node, the graph will always be drawn the same way. If two graphs are drawn the same way with two different nodes, then the nodes are equivalent. A hash is used to indicate a particular way that the graph has been drawn and can be used to compare nodes.
When two blank nodes have the same first degree hash, extra steps must be taken to detect global, or N-degree, distinctions. All information that is in any way connected to the blank node n through other blank nodes, even transitively, must be considered.
To consider all transitive information, the algorithm traverses and encodes all possible paths of incident mentions emanating from n, called gossip paths, that reach every unlabeled blank node connected to n. Each unlabeled blank node is assigned a temporary identifier in the order in which it is reached in the gossip path being explored. The mentions that are traversed to reach connected blank nodes are encoded in these paths via related hashes. This provides a deterministic way to order all paths coming from n that reach all blank nodes connected to n without relying on input blank node identifiers.
This algorithm works in concert with the main canonicalization algorithm to produce a unique, deterministic identifier for a particular blank node. This hash incorporates all of the information that is connected to the blank node as well as how it is connected. It does this by creating deterministic paths that emanate out from the blank node through any other adjacent blank nodes.
Ultimately, the algorithm selects a shortest gossip path, distributing canonical identifiers to the unlabeled blank nodes in the order in which they appear in this path. The hash of this encoded shortest path, called the N-degree hash of n, distinguishes n from other blank nodes in the dataset.
For clarity, we consider a gossip path encoded via the string s to be shortest provided that s contains fewer code points than any alternative encoding, or, where two encodings contain the same number of code points, s is least in code point order. For example, abc is shorter than bbc, whereas abcd is longer than bcd.
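In code, this comparison is simply a length-then-code-point ordering, as in this sketch:

```python
def is_shorter(a: str, b: str) -> bool:
    # a is shorter than b if it has fewer code points, or the same
    # number of code points and precedes b in code point order.
    return (len(a), a) < (len(b), b)
```

For the examples above, is_shorter("abc", "bbc") is True, and is_shorter("abcd", "bcd") is False.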
The following provides a high level outline for how the N-degree hash of n is computed along the shortest gossip path. Note that the full algorithm considers all gossip paths, ultimately returning the hash of the shortest encoded path.
As described above in step 2.3, HN recurses on each unlabeled blank node when it is first reached along the gossip path being explored. This recursion can be visualized as moving along the path from n to the blank node ni that is receiving a temporary identifier. If, when recursing on ni, another unlabeled blank node nj is discovered, the algorithm again recurses. Such a recursion traces out the gossip path from n to nj via ni.
The recursive hash r(i) is the hash returned from the completed recursion on the node ni when computing hN(n). Just as hN(n) is the hash of Dn, we denote the data to hash in the recursion on ni as Di. So, r(i) = h(Di). For each related hash x ∈ Hn, Rn(x) is called the recursion list on which the algorithm recurses.
This section is non-normative.
Add some examples ranging from simple to complicated and resource consuming.
The inputs to this algorithm are the canonicalization state, the identifier for the blank node to recursively hash quads for, and path identifier issuer which is an identifier issuer that issues temporary blank node identifiers. The output from this algorithm will be a hash and the identifier issuer used to help generate it.
quads is the mention set of identifier.
This loop calculates the related hash Hn for other blank nodes within the mention set of identifier.
Use the value s, o, or g as position, based on whether component is a subject, object, or graph name, respectively.

This loop explores the gossip paths for each related blank node sharing a common hash with identifier, finding the shortest such path (the chosen path). This determines how canonical identifiers for otherwise commonly hashed blank nodes are chosen.
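As an illustration of the related hash computation described above, here is a hedged sketch. It assumes the IdentifierIssuer and hash_first_degree_quads sketches given earlier, that quad[1] is the quad's predicate in N-Quads form, and the identifier preference described in the note further below (canonical identifier first, then a temporary identifier, else the first degree hash); the _: prefixing of issued identifiers is likewise an assumption of this sketch.

```python
import hashlib

def hash_related_blank_node(related, quad, issuer, canonical_issuer,
                            bnode_to_quads, position):
    # Prefer an already-issued canonical identifier, then a temporary
    # identifier from issuer; otherwise fall back to the related blank
    # node's first degree hash.
    if related in canonical_issuer.issued:
        identifier = "_:" + canonical_issuer.issued[related]
    elif related in issuer.issued:
        identifier = "_:" + issuer.issued[related]
    else:
        identifier = hash_first_degree_quads(bnode_to_quads, related)
    # position is "s", "o", or "g"; the quad's predicate is included
    # except when related appears in the graph name position.
    data = position
    if position != "g":
        data += quad[1]
    data += identifier
    return hashlib.sha256(data.encode("utf-8")).hexdigest()
```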
Each path is represented by the concatenation of the identifiers for each related blank node – either the issued identifier, or a temporary identifier created using a copy of issuer. Those for which temporary identifiers were issued are later recursed over using this algorithm.
Append _:, followed by the canonical identifier for related, to path.
A canonical identifier may have been generated before calling this algorithm, if it was issued from an earlier call to Hash First Degree Quads algorithm. There is no reason to recurse and apply the algorithm to any related blank node that has already been assigned a canonical identifier. Furthermore, using the canonical identifier also further distinguishes it from any temporary identifier, allowing for even greater efficiency in finding the chosen path.
Temporarily labeled nodes have identifiers recorded in issuer copy, which is later used to recursively call this algorithm, so that eventually all nodes are given canonical identifiers.
Append _:, followed by the result, to path.

If path is already longer than the prospective chosen path, we can terminate this iteration early.
path is used to generate a hash at a later step; in this respect, it is similar to the Hash First Degree Quads algorithm, which uses the serialization of quads in N-Quads for hashing. For the sake of consistency, the N-Quads representation of blank node identifiers is used in these steps, hence the usage of the _: string.
The prospective path is extended with the hash resulting from recursively calling this algorithm on each related blank node issued a temporary identifier.
Append _:, followed by the result, to path.

Append <, the hash in result, and > to path.

If path is already longer than the prospective chosen path, we can terminate this iteration early.
TBD
TBD
This section is non-normative.
TBD
This section is non-normative.
TBD
This section is non-normative.
A previous version of this algorithm has light deployment. For purposes of identification, the algorithm is called the "Universal RDF Graph Canonicalization Algorithm 2012" (URGNA2012), and differs from the stated algorithm in the following ways:
In Hash First Degree Quads, the related blank node was serialized using g, instead of z.
Related hashes were not surrounded by < and >; there were no delimiters.
A related blank node's position was described using the value p, for property, when the related blank node was a subject, and the value r, for reverse or reference, when the related blank node was an object. Since URGNA2012 only normalized graphs, not datasets, there was no use of the graph name position. Specifically, in the Hash Related Blank Node algorithm:
If the related blank node was a subject, p was used as position.
If the related blank node was an object, r was used as position.
This section is non-normative.
Use the xyz format for blank node identifiers, instead of _:xyz. See Issue 46 for the discussion.

Removed the simple flag, which was unused in existing implementations. The original design of the algorithm was to use the assigned canonical blank node identifier, if available, instead of _:a or _:z, similar to how it is used in the related hash algorithm, but this text never made it into the spec before implementations moved forward. Therefore, the hashes never change, making the loop based on the simple flag that calls this algorithm unnecessary. See Issue 23 for the discussion.

Blank node identifiers no longer include the _: prefix, which is a serialization artifact. The prefix is still required in the algorithms, but the distinction between what is an identifier and the serialization form is clarified.

This section is non-normative.
The editors would like to thank Jeremy Carroll for his work on the graph canonicalization problem, Gavin Carothers for providing valuable feedback and testing input for the algorithm defined in this specification, Sir Tim Berners-Lee for his thoughts on graph canonicalization over the years, and Jesús Arias Fisteus for his work on a similar algorithm.
Acknowledge CCG and RCH WG. Consider using .publ_ack.json.
It's important for both patent and person credit reasons to include the full history.
Manu has offered to do this.