Copyright © 2024 World Wide Web Consortium. W3C® liability, trademark and permissive document license rules apply.
A controller document contains cryptographic material and lists service endpoints for the purposes of verifying cryptographic proofs from, and interacting with, the controller of an identifier.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This document was published by the Verifiable Credentials Working Group as a Working Draft using the Recommendation track.
Publication as a Working Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 03 November 2023 W3C Process Document.
This section is non-normative.
Controller documents identify a subject and provide verification methods that express public cryptographic material, such as public keys, for verifying proofs created on behalf of the subject for specific purposes, such as authentication, attestation, key agreement (for encryption), and capability invocation and delegation. Controller documents also list service endpoints related to an identifier; for example, from which to request additional information for verification.
In other words, the controller document contains the information necessary to communicate with, and/or prove that specific actions were taken by, the controller of an identifier, including material for proofs and service endpoints for additional communications.
A controller document specifies verification relationships and service endpoints for a single identifier, for which the current controller document is taken as authoritative.
It is expected that other specifications will profile the features that are defined in this specification, requiring and/or recommending the use of some and prohibiting and/or deprecating the use of others.
The W3C TAG review noted that they would like to see language clarifying that this document is useful to authors who would like to profile its usage. The DID Core specification is one such document, but citing it directly received objections. This issue notes that the WG intends to resolve the text below into something that can achieve consensus:
For example, the Decentralized Identifiers Specification is expected to
define DID documents as a profile of controller documents, where the DID is the
identifier, DID documents are controller documents, and resolution is the
process of retrieving the canonical DID document for a DID.
The use cases below illustrate the need for this specification. While many other related use cases exist, such as those in Use Cases and Requirements for Decentralized Identifiers and Verifiable Credentials Use Cases, those described below are the main scenarios that this specification is designed to address.
Lemmy runs multiple enterprise portals that manage large amounts of sensitive data submitted by people working for a variety of organizations. He would like to use identifiers for entities in the databases that are provided by his customers and do not depend on easily phishable information such as email addresses and passwords.
Lemmy would like to ensure that his customers prove control over their identifiers — for example, by using public/private key cryptography — in order to increase security related to who is allowed to access and update each organization's data.
Stef, who operates a high security service, would like to ensure that certain cryptographic keys used by his customers can only be used for specific purposes (such as encryption, authorization, and/or authentication) to enable different levels of access and protection for each type of cryptographic key.
Marge, a software developer, would like to publicly advertise ways in which other people on the Web can reach her through various communication services she uses based on her globally unique identifier(s).
Cory, a systems architect, would like to extend the use cases described in this section in a way that provides new functionality without creating conflicts with extensions being added by others.
Neru would like to issue digital credentials on behalf of her company that contain claims about its employees. The claims that are made need to use identifiers that are cryptographically attributable back to Neru's company and need to allow the holders of those credentials to cryptographically authenticate themselves when they present the credential.
The following requirements are derived from the use cases described earlier in this specification. Additional requirements which could lead to a more decentralized solution can be found in Use Cases and Requirements for Decentralized Identifiers.
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY, MUST, MUST NOT, OPTIONAL, RECOMMENDED, REQUIRED, and SHOULD in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
A conforming controller document is any concrete expression of the data model that follows the relevant normative requirements in Sections 2. Data Model and 4. Contexts and Vocabularies.
A conforming verification method is any concrete expression of the data model that follows the relevant normative requirements in Sections 2.2 Verification Methods and 4. Contexts and Vocabularies.
A conforming document is either a conforming controller document, or a conforming verification method.
A conforming processor is any algorithm realized as software and/or hardware that generates and/or consumes a conforming document according to the relevant normative statements in Section 3. Algorithms. Conforming processors MUST produce errors when non-conforming documents are consumed.
This section defines the terms used in this specification. A link to the relevant definition is included whenever one of these terms appears in this specification.
An entity that is capable of performing an action with a specific resource, such as updating a controller document or generating a proof using a verification method.
A document that contains cryptographic material and lists service endpoints that can be used for verifying proofs from, and interacting with, the controller of an identifier.
An entity, such as a person, group, organization, physical thing, digital thing, or logical thing, that is referred to by the value of an id property in a controller document. Subjects identified in a controller document are also used as subjects in other contexts, such as during authentication or in verifiable credentials.
A method and its parameters, used to independently verify a proof. For example, a cryptographic public key can be used as a verification method with respect to a digital signature; in such use, it verifies that the signer used the associated cryptographic private key.
An expression that one or more verification methods are authorized to verify proofs made on behalf of the subject. One example of a verification relationship is 2.3.1 Authentication.
A controller document specifies one or more relationships between an identifier and a set of verification methods and/or service endpoints. The controller document SHOULD contain verification relationships that explicitly permit the use of certain verification methods for specific purposes.
{
"id": "https://controller.example/101",
"verificationMethod": [{
"id": "https://controller.example/101#key-20240828",
"type": "JsonWebKey",
"controller": "https://controller.example/101",
"publicKeyJwk": {
"kid": "key-20240828",
"kty": "EC",
"crv": "P-256",
"alg": "ES256",
"x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
"y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0"
}
}],
"authentication": ["#key-20240828"]
}
{
"@context": "https://www.w3.org/ns/controller/v1",
"id": "https://controller.example",
"authentication": [{
"id": "https://controller.example#authn-key-123",
"type": "Multikey",
"controller": "https://controller.example",
"publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu"
}]
}
The property names id, type, and controller can be present in maps of different types, with possible differences in constraints.
The following sections define the properties in a controller document, including whether these properties are required or optional. These properties describe relationships between the subject and the value of the property.
The following tables contain informative references for the core properties defined by this specification, with expected values, and whether or not they are required. The property names in the tables are linked to the normative definitions and more detailed descriptions of each property.
| Property | Required? | Value constraints | Definition |
|---|---|---|---|
| id | yes | A string that conforms to the URL syntax. | 2.1.1 Subjects |
| controller | no | A string or a set of strings, each of which conforms to the URL syntax. | 2.1.2 Controllers |
| alsoKnownAs | no | A set of strings, each of which conforms to the URL syntax. | 2.1.3 Also Known As |
| service | no | A set of service maps. | 2.1.4 Services |
| verificationMethod | no | A set of verification method maps. | 2.2 Verification Methods |
| authentication | no | A set of strings, each of which conforms to the URL syntax, or a set of verification method maps. | 2.3.1 Authentication |
| assertionMethod | no | A set of strings, each of which conforms to the URL syntax, or a set of verification method maps. | 2.3.2 Assertion |
| keyAgreement | no | A set of strings, each of which conforms to the URL syntax, or a set of verification method maps. | 2.3.3 Key Agreement |
| capabilityInvocation | no | A set of strings, each of which conforms to the URL syntax, or a set of verification method maps. | 2.3.4 Capability Invocation |
| capabilityDelegation | no | A set of strings, each of which conforms to the URL syntax, or a set of verification method maps. | 2.3.5 Capability Delegation |
A subject is expressed using the id property in a controller document. The value of an id property is referred to as an identifier.
The value of the id property MUST be a string that conforms to the rules in the URL Standard. A controller document MUST contain an id value in the topmost map.
{ "id": "https://controller.example/123" }
The value of the id
property in the topmost
map of the controller document is
called the base identifier for the controller document. The URL
for retrieving the current, authoritative controller document for a given
identifier is called the canonical URL for the controller document. Dereferencing the canonical URL MUST return the current
authoritative controller document. The returned document's base identifier MUST be the same as the canonical URL; if it is anything else,
then the returned document is not an authoritative controller document and
the identifier SHOULD be treated as invalid. Every controller document is
stored and retrieved according to the canonical URL of the document, which
MUST also be the base identifier of the document.
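The following non-normative sketch illustrates the checks above in JavaScript; the function name and the use of the Fetch API are illustrative assumptions, not requirements of this specification.

// Non-normative sketch: dereference a canonical URL and confirm that the
// returned controller document's base identifier matches it.
async function fetchAuthoritativeControllerDocument(canonicalUrl) {
  const response = await fetch(canonicalUrl, {
    headers: { Accept: 'application/json' }
  });
  if (!response.ok) {
    throw new Error(`Could not dereference ${canonicalUrl}: ${response.status}`);
  }
  const document = await response.json();
  // The base identifier (topmost "id") has to equal the canonical URL;
  // otherwise the document is not authoritative and the identifier is
  // treated as invalid.
  if (document.id !== canonicalUrl) {
    throw new Error(
      `Base identifier "${document.id}" does not match canonical URL "${canonicalUrl}"`);
  }
  return document;
}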
It is expected that the subject referred to by an id in a controller document will be consistent over time, such that any verifiable credentials that use that identifier can be interpreted as referring to the same entity.
For example, it is preferred that an issuer of a verifiable credential
require that a subject demonstrate proof of control over their
identifier before issuing a credential with that identifier as
subject, creating assurance that the same entity was involved in the
issuance of each credential with that identifier as subject.
However, there are valid cases where that practice is either impossible or unreasonable; for example, when a parent requests a verifiable credential for their child. There are also cases where an issuer simply makes a mistake or intentionally issues a false statement. All of these possibilities are considered when evaluating the security impacts of reliance on a given identifier for any given purpose. See Section 5.2 Identifier Ambiguity.
A controller of a controller document is any entity capable of making changes to that controller document. Whoever can update the content of the resource returned from dereferencing the controller document's canonical URL is, by definition, a controller of the document and its canonical identifier. Proofs that satisfy a controller document's verification methods are taken as cryptographic assurance that the controller of the identifier created those proofs.
The controller of the controller document is also taken to be the controller of the document's canonical identifier, that is, its URL. Whoever can update the controller document is both the document controller and the identifier controller; updating the document is how the identifier is controlled, so the two terms can be used interchangeably. Controlling the canonical controller document for an identifier is the same as controlling the identifier.
The controller property is OPTIONAL. If it is possible to represent the legitimate controllers of the document as URLs, the document SHOULD list URLs identifying those controllers.
It is possible to list a verification method which is functionally under the control of someone other than the controller of the controller document. For example, a document controller could set a public key under another party's control as an authentication verification method. This would enable the other party to authenticate on behalf of this identifier (because their public key is listed in an authentication verification method) without enabling that party to update the controller document. However, since the document controller explicitly listed that key for authentication, the proof in question is taken as created by the document controller, as it was created by their explicit assignee. This is especially useful when the "other party" is a device under the control of the document controller, but with a distinct cryptographic authority, i.e., it has its own keystore and can generate proofs. That pattern enables different devices, each using their own cryptographic material, to generate verifiable proofs that are taken as proofs created by the controller of the identifier.
Each entry in the controller
property MUST identify
an entity capable of updating the canonical version
of the controller document.
Subsequent requests for this controller document through its
canonical location will always receive the latest version.
If the controller
property is not present, then control of the document
is determined entirely by its storage location.
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller1.example/123", "controller": "https://controllerB.example/abc", }
While the identifier used for a controller is unambiguous, this does not imply that a single entity is always the controller, nor that a controller only has a single identifier. A controller might be a single entity, or a collection of entities, such as a partnership. A controller might also use multiple identifiers to refer to itself, for purposes such as privacy or delineating operational boundaries within an organization. Similarly, a controller might control many verification methods. For these reasons, no assumptions are to be made about a controller being a single entity nor controlling only a single verification method.
Note that the definition of authentication is different from the definition
of authorization. Generally speaking, authentication answers the
question of "Do we know who this is?" while authorization answers the question of "Are
they allowed to perform this action?". The authentication
property in this
specification is used to, unsurprisingly, perform authentication while the
other verification relationships such as capabilityDelegation
and
capabilityInvocation
are used to perform authorization. Since successfully
performing authorization might have more serious effects on a system,
controllers are urged to use different verification methods when
performing authentication versus authorization and provide stronger
access protection for verification methods used for authorization versus
authentication. See 5. Security Considerations for information related
to threat models and attack vectors.
A subject can have multiple identifiers that are used for different purposes
or at different times. The assertion that two or more identifiers (or other types
of URI) refer to the same subject can be made using the
alsoKnownAs
property.
The alsoKnownAs property is OPTIONAL. If present, its value MUST be a set where each item in the set is a URI conforming to [RFC3986].
Applications might choose to consider two identifiers related by alsoKnownAs
to be equivalent if the alsoKnownAs
relationship expressed in the
controller document of one subject is also expressed in the reverse direction
(i.e., reciprocated) in the controller document of the other subject. It is
best practice not to consider them
equivalent in the absence of this reciprocating relationship. In other words,
the presence of an alsoKnownAs
assertion does not prove that this assertion
is true. Therefore, it is strongly advised that a requesting party obtain
independent verification of an alsoKnownAs
assertion.
Given that the subject might use different identifiers for different purposes, such as enhanced privacy protection, an expectation of strong equivalence between the two identifiers, or taking action to merge the information from the two corresponding controller documents, is not necessarily appropriate, even with a reciprocal relationship.
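The reciprocity check described above might be sketched as follows. This is a non-normative illustration that assumes both controller documents can be dereferenced with the Fetch API; the function name is illustrative.

// Non-normative sketch: treat two identifiers as related by a reciprocal
// alsoKnownAs assertion only when each controller document lists the other.
async function hasReciprocalAlsoKnownAs(identifierA, identifierB) {
  const [docA, docB] = await Promise.all([
    fetch(identifierA).then(r => r.json()),
    fetch(identifierB).then(r => r.json())
  ]);
  const aliasesA = [].concat(docA.alsoKnownAs ?? []);
  const aliasesB = [].concat(docB.alsoKnownAs ?? []);
  return aliasesA.includes(identifierB) && aliasesB.includes(identifierA);
}

Even when this check succeeds, independent verification of the assertion is still advised, and strong equivalence is not automatically appropriate, as noted above.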
Services are used in controller documents to express ways of communicating with the controller, or associated entities, in relation to the controlled identifier. A service can be any type of service the controller wants to advertise for further discovery, authentication, authorization, or interaction.
Due to privacy concerns, revealing public information through services, such as social media accounts, personal websites, and email addresses, is discouraged. Further exploration of privacy concerns can be found in sections 6.1 Keep Personal Data Private and 6.6 Service Privacy. The information associated with services is often service specific. For example, the information associated with an encrypted messaging service can express how to initiate the encrypted link before messaging begins.
Services are expressed using the service
property, which is described
below:
The service
property is OPTIONAL. If present, the associated value MUST be a
set of services, where each service is
described by a map. Each service map MUST contain id
, type
, and
serviceEndpoint
properties. Each service extension MAY include additional
properties and MAY further restrict the properties associated with the
extension.
The id property is OPTIONAL. If present, its value MUST be a URL conforming to the URL Standard. A conforming document MUST NOT include multiple service entries with the same id.
The type property is REQUIRED. Its value MUST be a string or a set of strings. To maximize interoperability, the service type and its associated properties SHOULD be registered in the Verifiable Credential Extensions.
The serviceEndpoint property is REQUIRED. The value of the serviceEndpoint property MUST be a single string, a single map, or a set composed of one or more strings and/or maps. Each string value MUST be a valid URL conforming to the URL Standard.
For more information regarding privacy and security considerations related to services see 6.6 Service Privacy, 6.1 Keep Personal Data Private, 6.4 Controller Document Correlation Risks, and 5.11 Service Endpoints for Authentication and Authorization.
{ "service": [{ "type": "ExampleSocialMediaService", "serviceEndpoint": "https://warbler.example/sal674" }] }
A controller document can express verification methods, such as cryptographic public keys, which can be used to verify proofs, such as those used to authenticate or authorize interactions with the controller or associated parties. For example, a cryptographic public key can be used as a verification method with respect to a digital signature; in such use, it verifies that the signer could use the associated cryptographic private key. Verification methods might take many parameters. An example of this is a set of five cryptographic keys from which any three are required to contribute to a cryptographic threshold signature.
"Verification" and "proof" are intended to apply broadly. For example, a cryptographic public key might be used during Diffie-Hellman key exchange to negotiate a shared symmetric key for encryption. This guarantees the integrity of the key agreement process. It is thus another type of verification method, even though descriptions of the process might not use the words "verification" or "proof."
A verification method is defined in a controller document using the map below and is referred to as the verification method definition:
The verificationMethod
property is OPTIONAL. If present, the value
MUST be a set of verification
methods, where each verification method is expressed using a map. The verification method map MUST include the id
,
type
, controller
, and specific verification material
properties that are determined by the value of type
and are defined
in 2.2.1 Verification Material. A verification method MAY
include additional properties.
The value of the id
property for a verification method MUST be a string that conforms to the [URL] syntax. This
value is called the verification method identifier and can also be
used in a proof to refer to a specific instance of a verification method, which is called the verification method definition.
The value of the type property MUST be a string that references exactly one verification method type. This specification defines the types JsonWebKey (see Section 2.2.3 JsonWebKey) and Multikey (see Section 2.2.2 Multikey).
The value of the controller property MUST be a string that conforms to the [URL] syntax.
The expires property is OPTIONAL. If provided, it MUST be an [XMLSCHEMA11-2] dateTimeStamp string specifying when the verification method SHOULD cease to be used. Once the value is set, it is not expected to be updated, and systems depending on the value are expected to not verify any proofs associated with the verification method at or after the time of expiration.
The revoked property is OPTIONAL. If present, it MUST be an [XMLSCHEMA11-2] dateTimeStamp string specifying when the verification method MUST NOT be used. Once the value is set, it is not expected to be updated, and systems depending on the value are expected to not verify any proofs associated with the verification method at or after the time of revocation.
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123456789abcdefghi", ... "verificationMethod": [{ "id": ..., "type": ..., "controller": ..., "publicKeyJwk": ... }, { "id": ..., "type": ..., "controller": ..., "publicKeyMultibase": ... }] }
The controller
property is used by controller documents, as described in
Section 2.1 Controller Documents, and by verification methods, as
described in Section 2.2 Verification Methods. When it is used in either
place, its purpose is essentially the same; that is, it expresses one or more
entities that are authorized to perform certain actions associated with the
resource with which it is associated.
In the case of the controller of a controller document, the controller can update the content of the document. In the case of the controller of a verification method, the controller can generate proofs that satisfy the method.
To ensure explicit security guarantees, the
controller of a verification method cannot be inferred from the
controller document. It is necessary to explicitly express the identifier of
the controller of the key because the value of controller
for a verification method is not necessarily the value of the controller
for a
controller document.
Verification material is any information that is used by a process that applies a verification method. The type of a verification method is expected to be used to determine its compatibility with such processes. Examples of verification method types include JsonWebKey and Multikey.
A cryptographic suite specification is responsible for specifying the
verification method type
and its associated verification material
format. For examples using verification material, see
Securing Verifiable Credentials using JOSE and COSE,
the Data Integrity ECDSA
Cryptosuites and
the Data Integrity EdDSA Cryptosuites.
To increase the likelihood of interoperable implementations, this specification limits the number of formats for expressing verification material in a controller document. The fewer formats that implementers have to choose from, the more likely that interoperability will be achieved. This approach attempts to strike a delicate balance between easing implementation and providing support for formats that have historically had broad deployment.
A verification method MUST NOT contain multiple verification material
properties for the same material. For example, expressing key material in a
verification method using both publicKeyJwk
and
publicKeyMultibase
at the same time is prohibited.
Implementations MAY convert keys between formats as desired for operational purposes or to interface with cryptographic libraries. As an internal implementation detail, such conversion MUST NOT affect the external representation of key material.
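The prohibition above on expressing the same material in multiple formats might be checked as in the following non-normative sketch; the function name is illustrative.

// Non-normative sketch: a verification method expressing both publicKeyJwk
// and publicKeyMultibase (or both secret key forms) is rejected.
function hasSingleVerificationMaterialFormat(verificationMethod) {
  const publicForms = ['publicKeyJwk', 'publicKeyMultibase']
    .filter(p => p in verificationMethod);
  const secretForms = ['secretKeyJwk', 'secretKeyMultibase']
    .filter(p => p in verificationMethod);
  return publicForms.length <= 1 && secretForms.length <= 1;
}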
An example of a controller document containing verification methods using both properties above is shown below.
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123456789abcdefghi", ... "verificationMethod": [{ "id": "https://controller.example/123#_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A", "type": "JsonWebKey", // external (property value) "controller": "https://controller.example/123456789abcdefghi", "publicKeyJwk": { "crv": "Ed25519", // external (property name) "x": "VCpo2LMLhn6iWku8MKvSLg2ZAoC-nlOyPVQaO3FxVeQ", // external (property name) "kty": "OKP", // external (property name) "kid": "_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A" // external (property name) } }, { "id": "https://controller.example/123456789abcdefghi#keys-1", "type": "Multikey", // external (property value) "controller": "https://controller.example/123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" }], ... }
The Multikey data model is a specific type of verification method that encodes key types into a single binary stream that is then encoded as a Multibase value as described in Section 2.4 Multibase.
When specifying a Multikey, the object takes the following form:
The type property MUST be a string that is set to Multikey.
The publicKeyMultibase property is OPTIONAL. If present, its value MUST be a Multibase encoded value as described in Section 2.4 Multibase.
The secretKeyMultibase property is OPTIONAL. If present, its value MUST be a Multibase encoded value as described in Section 2.4 Multibase.
The example below expresses an Ed25519 public key using the format defined above:
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123456789abcdefghi#keys-1", "type": "Multikey", "controller": "https://controller.example/123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" }
The public key values are expressed using the rules in the table below:
| Key type | Description |
|---|---|
| ECDSA 256-bit public key | The Multikey encoding of a P-256 public key MUST start with the two-byte prefix 0x8024 (the varint expression of 0x1200) followed by the 33-byte compressed public key data. The resulting 35-byte value MUST then be encoded using the base-58-btc alphabet, according to Section 2.4 Multibase, and then prepended with the base-58-btc Multibase header (z). |
| ECDSA 384-bit public key | The encoding of a P-384 public key MUST start with the two-byte prefix 0x8124 (the varint expression of 0x1201) followed by the 49-byte compressed public key data. The resulting 51-byte value is then encoded using the base-58-btc alphabet, according to Section 2.4 Multibase, and then prepended with the base-58-btc Multibase header (z). |
| Ed25519 256-bit public key | The encoding of an Ed25519 public key MUST start with the two-byte prefix 0xed01 (the varint expression of 0xed), followed by the 32-byte public key data. The resulting 34-byte value MUST then be encoded using the base-58-btc alphabet, according to Section 2.4 Multibase, and then prepended with the base-58-btc Multibase header (z). |
| BLS12-381 381-bit public key | The encoding of a BLS12-381 public key in the G2 group MUST start with the two-byte prefix 0xeb01 (the varint expression of 0xeb), followed by the 96-byte compressed public key data. The resulting 98-byte value MUST then be encoded using the base-58-btc alphabet, according to Section 2.4 Multibase, and then prepended with the base-58-btc Multibase header (z). |
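As a non-normative illustration of the Ed25519 row above, the following sketch builds a publicKeyMultibase value. It assumes the baseEncode() function defined later in Section 3.1 Base Encode is available, and it uses the base-58-btc alphabet from Section 2.4 Multibase; the function name is illustrative.

// Non-normative sketch of the Ed25519 public key encoding described above.
const B58_BTC_ALPHABET =
  '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

function encodeEd25519PublicKeyMultibase(publicKeyBytes /* Uint8Array(32) */) {
  if (publicKeyBytes.length !== 32) {
    throw new Error('An Ed25519 public key is 32 bytes long.');
  }
  // two-byte prefix 0xed01 followed by the 32-byte public key data
  const prefixed = new Uint8Array(34);
  prefixed.set([0xed, 0x01], 0);
  prefixed.set(publicKeyBytes, 2);
  // base-58-btc encode and prepend the Multibase header "z"
  return 'z' + baseEncode(prefixed, 58, B58_BTC_ALPHABET);
}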
The secret key values are expressed using the rules in the table below:
| Key type | Description |
|---|---|
| ECDSA 256-bit secret key | The Multikey encoding of a P-256 secret key MUST start with the two-byte prefix 0x8626 (the varint expression of 0x1306) followed by the 32-byte secret key data. The resulting 34-byte value MUST then be encoded using the base-58-btc alphabet, according to Section 2.4 Multibase, and then prepended with the base-58-btc Multibase header (z). |
| ECDSA 384-bit secret key | The encoding of a P-384 secret key MUST start with the two-byte prefix 0x8726 (the varint expression of 0x1307) followed by the 48-byte secret key data. The resulting 50-byte value is then encoded using the base-58-btc alphabet, according to Section 2.4 Multibase, and then prepended with the base-58-btc Multibase header (z). |
| Ed25519 256-bit secret key | The encoding of an Ed25519 secret key MUST start with the two-byte prefix 0x8026 (the varint expression of 0x1300), followed by the 32-byte secret key data. The resulting 34-byte value MUST then be encoded using the base-58-btc alphabet, according to Section 2.4 Multibase, and then prepended with the base-58-btc Multibase header (z). |
| BLS12-381 381-bit secret key | The encoding of a BLS12-381 secret key in the G2 group MUST start with the two-byte prefix 0x8030 (the varint expression of 0x130a), followed by the 96-byte compressed public key data. The resulting 98-byte value MUST then be encoded using the base-58-btc alphabet, according to Section 2.4 Multibase, and then prepended with the base-58-btc Multibase header (z). |
Developers are advised to take care not to accidentally publish a representation of a secret key. Implementations that adhere to this specification will raise an error when they encounter a Multikey header value that is not in the public key header table above, such as when reading a Multikey value that is expected to be a public key, for example one published in a controller document, that does not start with a known public key header.
When defining values for use with publicKeyMultibase
and secretKeyMultibase
,
specification authors MAY define additional header values for other key types in
other specifications and MUST NOT define alternate encodings for key types
already defined by this specification.
The JSON Web Key (JWK) data model is a specific type of verification method that uses the JWK specification [RFC7517] to encode key types into a set of parameters.
When specifying a JsonWebKey, the object takes the following form:
The type property MUST be a string that is set to JsonWebKey.
The publicKeyJwk
property is OPTIONAL. If present, its value MUST
be a map representing a JSON Web Key that
conforms to [RFC7517]. The map MUST NOT
include any members of the private information class, such as d
, as described
in the JWK
Registration Template. It is RECOMMENDED that verification methods that use
JWKs [RFC7517] to represent their public keys use the value of kid
as
their fragment identifier. It is RECOMMENDED that JWK kid
values are set to
the JWK Thumbprint [RFC7638] using the SHA-256 (SHA2-256) hash function of the public key.
See the first key in
Example 7 for an example of a
public key with a compound key identifier.
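The kid recommendation above can be illustrated with the following non-normative sketch, which computes an RFC 7638 JWK Thumbprint using SHA-256 in Node.js; the function name and the set of supported key types are illustrative.

// Non-normative sketch: compute an RFC 7638 JWK Thumbprint with SHA-256,
// suitable for use as a "kid" value.
import { createHash } from 'node:crypto';

function jwkThumbprint(jwk) {
  // Required members, in lexicographic order, per RFC 7638.
  let members;
  if (jwk.kty === 'EC') members = { crv: jwk.crv, kty: jwk.kty, x: jwk.x, y: jwk.y };
  else if (jwk.kty === 'OKP') members = { crv: jwk.crv, kty: jwk.kty, x: jwk.x };
  else if (jwk.kty === 'RSA') members = { e: jwk.e, kty: jwk.kty, n: jwk.n };
  else throw new Error(`Unsupported kty: ${jwk.kty}`);
  const canonical = JSON.stringify(members); // members already in lexicographic order
  return createHash('sha256').update(canonical).digest('base64url');
}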
As specified in Section 4.4 of the JWK specification,
the OPTIONAL alg
property identifies the algorithm intended for use with the public key,
and SHOULD be included to prevent security issues that can arise when using the same
key with multiple algorithms. As specified in Section 6.2.1.1 of the JWA specification, which describes keys that use elliptic curves, the REQUIRED crv property is used to identify the particular curve type of the public key.
As specified in Section 4.1.4 of the JWS specification,
the OPTIONAL kid
property is a hint used to help discover the key; if present, the kid
value SHOULD
match, or be included in, the id
property of the encapsulating JsonWebKey
object,
as part of the path, query, or fragment of the URL.
The secretKeyJwk property is OPTIONAL. If present, its value MUST be a map representing a JSON Web Key that conforms to [RFC7517]. It MUST NOT be used if the data structure containing it is public or could be revealed to parties other than the legitimate holders of the secret key.
An example of an object that conforms to JsonWebKey
is provided below:
{ "id": "https://controller.example/123456789abcdefghi#key-1", "type": "JsonWebKey", "controller": "https://controller.example/123456789abcdefghi", "publicKeyJwk": { "kid": "key-1", "kty": "EC", "crv": "P-384", "alg": "ES384", "x": "1F14JSzKbwxO-Heqew5HzEt-0NZXAjCu8w-RiuV8_9tMiXrSZdjsWqi4y86OFb5d", "y": "dnd8yoq-NOJcBuEYgdVVMmSxonXg-DU90d7C4uPWb_Lkd4WIQQEH0DyeC2KUDMIU" } }
In the example above, the publicKeyJwk
value contains the JSON Web Key.
The kty
property encodes the key type of "EC", which means
"Elliptic Curve". The alg
property identifies the algorithm intended
for use with the public key, which in this case is ES384
. The crv
property identifies
the particular curve type of the public key, P-384
. The x
and y
properties specify
the point on the P-384
curve that is associated with the public key.
The publicKeyJwk
property MUST NOT contain any property marked as
"Private" or "Secret" in any registry contained in the JOSE Registries [JOSE-REGISTRIES], including "d".
The JSON Web Key data model is also capable of encoding secret keys, sometimes referred to as private keys.
{ "id": "https://controller.example/123456789abcdefghi#key-1", "type": "JsonWebKey", "controller": "https://controller.example/123456789abcdefghi", "secretKeyJwk": { "kty": "EC", "crv": "P-384", "alg": "ES384", "d": "fGwges0SX1mj4eZamUCL4qtZijy9uT15fI4gKTuRvre4Kkoju2SHM4rlFOeKVraH", "x": "1F14JSzKbwxO-Heqew5HzEt-0NZXAjCu8w-RiuV8_9tMiXrSZdjsWqi4y86OFb5d", "y": "dnd8yoq-NOJcBuEYgdVVMmSxonXg-DU90d7C4uPWb_Lkd4WIQQEH0DyeC2KUDMIU" } }
The private key example above is almost identical to the previous example of the
public key, except that the information is stored in the secretKeyJwk
property
(rather than the publicKeyJwk
), and the private key value is encoded in the d
property thereof (alongside the x
and y
properties, which still specify
the point on the P-384
curve that is associated with the public key).
Verification methods can be embedded in or referenced from properties associated with various verification relationships as described in 2.3 Verification Relationships. Referencing verification methods allows them to be used by more than one verification relationship.
If the value of a verification method property is a map, the verification method has been
embedded and its properties can be accessed directly. However, if the value is a
URL string, the verification method has
been included by reference and its properties will need to be retrieved from
elsewhere in the controller document or from another controller document. This
is done by dereferencing the URL and searching the resulting resource for a
verification method map with an
id
property whose value matches the URL.
{ ... "authentication": [ // this key is referenced and might be used by // more than one verification relationship "https://controller.example/123456789abcdefghi#keys-1", // this key is embedded and may *only* be used for authentication { "id": "https://controller.example/123456789abcdefghi#keys-2", "type": "Multikey", // external (property value) "controller": "https://controller.example/123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
A verification relationship is an expression that one or more verification methods are authorized to verify proofs made on behalf of the subject.
Different verification relationships enable the associated verification methods to be used for different purposes. It is up to a verifier to ascertain the validity of a verification attempt by checking that the verification method used is referred to by the appropriate verification relationship property in the controller document.
The verification relationship between the subject and the verification method is explicit in the controller document. Verification methods that are not associated with a particular verification relationship cannot be used for that verification relationship. For example, a verification method associated with the authentication property cannot be used to engage in key agreement protocols — the value of the keyAgreement property needs to be used for that.
If a referenced verification method definition is not in the latest controller document used to dereference it, then that verification method is considered invalid or revoked.
The following sections define several useful verification relationships. A controller document MAY include any of these, or other properties, to express a specific verification relationship. To maximize interoperability, any such properties used SHOULD be registered in the list of DID Document Property Extensions.
The authentication
verification relationship is used to specify how the
subject is expected to be authenticated, for purposes such as logging
into a website or engaging in any sort of challenge-response protocol. The
processing performed following authentication is application-specific.
The authentication property is OPTIONAL. If present, its value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123456789abcdefghi", ... "authentication": [ // this method can be used to authenticate "https://controller.example/123456789abcdefghi#keys-1", // this method is *only* approved for authentication, so its // full description is embedded here rather than using only a reference { "id": "https://controller.example/123456789abcdefghi#keys-2", "type": "JsonWebKey", "controller": "https://controller.example/123456789abcdefghi", "publicKeyJwk": { "crv": "Ed25519", "x": "VCpo2LMLhn6iWku8MKvSLg2ZAoC-nlOyPVQaO3FxVeQ", "kty": "OKP", "kid": "_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A" } }, { "id": "https://controller.example/123456789abcdefghi#keys-3", "type": "Multikey", "controller": "https://controller.example/123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
This is useful to any verifier that needs to check whether an entity attempting to authenticate is presenting a valid proof of authentication. When such a verifier receives some data (in some protocol-specific format) containing a proof that was made for the purpose of "authentication", and that data says that an entity is identified by the id, the verifier checks that the proof can be verified using a verification method (for example, a public key) listed under authentication in the controller document.
Note that the verification method indicated by the
authentication
property of a controller document can
only be used to authenticate on behalf
of the controller document's base identifier.
The assertionMethod
verification relationship is used to
specify verification methods that a controller authorizes for use
when expressing assertions or claims, such as in verifiable credentials.
The assertionMethod property is OPTIONAL. If present, its associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
This property is useful, for example, during the processing of a verifiable credential by a verifier.
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123456789abcdefghi", ... "assertionMethod": [ // this method can be used to assert statements "https://controller.example/123456789abcdefghi#keys-1", // this method is *only* approved for assertion of statements, it is not // used for any other verification relationship, so its full description is // embedded here rather than using a reference { "id": "https://controller.example/123456789abcdefghi#keys-2", "type": "Multikey", // external (property value) "controller": "https://controller.example/123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
The keyAgreement
verification relationship is used to
specify how an entity can perform encryption in order to transmit
confidential information intended for the controller, such as for
the purposes of establishing a secure communication channel with the recipient.
The keyAgreement property is OPTIONAL. If present, the associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
An example of when this property is useful is when encrypting a message intended for the controller. In this case, the counterparty uses the cryptographic public key information in the verification method to wrap a decryption key for the recipient.
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123456789abcdefghi", ... "keyAgreement": [ "https://controller.example/123456789abcdefghi#keys-1", // the rest of the methods below are *only* approved for key agreement usage // they will not be used for any other verification relationship // the full value is embedded here rather than using only a reference { "id": "https://controller.example/123#keys-2", "type": "Multikey", "controller": "https://controller.example/123", "publicKeyMultibase": "zDnaerx9CtbPJ1q36T5Ln5wYt3MQYeGRG5ehnPAmxcf5mDZpv" }, { "id": "https://controller.example/123#keys-3", "type": "JsonWebKey", "controller": "https://controller.example/123", "publicKeyJwk": { "kty": "OKP", "crv": "X25519", "x": "W_Vcc7guviK-gPNDBmevVw-uJVamQV5rMNQGUwCqlH0" } } ], ... }
The capabilityInvocation
verification relationship is used
to specify a verification method that might be used by the
controller to invoke a cryptographic capability, such as the
authorization to update the controller document.
The capabilityInvocation property is OPTIONAL. If present, the associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
An example of when this property is useful is when a controller needs to access a protected HTTP API that requires authorization in order to use it. In order to authorize when using the HTTP API, the controller uses a capability that is associated with a particular URL that is exposed via the HTTP API. The invocation of the capability could be expressed in a number of ways, for example, as a digitally signed message that is placed into the HTTP Headers.
The server providing the HTTP API is the verifier of the capability and
it would need to verify that the verification method referred to by the
invoked capability exists in the capabilityInvocation
property of the controller document. The verifier would also check to make sure
that the action being performed is valid and the capability is appropriate for
the resource being accessed. If the verification is successful, the server has
cryptographically determined that the invoker is authorized to access the
protected resource.
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123456789abcdefghi", ... "capabilityInvocation": [ // this method can be used to invoke capabilities as https:...fghi "https://controller.example/123456789abcdefghi#keys-1", // this method is *only* approved for use in capability invocation; it will not // be used for any other verification relationship, so its full description is // embedded here rather than using only a reference { "id": "https://controller.example/123456789abcdefghi#keys-2", "type": "Multikey", // external (property value) "controller": "https://controller.example/123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
The capabilityDelegation verification relationship is used to specify a mechanism that might be used to delegate a cryptographic capability to another party. The mechanism, such as an Authorization Capability or a UCAN, and the processing performed following delegation, such as accessing a specific HTTP API, are application-specific.
The capabilityDelegation property is OPTIONAL. If present, the associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
An example of when this property is useful is when a controller chooses
to delegate their capability to access a protected HTTP API to a party other
than themselves. In order to delegate the capability, the controller
would use a verification method associated with the
capabilityDelegation
verification relationship to
cryptographically sign the capability over to another controller. The
delegate would then use the capability in a manner that is similar to the
example described in 2.3.4 Capability Invocation.
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123456789abcdefghi", ... "capabilityDelegation": [ // this method can be used to perform capability delegation "https://controller.example/123456789abcdefghi#keys-1", // this method is *only* approved for granting capabilities; it will not // be used for any other verification relationship, so its full description is // embedded here rather than using only a reference { "id": "https://controller.example/123456789abcdefghi#keys-2", "type": "JsonWebKey", // external (property value) "controller": "https://controller.example/123456789abcdefghi", "publicKeyJwk": { "kty": "OKP", "crv": "Ed25519", "x": "O2onvM62pC1io6jQKm8Nc2UyFXcd4kOmOsBIoYtZ2ik" } }, { "id": "https://controller.example/123456789abcdefghi#keys-3", "type": "Multikey", // external (property value) "controller": "https://controller.example/123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
A Multibase value encodes a binary value as a base-encoded string. The value starts with a single character header, which identifies the base and encoding alphabet used to encode a binary value, followed by the encoded binary value (using that base and alphabet). The common Multibase header values and their associated base encoding alphabets, as provided below, are normative:
| Multibase Header | Description |
|---|---|
| u | The base-64-url-no-pad alphabet is used to encode the bytes. The base-alphabet consists of the following characters, in order: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_ |
| z | The base-58-btc alphabet is used to encode the bytes. The base-alphabet consists of the following characters, in order: 123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz |
Other Multibase encoding values MAY be used, but interoperability is not guaranteed between implementations using such values.
To base-encode a binary value into a Multibase string, an implementation MUST apply the algorithm in Section 3.1 Base Encode to the binary value, with the desired base encoding and alphabet from the table above, ensuring to prepend the associated Multibase header from the table above to the result. Any algorithm with equivalent output MAY be used.
To base-decode a Multibase string, an implementation MUST apply the algorithm in Section 3.2 Base Decode to the string following the first character (Multibase header), with the alphabet associated with the Multibase header. Any algorithm with equivalent output MAY be used.
A Multihash value starts with a binary header, which includes 1) an identifier for the specific cryptographic hashing algorithm, 2) a cryptographic digest length in bytes, and 3) the value of the cryptographic digest. The normative Multihash header values defined by this specification, and their associated output sizes and associated specifications, are provided below:
| Multihash Identifier | Multihash Header | Description |
|---|---|---|
| sha2-256 | 0x12 | SHA-2 with 256 bits (32 bytes) of output, as defined by [RFC6234]. |
| sha2-384 | 0x20 | SHA-2 with 384 bits (48 bytes) of output, as defined by [RFC6234]. |
| sha3-256 | 0x16 | SHA-3 with 256 bits (32 bytes) of output, as defined by [SHA3]. |
| sha3-384 | 0x15 | SHA-3 with 384 bits (48 bytes) of output, as defined by [SHA3]. |
Other Multihash encoding values MAY be used, but interoperability is not guaranteed between implementations.
To encode to a Multihash value, an implementation MUST concatenate the associated Multihash header (encoded as a varint), the cryptographic digest length in bytes (encoded as a varint), and the cryptographic digest value, in that order.
To decode a Multihash value, an implementation MUST 1) remove the prepended Multihash header value, which identifies the type of cryptographic hashing algorithm, 2) remove the cryptographic digest length in bytes, and 3) extract the raw cryptographic digest value which MUST match the expected output length associated with the Multihash header as well as the output length provided in the Multihash value itself.
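As a non-normative illustration, the following sketch encodes and decodes a sha2-256 Multihash value (header 0x12, digest length 0x20) using Node.js; the function names are illustrative.

// Non-normative sketch: sha2-256 Multihash encoding and decoding.
import { createHash } from 'node:crypto';

function multihashSha256(bytes) {
  const digest = createHash('sha256').update(bytes).digest();
  // header varint (0x12), digest length varint (0x20 = 32 bytes), digest bytes
  return Uint8Array.from([0x12, 0x20, ...digest]);
}

function decodeMultihashSha256(multihash) {
  const [header, length] = multihash;
  const digest = multihash.slice(2);
  if (header !== 0x12 || length !== 0x20 || digest.length !== 32) {
    throw new Error('Not a well-formed sha2-256 Multihash value');
  }
  return digest;
}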
This section defines algorithms used by this specification including instructions on how to base-encode and base-decode values, safely retrieve verification methods, and produce processing errors over HTTP channels.
The following algorithm specifies how to encode an array of bytes, where each byte represents a base-256 value, to a different base representation that uses a particular base alphabet, such as base-64-url-no-pad or base-58-btc. The required inputs are the bytes, targetBase, and baseAlphabet. The output is a string that contains the base-encoded value. All mathematical operations MUST be performed using integer arithmetic. Alternatives to the algorithm provided below MAY be used as long as the outputs of the alternative algorithm remain the same.
1. Set zeroes to 0, length to 0, begin to 0, and end to the length of bytes.
2. While the value at bytes[begin] is 0 and begin does not equal end, increment begin by 1 and increment zeroes by 1. This step counts the number of leading 0 byte values in bytes.
3. Set baseExpansionFactor to log(256) divided by log(targetBase). Set size to (end minus begin) multiplied by baseExpansionFactor, adding 1 to the value of size and rounding down to the nearest integer. Allocate size bytes, each initialized to 0, for baseValue.
4. While begin does not equal end, perform the following sub-steps:
   1. Set carry to the value at bytes[begin].
   2. Set i to 0. Set basePosition to size minus 1. Perform the following loop as long as carry does not equal 0 or i is less than length, and basePosition does not equal -1:
      1. Multiply the value at baseValue[basePosition] by 256 and add it to carry.
      2. Set baseValue[basePosition] to the remainder of carry divided by targetBase.
      3. Set carry to carry divided by targetBase, using integer division.
      4. Decrement basePosition by 1 and increment i by 1.
   3. Set length to i and increment begin by 1.
5. Set baseEncodingPosition to size minus length. While baseEncodingPosition does not equal size and the value at baseValue[baseEncodingPosition] is 0, increment baseEncodingPosition. This step skips the leading zeros in the base-encoded result.
6. Set baseEncoding to the first character in baseAlphabet, repeated zeroes times.
7. While baseEncodingPosition is less than size, append the character in baseAlphabet at the position given by the value at baseValue[baseEncodingPosition] to baseEncoding, and increment baseEncodingPosition by 1.
8. Return baseEncoding, which contains the base-encoded value.
function baseEncode(bytes, targetBase, baseAlphabet) {
let zeroes = 0;
let length = 0;
let begin = 0;
let end = bytes.length;
// count the number of leading bytes that are zero
while(begin !== end && bytes[begin] === 0) {
begin++;
zeroes++;
}
// allocate enough space to store the target base value
const baseExpansionFactor = Math.log(256) / Math.log(targetBase);
let size = Math.floor((end - begin) * baseExpansionFactor + 1);
let baseValue = new Uint8Array(size);
// process the entire input byte array
while(begin !== end) {
let carry = bytes[begin];
// for each byte in the array, perform base-expansion
let i = 0;
for(let basePosition = size - 1;
(carry !== 0 || i < length) && (basePosition !== -1);
basePosition--, i++) {
carry += Math.floor(256 * baseValue[basePosition]);
baseValue[basePosition] = Math.floor(carry % targetBase);
carry = Math.floor(carry / targetBase);
}
length = i;
begin++;
}
// skip leading zeroes in base-encoded result
let baseEncodingPosition = size - length;
while(baseEncodingPosition !== size &&
baseValue[baseEncodingPosition] === 0) {
baseEncodingPosition++;
}
// convert the base value to the base encoding
let baseEncoding = baseAlphabet.charAt(0).repeat(zeroes)
for(; baseEncodingPosition < size; ++baseEncodingPosition) {
baseEncoding += baseAlphabet.charAt(baseValue[baseEncodingPosition])
}
return baseEncoding;
}
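The following non-normative usage example encodes a few bytes with baseEncode() using the base-58-btc alphabet from Section 2.4 Multibase and prepends the corresponding Multibase header.

// Example use of baseEncode() with the base-58-btc alphabet from Section 2.4.
const alphabet =
  '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';
const bytes = Uint8Array.from([0xed, 0x01, 0x00, 0xff]);
const multibase = 'z' + baseEncode(bytes, 58, alphabet);
console.log(multibase); // a Multibase (base-58-btc) string beginning with "z"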
The following algorithm specifies how to decode an array of bytes, where each byte represents a base-encoded value, to a different base representation that uses a particular base alphabet, such as base-64-url-no-pad or base-58-btc. The required inputs are the sourceEncoding, sourceBase, and baseAlphabet. The output is an array of bytes that contains the base-decoded value. All mathematical operations MUST be performed using integer arithmetic. Alternatives to the algorithm provided below MAY be used as long as the outputs of the alternative algorithm remain the same.
1. Build a baseMap that maps each character in baseAlphabet to its integer position in baseAlphabet.
2. Set sourceOffset to 0, zeroes to 0, and decodedLength to 0.
3. While the character at sourceEncoding[sourceOffset] equals the first character in baseAlphabet, increment sourceOffset by 1 and increment zeroes by 1. This step counts the number of leading zero values in sourceEncoding.
4. Set baseContractionFactor to log(sourceBase) divided by log(256). Set decodedSize to the length of sourceEncoding minus sourceOffset, multiplied by baseContractionFactor, adding 1 to the value and rounding down to the nearest integer. Allocate decodedSize bytes, each initialized to 0, for decodedBytes.
5. While a character exists at sourceEncoding[sourceOffset], perform the following sub-steps:
   1. Set carry to the value in baseMap for the character at sourceEncoding[sourceOffset].
   2. Set i to 0. Set byteOffset to decodedSize minus 1. Perform the following loop as long as carry does not equal 0 or i is less than decodedLength, and byteOffset does not equal -1:
      1. Multiply the value at decodedBytes[byteOffset] by sourceBase and add it to carry.
      2. Set decodedBytes[byteOffset] to the remainder of carry divided by 256.
      3. Set carry to carry divided by 256, ensuring that integer division is used to perform the division.
      4. Decrement byteOffset by 1 and increment i by 1.
   3. Set decodedLength to i and increment sourceOffset by 1.
6. Set decodedOffset to decodedSize minus decodedLength. While decodedOffset does not equal decodedSize and the value at decodedBytes[decodedOffset] is 0, increment decodedOffset by 1. This step skips the leading zeros in the final base-decoded byte array.
7. Allocate finalBytes with a length of zeroes plus (decodedSize minus decodedOffset), with every byte initialized to 0.
8. Set j to zeroes. While decodedOffset does not equal decodedSize, copy all bytes in decodedBytes, up to decodedSize, starting at offset decodedOffset, to finalBytes starting at offset j.
9. Return finalBytes, which contains the base-decoded array of bytes.
function baseDecode(sourceEncoding, sourceBase, baseAlphabet) {
// build the base-alphabet to integer value map
baseMap = {};
for(let i = 0; i < baseAlphabet.length; i++) {
baseMap[baseAlphabet[i]] = i;
}
// skip and count zero-byte values in the sourceEncoding
let sourceOffset = 0;
let zeroes = 0;
let decodedLength = 0;
while(sourceEncoding[sourceOffset] === baseAlphabet[0]) {
zeroes++;
sourceOffset++;
}
// allocate the decoded byte array
const baseContractionFactor = Math.log(sourceBase) / Math.log(256);
let decodedSize = Math.floor((
(sourceEncoding.length - sourceOffset) * baseContractionFactor) + 1);
let decodedBytes = new Uint8Array(decodedSize);
// perform base-conversion on the source encoding
while(sourceEncoding[sourceOffset]) {
// process each base-encoded number
let carry = baseMap[sourceEncoding[sourceOffset]];
// convert the base-encoded number by performing base-expansion
let i = 0
for(let byteOffset = decodedSize - 1;
(carry !== 0 || i < decodedLength) && (byteOffset !== -1);
byteOffset--, i++) {
carry += Math.floor(sourceBase * decodedBytes[byteOffset]);
decodedBytes[byteOffset] = Math.floor(carry % 256);
carry = Math.floor(carry / 256);
}
decodedLength = i;
sourceOffset++;
}
// skip leading zeros in the decoded byte array
let decodedOffset = decodedSize - decodedLength;
while(decodedOffset !== decodedSize && decodedBytes[decodedOffset] === 0) {
decodedOffset++;
}
// create the final byte array that has been base-decoded
let finalBytes = new Uint8Array(zeroes + (decodedSize - decodedOffset));
let j = zeroes;
while(decodedOffset !== decodedSize) {
finalBytes[j++] = decodedBytes[decodedOffset++];
}
return finalBytes;
}
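A non-normative round-trip check pairing baseEncode() and baseDecode() is shown below; the byte values are arbitrary.

// Round-trip check pairing baseEncode() and baseDecode().
const b58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';
const original = Uint8Array.from([0x00, 0x12, 0x20, 0xab, 0xcd]);
const encoded = baseEncode(original, 58, b58);
const decoded = baseDecode(encoded, 58, b58);
console.log(encoded, decoded.every((b, i) => b === original[i])); // ..., true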
The following algorithm specifies how to safely retrieve a verification method, such as a cryptographic public key, by using a verification method identifier. Required inputs are a verification method identifier (vmIdentifier), a verification relationship (verificationRelationship), and a set of dereferencing options (options). A verification method is produced as output.
The following example provides a minimum conformant controller document containing a minimum conformant verification method as required by the algorithm in this section:
{
"id": "https://controller.example/123",
"verificationMethod": [{
"id": "https://controller.example/123#key-456",
"type": "ExampleVerificationMethodType",
"controller": "https://controller.example/123",
// public cryptographic material goes here
}],
"authentication": ["#key-456"]
}
Verification method identifiers are expressed as strings that are URLs, or
via the id
property, whose value is a URL. It is possible for a controller document to express a verification method, through a verification relationship, that exists in a place that is external to the controller document. As described in Section 5.9 Integrity Protection of Controllers,
specifying a verification method that is external to a controller document is a valid use of this specification. It is vital that this
verification method is retrieved from the external controller document.
When retrieving any verification method, the algorithm above is used to
ensure that the verification method is retrieved from the correct
controller document. The algorithm also ensures that this controller document refers to the verification method (via a verification relationship) and that the verification method refers to the controller document (via the verification method's controller
property). Failure to
use this algorithm, or an equivalent one that performs these checks, can lead to
security compromises where an attacker poisons a cache by claiming control of a
victim's verification method.
{
"id": "https://controller.example/123",
"capabilityInvocation": ["https://external.example/xyz#key-789"]
}
In the example above, the algorithm described in this section will use the
https://external.example/xyz#key-789
URL value as the verification method identifier. The algorithm will then confirm that the verification method
exists in the external controller document and that the appropriate
relationships exist as described earlier in this section.
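The following non-normative sketch illustrates the kinds of checks described above. It is not the normative algorithm from Section 3.3 Retrieve Verification Method; the dereferenceControllerDocument helper, the error messages, and the URL handling are assumptions made for illustration only, and embedded verification method objects are not handled.
// Non-normative sketch of the consistency checks described above.
// dereferenceControllerDocument is a hypothetical helper that fetches and
// parses the controller document found at the given URL.
async function checkVerificationMethod(vmIdentifier, verificationRelationship) {
  // the controller document is found at the verification method URL, minus the fragment
  const controllerDocumentUrl = vmIdentifier.split('#')[0];
  const controllerDocument =
    await dereferenceControllerDocument(controllerDocumentUrl);

  // the retrieved document must be about the expected identifier
  if(controllerDocument.id !== controllerDocumentUrl) {
    throw new Error('Controller document "id" does not match the dereferenced URL.');
  }

  // the controller document must reference the verification method via the
  // given verification relationship; values can be relative or absolute URLs
  const relationship = controllerDocument[verificationRelationship] || [];
  const referenced = relationship.some(value => typeof value === 'string' &&
    new URL(value, controllerDocumentUrl).href === vmIdentifier);
  if(!referenced) {
    throw new Error('Verification method is not referenced by the expected verification relationship.');
  }

  // the verification method must refer back to the controller document
  const verificationMethod = (controllerDocument.verificationMethod || []).find(
    vm => new URL(vm.id, controllerDocumentUrl).href === vmIdentifier);
  if(!verificationMethod ||
    verificationMethod.controller !== controllerDocument.id) {
    throw new Error('Verification method does not refer back to the controller document.');
  }
  return verificationMethod;
}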
The algorithms described in this specification throw specific types of errors. Implementers might find it useful to convey these errors to other libraries or software systems. This section provides specific URLs, descriptions, and error codes for the errors, such that an ecosystem implementing technologies described by this specification might interoperate more effectively when errors occur.
When exposing these errors through an HTTP interface, implementers SHOULD use [RFC9457] to encode the error data structure. If [RFC9457] is used:
The type value of the error object MUST be a URL that starts with the value https://w3id.org/security# and ends with the value in the section listed below.
The code value MUST be the integer code described in the table below (in parentheses, beside the type name).
The title value SHOULD provide a short but specific human-readable string for the error.
The detail value SHOULD provide a longer human-readable string for the error.
The verificationMethod value in a proof was malformed. See Section 3.3 Retrieve Verification Method.
The id value in a controller document was malformed. See Section 3.3 Retrieve Verification Method.
The verification method in a controller document was not associated with the verification relationship expressed in the proofPurpose property in the proof. See Section 3.3 Retrieve Verification Method.
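For illustration only, an error conveyed using [RFC9457] could be structured as shown below; the type suffix, code, and messages used here are placeholders rather than values defined by this specification:
{
  "type": "https://w3id.org/security#EXAMPLE_ERROR_TYPE",
  "code": -1,
  "title": "Example error",
  "detail": "A longer, human-readable description of the error that occurred."
}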
This section lists cryptographic hash values that might change during the Candidate Recommendation phase based on implementer feedback that requires the referenced files to be modified.
The terms defined in this specification are also part of an RDF vocabulary [RDF-CONCEPTS] whose namespace is https://w3id.org/security#. For any TERM, the relevant URL is of the form https://w3id.org/security#TERM or https://w3id.org/security#TERMmethod. Implementations that use RDF processing and rely on this specification MUST use these URLs.
When dereferencing the https://w3id.org/security# URL, the media type of the data that is returned depends on HTTP content negotiation. The possibilities are as follows:
Media Type | Description and Hash
---|---
application/ld+json | The vocabulary in JSON-LD format [JSON-LD11]. SHA2-256 Digest: 0825e3f71462e105e85ea144e2eb1521c2755e6679bd2eb459a9a796c56b18e8
text/turtle | The vocabulary in Turtle format [TURTLE]. SHA2-256 Digest: 2fefc7e645fdfa34491c772d0e9c2eed9f95cde3b205e4667abe876580be7f7d
text/html | The vocabulary in HTML+RDFa format [HTML-RDFA]. SHA2-256 Digest: 0f989a247fb87f514640f1080ad40713b6c950edeb1d29e8c5b45f647699b3d4
It is possible to confirm the cryptographic digests above by running
a command like the following (replacing <MEDIA_TYPE>
and <DOCUMENT_URL>
with the appropriate values) through a modern UNIX-like OS command line interface:
curl -sL -H "Accept: <MEDIA_TYPE>" <DOCUMENT_URL> | openssl dgst -sha256
Implementations that perform JSON-LD processing MUST treat the following JSON-LD context URL as already resolved, where the resolved document matches the corresponding hash value below:
Context URL | SHA2-256 Digest
---|---
https://www.w3.org/ns/controller/v1 | ea216ecc1cb02cd39b693dba2250141e270ba0bf95890be107dd9a9e8e43de85
It is possible to confirm the cryptographic digests listed above by running a command like the following through a modern UNIX-like OS command line interface:
curl -sL -H "Accept: application/ld+json" https://www.w3.org/ns/controller/v1 | openssl dgst -sha256
The security vocabulary terms that the JSON-LD contexts listed above resolve to are in the https://w3id.org/security# namespace. See also 4.1 Vocabulary for further details.
Applications or specifications may define mappings to the
vocabulary URLs using their own JSON-LD contexts. For example, these mappings are part
of the https://w3id.org/security/data-integrity/v2
context,
defined by the Verifiable Credential Data Integrity 1.0 specification, or the https://www.w3.org/ns/did/v1
context,
defined by the Decentralized Identifiers (DIDs) v1.0 specification.
The @context property is used to ensure that implementations are using the same semantics when terms in this specification are processed. For example, this can be important when properties like authentication are processed and values such as Multikey or JsonWebKey are used.
When an application is processing a controller document, if an @context
property is not provided in the document or the terms used in the document are
not mapped by existing values in the @context
property, implementations MUST
inject or append an @context
property with a value of
https://www.w3.org/ns/controller/v1
or one or more contexts with at least the
same declarations, such as the Decentralized Identifier v1.1 context
(https://www.w3.org/ns/did/v1
).
{ "id": "https://controller.example/101", "verificationMethod": [{ "id": "https://controller.example/101#key-203947", "type": "JsonWebKey", "controller": "https://controller.example/101", "publicKeyJwk": { "kid": "key-203947", "kty": "EC", "crv": "P-256", "alg": "ES256", "x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU", "y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0" } }], "authentication": ["#key-203947"] }
Implementations that do not intend to use JSON-LD MAY choose to not include an
@context
declaration at the top-level of the document. Whether or not the
@context
value or JSON-LD processors are used, the semantics for all properties
and values expressed in conforming documents interpreted by conforming processors are the same. Any differences in semantics between documents
processed in either mode are either implementation or specification bugs.
This section defines datatypes that are used by this specification.
Multibase-encoded strings are used to encode binary
data into printable formats, such as ASCII, which are useful in environments
that cannot directly represent binary values. This specification makes use of
this encoding. In environments that support data types for string values, such
as RDF [RDF-CONCEPTS], Multibase-encoded content
is indicated using a literal value whose datatype is set to
https://w3id.org/security#multibase
.
The multibase datatype is defined as follows:
URL: https://w3id.org/security#multibase
This section is non-normative.
This section contains a variety of security considerations that people using this specification are advised to consider before deploying this technology in a production setting. The technologies described in this document are designed to operate under the threat model used by many IETF standards and documented in [RFC3552]. This section elaborates upon a number of the considerations in [RFC3552], as well as other considerations that are unique to this specification.
Binding an entity in the digital world or the physical world to an identifier, to a controller document, or to cryptographic material requires the use of security protocols contemplated by this specification. The following sections describe some possible scenarios and how an entity therein might prove control over an identifier or a controller document for the purposes of authentication or authorization.
Proving control over an identifier and/or a controller document is useful when accessing remote systems. Cryptographic digital signatures enable certain security protocols related to controller documents to be cryptographically verifiable. For these purposes, this specification defines useful verification relationships in 2.3.1 Authentication and 2.3.4 Capability Invocation. The secret cryptographic material associated with the verification methods can be used to generate a cryptographic digital signature as a part of an authentication or authorization security protocol.
An identifier or a controller document does not inherently carry any personal data, and it is strongly advised that non-public entities do not publish personal data in controller documents.
It can be useful to express a binding of an identifier to a person's or organization's physical identity in a way that is provably asserted by a trusted authority, such as a government. This specification provides the 2.3.2 Assertion verification relationship for these purposes. This feature can enable interactions that are private and can be considered legally enforceable under one or more jurisdictions; establishing such bindings has to be carefully balanced against privacy considerations (see 6. Privacy Considerations).
The process of binding an identifier to something in the physical world, such as a person or an organization — for example, by using verifiable credentials with the same subject as that identifier — is contemplated by this specification and further defined in Verifiable Credentials Data Model v2.0.
Even in cases where the subject referred to by an identifier proves control, the interpretation of the subject remains contextual and potentially ambiguous.
For example, a school might issue a verifiable credential about the teacher
of Intro to Computer Science, using
https://controller.example/abc
as a subject identifier, saying
"https://controller.example/abc
is the teacher of
Intro to Computer Science" and
"https://controller.example/abc
controls access to the school's computer lab.
See them to request access".
In this usage, it is ambiguous whether https://controller.example/abc
refers
to a specific teacher or to whoever is the current teacher. Only with further statements might we be able to discern the difference, and even then it can be tricky. For example, the subject in the following statement remains ambiguous:
<https://controller.example/abc> <https://schema.org/name> "Bob Smith" .
If https://controller.example/abc
refers to a specific human being, then the
statement is taken as an attestation about the particular human identified by
that name. However, if https://controller.example/abc
is used to refer to the
current teacher, it is also valid if the current teacher does have that name. In this case, the ambiguity is immaterial.
However, in a statement like the following, the difference becomes vital.
<https://controller.example/abc> <http://law.example/convicted> <http://calaw.example/PenalCode647b> .
The statement in English could be "The person referred to by
https://controller.example/abc
has been convicted of California Penal Code
647b." But which person(s) did we mean? Did we mean to say one, some, or all of
the teachers of computer science at the school have been convicted of violating
PenalCode647b
? Or is it meant to say that a particular individual teacher,
perhaps the one named "Bob Smith", has been convicted of said crime?
The challenge is particularly difficult in situations where the subject is fundamentally uninvolved in the issuance of the verifiable credential. For example, an identifier might be used by a school to refer to a teacher, and students and or parents might use that identifier to make statements about the teacher, with neither the teacher nor the school involved. In these cases, it is easy to imagine that the subtle nuance of the school's intended meaning, for example, "any current teacher of the computer science class", gets lost and the identifier gets misused by parents and students to refer to a specific teacher, quite likely in contexts where neither the school nor the teacher is aware of the conversation.
In natural language, these ambiguities are often easily ignored or corrected. In digital media, it is vital that context be evaluated to establish the intended referent, especially when identifiers are used in different contexts by different issuers, for example, on an official school website by the school, but in an unofficial social networking app by parents and students.
In short, the context in which identifiers are created and used has to be considered when relying on any particular interpretation of the subject of any particular identifier.
In a decentralized architecture, there might not be centralized authorities to enforce cryptographic material or cryptographic digital signature expiration policies. Therefore, requesting parties rely on supporting software, such as verification libraries, to validate that cryptographic material was not expired at the time it was used. Requesting parties might also apply their own expiration policies as additional inputs into their verification processes. For example, some requesting parties might accept authentications from five minutes in the past, while others with access to high precision time sources might require authentications to be time stamped within the last 500 milliseconds.
There are some requesting parties that have legitimate needs to extend the use of already-expired cryptographic material, such as verifying legacy cryptographic digital signatures. In these scenarios, a requesting party might instruct their verification software to ignore cryptographic key material expiration or determine if the cryptographic key material was expired at the time it was used.
Rotation is a management process that enables the secret cryptographic material associated with an existing verification method to be deactivated or destroyed once a new verification method has been added to the controller document. Going forward, any new proofs that a controller would have generated using the old secret cryptographic material can now instead be generated using the new cryptographic material and can be verified using the new verification method.
Rotation is a useful mechanism for protecting against verification method compromise, since frequent rotation of a verification method by the controller reduces the value of a single compromised verification method to an attacker. Performing revocation immediately after rotation is useful for verification methods that a controller designates for short-lived verifications, such as those involved in encrypting messages and authentication.
The following considerations might be of use when contemplating the use of verification method rotation:
Revocation is a management process that enables the secret cryptographic material associated with an existing verification method to be deactivated such that it ceases to be a valid form of creating new proofs.
Revocation is a useful mechanism for reacting to a verification method compromise. Performing revocation immediately after rotation is useful for verification methods that a controller designates for short-lived verifications, such as those involved in encrypting messages and authentication.
Compromise of the secrets associated with a verification method allows the attacker to use them according to the verification relationship expressed by the controller in the controller document, for example, for authentication. The attacker's use of the secrets might be indistinguishable from the legitimate controller's use, starting from the time the verification method was registered to the time it was revoked.
The following considerations might be of use when contemplating the use of verification method revocation:
Although verifiers might choose not to accept proofs or signatures from a revoked verification method, knowing whether a verification was made with a revoked verification method is trickier than it might seem. Some auditing systems provide the ability to look back at the state of an identifier at a point in time, or at a particular version of the controller document. When such a feature is combined with a reliable way to determine the time or identifier version that existed when a cryptographically verifiable statement was made, then revocation does not undo that statement. This can be the basis for using digital signatures to make binding commitments; for example, to sign a mortgage.
If these conditions are met, revocation is not retroactive; it only nullifies future use of the method.
However, in order for such semantics to be safe, the second condition — an ability to know what the state of the controller document was at the time the assertion was made — is expected to apply. Without that guarantee, someone could discover a revoked key and use it to make cryptographically verifiable statements with a simulated date in the past.
Some auditing systems only allow the retrieval of the current state of an identifier. When this is true, or when the state of an identifier at the time of a cryptographically verifiable statement cannot be reliably determined, the only safe course is to disallow any consideration of state with respect to time, except the present moment. Identifier ecosystems that take this approach essentially provide cryptographically verifiable statements as ephemeral tokens that can be invalidated at any time by the controller.
Multiformats enable self-describing data; if data is known to be a Multiformat, its exact type can be determined by reading a few compact header bytes that are expressed at the beginning of the data. Multibase, Multihash, and Multikey are types of Multiformats that are defined by this specification.
The Multiformats specifications exist because application developers appropriately choose different base-encoding functions, cryptographic hashing functions, and cryptographic key formats, among other things, based on different use cases and their requirements. No single base-encoding function, cryptographic hashing function, or cryptographic key format in the world has ever satisfied all requirement sets. Multiformats provides an alternative means by which to encode and/or detect any base-encoding, cryptographic hash, or cryptographic key format in self-documenting data and documents.
To increase interoperability, specification authors are urged to minimize the number of Multiformats — optimally, choosing only one — to be used for any particular application or ecosystem.
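As an illustration of this self-describing property, the first character of a Multibase-encoded value identifies the base encoding that was used; the following non-normative sketch recognizes only the two header values used elsewhere in this specification:
// Non-normative sketch: inspect the Multibase header character to determine
// which base encoding a value uses. Only two well-known headers are shown.
const MULTIBASE_HEADERS = {
  z: 'base-58-btc',
  u: 'base-64-url-no-pad'
};

function detectMultibaseEncoding(multibaseValue) {
  const encoding = MULTIBASE_HEADERS[multibaseValue[0]];
  if(!encoding) {
    throw new Error('Unrecognized or unsupported Multibase header.');
  }
  return encoding;
}

// 'z' indicates base-58-btc
console.log(detectMultibaseEncoding('zDnaerx9CtbPJ1q36T5Ln5wYt3MQYeGRG5ehnPAmxcf5mDZpv'));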
Encryption algorithms have been known to fail due to advances in cryptography and computing power. Implementers are advised to assume that any encrypted data placed in a controller document might eventually be made available in clear text to the same audience to which the encrypted data is available. This is particularly pertinent if the controller document is public.
Encrypting all or parts of a controller document is not an appropriate means to protect data in the long term. Similarly, placing encrypted data in a controller document is not an appropriate means to protect personal data.
Given the caveats above, if encrypted data is included in a controller document, implementers are advised to not associate any correlatable information that could be used to infer a relationship between the encrypted data and an associated party. Examples of correlatable information include public keys of a receiving party, identifiers to digital assets known to be under the control of a receiving party, or human readable descriptions of a receiving party.
Controller documents that include links to external machine-readable content such as images, web pages, or schemas are vulnerable to tampering. It is strongly advised that external links are integrity protected using mechanisms to secure related resources such as those described in the Verifiable Credentials Data Model v2.0 specification. External links are to be avoided if they cannot be integrity protected and the controller document's integrity is dependent on the external link.
One example of an external link where the integrity of the controller document itself could be affected is the JSON-LD Context [JSON-LD11], when present. To protect against compromise, controller document consumers using JSON-LD are advised to cache local static copies of JSON-LD contexts and/or verify the integrity of external contexts against a cryptographic hash that is known to be associated with a safe version of the external JSON-LD Context.
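For example, a consumer could verify the bytes of a cached or fetched JSON-LD context against a known-good digest before using it. The following is a minimal sketch that assumes a Node.js environment and that knownGoodDigest holds the expected SHA2-256 value as a hexadecimal string:
// Non-normative sketch: verify a JSON-LD context document against a
// known-good SHA2-256 digest before allowing it to be used.
import {createHash} from 'node:crypto';

function verifyContextIntegrity(contextBytes, knownGoodDigest) {
  const digest = createHash('sha256').update(contextBytes).digest('hex');
  if(digest !== knownGoodDigest) {
    throw new Error('JSON-LD context does not match the expected digest.');
  }
}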
As described in Section 2.1.2 Controllers, this specification includes a
mechanism by which to delegate change control of a controller document to
an entity that is described in an external controller document through the
use of the controller
property.
Delegating change control addresses a number of use cases including those where the care of an entity is the responsibility of some other entity or entities, as well as those where some entity desires that another entity provide account recovery services, among other use cases. In such scenarios, it can be beneficial to allow the guardian to manage the rotation of their own key material. It can also be beneficial for the delegator to associate a cryptographic hash of the remote controller document to "pin" the remote document to a known good value.
While this document does not specify a particular mechanism for
cryptographically protected URLs, the relatedResource
property in
Verifiable Credentials Data Model v2.0 and the digestMultibase
property in
Verifiable Credential Data Integrity 1.0 could be employed by a mechanism that can provide
such protection.
Additional information about the security context of authentication events is often required for compliance reasons, especially in regulated areas such as the financial and public sectors. This information is often referred to as a Level of Assurance (LOA). Examples include the protection of secret cryptographic material, the identity proofing process, and the form-factor of the authenticator.
Payment services (PSD 2) and eIDAS introduce such requirements to the security context. Level of assurance frameworks are classified and defined by regulations and standards such as eIDAS, NIST 800-63-3 and ISO/IEC 29115:2013, including their requirements for the security context, and making recommendations on how to achieve them. This might include strong user authentication where FIDO2/WebAuthn can fulfill the requirement.
Some regulated scenarios require the implementation of a specific level of assurance. Since verification relationships used to perform assertion and authentication might be used in some of these situations, information about the applied security context might need to be expressed and provided to a verifier. Whether and how to encode this information in the controller document data model is out of scope for this specification. Interested readers might note that 1) the information could be transmitted using Verifiable Credentials [VC-DATA-MODEL-2.0], and 2) the controller document data model can be extended to incorporate this information.
This section is non-normative.
Since controller documents are designed to be administered directly by the controller, it is critically important to apply the principles of Privacy by Design [PRIVACY-BY-DESIGN] to all aspects of the controller document. All seven of these principles have been applied throughout the development of this specification. The design used in this specification does not assume that there is a registrar, hosting company, or other intermediate service provider to recommend or apply additional privacy safeguards. Privacy in this specification is preventive, not remedial, and is an embedded default. The following sections cover privacy considerations that implementers might find useful when building systems that utilize controller documents.
If a controller document is about a specific individual and is public-facing, it is critical that controller documents contain no personal biometric or biographical data. While it is true that personal data might include pseudonymous information, such as a public cryptographic key or an IP address, publishing that sort of information does not create the same immediate privacy dangers as publishing an individual's full name, profile photo, or social media account in a controller document. A better alternative is to transmit such personal data through other means such as verifiable credentials [VC-DATA-MODEL-2.0] or other data formats sent over private and secure communication channels.
The Same-origin policy is a security and privacy concept that constrains information to the same Web domain by default. There are mechanisms, such as Web Authentication: An API for accessing Public Key Credentials Level 1, that extend this policy to cryptographic keys. When a cryptographic key is bound to a specific domain, it is sometimes referred to as a pairwise identifier.
The same-origin policy can be overridden for a variety of use cases, such as for Cross-origin resource sharing (CORS). This specification allows for the cross-origin resource sharing of verification methods and service endpoints, which means that correlatable identifiers might be shared between origins. While resource sharing can lead to positive security outcomes (reduced cryptographic key registration burden), it can also lead to negative privacy outcomes (tracking). Those that use this specification are warned that there are trade-offs with each approach and to use the mechanism that maximizes security and privacy according to the needs of the individual or organization. Using a controller document for all use cases is not always advantageous when a same-origin bound cryptographic key would suffice.
Identifiers can be used for unwanted correlation. Controllers can mitigate this privacy risk by using pairwise identifiers that are unique to each relationship or interaction domain; in effect, each identifier acts as a pseudonym. A pairwise identifier need only be shared with more than one party when correlation across contexts is explicitly desired. If pairwise identifiers are the default, then the only need to publish an identifier openly, or to share it with multiple parties, is when the controllers and/or subjects explicitly desire public identification and correlation across interaction domains.
The anti-correlation protections of pairwise identifiers are easily defeated if the data in the corresponding controller documents can be correlated. For example, using identical verification methods in multiple controller documents can provide as much correlation information as using the same identifier. Therefore, the controller document for a pairwise identifier also needs to use pairwise unique information, such as ensuring that verification methods are unique to the pairwise relationship.
It is dangerous to add properties to the controller document that can be used to indicate, explicitly or through inference, what type or nature of thing the subject is, particularly if the subject is a person.
Not only do such properties potentially result in personal data (see 6.1 Keep Personal Data Private) or correlatable data (see 6.3 Identifier Correlation Risks and 6.4 Controller Document Correlation Risks) being present in the controller document, but they can be used for grouping particular identifiers in such a way that they are included in or excluded from certain operations or functionalities.
Including type information in a controller document can result in personal privacy harms even for subjects that are non-person entities, such as IoT devices. The aggregation of such information around a controller could serve as a form of digital fingerprint and this is best avoided.
To minimize these risks, all properties in a controller document ought to be for expressing verification methods and verification relationships related to using the identifier.
The ability for a controller to optionally express at least one service in the controller document increases their control and agency. Each additional endpoint in the controller document adds privacy risk either due to correlation, such as across endpoint descriptions, or because the services are not protected by an authorization mechanism, or both.
Controller documents are often public and, since they are standardized, will be stored and indexed efficiently. This risk is increased if controller documents are published to immutable verifiable data registries. Access to a history of the controller documents referenced by a URL enables a form of traffic analysis made more efficient through the use of standards.
The degree of additional privacy risk caused by including multiple services in
one controller document can be difficult to estimate. Privacy harms are
typically unintended consequences. URLs can refer to documents, services,
schemas, and other things that might be associated with individual people,
households, clubs, and employers — and correlation of their services
could become a powerful surveillance and inference tool. An example of
this potential harm arises when country-level top-level domains, such as https://example.co.uk, are used to infer the approximate location of the subject with a greater degree of probability.
The following section describes accessibility considerations that developers implementing this specification are urged to consider in order to ensure that their software is usable by people with different cognitive, motor, and visual needs. As a general rule, this specification is used by system software and does not directly expose individuals to information subject to accessibility considerations. However, there are instances where individuals might be indirectly exposed to information expressed by this specification and thus the guidance below is provided for those situations.
This specification enables the expression of dates and times related to the validity period of proofs. This information might be indirectly exposed to an individual if a proof is processed and is detected to be outside an allowable time range. When exposing these dates and times to an individual, implementers are urged to take into account cultural norms and locales when representing dates and times in display software. In addition to these considerations, presenting time values in a way that eases the cognitive burden on the individual receiving the information is a suggested best practice.
For example, when conveying the expiration date for a particular set of digitally signed information, implementers are urged to present the time of expiration using language that is easier to understand rather than language that optimizes for accuracy. Presenting the expiration time as "This ticket expired three days ago." is preferred over a phrase such as "This ticket expired on July 25th 2023 at 3:43 PM." The former provides a relative time that is easier to comprehend than the latter time, which requires the individual to do the calculation in their head and presumes that they are capable of doing such a calculation.
This section is non-normative.
This section contains more detailed examples of the concepts introduced in the specification.
This section is non-normative.
This section contains various Multikey examples that might be useful for developers seeking test values.
{ "id": "https://multikey.example/issuer/123#key-0", "type": "Multikey", "controller": "https://multikey.example/issuer/123", "publicKeyMultibase": "zDnaerx9CtbPJ1q36T5Ln5wYt3MQYeGRG5ehnPAmxcf5mDZpv" }
{ "id": "https://multikey.example/issuer/123#key-0", "type": "Multikey", "controller": "https://multikey.example/issuer/123", "publicKeyMultibase": "z82LkvCwHNreneWpsgPEbV3gu1C6NFJEBg4srfJ5gdxEsMGRJ Uz2sG9FE42shbn2xkZJh54" }
{ "id": "https://multikey.example/issuer/123#key-0", "type": "Multikey", "controller": "https://multikey.example/issuer/123", "publicKeyMultibase": "z6Mkf5rGMoatrSj1f4CyvuHBeXJELe9RPdzo2PKGNCKVtZxP" }
{ "id": "https://multikey.example/issuer/123#key-0", "type": "Multikey", "controller": "https://multikey.example/issuer/123", "publicKeyMultibase": "zUC7EK3ZakmukHhuncwkbySmomv3FmrkmS36E4Ks5rsb6VQSRpoCrx6 Hb8e2Nk6UvJFSdyw9NK1scFXJp21gNNYFjVWNgaqyGnkyhtagagCpQb5B7tagJu3HDbjQ8h 5ypoHjwBb" }
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123", "verificationMethod": [{ "id": "https://multikey.example/issuer/123#key-1", "type": "Multikey", "controller": "https://multikey.example/issuer/123", "publicKeyMultibase": "zDnaerx9CtbPJ1q36T5Ln5wYt3MQYeGRG5ehnPAmxcf5mDZpv" }, { "id": "https://multikey.example/issuer/123#key-2", "type": "Multikey", "controller": "https://multikey.example/issuer/123", "publicKeyMultibase": "z6Mkf5rGMoatrSj1f4CyvuHBeXJELe9RPdzo2PKGNCKVtZxP" }, { "id": "https://multikey.example/issuer/123#key-3", "type": "Multikey", "controller": "https://multikey.example/issuer/123", "publicKeyMultibase": "zUC7EK3ZakmukHhuncwkbySmomv3FmrkmS36E4Ks5rsb6VQSRpoCrx6 Hb8e2Nk6UvJFSdyw9NK1scFXJp21gNNYFjVWNgaqyGnkyhtagagCpQb5B7tagJu3HDbjQ8h 5ypoHjwBb" }], "authentication": [ "https://controller.example/123#key-1" ], "assertionMethod": [ "https://controller.example/123#key-2" "https://controller.example/123#key-3" ], "capabilityDelegation": [ "https://controller.example/123#key-2" ], "capabilityInvocation": [ "https://controller.example/123#key-2" ] }
This section is non-normative.
This section contains various JsonWebKey examples that might be useful for developers seeking test values.
{ "id": "https://jsonwebkey.example/issuer/123#key-0", "type": "JsonWebKey", "controller": "https://jsonwebkey.example/issuer/123", "publicKeyJwk": { "kty": "EC", "crv": "P-256", "x": "Ums5WVgwRkRTVVFnU3k5c2xvZllMbEcwM3NPRW91ZzN", "y": "nDQW6XZ7b_u2Sy9slofYLlG03sOEoug3I0aAPQ0exs4" } }
{ "id": "https://jsonwebkey.example/issuer/123#key-0", "type": "JsonWebKey", "controller": "https://jsonwebkey.example/issuer/123", "publicKeyJwk": { "kty": "EC", "crv": "P-384", "x": "VUZKSlUwMGdpSXplekRwODhzX2N4U1BYdHVYWUZsaXVDR25kZ1U0UXA4bDkxeHpE", "y": "jq4QoAHKiIzezDp88s_cxSPXtuXYFliuCGndgU4Qp8l91xzD1spCmFIzQgVjqvcP" } }
{ "id": "https://jsonwebkey.example/issuer/123#key-0", "type": "JsonWebKey", "controller": "https://jsonwebkey.example/issuer/123", "publicKeyJwk": { "kty": "OKP", "crv": "Ed25519", "x": "VCpo2LMLhn6iWku8MKvSLg2ZAoC-nlOyPVQaO3FxVeQ" } }
{ "id": "https://jsonwebkey.example/issuer/123#key-0", "type": "JsonWebKey", "controller": "https://jsonwebkey.example/issuer/123", "publicKeyJwk": { "kty": "EC", "crv": "BLS12381G2", "x": "Ajs8lstTgoTgXMF6QXdyh3m8k2ixxURGYLMaYylVK_x0F8HhE8zk0YWiGV3CHwpQEa2sH4PBZLaYCn8se-1clmCORDsKxbbw3Js_Alu4OmkV9gmbJsy1YF2rt7Vxzs6S", "y": "BVkkrVEib-P_FMPHNtqxJymP3pV-H8fCdvPkoWInpFfM9tViyqD8JAmwDf64zU2hBV_vvCQ632ScAooEExXuz1IeQH9D2o-uY_dAjZ37YHuRMEyzh8Tq-90JHQvicOqx" } }
{ "@context": "https://www.w3.org/ns/controller/v1", "id": "https://controller.example/123", "verificationMethod": [{ "id": "https://jsonwebkey.example/issuer/123#key-1", "type": "JsonWebKey", "controller": "https://jsonwebkey.example/issuer/123", "publicKeyJwk": { "kty": "EC", "crv": "P-256", "x": "fyNYMN0976ci7xqiSdag3buk-ZCwgXU4kz9XNkBlNUI", "y": "hW2ojTNfH7Jbi8--CJUo3OCbH3y5n91g-IMA9MLMbTU" } }, { "id": "https://jsonwebkey.example/issuer/123#key-2", "type": "JsonWebKey", "controller": "https://jsonwebkey.example/issuer/123", "publicKeyJwk": { "kty": "EC", "crv": "P-521", "x": "ASUHPMyichQ0QbHZ9ofNx_l4y7luncn5feKLo3OpJ2nSbZoC7mffolj5uy7s6KSKXFmnNWxGJ42IOrjZ47qqwqyS", "y": "AW9ziIC4ZQQVSNmLlp59yYKrjRY0_VqO-GOIYQ9tYpPraBKUloEId6cI_vynCzlZWZtWpgOM3HPhYEgawQ703RjC" } }, { "id": "https://jsonwebkey.example/issuer/123#key-3", "type": "JsonWebKey", "controller": "https://jsonwebkey.example/issuer/123", "publicKeyJwk": { "kty": "OKP", "crv": "Ed25519", "x": "_eT7oDCtAC98L31MMx9J0T-w7HR-zuvsY08f9MvKne8" } }], "authentication": [ "https://controller.example/123#key-1" ], "assertionMethod": [ "https://controller.example/123#key-2" "https://controller.example/123#key-3" ], "capabilityDelegation": [ "https://controller.example/123#key-2" ], "capabilityInvocation": [ "https://controller.example/123#key-2" ] }
This section is non-normative.
This section contains the substantive changes that have been made to this specification over time.
This section is non-normative.
The specification authors would like to thank the contributors to the W3C Decentralized Identifiers (DIDs) v1.0 specification upon which this work is based.
The Working Group gratefully acknowledges the work that led to the creation of this specification, and extends sincere appreciation to those individuals that worked on technologies and specifications that deeply influenced our work. In particular, this includes the work of Phil Zimmerman, Jon Callas, Lutz Donnerhacke, Hal Finney, David Shaw, and Rodney Thayer on Pretty Good Privacy (PGP) in the 1990s and 2000s.
In the mid-2010s, preliminary implementations of what would become Decentralized Identifiers were built in collaboration with Jeremie Miller's Telehash project and the W3C Web Payments Community Group's work led by Dave Longley and Manu Sporny. Around a year later, the XDI.org Registry Working Group began exploring decentralized technologies for replacing its existing identifier registry. Some of the first written papers exploring the concept of Decentralized Identifiers can be traced back to the first several Rebooting the Web of Trust workshops convened by Christopher Allen. That work led to a key collaboration between Christopher Allen, Drummond Reed, Les Chasen, Manu Sporny, and Anil John. Anil saw promise in the technology and allocated the initial set of government funding to explore the space. Without the support of Anil John and his guidance through the years, it is unlikely that Decentralized Identifiers would be where they are today. Further refinement at the Rebooting the Web of Trust workshops led to the first implementers documentation, edited by Drummond Reed, Les Chasen, Christopher Allen, and Ryan Grant. Contributors included Manu Sporny, Dave Longley, Jason Law, Daniel Hardman, Markus Sabadello, Christian Lundkvist, and Jonathan Endersby. This initial work was then merged into the W3C Credentials Community Group, incubated further, and then transitioned to the W3C Decentralized Identifiers Working Group for global standardization. That work was then used as the basis for this, more generalized and less decentralized, specification.
Portions of the work on this specification have been funded by the United States Department of Homeland Security's (US DHS) Science and Technology Directorate under contracts HSHQDC-16-R00012-H-SB2016-1-002, and HSHQDC-17-C-00019, as well as the US DHS Silicon Valley Innovation Program under contracts 70RSAT20T00000003, 70RSAT20T00000010/P00001, 70RSAT20T00000029, 70RSAT20T00000030, 70RSAT20T00000033, 70RSAT20T00000045, 70RSAT21T00000016/P00001, 70RSAT23T00000005, 70RSAT23C00000030, and 70RSAT23R00000006. The content of this specification does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.
Portions of the work on this specification have also been funded by the European Union's StandICT.eu program under sub-grantee contract number CALL05/19. The content of this specification does not necessarily reflect the position or the policy of the European Union and no official endorsement should be inferred.
We would also like to thank the base-x software library contributors and the Bitcoin Core developers who wrote the original code, shared under an MIT License, found in Section 3.1 Base Encode and Section 3.2 Base Decode.
Work on this specification has also been supported by the Rebooting the Web of Trust community facilitated by Christopher Allen, Shannon Appelcline, Kiara Robles, Brian Weller, Betty Dhamers, Kaliya Young, Kim Hamilton Duffy, Manu Sporny, Drummond Reed, Joe Andrieu, Heather Vescent, Samantha Chase, Andrew Hughes, Erica Connell, Shigeya Suzuki, and Zaïda Rivai. Development of this specification has also been supported by the W3C Credentials Community Group, which has been Chaired by Kim Hamilton Duffy, Joe Andrieu, Christopher Allen, Heather Vescent, and Wayne Chang. The participants in the Internet Identity Workshop, facilitated by Phil Windley, Kaliya Young, Doc Searls, and Heidi Nobantu Saul, also supported this work through numerous working sessions designed to debate, improve, and educate participants about this specification.
The Working Group thanks the following individuals for their contributions to
this specification (in alphabetical order, Github handles start with @
and
are sorted as last names): Denis Ah-Kang, Nacho Alamillo, Christopher Allen, Joe
Andrieu, Antonio, Phil Archer, George Aristy, Baha, Juan Benet, BigBlueHat, Dan
Bolser, Chris Boscolo, Pelle Braendgaard, Daniel Buchner, Daniel Burnett, Juan
Caballero, @cabo, Tim Cappalli, Melvin Carvalho, David Chadwick, Wayne Chang,
Sam Curren, Hai Dang, Tim Daubenschütz, Oskar van Deventer, Kim Hamilton Duffy,
Arnaud Durand, Ken Ebert, Veikko Eeva, @ewagner70, Carson Farmer, Nikos Fotiou,
Gabe, Gayan, @gimly-jack, @gjgd, Ryan Grant, Peter Grassberger, Adrian Gropper,
Amy Guy, Daniel Hardman, Kyle Den Hartog, Philippe Le Hegaret, Ivan Herman,
Michael Herman, Alen Horvat, Dave Huseby, Marcel Jackisch, Mike Jones, Andrew
Jones, Tom Jones, jonnycrunch, Gregg Kellogg, Michael Klein, @kdenhartog-sybil1,
Paul Knowles, @ktobich, David I. Lehn, Charles E. Lehner, Michael Lodder,
@mooreT1881, Dave Longley, Tobias Looker, Wolf McNally, Robert Mitwicki, Mircea
Nistor, Grant Noble, Mark Nottingham, @oare, Darrell O'Donnell, Vinod Panicker,
Dirk Porsche, Praveen, Mike Prorock, @pukkamustard, Drummond Reed, Julian
Reschke, Yancy Ribbens, Justin Richer, Rieks, @rknobloch, Mikeal Rogers,
Evstifeev Roman, Troy Ronda, Leonard Rosenthol, Michael Ruminer, Markus
Sabadello, Cihan Saglam, Samu, Rob Sanderson, Wendy Seltzer, Mehran Shakeri,
Jaehoon (Ace) Shim, Samuel Smith, James M Snell, SondreB, Manu Sporny, @ssstolk,
Orie Steele, Shigeya Suzuki, Sammotic Switchyarn, @tahpot, Oliver Terbu, Ted
Thibodeau Jr., Joel Thorstensson, Tralcan, Henry Tsai, Rod Vagg, Mike Varley,
Kaliya "Identity Woman" Young, Eric Welton, Fuqiao Xue, @Yue, Dmitri Zagidulin,
@zhanb, and Brent Zundel.