§ Sidetree v1.0.1

Specification Status: DIF Ratified Specification

Latest published version: identity.foundation/sidetree/spec

Editors:
Daniel Buchner (Microsoft)
Orie Steele (Transmute)
Troy Ronda (SecureKey)
Contributors:
Henry Tsai (Microsoft)
Mudassir Ali (Microsoft)
Guillaume Dardelet (Transmute)
Isaac Chen (Microsoft)
Christian Lundkvist (Consensys)
Kyle Den Hartog (Mattr)
Tobias Looker (Mattr)
Participate:
GitHub repo
File a bug
Commit history

Sidetree REST API specification: identity.foundation/sidetree/api


§ Abstract

Sidetree is a protocol for creating scalable Decentralized Identifier networks that can run atop any existing decentralized anchoring system (e.g. Bitcoin, Ethereum, distributed ledgers, witness-based approaches) and be as open, public, and permissionless as the underlying anchoring systems they utilize. The protocol allows users to create globally unique, user-controlled identifiers and manage their associated PKI metadata, all without the need for centralized authorities or trusted third parties. The syntax of the identifier and accompanying data model used by the protocol is conformant with the W3C Decentralized Identifiers specification. Implementations of the protocol can be codified as their own distinct DID Methods and registered in the W3C DID Method Registry.

§ Introduction

This section is non-normative.

Decentralized ledgers (e.g. Bitcoin) introduced the first-ever solution to the chronological oracle problem, which unlocked the ability to create robust decentralized identifier networks. However, current approaches that utilize event anchoring systems to create decentralized identifier networks suffer from severely limited transactional volumes and other performance issues. Sidetree is a ‘Layer 2’ protocol that can be implemented atop any form of event anchoring system to enable scalable W3C Decentralized Identifier (DID) implementations that can be fully open, public, and permissionless. Sidetree is able to do all this without requiring trusted intermediaries, centralized authorities, special protocol tokens, or secondary consensus mechanisms, while preserving the core attributes of decentralization and immutability of the underlying anchoring systems it is implemented on.

Architecturally, Sidetree-based DID Method implementations are overlay networks composed of independent peer nodes (Sidetree nodes) that interact with an underlying decentralized anchoring system (as illustrated under Network Topology) to write, observe, and process replicated DID PKI state operations using deterministic protocol rules that produce an eventually strongly consistent view of all DIDs in the network. The Sidetree protocol defines a core set of DID PKI state change operations, structured as delta-based Conflict-Free Replicated Data Types (i.e. Create, Update, Recover, or Deactivate), that mutate a Decentralized Identifier’s DID Document state. Sidetree nodes that participate in writing operations into the overlay network do so by anchoring Content-Addressable Storage (CAS) (e.g. IPFS) references to aggregated bundles of operations in an underlying anchoring system. The anchoring system acts as a linear chronological sequencing oracle, which the protocol leverages to order DID PKI operations in an immutable history all observing nodes can replay and validate. It is this ability to replay the precise sequence of DID PKI state change events, and process those events using a common set of deterministic rules, that allows Sidetree nodes to achieve a consistent view of DIDs and their DID Document states, without requiring any additional consensus mechanism.

§ Terminology

Anchoring System: A decentralized sequencing oracle (e.g. Bitcoin, Ethereum, distributed ledgers, witness-based approaches) that can be used to determine the order of PKI state transformations for Decentralized Identifiers (DIDs), which can be deterministically verified to derive the current PKI state of DIDs.
Witness System: Synonym for Anchoring System, see above.
Core Index File: JSON Document containing proving and index data for Create, Recovery, and Deactivate operations, and a CAS URI for the associated Provisional Index File. This file is anchored to the target anchoring system.
Provisional Index File: JSON Document containing Update operation proving and index data, as well as CAS URIs for Chunk File chunks.
Core Proof File: JSON Document containing the cryptographic proofs for Recovery and Deactivate operations, which form the persistent backbone of DID PKI lineages.
Provisional Proof File: JSON Document containing the cryptographic proofs for Update operations, which can be pruned via decentralized checkpointing mechanisms (this mechanism will arrive in future versions of the Sidetree protocol).
Chunk File: JSON Document containing all verbose operation data for the corresponding set of DIDs specified in the related Provisional Index File.
CAS: Content-addressable storage protocol/network (e.g. IPFS).
CAS URI: The unique content-bound identifier used to locate a resource via the CAS protocol/network (e.g. IPFS).
Commit Value: A chosen value that is used with a commitment scheme.
Commitment: The output of a commitment scheme.
Commitment Scheme: A cryptographic primitive that allows one to commit to a chosen value, known as the commit value, resulting in the generation of a commitment. A commitment can then be shared without revealing the commit value, forming a proof of commitment; the possessor of the commit value can later reveal it to prove the original commitment.
DID Document: JSON Document containing public key references, service endpoints, and other PKI metadata that corresponds to a given DID (as defined in the W3C DID Specification). This is the most common form of DID state used in Sidetree implementations.
DID Suffix: The unique identifier string within a DID URI, e.g. the unique suffix of did:sidetree:123 is 123.
DID Suffix Data: Data required to deterministically generate a DID.
Multihash: Protocol for differentiating outputs from common cryptographic hash functions, addressing size and encoding considerations: https://multiformats.io/multihash/
DID Operation: Set of delta-based CRDT patches that modify a DID’s state data when applied.
Operation Request: JWS formatted request sent to a Sidetree Node to include a DID Operation in a batch of operations.
Update Key Pair: A cryptographic key used to produce an Update Request JWS. The public key representation MUST be used to produce the Update Request commitment.
Recovery Key Pair: A cryptographic key used to produce an Operation Request of type Recover or Deactivate. The public key representation MUST be used to produce the Operation Request commitment.
Public Key Commitment: The resulting commitment obtained by applying the defined commitment scheme to a public key.
Recovery Commitment: The resulting commitment obtained by applying the defined commitment scheme to the public key of a recovery key pair.
Sidetree Node: Executable code that implements all the required components, functionality, and rules specified in the Sidetree protocol specification.
Transaction: Anchoring System transaction that anchors a set of Sidetree operations, via a CAS URI for an associated Core Index File.
Anchor String: The string anchored to the anchoring system, composed of the CAS URI to the Core Index File, prefixed with the declared operation count.
Anchor Time: The logical order of operations, as determined by the underlying anchoring system (e.g. Bitcoin block and transaction order). Anchoring systems may widely vary in how they determine the logical order of operations, but the only requirement of an anchoring system is that it can provide a means to deterministically order each operation within a DID’s operational lineage.
Transaction Number: A monotonically increasing number deterministically ordered and assigned to every transaction relative to its position in Anchor Time.
Light Node: A node that downloads and processes only Core Index Files and Provisional Index Files on a proactive basis, waiting until resolution time to download and process the Chunk File related to a given DID. This type of configuration enables a node to operate trustlessly while consuming approximately one order of magnitude less storage.

§ Protocol Versioning

The rules and parameters of the Sidetree protocol MAY change in the future, resulting in new versions of the specification. The Sidetree specification and reference implementation follow SemVer 2.0.

Versions of the specification can be found on the Decentralized Identity Foundation’s website at the following version-based paths:

Latest Draft

https://identity.foundation/sidetree/spec/

Specific Versions

https://identity.foundation/sidetree/spec/v<major>.<minor>.<patch>/

Versions of the Sidetree reference implementation are also provided as npm modules and GitHub releases:

{
  "name": "@decentralized-identity/sidetree",
  "version": "<major>.<minor>.<patch>",
  ...
}

§ Version Segment Definitions

§ New Version Activation

New versions of the protocol, or modifications to parameter values by implementers, MUST be activated at a specified Anchor Time so all nodes can remain in sync by enforcing the same parameter configuration and protocol rules at the same logical starting point. All transactions that occur after the specified Anchor Time will adhere to the associated version’s rules and parameters until a newer version of the protocol is defined and implemented at a future Anchor Time.

§ Default Parameters

Each version of the protocol will define a set of protocol rules and parameters with default suggested values. The following are the parameters used by this version of the Sidetree protocol; implementers MAY choose values other than the defaults listed below:

HASH_ALGORITHM: Algorithm for generating hashes of protocol-related values. Default: SHA256
HASH_PROTOCOL: Protocol for generating hash representations in Sidetree implementations, using the HASH_ALGORITHM. Default: Multihash
DATA_ENCODING_SCHEME: Encoding selected for various data (JSON, hashes, etc.) used within an implementation, the output of which MUST be in ASCII format. Default: Base64URL
JSON_CANONICALIZATION_SCHEME: The scheme selected for canonicalizing JSON structures used throughout the specification. Default: JCS
KEY_ALGORITHM: Asymmetric public key algorithm for signing DID operations. Must be a valid JWK crv. Default: secp256k1
SIGNATURE_ALGORITHM: Asymmetric public key signature algorithm. Must be a valid JWS alg. Default: ES256K
CAS_PROTOCOL: The CAS network protocol used within an implementation. Default: IPFS
CAS_URI_ALGORITHM: Algorithm for generating unique content-bound identifiers for the implementation-selected CAS protocol. Default: IPFS CID
COMPRESSION_ALGORITHM: File compression algorithm. Default: GZIP
REVEAL_VALUE: Cryptographic hash of the commitment value. Default: SHA256 Multihash (0x12)
GENESIS_TIME: The point in the target anchoring system’s transaction history at which the Sidetree implementation is first activated (e.g. block number in a blockchain). Default: 630000
MAX_CORE_INDEX_FILE_SIZE: Maximum compressed Core Index File size. Default: 1 MB (zipped)
MAX_PROVISIONAL_INDEX_FILE_SIZE: Maximum compressed Provisional Index File size. Default: 1 MB (zipped)
MAX_PROOF_FILE_SIZE: Maximum compressed Proof File size. Default: 2.5 MB (zipped)
MAX_CHUNK_FILE_SIZE: Maximum compressed Chunk File size. Default: 10 MB
MAX_MEMORY_DECOMPRESSION_FACTOR: Maximum size after decompression. Default: 3x file size
MAX_CAS_URI_LENGTH: Maximum length of CAS URIs. Default: 100 bytes
MAX_DELTA_SIZE: Maximum canonicalized operation delta buffer size. Default: 1,000 bytes
MAX_OPERATION_COUNT: Maximum number of operations per batch. Default: 10,000 ops
MAX_OPERATION_HASH_LENGTH: Maximum length of all hashes in CAS URI files. Default: 100 bytes
NONCE_SIZE: The number of bytes (octets) in nonce values. Default: 16 bytes

§ Common Functions

The following is a list of functional procedures that are commonly used across the protocol. These functions are defined once here and referenced throughout the specification, wherever an implementer must invoke them to comply with normative processes.

§ Hashing Process

All data hashed within the bounds of the protocol follows the same procedural steps and yields a consistently encoded output. Given a data value, the following steps are used to generate a hashed output:

  1. Generate a hash of the data value using the HASH_PROTOCOL with the HASH_ALGORITHM.
  2. Encode the resulting output using the DATA_ENCODING_SCHEME.
  3. Return the encoded hashing output.

Pseudo-code example using current protocol defaults:

let HashingOutput = Base64URL( Multihash(DATA, 0x12) );
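The following TypeScript sketch illustrates the same procedure under the current defaults (SHA-256 Multihash, Base64URL), assuming a Node.js environment; it is illustrative only, not the reference implementation:

import { createHash } from "crypto";

// Illustrative sketch of the Hashing Process, assuming the default parameters.
function hashingOutput(data: Buffer): string {
  const digest = createHash("sha256").update(data).digest();
  // Multihash framing: 0x12 = sha2-256 code, 0x20 = digest length (32 bytes)
  const multihash = Buffer.concat([Buffer.from([0x12, 0x20]), digest]);
  return multihash.toString("base64url");
}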

§ Commitment Schemes

Commitment schemes are used by the Sidetree protocol to preserve the integrity of operations and to enable recovery.

§ Public Key Commitment Scheme

The following steps define the commitment scheme for generating a public key commitment from a public key.

  1. Encode the public key into the form of a valid JWK.
  2. Canonicalize the JWK encoded public key using the implementation’s JSON_CANONICALIZATION_SCHEME.
  3. Use the implementation’s HASH_PROTOCOL to hash the canonicalized public key to generate the REVEAL_VALUE, then hash the resulting hash value again using the implementation’s HASH_PROTOCOL to produce the public key commitment.

For maximum forward cryptographic security, implementers SHOULD NOT re-use public keys across different commitment invocations. Implementers MUST NOT re-use public key JWK payloads across different commitment invocations.
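As an illustration of the scheme above, the following TypeScript sketch derives both the reveal value and the public key commitment from a public key JWK, assuming the default parameters (JCS canonicalization via the npm canonicalize package, SHA-256 Multihash, Base64URL); the helper names are hypothetical:

import { createHash } from "crypto";
import canonicalize from "canonicalize"; // JCS (RFC 8785) canonicalization

// Hash a buffer with SHA-256 and apply Multihash framing (assumed defaults).
function sha256Multihash(data: Buffer): Buffer {
  const digest = createHash("sha256").update(data).digest();
  return Buffer.concat([Buffer.from([0x12, 0x20]), digest]);
}

function publicKeyCommitment(publicKeyJwk: object): { revealValue: string; commitment: string } {
  const canonical = Buffer.from(canonicalize(publicKeyJwk) as string);
  const revealValue = sha256Multihash(canonical);   // hash of the canonicalized JWK
  const commitment = sha256Multihash(revealValue);  // hash of the reveal value
  return {
    revealValue: revealValue.toString("base64url"),
    commitment: commitment.toString("base64url"),
  };
}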

§ JWK Nonce

Implementers MAY define the nonce property in the public key JWK payload. The nonce property enables the re-use of public keys across commitments without re-using the public key JWK payloads. If the nonce property is defined by the implementer, the DID Owner MAY populate the nonce property in the public key JWK payload. If the nonce property is populated, the value of the nonce property MUST be of size NONCE_SIZE and MUST be encoded using Base64URL encoding.

§ Network Topology

The figure below illustrates the three primary components of a Sidetree-based DID overlay network:

  1. The underlying anchoring system that serves as the global anchoring and linear sequencing system for DID operations.
  2. The Sidetree nodes themselves, which interact with the anchoring system to anchor operations, fetch and replicate data from the CAS network, and process operations in accordance with the protocol’s deterministic ruleset.
  3. An integrated Content-Addressable Storage (CAS) network layer that Sidetree nodes use to distribute and replicate DID operation files.

§ File Structures

The protocol defines the following file structures, which house DID operation data and are designed to support key functionality to enable light node configurations, minimize permanently retained data, and ensure performant resolution of DIDs.

§ Core Index File

Core Index Files contain Create, Recover, and Deactivate operation values, as well as a CAS URI for the related Provisional Index File (detailed below). As the name suggests, Core Index Files are anchored to the target anchoring system by embedding a CAS URI in the anchoring system’s transactional history.

EXAMPLE
{
  "coreProofFileUri": CAS_URI,
  "provisionalIndexFileUri": CAS_URI,
  "writerLockId": OPTIONAL_LOCKING_VALUE,
  "operations": {
    "create": [
      {
        "suffixData": {
          "type": TYPE_STRING,
          "deltaHash": DELTA_HASH,
          "recoveryCommitment": COMMITMENT_HASH
        }
      },
      {...}
    ],
    "recover": [
      {
        "didSuffix": SUFFIX_STRING,
        "revealValue": MULTIHASH_OF_JWK
      },
      {...}
    ],
    "deactivate": [
      {
        "didSuffix": SUFFIX_STRING,
        "revealValue": MULTIHASH_OF_JWK
      },
      {...}
    ]
  }
}

A valid Core Index File is a JSON document that MUST NOT exceed the MAX_CORE_INDEX_FILE_SIZE. Any unknown properties in this file not defined by this specification or specifically permitted by the implementer, MUST result in an invalidation of the entire file.

The Core Index File JSON document is composed as follows:

  1. The Core Index File MUST contain a provisionalIndexFileUri property if the batch of transactions being anchored contains any Create, Recovery, or Update operations, and its value MUST be a CAS URI for the related Provisional Index File. If the batch of transactions being anchored is composed only of Deactivate operations, the provisionalIndexFileUri property MUST NOT be present.
  2. The Core Index File MUST contain a coreProofFileUri property if the batch of transactions being anchored contains any Recovery or Deactivate operations, and its value MUST be a CAS URI for the related Core Proof File.
  3. The Core Index File MAY contain a writerLockId if the implementation chooses to implement a mechanism that requires embedded anchoring information, and if present, its value MUST comply with the specifications of the implementation.
  4. If the set of operations to be anchored contains any Create, Recover, or Deactivate operations, the Core Index File MUST contain an operations property, and its value MUST be an object, composed as follows:
    • If there are any Create operations to be included in the Core Index File:
      1. The operations object MUST include a create property, and its value MUST be an array.
      2. For each Create operation to be included in the create array, herein referred to as Core Index File Create Entries, use the following process to compose and include a JSON object for each entry:
      3. The Core Index File MUST NOT include multiple Create operations that produce the same DID Suffix.
    • If there are any Recovery operations to be included in the Core Index File:
      1. The operations object MUST include a recover property, and its value MUST be an array.
      2. For each Recovery operation to be included in the recover array, herein referred to as Core Index File Recovery Entries, use the following process to compose and include entries:
        • The object MUST contain a didSuffix property, and its value MUST be the DID Suffix of the DID the operation pertains to. A Core Index File MUST NOT contain more than one operation of any type with the same DID Suffix.
        • The object MUST contain a revealValue property, and its value MUST be the REVEAL_VALUE of the last update commitment.
    • If there are any Deactivate operations to be included in the Core Index File:
      1. The operations object MUST include a deactivate property, and its value MUST be an array.
      2. For each Deactivate operation to be included in the deactivate array, use the following process to compose and include entries:
        • The object MUST contain a didSuffix property, and its value MUST be the DID Suffix of the DID the operation pertains to. A Core Index File MUST NOT contain more than one operation of any type with the same DID Suffix.
        • The object MUST contain a revealValue property, and its value MUST be the REVEAL_VALUE of the last update commitment.

§ Provisional Index File

Provisional Index Files contain Update operation proving data, as well as CAS URI links to Chunk Files.

EXAMPLE
{
  "provisionalProofFileUri": CAS_URI,
  "chunks": [
    { "chunkFileUri": CAS_URI },
    {...}
  ],
  "operations": {
    "update": [
      {
        "didSuffix": SUFFIX_STRING,
        "revealValue": MULTIHASH_OF_JWK
      },
      {...}
    ]
  }
}

A valid Provisional Index File is a JSON document that MUST NOT exceed the MAX_PROVISIONAL_INDEX_FILE_SIZE. Any unknown properties in this file not defined by this specification or specifically permitted by the implementer, MUST result in an invalidation of the entire file.

The Provisional Index File JSON document is composed as follows:

  1. The Provisional Index File MUST contain a provisionalProofFileUri property if the batch of transactions being anchored contains any Update operations, and its value MUST be a CAS URI for the related Provisional Proof File.
  2. The Provisional Index File MUST contain a chunks property, and its value MUST be an array of Chunk Entries for the related delta data for a given chunk of operations in the batch. Future versions of the protocol will specify a process for separating the operations in a batch into multiple Chunk Entries, but for this version of the protocol there MUST be only one Chunk Entry present in the array. Chunk Entry objects are composed as follows:
    1. The Chunk Entry object MUST contain a chunkFileUri property, and its value MUST be a URI representing the corresponding CAS file entry, generated via the CAS_URI_ALGORITHM.
  3. If there are any operation entries to be included in the Provisional Index File (currently only Update operations), the Provisional Index File MUST include an operations property, and its value MUST be an object composed as follows:
    • If there are any Update entries to be included:
      1. The operations object MUST include an update property, and its value MUST be an array.
      2. For each Update operation to be included in the update array, herein referred to as Provisional Index File Update Entries, use the following process to compose and include entries:
        • The object MUST contain a didSuffix property, and its value MUST be the DID Suffix of the DID the operation pertains to, with a maximum length as specified by the MAX_OPERATION_HASH_LENGTH.
        • The object MUST contain a revealValue property, and its value MUST be the REVEAL_VALUE of the last update commitment, with a maximum length as specified by the MAX_OPERATION_HASH_LENGTH.

§ Core Proof File

Core Proof Files are compressed JSON Documents containing the cryptographic proofs (signatures, hashes, etc.) that form the signature-chained backbone for the state lineages of all DIDs in the system. The cryptographic proofs present in Core Proof Files also link a given operation to its verbose state data, which resides in a related Chunk File.

EXAMPLE
{
  "operations": {
    "recover": [
      {
        "signedData": {
          "protected": {...},
          "payload": {
            "recoveryCommitment": COMMITMENT_HASH,
            "recoveryKey": JWK_OBJECT,
            "deltaHash": DELTA_HASH
          },
          "signature": SIGNATURE_STRING
        }
      },
      {...}
    ],
    "deactivate": [
      {
        "signedData": {
          "protected": {...},
          "payload": {
            "didSuffix": SUFFIX_STRING,
            "recoveryKey": JWK_OBJECT
          },
          "signature": SIGNATURE_STRING
        }
      },
      {...}
    ]
  }
}

Any unknown properties in this file not defined by this specification or specifically permitted by the implementer, MUST result in an invalidation of the entire file.

In this version of the protocol, Core Proof Files are constructed as follows:

  1. The Core Proof File MUST include an operations property, and its value MUST be an object containing cryptographic proof entries for any Recovery and Deactivate operations to be included in a batch. Include the Proof Entries as follows:

§ Provisional Proof File

Provisional Proof Files are compressed JSON Documents containing the cryptographic proofs (signatures, hashes, etc.) for all the (eventually) prunable DID operations in the system. The cryptographic proofs present in Provisional Proof Files also link a given operation to its verbose state data, which resides in a related Chunk File.

EXAMPLE
{
  "operations": {
    "update": [
      {
        "signedData": {
          "protected": {...},
          "payload": {
            "updateKey": JWK_OBJECT,
            "deltaHash": DELTA_HASH
          },
          "signature": SIGNATURE_STRING
        }
      },
      {...}
    ]
  }
}

Any unknown properties in this file not defined by this specification or specifically permitted by the implementer, MUST result in an invalidation of the entire file.

In this version of the protocol, Provisional Proof Files are constructed as follows:

  1. The Provisional Proof File MUST include an operations property, and its value MUST be an object containing cryptographic proof entries for any Update operations to be included in a batch. Include the Proof Entries as follows:

§ Chunk Files

Chunk Files are JSON Documents, compressed via the COMPRESSION_ALGORITHM, that contain Sidetree Operation source data, composed of delta-based CRDT entries that modify a Sidetree identifier’s DID state.

For this version of the protocol, there will only exist a single Chunk File that contains all the state modifying data for all operations in the included set. Future versions of the protocol will separate the total set of included operations into multiple chunks, each with their own Chunk File.

EXAMPLE
{
  "deltas": [
       
    {
      "patches": PATCH_ARRAY,
      "updateCommitment": COMMITMENT_HASH
    },
    ...
  ]
}

Any unknown properties in this file not defined by this specification or specifically permitted by the implementer, MUST result in an invalidation of the entire file.

In this version of the protocol, Chunk Files are constructed as follows:

  1. The Chunk File MUST include a deltas property, and its value MUST be an array containing Chunk File Delta Entry objects.

  2. Each Chunk File Delta Entry MUST be a JSON object serialized via the JSON_CANONICALIZATION_SCHEME, assembled as follows:

    1. The object MUST contain a patches property, and its value MUST be an array of DID State Patches.
    2. The object MUST contain an updateCommitment property, and its value MUST be the next Update Commitment generated during the operation process associated with the type of operation being performed.
  3. Each Chunk File Delta Entry MUST be appended to the deltas array as follows, in this order:

    1. If any Create operations were present in the associated Core Index File, append all Create Operation Delta Objects in the same index order as their matching Core Index File Create Entry.
    2. If any Recovery operations were present in the associated Core Index File, append all Recovery Operation Delta Objects in the same index order as their matching Core Index File Recovery Entry.
    3. If any Update operations were present in the associated Provisional Index File, append all Update Operation Delta Objects in the same index order as their matching Provisional Index File Update Entry.
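The ordering rule above can be illustrated with a short TypeScript sketch, where the three input arrays are assumed to already hold canonicalized Chunk File Delta Entry objects in their index-file order; the types are simplified stand-ins:

// Illustrative sketch of the delta ordering rule described above.
interface DeltaEntry { patches: object[]; updateCommitment: string; }

function buildChunkFile(
  createDeltas: DeltaEntry[],
  recoverDeltas: DeltaEntry[],
  updateDeltas: DeltaEntry[]
): { deltas: DeltaEntry[] } {
  // Order: Create entries, then Recovery entries, then Update entries,
  // each preserving the index order of their matching index file entries.
  return { deltas: [...createDeltas, ...recoverDeltas, ...updateDeltas] };
}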

§ DID URI Composition

DID Methods based on the Sidetree protocol all share the same identifier format. The unique identifier segment of a Sidetree-based DID, known as the DID Suffix, is derived from the initial state of the DID’s state data. The DID Suffix is cryptographically bound to the initial PKI state of the DID, which means Sidetree DIDs are self-certifying. As a result, a person or entity who creates a Sidetree-based DID knows their unique identifier at the moment of generation, and it is cryptographically secured for instant use (for more on the instant use capabilities of Sidetree DIDs, see Unpublished DID Resolution).

To generate the Short-Form DID URI of a Sidetree DID, use the Hashing Process to generate a hash of the canonicalized Create Operation Suffix Data Object. The following is an example of a resulting colon (:) separated DID URI composed of the URI scheme (did:), Method identifier (sidetree:), and unique identifier string (EiDahaOGH...):

Format of Short-form DID URI:

did:METHOD:<did-suffix>

Example of Short-Form DID URI:

did:sidetree:EiDahaOGH-liLLdDtTxEAdc8i-cfCz-WUcQdRJheMVNn3A
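As an illustrative sketch (assuming the default parameters and the placeholder method name sidetree), the Short-Form DID URI can be derived from the canonicalized Create Operation Suffix Data Object as follows:

import { createHash } from "crypto";
import canonicalize from "canonicalize";

// Illustrative DID Suffix derivation, assuming JCS, SHA-256 Multihash, Base64URL.
function shortFormDid(createOperationSuffixData: object): string {
  const canonical = Buffer.from(canonicalize(createOperationSuffixData) as string);
  const digest = createHash("sha256").update(canonical).digest();
  const suffix = Buffer.concat([Buffer.from([0x12, 0x20]), digest]).toString("base64url");
  return `did:sidetree:${suffix}`;
}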

An implementer MAY define additional components in their method’s DID URI composition.

NOTE

Many implementations have multiple active network instances of their DID Method (e.g. mainnet and testnet). How different network instances of a DID Method are represented in the DID URI string is method-specific. Many methods choose to use the base format above (did:METHOD) as their primary/mainnet network, and add an additional segment after the :METHOD segment to denote other network instances, for example: did:METHOD:testnet. DID Methods SHOULD clearly describe parsing rules for distinguishing between their different network instances.

§ Long-Form DID URIs

In many DID Methods, there is a period of time (which may be indefinite) between the generation of a DID and the DID operation being anchored, propagated, and processed in the underlying anchoring and storage systems. In order to account for this, Sidetree introduces an equivalent variant of Sidetree-based DIDs that is self-certifying and self-resolving, known as the Long-Form DID URI. The Long-Form DID URI variant of Sidetree-based DIDs enables DIDs to be immediately resolvable after generation by including the DID’s initial state data within the Long-Form DID URI itself. Sidetree Long-Form DID URIs are the Short-Form DID URI with an additional colon-separated (:) segment appended to the end. The value of this final URI segment is a canonicalized JSON data payload composed of the Create Operation Suffix data and the Create Operation Delta data, encoded via the implementation’s DATA_ENCODING_SCHEME.

Long-form DID JSON data payload:

{
  "delta": {
    "patches": [
      {
        "action": "replace",
        "document": {
          "publicKeys": [
            {
              "id": "anySigningKeyId",
              "publicKeyJwk": {
                "crv": "secp256k1",
                "kty": "EC",
                "x": "H61vqAm_-TC3OrFSqPrEfSfg422NR8QHPqr0mLx64DM",
                "y": "s0WnWY87JriBjbyoY3FdUmifK7JJRLR65GtPthXeyuc"
              },
              "purposes": [
                "auth"
              ],
              "type": "EcdsaSecp256k1VerificationKey2019"
            }
          ],
          "services": [
            {
              "id": "anyServiceEndpointId",
              "type": "anyType",
              "serviceEndpoint": "http://any.endpoint"
            }
          ]
        }
      }
    ],
    "updateCommitment": "EiBMWE2JFaFipPdthcFiQek-SXTMi5IWIFXAN8hKFCyLJw"
  },
  "suffixData": {
    "deltaHash": "EiBP6gAOxx3YOL8PZPZG3medFgdqWSDayVX3u1W2f-IPEQ",
    "recoveryCommitment": "EiBg8oqvU0Zq_H5BoqmWf0IrhetQ91wXc5fDPpIjB9wW5w"
  }
}

Format of Long-Form DID URI:

did:METHOD:<did-suffix>:<long-form-suffix-data>

Example of Long-Form DID URI:

did:sidetree:EiDahaOGH-liLLdDtTxEAdc8i-cfCz-WUcQdRJheMVNn3A:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljX2tleXMiOlt7ImlkIjoiYW55U2lnbmluZ0tleUlkIiwiandrIjp7ImNydiI6InNlY3AyNTZrMSIsImt0eSI6IkVDIiwieCI6Ikg2MXZxQW1fLVRDM09yRlNxUHJFZlNmZzQyMk5SOFFIUHFyMG1MeDY0RE0iLCJ5IjoiczBXbldZODdKcmlCamJ5b1kzRmRVbWlmSzdKSlJMUjY1R3RQdGhYZXl1YyJ9LCJwdXJwb3NlIjpbImF1dGgiXSwidHlwZSI6IkVjZHNhU2VjcDI1NmsxVmVyaWZpY2F0aW9uS2V5MjAxOSJ9XSwic2VydmljZV9lbmRwb2ludHMiOlt7ImVuZHBvaW50IjoiaHR0cDovL2FueS5lbmRwb2ludCIsImlkIjoiYW55U2VydmljZUVuZHBvaW50SWQiLCJ0eXBlIjoiYW55VHlwZSJ9XX19XSwidXBkYXRlX2NvbW1pdG1lbnQiOiJFaUJNV0UySkZhRmlwUGR0aGNGaVFlay1TWFRNaTVJV0lGWEFOOGhLRkN5TEp3In0sInN1ZmZpeF9kYXRhIjp7ImRlbHRhX2hhc2giOiJFaUJQNmdBT3h4M1lPTDhQWlBaRzNtZWRGZ2RxV1NEYXlWWDN1MVcyZi1JUEVRIiwicmVjb3ZlcnlfY29tbWl0bWVudCI6IkVpQmc4b3F2VTBacV9INUJvcW1XZjBJcmhldFE5MXdYYzVmRFBwSWpCOXdXNXcifX0
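A minimal sketch of Long-Form DID URI assembly, assuming the JCS and Base64URL defaults and that the Create Operation Delta and Suffix Data objects have already been constructed, might look like this:

import canonicalize from "canonicalize";

// Illustrative Long-Form DID URI assembly; inputs are placeholders.
function longFormDid(shortFormDid: string, delta: object, suffixData: object): string {
  const payload = canonicalize({ delta, suffixData }) as string;
  const encoded = Buffer.from(payload).toString("base64url");
  return `${shortFormDid}:${encoded}`;
}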

The Long-Form DID URI variant of Sidetree-based DIDs supports the following features and usage patterns:

§ JSON Web Signatures

Sidetree relies on JSON Web Signatures for authentication and integrity protection of DID Operations, except for Create, which contains key material and is self-certifying.

§ Signing

In addition to the requirements of RFC 7515, the following MUST be observed by Sidetree Method implementers.

  1. kid MAY be present in the protected header.
  2. alg MUST be present in the protected header, its value MUST NOT be none.
  3. No additional members may be present in the protected header.

Here is an example of a decoded JWS header:

{
  "kid": "did:example:123#_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A",
  "alg": "EdDSA"
}
WARNING

It is recommended that kid be a DID URL. If it is not, method implementers might need to rely on additional context to uniquely identify the correct verificationMethod.

§ Verifying

Regardless of which verification relationship a verificationMethod is associated with, the process of verifying a JWS linked to a DID is the same.

The JWS header is parsed and a kid is extracted.

  1. Iterate the verificationMethods, until a verificationMethod with id equal to kid is found.
  2. Convert the discovered verificationMethod to JWK if necessary.
  3. Perform JWS Verification using the JWK.
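A minimal TypeScript sketch of these steps, assuming the jose library and a resolved DID Document whose verificationMethod entries carry publicKeyJwk values, might look like the following; it is illustrative, not a normative verifier:

import { compactVerify, importJWK } from "jose";
import type { JWK } from "jose";

// Simplified stand-ins for the resolved DID Document structure.
interface VerificationMethod { id: string; publicKeyJwk: JWK; }
interface DidDocument { verificationMethod: VerificationMethod[]; }

async function verifyOperationJws(jws: string, didDocument: DidDocument): Promise<boolean> {
  // Parse the protected header and extract kid.
  const header = JSON.parse(Buffer.from(jws.split(".")[0], "base64url").toString());
  // Iterate the verificationMethods until one with an id equal to kid is found.
  const method = didDocument.verificationMethod.find((vm) => vm.id === header.kid);
  if (!method) return false;
  // Convert the discovered verificationMethod to a key object and perform JWS verification.
  const key = await importJWK(method.publicKeyJwk, header.alg);
  try {
    await compactVerify(jws, key);
    return true;
  } catch {
    return false;
  }
}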

§ Operation Verification

Sidetree operations are considered valid when the JWS can be verified with the correct key pair designated for the type of operation being invoked (i.e. update, recover, deactivate).

An Update Operation MUST be signed by the currently valid Update Key Pair.

A Recover Operation MUST be signed by the currently valid Recovery Key Pair.

A Deactivate Operation MUST be signed by the currently valid Recovery Key Pair.

WARNING

Signatures on operations may be valid, but operations may be deemed invalid for other reasons (e.g. malformed delta payload or being stale).

WARNING

It is not recommended to reuse verificationMethods for multiple verification relationships.

§ Operation Anchoring Time Ranges

A Sidetree-based DID Method MAY define the anchorFrom and/or anchorUntil properties as part of the operation’s data object payload. If anchorFrom is defined by the implementer, a DID owner MAY include the earliest allowed anchoring time for their operation in the anchorFrom property of the operation’s data object payload. The anchorFrom property is conceptually similar to the RFC7519 nbf and iat claims. If anchorUntil is defined by the implementer, a DID owner MAY include the latest allowed anchoring time for their operation in the anchorUntil property of the operation’s data object payload. The anchorUntil property is conceptually similar to the RFC7519 exp claim. These properties contain numeric values; but note that anchoring systems may have differing mechanisms of time (as defined by the method).

A Sidetree-based DID Method MAY require validation for rejecting stale operations. An operation is considered stale relative to the timing information provided by the underlying anchoring system. When an operation is stale according to the DID method’s parameters, the operation is deemed as invalid. During processing, if the DID method validates stale operations, the DID owner’s operation time range is compared to the anchoring system’s timing information. Operations that are anchored prior to anchorFrom are deemed invalid, if anchorFrom is set. Operations that are anchored after anchorUntil are deemed invalid, if anchorUntil is set (or implicitly defined). If the operation is deemed invalid, skip the entry and iterate forward to the next entry.

A Sidetree-based DID Method MAY constrain the range between anchorFrom and anchorUntil using a delta defined by the implementation. The implementer MAY also implicitly define the anchorUntil using the anchorFrom plus a delta defined by the implementation. The delta MAY be defined as the MAX_OPERATION_TIME_DELTA protocol parameter.
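The staleness check described above can be sketched as follows, assuming the anchoring system exposes a numeric anchor time for the transaction carrying the operation; the property and function names are illustrative:

// Illustrative staleness check following the rules above.
interface AnchorTimeRange { anchorFrom?: number; anchorUntil?: number; }

function isOperationStale(range: AnchorTimeRange, anchoredAt: number): boolean {
  if (range.anchorFrom !== undefined && anchoredAt < range.anchorFrom) return true;
  if (range.anchorUntil !== undefined && anchoredAt > range.anchorUntil) return true;
  return false;
}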

§ DID Operations

Sidetree-based DIDs support a variety of DID operations, all of which require the DID owner to generate specific data values and cryptographic material. The sections below describe how to perform each type of operation, and how those operations are represented in the CAS-replicated files that are anchored to the underlying anchoring system.

While virtually all DID owners will engage User Agent applications on their local devices to perform these operations, most will not generate the anchoring transactions on the underlying anchoring system. Instead, most users will likely send the anchoring-related operation values they generate to external nodes for anchoring. This is relatively safe, because operations require signatures that an external node cannot forge. The only attack available to a rogue node operator is to not anchor the operations a DID owner sends them. However, the DID owner can detect this (via a scan of subsequent blocks) and send their operation to a different node or do it themselves, if they so desire.

It is strongly advised that DID owners and User Agents (e.g. wallet apps) retain their DID operations and operation-anchoring files. Doing so is helpful in cases where users, or their User Agent, need to quickly access the operations and operation-anchoring files, or a user wishes to individually persist their operation and operation-anchoring files on the CAS network for even greater independent availability assurance.

NOTE

This specification does not define an API for sending public DID operation values to third-party Sidetree nodes for external anchoring, as that is an elective activity that has no bearing on the technical workings of the protocol, its capabilities, or its security guarantees.

WARNING

Operations other than Create contain a compact JWS. Dereferencing of key material used to verify the JWS is a DID Method specific concern. Some methods may rely on the DID Document data model, others may rely on an internal data model. Some methods may rely on a kid of the form did:example:123#fingerprint, others may not include a kid in the JWS, or its value may be arbitrary. Support for specific alg fields is also DID Method specific. Implementers are cautioned to choose support for specific alg values carefully.

§ Create

Use the following process to generate a Sidetree-based DID:

  1. Generate a key pair using the defined KEY_ALGORITHM, let this be known as the Update Key Pair.
  2. Generate a public key commitment using the defined public key commitment scheme and public key of the generated Update Key Pair, let this resulting commitment be known as the update commitment.
  3. Generate a canonicalized representation of the following object using the implementation’s JSON_CANONICALIZATION_SCHEME, herein referred to as the Create Operation Delta Object:
    {
      "patches": [ PATCH_1, PATCH_2, ... ],
      "updateCommitment": COMMITMENT_HASH
    }
    
    • The object MUST contain a patches property, and its value MUST be a JSON array of DID State Patches.
    • The object MUST contain an updateCommitment property, and its value MUST be the update commitment as generated in step 2.
  4. Generate a key pair using the defined KEY_ALGORITHM, let this be known as the recovery key pair, where the public key of this pair is used for generating the recovery commitment, and the private key for use in the next recovery operation.
  5. Generate a public key commitment using the defined public key commitment scheme and public key of the generated recovery key pair, let this resulting commitment be known as the recovery commitment.
  6. Generate a canonicalized representation of the following object using the implementation’s JSON_CANONICALIZATION_SCHEME, herein referred to as the Create Operation Suffix Data Object:
    {
      "type": TYPE_STRING,
      "deltaHash": DELTA_HASH,
      "recoveryCommitment": COMMITMENT_HASH,
      "anchorOrigin": ANCHOR_ORIGIN
    }
    
    • The object MAY contain a type property, and if present, its value MUST be a type string, of a length and composition defined by the implementation, that signifies the type of entity a DID represents.
    • The object MUST contain a deltaHash property, and its value MUST be a hash of the canonicalized Create Operation Delta Object (detailed above), generated via the HASH_PROTOCOL.
    • The object MUST contain a recoveryCommitment property, and its value MUST be the recovery commitment as generated in step 5.
    • The object MAY contain an anchorOrigin property if an implementation defines this property. This property signifies the implementer-defined system(s) that know the most recent anchor for this DID. The property’s type and composition are defined by the implementation. Implementers MAY define this property since implementers with a single common anchoring system do not need to support this property.
NOTE

Implementations MAY choose to define additional properties for inclusion in the Create Operation Suffix Data Object, but the presence of any properties beyond the standard properties or implementation-defined properties IS NOT permitted.

WARNING

The string values used in the type field must be carefully considered, and this specification strongly cautions implementers to avoid allowing any values that represent humans, groups of humans, or any human-identifying classifications.
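For illustration, the following TypeScript sketch walks through steps 2-6 above (key generation in steps 1 and 4 is assumed to have already produced JWK public keys), using the default parameters; the helper names are hypothetical and this is not the reference implementation:

import { createHash } from "crypto";
import canonicalize from "canonicalize";

// SHA-256 Multihash, Base64URL, and double-hash commitment helpers (assumed defaults).
const sha256Multihash = (data: Buffer): Buffer =>
  Buffer.concat([Buffer.from([0x12, 0x20]), createHash("sha256").update(data).digest()]);
const b64u = (b: Buffer): string => b.toString("base64url");
const commitment = (publicKeyJwk: object): string =>
  b64u(sha256Multihash(sha256Multihash(Buffer.from(canonicalize(publicKeyJwk) as string))));

function createOperationData(updateKeyJwk: object, recoveryKeyJwk: object, patches: object[]) {
  // Steps 2-3: update commitment and the Create Operation Delta Object.
  const delta = { patches, updateCommitment: commitment(updateKeyJwk) };
  // Steps 5-6: recovery commitment and the Create Operation Suffix Data Object.
  const suffixData = {
    deltaHash: b64u(sha256Multihash(Buffer.from(canonicalize(delta) as string))),
    recoveryCommitment: commitment(recoveryKeyJwk),
  };
  return { delta, suffixData };
}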

§ Update

The following process must be used to update the state of a Sidetree-based DID:

  1. Retrieve the Update Reveal Value that matches the previously anchored Update Commitment.
  2. Generate a canonicalized representation of the following object using the implementation’s JSON_CANONICALIZATION_SCHEME, herein referred to as the Update Operation Delta Object, composed as follows:
    {
      "patches": [ PATCH_1, PATCH_2, ... ],
      "updateCommitment": COMMITMENT_HASH
    }
    
    • The object MUST contain a patches property, and its value MUST be an array of DID State Patches.
    • The object MUST contain an updateCommitment property, and its value MUST be a new Update Commitment, the value of which will be revealed for the next Update operation.
  3. Generate an IETF RFC 7515 compliant compact JWS representation of the following object, herein referred to as the Update Operation Signed Data Object, with a signature that validates against a currently active update key, and contains the following payload values:
    {
      "protected": {...},
      "payload": {
        "updateKey": JWK_OBJECT,
        "deltaHash": DELTA_HASH
      },
      "signature": SIGNATURE_STRING
    }
    
    • The JWS payload object MUST include an updateKey property, and its value MUST be the IETF RFC 7517 compliant JWK representation matching the previous Update Commitment.
    • The JWS payload object MUST contain a deltaHash property, and its value MUST be a hash of the canonicalized Update Operation Delta Object, generated via the HASH_PROTOCOL, with a maximum length as specified by the MAX_OPERATION_HASH_LENGTH.
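The following sketch shows one way to produce the Update Operation Signed Data Object as a compact JWS, assuming the jose library for signing; the alg value must match the implementation's SIGNATURE_ALGORITHM (shown here as ES256K for illustration), and the payload serialization via JCS is an illustrative choice:

import { createHash } from "crypto";
import canonicalize from "canonicalize";
import { CompactSign } from "jose";
import type { KeyLike } from "jose";

// Illustrative construction of the Update Operation Signed Data Object.
async function signUpdateOperation(
  updateKeyJwk: object,        // JWK matching the previous Update Commitment
  deltaObject: object,         // the Update Operation Delta Object from step 2
  updatePrivateKey: KeyLike,
  alg = "ES256K"
): Promise<string> {
  const canonicalDelta = Buffer.from(canonicalize(deltaObject) as string);
  const digest = createHash("sha256").update(canonicalDelta).digest();
  const deltaHash = Buffer.concat([Buffer.from([0x12, 0x20]), digest]).toString("base64url");
  const payload = canonicalize({ updateKey: updateKeyJwk, deltaHash }) as string;
  return new CompactSign(new TextEncoder().encode(payload))
    .setProtectedHeader({ alg })
    .sign(updatePrivateKey);
}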

§ Recover

Use the following process to recover a Sidetree-based DID:

  1. Retrieve the Recovery Key that matches the previously anchored Recovery Commitment. This value will be used in constructing a Core Index File Recovery Entry for the DID being recovered.
  2. Generate a new recovery key pair, which MUST NOT be the same key used in any previous operations, via the KEY_ALGORITHM, retaining the Next Recovery Public Key for use in generating the next Recovery Commitment, and the private key for use in the next Recovery operation.
  3. Create a Recovery Commitment using the Hashing Process to generate a hash value from the canonicalized IETF RFC 7517 JWK representation (using the implementation’s JSON_CANONICALIZATION_SCHEME) of the Next Recovery Public Key, and retain the hash value for inclusion in a Core Index File.
  4. Generate a new Update Key Pair, which SHOULD NOT be the same key used in any previous operations, via the KEY_ALGORITHM, retaining the Next Update Public Key for use in generating the next Update Commitment, and the private key for use in the next Update operation.
  5. Create an Update Commitment using the Hashing Process to generate a hash value from the canonicalized IETF RFC 7517 JWK representation (using the implementation’s JSON_CANONICALIZATION_SCHEME) of the Next Update Public Key, and retain the hash value for inclusion in the Recovery Operation Delta Object (as described below).
  6. Generate and retain a COMMITMENT_VALUE, in adherence with the Commitment Schemes directives, for use in the next Update operation, herein referred to as the Update Reveal Value.
  7. Generate an Update Commitment using the Hashing Process, in adherence with the Commitment Schemes directives, to generate a hash of the Update Reveal Value, and retain the resulting hash value for inclusion in a Core Index File.
  8. Generate a canonicalized representation of the following object using the implementation’s JSON_CANONICALIZATION_SCHEME, herein referred to as the Recovery Operation Delta Object, composed as follows:
    {
      "patches": [ PATCH_1, PATCH_2, ... ],
      "updateCommitment": COMMITMENT_HASH
    }
    
    • The object MUST contain a patches property, and its value MUST be an array of DID State Patches.
    • The object MUST contain an updateCommitment property, and its value MUST be the Update Commitment, as described above.
  9. Generate an IETF RFC 7515 compliant compact JWS representation of the following object, herein referred to as the Recovery Operation Signed Data Object, with a signature that validates against a currently active recovery key, and contains the following payload values:
    {
      "protected": {...},
      "payload": {
        "recoveryCommitment": COMMITMENT_HASH,
        "recoveryKey": JWK_OBJECT,
        "deltaHash": DELTA_HASH,
        "anchorOrigin": ANCHOR_ORIGIN
      },
      "signature": SIGNATURE_STRING
    }
    
    • The JWS payload object MUST contain a recoveryCommitment property, and its value MUST be the next Recovery Commitment, as described above, with a maximum length as specified by the MAX_OPERATION_HASH_LENGTH.
    • The JWS payload object MUST include a recoveryKey property, and its value MUST be the IETF RFC 7517 JWK representation matching the previous Recovery Commitment.
    • The JWS payload object MUST contain a deltaHash property, and its value MUST be a hash of the canonicalized Recovery Operation Delta Object, generated via the HASH_PROTOCOL, with a maximum length as specified by the MAX_OPERATION_HASH_LENGTH.
    • The JWS payload object MAY contain an anchorOrigin property if an implementation defines this property. This property signifies the implementer-defined system(s) that know the most recent anchor for this DID. The property’s type and composition are defined by the implementation. Implementers MAY define this property since implementers with a single common anchoring system do not need to support this property.

§ Deactivate

The following process must be used to deactivate a Sidetree-based DID:

  1. Retrieve the Recovery Reveal Value that matches the previously anchored Recovery Commitment.
  2. Generate an IETF RFC 7515 compliant compact JWS object, herein referred to as the Deactivate Operation Signed Data Object, with a signature that validates against the currently active recovery key, and contains the following payload values:
    {
      "protected": {...},
      "payload": {
        "didSuffix": SUFFIX_STRING,
        "recoveryKey": JWK_OBJECT
      },
      "signature": SIGNATURE_STRING
    }
    
    • The JWS payload object MUST contain a didSuffix property, and its value MUST be the DID Suffix of the DID the operation pertains to, with a maximum length as specified by the MAX_OPERATION_HASH_LENGTH.
    • The JWS payload object MUST include a recoveryKey property, and its value MUST be the IETF RFC 7517 JWK representation matching the previous Recovery Commitment.

§ DID State Patches

Sidetree defines a delta-based Conflict-Free Replicated Data Type system, wherein the metadata in a Sidetree-based implementation is controlled by the cryptographic PKI material of individual entities in the system, represented by DIDs. While the most common form of state associated with the DIDs in a Sidetree-based implementation is a DID Document, Sidetree can be used to maintain any type of DID-associated state.

Sidetree specifies a general format for patching the state associated with a DID, called Patch Actions, which define how to deterministically mutate a DID’s associated state. Sidetree further specifies a standard set of Patch Actions (below) that implementers MAY use to facilitate DID state patching within their implementations. Support of the standard set of Patch Actions defined herein IS NOT required, but implementers MUST use the Patch Action format for defining patch mechanisms within their implementation. The general Patch Action format is defined as follows:

{
  "action": "add-public-keys",
  ...
}

{
  "action": "-custom-action",
  ...
}
  1. Patch Actions MUST be represented as JSON objects.
  2. Patch Action objects MUST include an action property, and its value SHOULD be one of the standard Patch Action types listed below, or, if the implementer chooses to create a custom Patch Action, a kebab-case string (dash-delimited lowercase words) with a leading dash to indicate a custom Patch Action, for example: -custom-action.
    • add-public-keys
    • remove-public-keys
    • add-services
    • remove-services
    • ietf-json-patch

§ Standard Patch Actions

The following set of standard Patch Actions are specified to help align on a common set of Patch Actions that provide a predictable usage pattern across Sidetree-based DID Method implementations.

§ add-public-keys

The add-public-keys Patch Action describes the addition of cryptographic keys associated with a given DID. For any part of an add-public-keys Patch Action to be applied to the DID’s state, all specified conditions MUST be met for all properties and values, else the patch MUST be discarded in its entirety. In the case a public key entry already exists for the given id specified within an add-public-keys Patch Action, the implementation MUST overwrite the existing entry entirely with the incoming patch. To construct an add-public-keys patch, compose an object as follows:

  1. The object MUST include an action property, and its value MUST be add-public-keys.
  2. The object MUST include a publicKeys property, and its value MUST be an array.
  3. Each key being added MUST be represented by an entry in the publicKeys array, and each entry must be an object composed as follows:
    1. The object MUST include an id property, and its value MUST be a string with no more than fifty (50) Base64URL encoded characters. If the value is not of the correct type or exceeds the specified maximum length, the entire Patch Action MUST be discarded, without any of the patch being used to modify the DID’s state.
    2. The object MUST include a type property, and its value MUST be a string and SHOULD be of a registered Cryptographic Suite.
    3. The object MAY include a controller property, and its value MUST be a DID URI string. Implementations MAY specify a maximum length for the value, and if specified, the value MUST NOT exceed it. If the controller property is absent, the implementation must set the corresponding property in the resolved DID Document with a value that equates to the DID Document controller’s id. If the value is not of the correct type or exceeds the specified maximum length, the entire Patch Action MUST be discarded, without any of the patch being used to modify the DID’s state.
    4. The object MUST include either a publicKeyJwk or a publicKeyMultibase property with values as defined by DID Core and DID Specification Registries. Implementers MAY choose to only define publicKeyJwk. These key representations are described in the JWK and Multibase subsections. Implementations MAY specify a maximum length for these values, and if specified, the values MUST NOT exceed it. If more or less than one of these properties is present, the value of the included property is not of the correct type, or the value exceeds the implementer’s specified maximum length, the entire Patch Action MUST be discarded, without any of the patch being used to modify the DID’s state.
    5. The object MAY include a purposes property, and if included, its value MUST be an array of one or more strings. The value for each string SHOULD represent a verification relationship defined by DID Core or the DID Specification Registries. If the value is not of the correct type or contains any string not listed below (or defined by the implementer), the entire Patch Action MUST be discarded, without any of it being used to modify the DID’s state.
    • authentication: a reference to the key’s id MUST be included in the authentication array of the resolved DID Document.
    • keyAgreement: a reference to the key’s id MUST be included in the keyAgreement array of the resolved DID Document.
    • assertionMethod: a reference to the key’s id MUST be included in the assertionMethod array of the resolved DID Document.
    • capabilityDelegation: a reference to the key’s id MUST be included in the capabilityDelegation array of the resolved DID Document.
    • capabilityInvocation: a reference to the key’s id MUST be included in the capabilityInvocation array of the resolved DID Document.
NOTE

An implementer may support transformations from publicKeyJwk or publicKeyMultibase to other representations required by a particular Cryptographic Suite. For example, an implementer may support projecting publicKeyBase58 into the resolution result for the Ed25519VerificationKey2018 suite.
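For illustration, the following TypeScript sketch applies an add-public-keys patch to an in-memory DID state, following the overwrite-by-id rule described above; the types are simplified stand-ins and the sketch omits the validation requirements listed in the preceding steps:

// Illustrative application of an add-public-keys patch.
interface PublicKeyEntry {
  id: string;
  type: string;
  purposes?: string[];
  publicKeyJwk?: object;
  publicKeyMultibase?: string;
}
interface DidState { publicKeys: PublicKeyEntry[]; }

function applyAddPublicKeys(
  state: DidState,
  patch: { action: string; publicKeys: PublicKeyEntry[] }
): DidState {
  if (patch.action !== "add-public-keys") return state;
  const publicKeys = [...state.publicKeys];
  for (const entry of patch.publicKeys) {
    // An existing entry with the same id is overwritten entirely.
    const index = publicKeys.findIndex((k) => k.id === entry.id);
    if (index >= 0) publicKeys[index] = entry;
    else publicKeys.push(entry);
  }
  return { ...state, publicKeys };
}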

§ JWK
EXAMPLE
{
  "action": "add-public-keys",
  "publicKeys": [
    {
      "id": "key1",
      "purposes": ["authentication"],
      "type": "EcdsaSecp256k1VerificationKey2019",
      "publicKeyJwk": {...}
    }
  ]
}

When the object contains a publicKeyJwk, the public key patch is using a JWK representation. The value of publicKeyJwk MUST be a public key expressed as an IETF RFC 7517 compliant JWK representation for a KEY_ALGORITHM supported by the implementation. The key represented by the JWK object MUST be projected into the verificationMethod array of the DID Document upon resolution. If the value is not a compliant JWK representation, the entire Patch Action MUST be discarded, without any of it being used to modify the DID’s state.

§ Multibase
EXAMPLE
{
  "action": "add-public-keys",
  "publicKeys": [
    {
      "id": "key1",
      "purposes": ["authentication"],
      "type": "Ed25519VerificationKey2020",
      "publicKeyMultibase": "zgo4sNiXwJTbeJDWZLXVn9uTnRwgFHFxcgDePvEC9TiTYgRpG7q1p5s7yRAic"
    }
  ]
}

An implementer MAY define support for publicKeyMultibase in addition to supporting publicKeyJwk.

When the object contains a publicKeyMultibase, the public key patch is using a multibase representation. The key represented by the multibase encoding MUST be projected into the verificationMethod array of the DID Document upon resolution.

§ remove-public-keys

EXAMPLE
{
  "action": "remove-public-keys",
  "ids": ["key1", "key2"]
}

The remove-public-keys Patch Action describes the removal of cryptographic keys associated with a given DID. For any part of a remove-public-keys Patch Action to be applied to the DID’s state, all specified conditions MUST be met for all properties and values, else the patch MUST be discarded in its entirety. In the case there exists no public key entry for an id specified within a remove-public-keys Patch Action, the implementation SHALL perform no action and treat application of the delete operation as a success. To construct a remove-public-keys Patch Action, compose an object as follows:

  1. The object MUST include an action property, and its value MUST be remove-public-keys.
  2. The object MUST include an ids property, and its value MUST be an array of key IDs that correspond with keys presently associated with the DID that are to be removed. If the value is not of the correct type or includes a string value that is not associated with a key in the document, the entire Patch Action MUST be discarded, without any of it being used to modify the DID’s state.

§ add-services

EXAMPLE
{
  "action": "add-services",
  "services": [
    {
      "id": "sds",
      "type": "SecureDataStore",
      "serviceEndpoint": "http://hub.my-personal-server.com"
    },
    {
      "id": "did-config",
      "type": "LinkedDomains",
      "serviceEndpoint": {
        "origins": ["https://foo.com", "https://bar.com"]
      }
    }
  ]
}

The add-services Patch Action describes the addition of Service Endpoints to a DID’s state. For any part of an add-services Patch Action to be applied to the DID’s state, all specified conditions MUST be met for all properties and values, else the patch MUST be discarded in its entirety. In the case a service entry already exists for the given id specified within an add-services Patch Action, the implementation MUST overwrite the existing entry entirely with the incoming patch. To construct an add-services patch, compose an object as follows:

  1. The object MUST include an action property, and its value MUST be add-services.
  2. The object MUST include a services property, and its value MUST be an array. If the value is not of the correct type, the entire Patch Action MUST be discarded, without any of it being used to modify the DID’s state.
  3. Each service being added MUST be represented by an entry in the services array, and each entry must be an object composed as follows:
    1. The object MUST include an id property, and its value MUST be a string with a length of no more than fifty (50) Base64URL encoded characters. If the value is not of the correct type or exceeds the specified length, the entire Patch Action MUST be discarded, without any of it being used to modify the DID’s state.
    2. The object MUST include a type property, and its value MUST be a string with a length of no more than thirty (30) Base64URL encoded characters. If the value is not a string or exceeds the specified length, the entire Patch Action MUST be discarded, without any of it being used to modify the DID’s state.
    3. The object MUST include a serviceEndpoint property, and its value MUST be either a valid URI string (including a scheme segment, e.g. http://, git://) or a JSON object with properties that describe the Service Endpoint further. If the values do not adhere to these constraints, the entire Patch Action MUST be discarded, without any of it being used to modify the DID’s state.
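
The constraints above translate naturally into a small validation routine. The following non-normative TypeScript sketch checks a single add-services Patch Action against those rules; the type names and the use of a simple regular expression to model the Base64URL character set are illustrative choices, not protocol requirements.

// Non-normative sketch: validating an add-services Patch Action per the rules above.
const BASE64URL_CHARS = /^[A-Za-z0-9_-]+$/;

interface ServiceEntry {
  id: string;
  type: string;
  serviceEndpoint: string | Record<string, unknown>;
}

function isValidServiceEntry(entry: ServiceEntry): boolean {
  // id: no more than fifty (50) Base64URL characters.
  if (typeof entry.id !== "string" || entry.id.length > 50 || !BASE64URL_CHARS.test(entry.id)) {
    return false;
  }
  // type: no more than thirty (30) Base64URL characters.
  if (typeof entry.type !== "string" || entry.type.length > 30 || !BASE64URL_CHARS.test(entry.type)) {
    return false;
  }
  // serviceEndpoint: a URI string with a scheme segment, or a descriptive object.
  if (typeof entry.serviceEndpoint === "string") {
    try { new URL(entry.serviceEndpoint); return true; } catch { return false; }
  }
  return typeof entry.serviceEndpoint === "object" && entry.serviceEndpoint !== null;
}

// One invalid entry invalidates the entire Patch Action.
function isValidAddServicesPatch(patch: { action: string; services: unknown }): boolean {
  return patch.action === "add-services" &&
         Array.isArray(patch.services) &&
         (patch.services as ServiceEntry[]).every(isValidServiceEntry);
}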

§ remove-services

EXAMPLE
{
  "action": "remove-services",
  "ids": ["sds1", "sds2"]
}

The remove-services Patch Action describes the removal of Service Endpoints from a DID’s state. For any part of a remove-services Patch Action to be applied to the DID’s state, all specified conditions MUST be met for all properties and values, else the patch MUST be discarded in its entirety. In the case there exists no service entry for an id specified within a remove-services Patch Action, the implementation SHALL perform no action and treat application of the delete operation as a success. To construct a remove-services Patch Action, compose an object as follows:

  1. The object MUST include an action property, and its value MUST be remove-services.
  2. The object MUST include an ids property, and its value MUST be an array of Service Endpoint IDs that correspond with Service Endpoints presently associated with the DID that are to be removed.

§ replace

EXAMPLE
{
  "action": "replace",
  "document": {
    "publicKeys": [
      {
        "id": "key2",
        "purposes": ["authentication"],
        "type": "EcdsaSecp256k1VerificationKey2019",
        "publicKeyJwk": {...}
      }
    ],
    "services": [
      {
        "id": "sds3",
        "type": "SecureDataStore",
        "serviceEndpoint": "http://hub.my-personal-server.com"
      }
    ]
  }
}

The replace Patch Action acts as a total state reset that replaces a DID’s current PKI metadata state with the state provided. The replace Patch Action enables the declaration of public keys and service endpoints using the same schema formats as the add-public-keys and add-services Patch Actions. To construct a replace patch, compose an object as follows:

  1. The object MUST include an action property, and its value MUST be replace.
  2. The object MUST include a document property, and its value MUST be an object, which may contain the following properties:
    • The object MAY include a publicKeys property, and if present, its value MUST be an array of public key entries that follow the same schema and requirements as the public key entries from the add-public-keys Patch Action.
    • The object MAY include a services property, and if present, its value MUST be an array of service endpoint entries that follow the same schema and requirements as the service endpoint entries from the add-services Patch Action.

§ ietf-json-patch

The ietf-json-patch Patch Action describes a mechanism for modifying a DID’s state using IETF JSON Patch. To construct an ietf-json-patch Patch Action, compose an object as follows:

  1. The object MUST include an action property, and its value MUST be ietf-json-patch.
  2. The object MUST include a patches property, and its value MUST be an array of IETF JSON Patch operation objects.

If ietf-json-patch is used to add or remove from a proof purpose collection, such as operations, recovery, or assertionMethod, per the DID Core spec, each collection element MUST have a unique id property, or be a unique string identifier.

See Operation Verification for more details on how operations are verified.

EXAMPLE
{
  "action": "ietf-json-patch",
  "patches": [
    { "op": "add", ... },
    { "op": "remove", ... },
    { "op": "replace", ... },
    { "op": "move", ... },
    { "op": "copy", ... }
  ]
}
EXAMPLE
{
  "action": "ietf-json-patch",
  "patches": [
    {
      "op": "replace",
      "path": "/service",
      "value": [
          {
              "id": "did:example:123#edv",
              "type": "EncryptedDataVault",
              "serviceEndpoint": "https://edv.example.com/",
          }
      ]
    }
  ]
}
WARNING

Without careful validation, use of ietf-json-patch may result in unrecoverable states, similar to “Deactivated”.

WARNING

Use of ietf-json-patch may harm an implementation’s ability to perform validation on operations at ingestion time, which could impact performance negatively.
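
One way to limit the risks noted in the warnings above is to screen ietf-json-patch Patch Actions at ingestion time, before they are ever applied. The following non-normative TypeScript sketch rejects any patch whose operations touch JSON Pointer paths the implementation manages itself; the PROTECTED_POINTERS list is an illustrative, implementation-chosen value, not part of the protocol.

// Non-normative ingestion-time guard for ietf-json-patch Patch Actions.
const PROTECTED_POINTERS = ["/id"]; // illustrative: paths raw JSON Patch may not modify

interface JsonPatchOperation {
  op: "add" | "remove" | "replace" | "move" | "copy" | "test";
  path: string;
  from?: string;
  value?: unknown;
}

function isAllowedJsonPatch(patches: JsonPatchOperation[]): boolean {
  return patches.every(operation => {
    const touched = [operation.path, operation.from]
      .filter((p): p is string => typeof p === "string");
    // Reject the whole Patch Action if any operation targets a protected path.
    return touched.every(path =>
      PROTECTED_POINTERS.every(prefix => path !== prefix && !path.startsWith(prefix + "/")));
  });
}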

§ add-also-known-as

EXAMPLE
{
  "action": "add-also-known-as",
  "uris": [
    "did:example:1234"
  ]
}

The add-also-known-as Patch Action describes the addition of Also Known As to a DID’s state. For any part of an add-also-known-as Patch Action to be applied to the DID’s state, all specified conditions MUST be met for all properties and values, else the patch MUST be discarded in its entirety. To construct an add-also-known-as patch, compose an object as follows:

  1. The object MUST include an action property, and its value MUST be add-also-known-as.
  2. The object MUST include a uris property, and its value MUST be an array. Each value of the array MUST be a URI. If the value is not of the correct type, the entire Patch Action MUST be discarded, without any of it being used to modify the DID’s state.

§ remove-also-known-as

EXAMPLE
{
  "action": "remove-also-known-as",
  "uris": [
    "did:example:1234"
  ]
}

The remove-also-known-as Patch Action describes the removal of Also Known As from a DID’s state. For any part of a remove-also-known-as Patch Action to be applied to the DID’s state, all specified conditions MUST be met for all properties and values, else the patch MUST be discarded in its entirety. To construct a remove-also-known-as Patch Action, compose an object as follows:

  1. The object MUST include an action property, and its value MUST be remove-also-known-as.
  2. The object MUST include a uris property, and its value MUST be an array of URIs that correspond with Also Known As URIs presently associated with the DID that are to be removed.

§ Transaction & Operation Processing

§ Transaction Anchoring

Once a Core Index File, Provisional Index File, and associated Chunk Files have been assembled for a given set of operations, a reference to the Core Index File must be embedded within the target anchoring system to enter the set of operations into the Sidetree implementation’s global state. To do so, use the following process (a sketch of Anchor String construction follows the list):

  1. Generate a transaction for the underlying anchoring system
  2. Generate and include the following value, herein referred to as the Anchor String, within the transaction:
    1. Generate a numerical string (e.g. '732') that represents the total number of operations present in the Core Index File and Provisional Index File, herein referred to as the Operation Count.
    2. Using the CAS_URI_ALGORITHM, generate a CID for the Core Index File, herein referred to as the Core Index File CAS URI.
    3. Join the Operation Count and Core Index File CAS URI with a . as follows:
      "10000" + "." + "QmWd5PH6vyRH5kMdzZRPBnf952dbR4av3Bd7B2wBqMaAcf"
      
    4. Embed the Anchor String in the transaction such that it can be located and parsed by any party that traverses the history of the target anchoring system.
  3. If the implementation implements a per-operation fee, ensure the transaction includes the fee amount required for the number of operations being anchored.
  4. Encode the transaction with any other data or values required for inclusion by the target anchoring system, and broadcast it.
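
To make step 2 above concrete, the following non-normative TypeScript sketch assembles an Anchor String from an Operation Count and a Core Index File CAS URI; producing the CAS URI itself is left to whatever CAS_URI_ALGORITHM the implementation has selected.

// Non-normative sketch of Anchor String assembly (step 2 above).
// The CAS URI is assumed to have already been produced by the CAS_URI_ALGORITHM.
function composeAnchorString(operationCount: number, coreIndexFileCasUri: string): string {
  if (!Number.isInteger(operationCount) || operationCount < 1) {
    throw new Error("Operation Count must be a positive integer");
  }
  return `${operationCount}.${coreIndexFileCasUri}`;
}

// composeAnchorString(10000, "QmWd5PH6vyRH5kMdzZRPBnf952dbR4av3Bd7B2wBqMaAcf")
//   => "10000.QmWd5PH6vyRH5kMdzZRPBnf952dbR4av3Bd7B2wBqMaAcf"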

§ CAS File Propagation

To ensure other nodes of the implementation can retrieve the operation files required to ingest the included operations and update the states of the DIDs they contain, the implementer must ensure that the files associated with a given set of operations being anchored are available to peers seeking to request and replicate them across the CAS storage layer. Use the following procedure for propagating transaction-anchored CAS files:

  1. If the underlying anchoring system is subject to an anchoring inclusion delay (e.g. the interval between blocks in a blockchain), implementers SHOULD wait until they receive a confirmation of inclusion (whatever that means for the target anchoring system) before exposing/propagating the operation files across the CAS network. (more about the reason for this in the note below)
  2. After confirmation is received, implementers SHOULD use the most effective means of proactive propagation that the CAS_PROTOCOL supports.
  3. A Sidetree-based implementation node that anchors operations should not assume other nodes on the CAS network will indefinitely retain and propagate the files for a given set of operations they anchor. A node SHOULD retain and propagate any files related to the operations it anchors.
NOTE

Most anchoring systems feature some delay between the broadcast of a transaction and the recorded inclusion of the transaction in the anchoring system’s history. Because operation data included in the CAS files contains revealed commitment values for operations, propagating those files before confirmation of transaction inclusion exposes revealed commitment values to external entities who may download them prior to inclusion in the anchoring system. This means an attacker who learns of the revealed commitment value can craft invalid transactions that could be included before the legitimate operation the user is attempting to anchor. While this has no effect on proof-of-control security for a DID, an observing node would have to check the signatures of fraudulent transactions before the legitimate transaction is found, which could result in slower resolution processing for the target DID.

§ Transaction Processing

Regardless of the anchoring system an implementer chooses, the implementer MUST be able to sequence Sidetree-specific transactions within it in a deterministic order, such that any observer can derive the same order if the same logic is applied. The implementer MUST, either at the native transaction level or by some means of logical evaluation, assign Sidetree-specific transactions a Transaction Number. Transaction Numbers MUST be assigned to all Sidetree-specific transactions present in the underlying anchoring system after GENESIS_TIME, regardless of whether or not they are valid.

  1. An implementer MUST develop implementation-specific logic that enables deterministic ordering and iteration of all protocol-related transactions in the underlying anchoring system, such that all operators of the implementation process them in the same order.
  2. Starting at GENESIS_TIME, begin iterating transactions using the implementation-specific logic.
  3. For each transaction found during iteration that is determined to be a protocol-related transaction, process the transaction as follows:
    1. Assign the transaction a Transaction Number.
    2. If the implementation supports enforcement value locking, and the transaction is encoded in accordance with the implementation’s value locking format, skip the remaining steps and process the transaction as described in the Proof of Fee section on Value Locking.
    3. The Anchor String MUST be formatted correctly - if it is not, discard the transaction and continue iteration.
    4. If the implementation DOES NOT support enforcement of a per-operation fee, skip this step. If enforcement of a per-operation fee is supported, ensure the transaction fee meets the per-operation fee requirements for inclusion - if it DOES NOT, discard the transaction and continue iteration.
    5. If the implementation DOES NOT support enforcement of Value Locking, skip this step. If enforcement of Value Locking is supported, ensure the transaction’s fee meets the Value Locking requirements for inclusion - if it does not, discard the transaction and continue iteration.
    6. Parse the Anchor String to derive the Operation Count and Core Index File CAS URI.
    7. Use the CAS_PROTOCOL to fetch the Core Index File using the Core Index File CAS URI. If the file cannot be located, retain a reference that signifies the need to retry fetch of the file. If the file is successfully retrieved, proceed to the next section on how to process a Core Index File.
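
Observing nodes perform the inverse of the construction used by writers: they parse the Anchor String out of each candidate transaction and apply the format checks from step 3 above. The following non-normative TypeScript sketch illustrates this; MAX_OPERATION_COUNT stands in for whatever per-batch operation limit the implementation enforces.

// Non-normative sketch of Anchor String parsing during transaction processing.
const MAX_OPERATION_COUNT = 10000; // assumed implementation-defined limit

interface ParsedAnchorString {
  operationCount: number;
  coreIndexFileCasUri: string;
}

// Returns undefined when the Anchor String is malformed, in which case the
// transaction is discarded and iteration continues (step 3.3 above).
function parseAnchorString(anchorString: string): ParsedAnchorString | undefined {
  const parts = anchorString.split(".");
  if (parts.length !== 2) return undefined;

  const operationCount = Number(parts[0]);
  if (!Number.isInteger(operationCount) || operationCount < 1 || operationCount > MAX_OPERATION_COUNT) {
    return undefined;
  }

  const coreIndexFileCasUri = parts[1];
  if (coreIndexFileCasUri.length === 0) return undefined;

  return { operationCount, coreIndexFileCasUri };
}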

§ Core Index File Processing

This sequence of rules and processing steps must be followed to correctly process a Core Index File:

  1. The Core Index File MUST NOT exceed the MAX_CORE_INDEX_FILE_SIZE - if it does, cease processing, discard the file data, and retain a reference that the file is to be ignored.
  2. Decompress the Core Index File in accordance with the implementation’s COMPRESSION_ALGORITHM, within the memory allocation limit specified for decompression in accordance with the implementation-defined MAX_MEMORY_DECOMPRESSION_FACTOR.
  3. The Core Index File MUST validate against the protocol-defined Core Index File schema and construction rules - if it DOES NOT, cease processing, discard the file data, and retain a reference that the whole batch of anchored operations and all its files are to be ignored.
    • While this rule is articulated in the Core Index File section of the specification, it should be emphasized to ensure accurate processing: a Core Index File MUST NOT include multiple operations in the operations section of the Core Index File for the same DID Suffix - if any duplicates are found, cease processing, discard the file data, and retain a reference that the whole batch of anchored operations and all its files are to be ignored.
  4. If processing of rules 1 and 2 above resulted in successful validation of the Core Index File, initiate retrieval of the Provisional Index File via the CAS_PROTOCOL using the provisionalIndexFileUri property’s CAS URI value, if the provisionalIndexFileUri property is present. This is only a SUGGESTED point at which to begin retrieval of the Provisional Index File, not a blocking procedural step, so you may continue with processing before retrieval of the Provisional Index File is complete.
  5. Iterate the Core Index File Create Entries, and for each entry, process as follows:
    1. Derive the DID Suffix from the values present in the entry.
    2. Ensure the DID Suffix of the operation entry has not been included in another valid operation that was previously processed in the scope of this Core Index File.
    3. Create an entry for the operation within the Operation Storage area relative to the DID Suffix.
  6. Iterate the Core Index File Recovery Entries, and for each entry, process as follows:
    1. Ensure the DID Suffix of the operation entry has not been included in another valid operation that was previously processed in the scope of this Core Index File.
    2. Create an entry for the operation within the Operation Storage area relative to the DID Suffix.
  7. Iterate the Core Index File Deactivate Entries, and for each entry, process as follows:
    1. Ensure the DID Suffix of the operation entry has not been included in another valid operation that was previously processed in the scope of this Core Index File.
    2. Create an entry for the operation within the Operation Storage area relative to the DID Suffix.
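
The rule emphasized under step 3 above, that a Core Index File must not contain more than one operation for the same DID Suffix, amounts to a simple duplicate check across the Create, Recovery, and Deactivate entries. A non-normative sketch follows; the input is assumed to be the flat list of DID Suffixes referenced by the file.

// Non-normative sketch of the duplicate DID Suffix rule for a Core Index File.
// If any duplicate is found, the whole batch of anchored operations is ignored.
function coreIndexFileHasDuplicateSuffixes(didSuffixes: string[]): boolean {
  const seen = new Set<string>();
  for (const suffix of didSuffixes) {
    if (seen.has(suffix)) return true;
    seen.add(suffix);
  }
  return false;
}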

§ Provisional Index File Processing

This sequence of rules and processing steps must be followed to correctly process a Provisional Index File:

  1. The Provisional Index File MUST NOT exceed the MAX_PROVISIONAL_INDEX_FILE_SIZE - if it does, cease processing, discard the file data, and retain a reference that the file is to be ignored.
  2. Decompress the Provisional Index File in accordance with the implementation’s COMPRESSION_ALGORITHM, within the memory allocation limit specified for decompression in accordance with the implementation-defined MAX_MEMORY_DECOMPRESSION_FACTOR.
  3. The Provisional Index File MUST validate against the protocol-defined Provisional Index File schema and construction rules - if it DOES NOT, cease processing, discard the file data, and retain a reference that all Provisional-type files and their operations are to be ignored.
  4. If processing of rules 1 and 2 above resulted in successful validation of the Provisional Index File, begin retrieval of the Chunk Files by iterating the chunks array and using the CAS_PROTOCOL to fetch each entry’s chunkFileUri (a CAS URI based on the CAS_URI_ALGORITHM). This is only a SUGGESTED point at which to begin retrieval of the Chunk Files, not a blocking procedural step, so you may continue with processing before retrieval of the Chunk Files is complete.
  5. Iterate the Provisional Index File Update Entries, and for each entry, process as follows:
    1. Ensure the DID Suffix of the operation entry has not been included in another valid operation that was previously processed in the scope of the Provisional Index File or its parent Core Index File.
    2. Create an entry for the operation within the Operation Storage area relative to the DID Suffix.
  6. If the node is in a Light Node configuration, retain a reference to the Chunk Files relative to the DIDs in the anchored batch for just-in-time fetch of the Chunk Files during DID resolution.

§ Core Proof File Processing

This sequence of rules and processing steps must be followed to correctly process a Core Proof File:

  1. The Core Proof File MUST NOT exceed the MAX_PROOF_FILE_SIZE - if it does, cease processing, discard the file data, and retain a reference that the whole batch of anchored operations and all its files are to be ignored.
  2. Decompress the Core Proof File in accordance with the implementation’s COMPRESSION_ALGORITHM, within the memory allocation limit specified for decompression in accordance with the implementation-defined MAX_MEMORY_DECOMPRESSION_FACTOR.
  3. The Core Proof File MUST validate against the protocol-defined Core Proof File schema and construction rules - if it DOES NOT, cease processing, discard the file data, and retain a reference that the whole batch of anchored operations and all its files are to be ignored.
  4. Iterate any Core Proof File Recovery Entries and Core Proof File Deactivate Entries that may be present, and for each entry, process as follows:
    1. Ensure an operation for the related DID has not been included in another valid operation that was previously processed in the scope of the Core Proof File or its parent Core Index File.
    2. Create an entry, or associate with an existing entry, the proof payload within the Operation Storage area relative to the DID Suffix.

§ Provisional Proof File Processing

This sequence of rules and processing steps must be followed to correctly process a Provisional Proof File:

  1. The Provisional Proof File MUST NOT exceed the MAX_PROOF_FILE_SIZE - if it does, cease processing, discard the file data, and retain a reference that all Provisional-type files and their operations are to be ignored.
  2. Decompress the Provisional Proof File in accordance with the implementation’s COMPRESSION_ALGORITHM, within the memory allocation limit specified for decompression in accordance with the implementation-defined MAX_MEMORY_DECOMPRESSION_FACTOR.
  3. The Provisional Proof File MUST validate against the protocol-defined Provisional Proof File schema and construction rules - if it DOES NOT, cease processing, discard the file data, and retain a reference that all Provisional-type files and their operations are to be ignored.
  4. Iterate any Provisional Proof File Update Entries that may be present, and for each entry, process as follows:
    1. Ensure an operation for the related DID has not been included in another valid operation that was previously processed in the scope of the Provisional Proof File or its parent Core Index File. If another previous, valid operation was already processed in the scope of the Provisional Proof File or Core Index File for the same DID, do not process the operation and move to the next operation in the array.
    2. Create an entry, or associate with an existing entry, the proof payload within the Operation Storage area relative to the DID Suffix.

§ Chunk File Processing

This sequence of rules and processing steps must be followed to correctly process a Chunk File chunk:

  1. The Chunk File chunk MUST NOT exceed the MAX_CHUNK_FILE_SIZE - if it does, cease processing, discard the file data, and retain a reference that the file is to be ignored.
  2. Decompress the Chunk File in accordance with the implementation’s COMPRESSION_ALGORITHM, within the memory allocation limit specified for decompression in accordance with the implementation-defined MAX_MEMORY_DECOMPRESSION_FACTOR.
  3. The Chunk File MUST validate against the protocol-defined Chunk File schema and construction rules - if it DOES NOT, cease processing, discard the file data, and retain a reference that the file is to be ignored.
  4. The canonicalized buffer of each Chunk File delta entry must not exceed the MAX_DELTA_SIZE. If any delta entries exceed the maximum size, cease processing, discard the file data, and retain a reference that the file is to be ignored.
  5. In order to process Chunk File Delta Entries in relation to the DIDs they are bound to, they must be mapped back to the Create, Recovery, and Update operation entries present in the Core Index File and Provisional Index File. To create this mapping, concatenate the Core Index File Create Entries, Core Index File Recovery Entries, Provisional Index File Update Entries into a single array, in that order, herein referred to as the Operation Delta Mapping Array. Pseudo-code example:
    let mappingArray = [].concat(CREATE_ENTRIES, RECOVERY_ENTRIES, UPDATE_ENTRIES);
    
  6. With the Operation Delta Mapping Array assembled, iterate the Chunk File Delta Entries from 0 index forward, processing each Chunk File Delta Entry as follows:
    1. Identify the operation entry from the Operation Delta Mapping Array at the same index as the current iteration and determine its DID Suffix (for Core Index File Create Entries, you will need to compute the DID Suffix). This is the DID the current iteration element maps to.
    2. Store the current Chunk File Delta Entry relative to its operation entry in the persistent storage area designated for the related DID Suffix.
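
Expanding on the pseudo-code above, the following non-normative TypeScript sketch builds the Operation Delta Mapping Array and pairs each Chunk File Delta Entry with the DID Suffix it belongs to. The entry shapes are illustrative, and the suffix derivation for Create entries is supplied by the caller rather than specified here.

// Non-normative sketch of the Operation Delta Mapping Array (steps 5 and 6 above).
interface MappingEntry {
  didSuffix?: string;    // present on recovery and update entries
  suffixData?: unknown;  // present on create entries; the suffix must be computed
}

function mapDeltasToDidSuffixes(
  createEntries: MappingEntry[],
  recoveryEntries: MappingEntry[],
  updateEntries: MappingEntry[],
  chunkFileDeltas: unknown[],
  computeDidSuffix: (suffixData: unknown) => string // implementation's suffix derivation
): Array<{ didSuffix: string; delta: unknown }> {
  // Concatenation order is significant: create, then recovery, then update entries.
  const mappingArray = [...createEntries, ...recoveryEntries, ...updateEntries];
  return chunkFileDeltas.map((delta, index) => {
    const entry = mappingArray[index];
    const didSuffix = entry.didSuffix ?? computeDidSuffix(entry.suffixData);
    // Each delta is then stored relative to its DID Suffix in persistent storage.
    return { didSuffix, delta };
  });
}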
NOTE

The assembly and processing of Chunk Files will change in a future update to the protocol to accommodate the introduction of multiple chunk files. The current protocol version is designed around one Chunk File, but the scaffolding is present to move to multiple Chunk Files as development progresses.

§ Proof of Fee

NOTE

This section is non-normative

Sidetree implementers MAY choose to implement protective mechanisms designed to strengthen a Sidetree network against low-cost spurious operations. These mechanisms are primarily designed for open, permissionless implementations utilizing public blockchains that feature native crypto-economic systems.

§ Base Fee Variable

All of the mechanisms described in this section are based on the same underlying numeric value, known as the Base Fee Variable, that is calculated by processing a collection of native variables from the target anchoring system with a set of deterministic functions. The Base Fee Variable is used in two primary ways:

  1. To set a minimum required native transaction fee that must be paid relative to the number of DID operations a writer seeks to anchor with the transaction
  2. To establish a fee basis for any additional economic protections, such as a value locking mechanism wherein a writer must escrow or burn some amount of digital asset to have other nodes view their writes into the network as valid.

To calculate the Base Fee Variable, every implementation will define a deterministic algorithm, which may be static or change dynamically via some form of logical calculation that is applied by all nodes in the system at some interval.

§ Per-Operation Fee

An implementation may choose to require a per-operation fee, to ensure that a writer cannot exploit unusually low-fee periods on the anchoring system to flood it with Sidetree-embedded transactions. The following logical process SHOULD be used to set and evaluate a per-operation fee for each Sidetree-bearing transaction that is observed (a sketch of this check follows the list):

  1. Determine the Base Fee Variable for the current block or transaction interval being assessed.
  2. Multiply the Base Fee Variable by the Operation Count integer from the Anchor String, producing the total batch operation fee.
  3. Validate that the transaction anchored in the anchoring system has spent at least the total batch operation fee, as derived above.
  4. If the transaction spent the required fee (or some amount greater), proceed with processing the anchored batch of DID operations. If the transaction failed to spend the required fee (or some amount greater), ignore the transaction as invalid.
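
As a non-normative illustration of the steps above, the following TypeScript sketch checks whether an anchored transaction spent enough to cover its declared Operation Count; fee values are expressed in whatever unit the underlying anchoring system uses.

// Non-normative sketch of the per-operation fee check described above.
function meetsPerOperationFee(
  baseFeeVariable: number,     // Base Fee Variable for the current block/interval
  operationCount: number,      // Operation Count parsed from the Anchor String
  transactionFeeSpent: number  // fee actually spent by the anchoring transaction
): boolean {
  const totalBatchOperationFee = baseFeeVariable * operationCount;
  // The transaction must have spent at least the total batch operation fee.
  return transactionFeeSpent >= totalBatchOperationFee;
}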

§ Value Locking

An implementation may choose to institute a value locking scheme wherein digital assets native to the underlying anchoring system are locked under some conditions set by the implementation that afford a locking entity access to greater write operation volumes and related capabilities. The basic principle of value locking is to require a form of escrow to gate consumption of resources in the network. In simple terms, with value locking in place, an implementation can require a writer who wants to write batches at the maximum size to first lock an amount of the native underlying anchoring system asset commensurate with the batch sizes they want to anchor. Implementations can create value locking mechanisms in a number of ways, but the following is a general example of a value locking approach:

  1. Using the Base Fee Variable, assess a required locking amount that follows an implementation-defined cost curve that maps to the size of batches up to the maximum batch size. (If your implementation features recurring evaluation logic, this will be reevaluated for whatever block or transaction interval you define)
  2. Using the underlying anchoring system’s asset locking capabilities (e.g. a Bitcoin Timelock script), validate that all transactions observed within the current block or transaction interval are linked to a sum of locked value that meets or exceeds the required value locking amount. Each locked sum may only be linked to one batch per block or transaction interval, which means anchoring multiple batches that require locks requires multiple locks, compounding the sum that must be locked by a multi-batch writer. A link from a batch-embedded transaction to a lock is typically determined by proving control of a lock via some form of deterministic proof that ties the lock to the batch-embedded transaction (e.g. signing the batch-embedded transactions with keys that control the lock)
  3. If a transaction is linked to a locked sum that has been unused by any other transactions from that lock controller during the block, proceed with ingesting the anchored batch and processing it per the directives in the file and transaction processing section of this specification.

§ Resolution

§ Operation Compilation

  1. Upon invocation of resolution, retrieve all observed operations for the DID Unique Suffix of the DID URI being resolved.

  2. If a record of the DID being published has been observed, proceed to Step 3. If there is no observed record of the DID being published, skip all remaining Operation Compilation steps and process the DID as follows:

    1. If the DID URI is a Short-Form DID URI, abort resolution and return Not Found.
    2. If the DID URI is a Long-Form DID URI, process as follows:
      1. Isolate the last colon-separated (:) segment of the DID URI.
      2. Using the implementation’s DATA_ENCODING_SCHEME, decode the value. If the value fails to properly decode in accordance with the implementation’s DATA_ENCODING_SCHEME, abort resolution and return Unresolvable.
      3. JSON parse the resulting value, apply the canonicalization algorithm, reencode the resulting value and ensure it is the same as the initial value from Step 1. If the values do not match, abort resolution and return Unresolvable.
      4. Use the Hashing Process to generate a hash of the canonicalized Create Operation Suffix Data Object and ensure it matches the DID Unique Suffix. If the values do not match, abort resolution and return Unresolvable.
      5. Validate the resulting object in accordance with the Create Operation Suffix Data Object schema. If the value is found to be a valid Create Operation Suffix Data Object, proceed; if the value fails validation, abort resolution and return Unresolvable.
      6. Validate the Create Operation Delta Object (which is present in a Chunk File Delta Entry for published, anchored DIDs). If the value is found to be a valid Create Operation Delta Object, proceed; if the value fails validation, abort resolution and return Unresolvable.
      7. If all steps above are successful, flag the DID as Unpublished and continue to Create operation processing as if the values decoded and validated in the steps above represent the only operation associated with the DID.
  3. Constructing the Operation Hash Map: generate a Create Operation Pool, which will house references to any Create operations processed in the steps below, and begin iterating through the operations present in the DID’s Operation Storage area as follows:

    1. Type-specific operation evaluation:

    2. Ensure a key exists in the Operation Hash Map corresponding to the Map Hash, and that the corresponding value is an array. If no property exists for the Map Hash, create one and let its value be an array.

    3. Insert the entry into the array of the Map Hash at its proper position in ascending Anchor Time order.

  4. Create operation processing: If no operations are present in the Create Operation Pool, cease resolution of the DID and return Unresolvable. If the Create Operation Pool contains operation entries, process them as follows:

    1. Store the value of the recoveryCommitment property from the entry’s Create Operation Suffix Data Object as the Next Recovery Commitment for use in processing the next Recovery operation.
    2. Retrieve the Chunk File Delta Entry corresponding to the operation and proceed to Step 3. If the Chunk File Delta Entry is not present because the associated Chunk File has not yet been retrieved and processed (i.e. node is a Light Node implementation, file was previously unavailable, etc.), perform the following steps:
      1. Using the CAS_PROTOCOL, fetch the Chunk File using the associated Chunk File URI. If the file cannot be retrieved, proceed to recovery and deactivate operation processing.
      2. Validate the Chunk File using the Chunk File Processing procedure. If the Chunk File is valid, proceed; if the file is invalid, proceed to recovery and deactivate operation processing.
    3. Validate the Chunk File Delta Entry. If the Chunk File Delta Entry is invalid, proceed to Recovery and deactivate operation processing.
    4. Generate a hash of the canonicalized Chunk File Delta Entry via the HASH_PROTOCOL and ensure the hash matches the value of the Create Operation Suffix Data Object deltaHash property. If the values are exactly equal, proceed, if they are not, proceed to recovery and deactivate operation processing.
    5. Store the updateCommitment value of the Chunk File Delta Entry as the Next Update Commitment for use in processing the next Update operation.
    6. Begin iterating the patches array in the Chunk File Delta Entry, and for each DID State Patch entry, perform the following steps:
      1. Validate the entry in accordance any requirements imposed by the Patch Action type indicated by the action value of the entry. If the entry is valid, proceed, if the entry fails validation, reverse all modifications to the DID’s state and proceed to recovery and deactivate operation processing.
      2. Apply the patch as directed by the Patch Action type specified by the action property. If any part of the patch fails or produces an error, reverse all modifications to the DID’s state and proceed to recovery and deactivate operation processing.
  5. Recovery and deactivate operation processing: when Create operations have been processed, process any Recovery and Deactivate operations that may exist in the Operation Hash Map via the iteration procedure below. If no Recovery and Deactivate operations are present, proceed to update operation processing.

    1. If a property is present in the Operation Hash Map that matches the Next Recovery Commitment exactly, process its array of operation entries using the following steps. If no property exists in the Operation Hash Map that matches the Next Recovery Commitment exactly, exit recovery and deactivate operation processing and advance to update operation processing.
    2. Iterate the array of operation entries forward from 0-index using the process enumerated below until all valid entries are found and processed:
    3. Once all Recovery and Deactivate operations have been processed, if the Next Update Commitment value is present, proceed to update operation processing. If the Next Update Commitment value is not present or the DID is in a Deactivated state, proceed to compiled state processing.
  6. Update operation processing: if the DID is marked as Deactivated or the Next Update Commitment value is not present, skip Update processing and proceed to compiled state processing. If the Next Update Commitment value is present and no Deactivate operations were successfully processed during recovery and deactivate operation processing, process any Update operations that may exist in the Operation Hash Map using the following processing loop:

    1. If a property is present in the Operation Hash Map that matches the Next Update Commitment exactly, process its array of operation entries using the following steps. If no property exists in the Operation Hash Map that matches the Next Update Commitment exactly, exit update operation processing and advance to compiled state processing.

    2. Iterate the array of operation entries forward from 0-index using the process enumerated below until all valid entries are found and processed:

      1. Retrieve the operation’s Provisional Proof File Update Entry and Chunk File Delta Entry from the pre-processed Provisional Proof File and Chunk File associated with the operation and proceed to validation of the entries, or, if the Provisional Proof File and Chunk File have yet to be retrieved and processed (e.g. the resolving node is in a Light Node configuration), perform the following steps:
        1. Using the CAS_PROTOCOL, fetch the Provisional Proof File and Chunk File using the associated Provisional Proof File URI and Chunk File URI.
        2. If the Provisional Proof File is unable to be retrieved, skip the entry and advance to the next operation.
        3. Validate the Provisional Proof File. If the file is valid, proceed, if the file is invalid, skip the entry and advance to the next operation.
      2. Using the revealed updateKey JWK value, validate the Update Operation Signed Data Object signature. If the signature is valid, proceed, if the signature is invalid, skip the entry and iterate forward to the next entry.
      3. Validate the Chunk File and Chunk File Delta Entry. If the Chunk File and Chunk File Delta Entry are valid, proceed, if the entry is invalid, skip the entry and iterate forward to the next entry.
      4. Generate a hash of the canonicalized Chunk File Delta Entry via the HASH_PROTOCOL and ensure the hash equals the value of the Update Operation Signed Data Object deltaHash property. If the values are exactly equal, proceed, if they are not, skip the entry and iterate forward to the next entry.
      5. Store the updateCommitment value of the Chunk File Delta Entry as the Next Update Commitment for use in processing the next Update operation.
      6. Begin iterating the patches array in the Chunk File Delta Entry, and for each DID State Patch entry, perform the following steps:
        1. Apply the patch as directed by the Patch Action type specified by the action property. If any of the patches produce an error, reverse all of this operation’s patch modifications to the DID state data, while retaining the successful rotation to the next Next Update Commitment value, and iterate forward to the next operation.
  7. Compiled state processing: After the DID’s operations have been evaluated in the compilation steps above, the implementation MUST use the DID’s compiled state to generate a valid DID Document in accordance with the W3C Decentralized Identifiers specification. If your implementation is designed to produce a different format of state data, ensure it outputs in accordance with the format you are targeting.

  8. If the implementation is outputting DID state data as a DID Document, and the DID Document is being rendered in the JSON-LD representation variant, the implementer SHOULD add an @base entry to the document’s @context, and set the @base value to the id of the resolved DID. This ensures relative path values in the output DID Document are correctly projected into id-related strings by JSON-LD parsers.

  9. Once a valid DID state output has been generated (e.g. a valid DID Document), proceed to the DID Resolver Output process if you intend to render the output as a DID Document, in accordance with the Decentralized Identifier Resolution specification.
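
The Long-Form DID checks in step 2.2 above (isolate, decode, canonicalize, re-encode, and hash) can be illustrated with a short sketch. The following non-normative TypeScript assumes base64url as the DATA_ENCODING_SCHEME, SHA-256 multihash output as the suffix hashing process, and a simplified key-sorting canonicalization standing in for JCS; these are common parameter choices, not requirements of this section, and a conformant implementation must use its own declared protocol parameters.

// Non-normative sketch of Long-Form DID initial-state verification (step 2.2 above).
// Assumptions: base64url DATA_ENCODING_SCHEME, SHA-256 multihash suffix hashing,
// and a simplified key-sorting canonicalization standing in for JCS.
import { createHash } from "crypto";

function canonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  return `{${Object.keys(value as object).sort()
    .map(key => `${JSON.stringify(key)}:${canonicalize((value as Record<string, unknown>)[key])}`)
    .join(",")}}`;
}

function computeDidSuffix(suffixData: unknown): string {
  const digest = createHash("sha256").update(canonicalize(suffixData)).digest();
  // 0x12 0x20 is the multihash header for a 32-byte SHA-256 digest.
  return Buffer.concat([Buffer.from([0x12, 0x20]), digest]).toString("base64url");
}

function verifyLongFormDid(longFormDid: string): boolean {
  const segments = longFormDid.split(":");
  if (segments.length < 4) return false;
  const encodedInitialState = segments[segments.length - 1]; // last colon-separated segment
  const didUniqueSuffix = segments[segments.length - 2];

  // Decode, parse, and confirm the canonicalized value re-encodes to the original segment.
  const decoded = Buffer.from(encodedInitialState, "base64url").toString("utf8");
  let initialState: { suffixData?: unknown };
  try { initialState = JSON.parse(decoded); } catch { return false; }
  const reencoded = Buffer.from(canonicalize(initialState), "utf8").toString("base64url");
  if (reencoded !== encodedInitialState) return false;

  // The hash of the canonicalized Create Operation Suffix Data Object must match the suffix.
  return computeDidSuffix(initialState.suffixData) === didUniqueSuffix;
}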

§ DID Resolver Output

The following describes how to construct a Decentralized Identifier Resolution-compliant Resolution Result based on a DID resolved via the Operation Compilation process described in the section above.

If the DID was determined to be Not Found or Unresolvable, return a response consistent with those states. If the compiled DID was not determined to be Not Found or Unresolvable (per the Operation Compilation process above), proceed as follows:

  1. Generate a JSON object for the Resolution Result, structured in accordance with the Decentralized Identifier Resolution specification.

  2. Set the didDocument property of the Resolution Result object to the resolved DID Document generated via the Operation Compilation process.

  3. The Resolution Result object MUST include a didDocumentMetadata property, and its value MUST be an object composed of the following properties:

    EXAMPLE
    "didDocumentMetadata": {
      "deactivated": true,
      "canonicalId": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
      "equivalentId": ["did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg"],
      "method": {
        "published": true,
        "recoveryCommitment": "EiBfOZdMtU6OBw8Pk879QtZ-2J-9FbbjSZyoaA_bqD4zhA",
        "updateCommitment": "EiDOrcmPtfMHuwIWN6YoihdeIPxOKDHy3D6sdMXu_7CN0w"
      }
    }
    
    • deactivated - This property MUST be present if the resolved DID is determined to be in a deactivated state, and it MUST be set to the boolean value true. If the resolved DID is not in a deactivated state, this value MUST be set to the boolean value false.
    • canonicalId - If a canonical representation of the resolved DID exists, the implementation MUST include the canonicalId property, and the presence and value of the canonicalId property is determined as follows:
      1. Presence and value of the canonicalId property:
        • If the DID being resolved is a Long-Form DID representation and is unpublished, the canonicalId property MUST NOT be included in the didDocumentMetadata object.
        • If the DID being resolved is a Long-Form DID representation and is published, the canonicalId property MUST be included in the didDocumentMetadata object, and its value MUST be the Short-Form DID representation.
        • If the DID being resolved is a Short-Form DID representation and is published, the canonicalId property MUST be included in the didDocumentMetadata object, and its value MUST be the Short-Form DID representation.
      2. Inclusion of the canonical DID representation in the equivalentId array:
        • If under any of the cases above there is a canonical DID representation included for the canonicalId property, the canonical DID representation MUST also be included in the equivalentId array. See below for details on the equivalentId property.
    • equivalentId - If equivalent representations of the resolved DID exist, the implementation MUST include the equivalentId property, and the presence and value of the equivalentId property is determined as follows:
      • If the DID being resolved is a Long-Form DID representation, the equivalentId property MUST be included in the didDocumentMetadata object, and its array value MUST include the Short-Form DID representation.
    • method - Its value MUST be an object composed of the following values:
      1. The object MUST include a published property with a boolean value. If the compiled DID state is flagged as Unpublished and/or Not Found (per the Operation Compilation process), the published property MUST be set to false, otherwise, set the value to true if a valid anchoring entry was located for the DID.
      2. The object MUST include an updateCommitment property, and its value MUST be the updateCommitment hash value expected to be fulfilled with the next updateKey revealed in the next Update operation.
      3. The object MUST include a recoveryCommitment property, and its value MUST be the recoveryCommitment hash value expected to be fulfilled with the next recoveryKey revealed in the next Recovery operation.
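
As a non-normative illustration of the rules above, the following TypeScript sketch assembles the didDocumentMetadata object from a resolver's internal state; the ResolutionState shape is an illustrative stand-in for whatever an implementation tracks during Operation Compilation.

// Non-normative sketch of didDocumentMetadata assembly per the rules above.
interface ResolutionState {
  deactivated: boolean;
  published: boolean;
  isLongForm: boolean;            // whether the DID URI being resolved was Long-Form
  shortFormDid: string;           // the canonical Short-Form DID representation
  nextUpdateCommitment?: string;
  nextRecoveryCommitment?: string;
}

function buildDidDocumentMetadata(state: ResolutionState): Record<string, unknown> {
  const metadata: Record<string, unknown> = {
    deactivated: state.deactivated,
    method: {
      published: state.published,
      updateCommitment: state.nextUpdateCommitment,
      recoveryCommitment: state.nextRecoveryCommitment,
    },
  };

  // canonicalId is only included once the DID is published, and always carries
  // the Short-Form representation.
  if (state.published) metadata.canonicalId = state.shortFormDid;

  // equivalentId includes the Short-Form DID for Long-Form resolutions, and must
  // also include the canonical DID whenever canonicalId is present.
  const equivalentId = new Set<string>();
  if (state.isLongForm) equivalentId.add(state.shortFormDid);
  if (typeof metadata.canonicalId === "string") equivalentId.add(metadata.canonicalId);
  if (equivalentId.size > 0) metadata.equivalentId = Array.from(equivalentId);

  return metadata;
}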

§ Unresolvable DIDs

If a DID is found to be unresolvable, per the logic defined under the Operation Compilation section, a Sidetree-compliant node SHOULD return the appropriate error code over the transport of the resolution request. For HTTP, you MUST return the responses and status codes defined by the Sidetree API specification section on Resolution.

§ Late Publishing

Sidetree is an eventually strongly consistent, conflict-free state resolution system based on cryptographically signed, delta-based DID operations, which derives its deterministic order of operations from the position of operation entries in a decentralized anchoring system. Unlike the native tokens of a strongly immutable anchoring system (e.g. Bitcoin), DIDs represent unique identifiers that are generally intended to be non-transferable. As such, the Sidetree protocol provides no technical mechanism for exchanging ownership of DIDs with ‘double-spend’ assurance, the way one might do with a fungible cryptocurrency token.

For Sidetree, non-transferability manifests in a distinct way: a DID owner is ultimately in control of their past, present, and future state changes, and can expose state change operations as they choose across the lineage of their DID’s operational history. DID owners can create forks within their own DID state history, and nothing forces them to expose DID state operations they anchor. A DID operation anchored in the past, at Time X, can be exposed at some point in the future, at Time Y. This means Sidetree nodes could become aware of past operations that create a change in the lineage of a DID - this is known as Late Publishing of a DID operation. However, due to the non-transferability of DIDs, this condition is isolated to each DID’s own state lineage, and resolved by Sidetree’s deterministic ruleset, which guarantees only one fork of a DID’s state history can ever be valid. To better understand this, consider the following diagram that illustrates a DID owner, Alice, creating forks by creating and anchoring operations in the past that she does not expose to the network:

graph TB
    0 --> 1
    1 --> 2a
    1 --> 2b
    2b --> 3

As you can see above, Alice has created a fork by anchoring the divergent operations 2a and 2b. Let us assume Alice refrains from publishing the CAS files that other Sidetree nodes would detect to locate and replicate the data for operation 2a, and further, assume Alice continues creating more operation history stemming from operation 2b. Whenever Alice exposes the DID operation data for 2a, other Sidetree nodes will need to decide which operation between 2a and 2b is the ‘right’ operation. The Sidetree protocol includes a strict rule that resolves this conflict, and any variation of it: the earliest operation in Anchor Time always wins. Because operation 2a precedes operation 2b in Anchor Time, whenever Alice decides to publish operation 2a, all other Sidetree nodes will process the operation and immediately deem operation 2a to be the valid, correct operational fork. This remains true even if Alice continues building operational history stemming from operation 2b any amount of time into the future.

With this example of late publishing in mind, the most important aspect to remember is that DID owners decide what the PKI state of their DIDs should be, and remain in control of that state independent of the shape of their DID operational history. The net takeaway is that regardless of how a DID owner decides to update the state of their DID, the decision over what that state is remains entirely their choice.

§ Method Versioning

It is RECOMMENDED that Sidetree-based DID Methods implement the following versioning structures to support development, testing, staging and production network deployments.

We define a network suffix as follows for a given DID Method:

did:<method>:<network>:<didUniqueSuffix>

If no network suffix is provided, it is assumed that the “mainnet” or “production” network is to be used. For example, these DIDs should resolve to the same DID state:

did:elem:mainnet:EiD0x0JeWXQbVIpBpyeyF5FDdZN1U7enAfHnd13Qk_CYpQ
did:elem:EiD0x0JeWXQbVIpBpyeyF5FDdZN1U7enAfHnd13Qk_CYpQ

An ION DID on the Bitcoin Testnet3 network is defined as follows:

did:ion:testnet3:EiD0x0JeWXQbVIpBpyeyF5FDdZN1U7enAfHnd13Qk_CYpQ

An ELEM DID on the Ethereum Ropsten testnet is defined as follows:

did:elem:ropsten:EiD0x0JeWXQbVIpBpyeyF5FDdZN1U7enAfHnd13Qk_CYpQ
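
A resolver supporting this structure needs to distinguish the optional network segment from the DID Unique Suffix. The following non-normative TypeScript sketch does so against a per-method list of known networks; the KNOWN_NETWORKS value and the default of "mainnet" are illustrative assumptions.

// Non-normative sketch of network-suffix parsing for a versioned Sidetree DID Method.
const KNOWN_NETWORKS = new Set(["mainnet", "testnet3", "ropsten"]); // per-method configuration

function parseMethodDid(did: string): { network: string; didUniqueSuffix: string } | undefined {
  const segments = did.split(":"); // did:<method>[:<network>]:<didUniqueSuffix>
  if (segments.length === 4 && KNOWN_NETWORKS.has(segments[2])) {
    return { network: segments[2], didUniqueSuffix: segments[3] };
  }
  if (segments.length === 3) {
    // No network segment: assume the "mainnet" / production network.
    return { network: "mainnet", didUniqueSuffix: segments[2] };
  }
  return undefined;
}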

WARNING

Implementers should be aware that if the underlying decentralized anchoring system were to fork, the identifiers will also be forked. In this case, a new identifier must be created, either through an indication at the network layer or with a completely new method name to identify the decentralized identifiers of the forked network.

§ Context

Per the DID Core Spec, an @context MAY be used to represent a DID Document as Linked Data.

If an @context is present, any properties not defined in DID Core MUST be defined in this context, or in a DID Method specific one.

For example:

{
    "@context": [
        "https://www.w3.org/ns/did/v1", 
        "https://identity.foundation/sidetree/contexts/v1"
        "https://example.com/method/specific.jsonld"
    ]
}

§ recovery

A verificationMethod used to support DID Document Recover Operation verification.

For Example:

{
    "@context": [
        "https://www.w3.org/ns/did/v1", 
        "https://identity.foundation/sidetree/contexts/v1"
    ],
    "recovery": [{
      "id": "did:example:123#JUvpllMEYUZ2joO59UNui_XYDqxVqiFLLAJ8klWuPBw",
      "type": "EcdsaSecp256k1VerificationKey2019",
      "publicKeyJwk": {
        "crv": "secp256k1",
        "kid": "JUvpllMEYUZ2joO59UNui_XYDqxVqiFLLAJ8klWuPBw",
        "kty": "EC",
        "x": "dWCvM4fTdeM0KmloF57zxtBPXTOythHPMm1HCLrdd3A",
        "y": "36uMVGM7hnw-N6GnjFcihWE3SkrhMLzzLCdPMXPEXlA"
      }
    }]
}

§ operation

A verificationMethod used to support verification of DID Document Operations: Create, Update, Deactivate.

For Example:

{
    "@context": [
        "https://www.w3.org/ns/did/v1", 
        "https://identity.foundation/sidetree/contexts/v1"
    ],
    "operation": [{
      "id": "did:example:123#JUvpllMEYUZ2joO59UNui_XYDqxVqiFLLAJ8klWuPBw",
      "type": "EcdsaSecp256k1VerificationKey2019",
      "publicKeyJwk": {
        "crv": "secp256k1",
        "kid": "JUvpllMEYUZ2joO59UNui_XYDqxVqiFLLAJ8klWuPBw",
        "kty": "EC",
        "x": "dWCvM4fTdeM0KmloF57zxtBPXTOythHPMm1HCLrdd3A",
        "y": "36uMVGM7hnw-N6GnjFcihWE3SkrhMLzzLCdPMXPEXlA"
      }
    }]
}

§ usage

Deprecated. DO NOT USE.

It was introduced to support key operations prior to Sidetree protocol specification v1.

§ publicKeyJwk

A public key in JWK format. A JSON Web Key (JWK) is a JavaScript Object Notation (JSON) data structure that represents a cryptographic key. Read RFC7517.

Example:

{
  "id": "did:example:123#JUvpllMEYUZ2joO59UNui_XYDqxVqiFLLAJ8klWuPBw",
  "type": "EcdsaSecp256k1VerificationKey2019",
  "publicKeyJwk": {
    "crv": "secp256k1",
    "kid": "JUvpllMEYUZ2joO59UNui_XYDqxVqiFLLAJ8klWuPBw",
    "kty": "EC",
    "x": "dWCvM4fTdeM0KmloF57zxtBPXTOythHPMm1HCLrdd3A",
    "y": "36uMVGM7hnw-N6GnjFcihWE3SkrhMLzzLCdPMXPEXlA"
  }
}

§ publicKeyHex

A hex encoded compressed public key.

Example:

{
  "id": "did:example:123#JUvpllMEYUZ2joO59UNui_XYDqxVqiFLLAJ8klWuPBw",
  "type": "EcdsaSecp256k1VerificationKey2019",
  "publicKeyHex": "027560af3387d375e3342a6968179ef3c6d04f5d33b2b611cf326d4708badd7770"
}

§ Method & Client Guidelines

The following are advisements and best practices for DID Method and Client (SDK, wallets, etc.) implementers that interact with Sidetree-based DID Methods. These guidelines should be carefully considered when implementing or interacting with a Sidetree-based DID Method.

§ Sidetree Operations

A Sidetree client manages keys and performs document operations on behalf of a DID owner. The Sidetree client needs to comply with the following guidelines to securely and successfully manage a user’s DIDs:

  1. The client MUST keep the operation payload once it is submitted to a Sidetree node until it is generally available and observed. If the submitted operation is not anchored and propagated, for whatever reason, the same operation payload MUST be resubmitted. Submitting a different operation payload can put the DID at risk of late publish branching, which can lead to an unrecoverable DID if the original operation payload contains a recovery key rotation and that recovery key is lost. While this is an unlikely edge case, it is best to simply retain these small operation payloads.

  2. Another reason to retain operation payloads is to always have them available in case you want to serve them across the backing Content Addressable Storage network. Most users won’t elect to do this, but advanced wallets and users who seek maximum independence from any reliance on the persistence of their operations in the network may want to.

§ Update vs Recovery Keys

It is advised that clients managing DIDs try as best as possible to separate the concepts of Update and Recovery keys. Compromise or loss of Update keys does not permanently imperil a user’s control over their DID, whereas loss or compromise of a Recovery key will. As such, it is important to create appropriate protections and processes for securing and using each type of key, commensurate with their level of control and risk.

§ Appendix

§ Test Vectors

The Resolution test vectors are the result of applying operations and obtaining resolution results.

§ DID

{
  "longFormDid": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljS2V5cyI6W3siaWQiOiJwdWJsaWNLZXlNb2RlbDFJZCIsInB1YmxpY0tleUp3ayI6eyJjcnYiOiJzZWNwMjU2azEiLCJrdHkiOiJFQyIsIngiOiJ0WFNLQl9ydWJYUzdzQ2pYcXVwVkpFelRjVzNNc2ptRXZxMVlwWG45NlpnIiwieSI6ImRPaWNYcWJqRnhvR0otSzAtR0oxa0hZSnFpY19EX09NdVV3a1E3T2w2bmsifSwicHVycG9zZXMiOlsiYXV0aGVudGljYXRpb24iLCJrZXlBZ3JlZW1lbnQiXSwidHlwZSI6IkVjZHNhU2VjcDI1NmsxVmVyaWZpY2F0aW9uS2V5MjAxOSJ9XSwic2VydmljZXMiOlt7ImlkIjoic2VydmljZTFJZCIsInNlcnZpY2VFbmRwb2ludCI6Imh0dHA6Ly93d3cuc2VydmljZTEuY29tIiwidHlwZSI6InNlcnZpY2UxVHlwZSJ9XX19XSwidXBkYXRlQ29tbWl0bWVudCI6IkVpREtJa3dxTzY5SVBHM3BPbEhrZGI4Nm5ZdDBhTnhTSFp1MnItYmhFem5qZEEifSwic3VmZml4RGF0YSI6eyJkZWx0YUhhc2giOiJFaUNmRFdSbllsY0Q5RUdBM2RfNVoxQUh1LWlZcU1iSjluZmlxZHo1UzhWRGJnIiwicmVjb3ZlcnlDb21taXRtZW50IjoiRWlCZk9aZE10VTZPQnc4UGs4NzlRdFotMkotOUZiYmpTWnlvYUFfYnFENHpoQSJ9fQ",
  "shortFormDid": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg"
}

§ Operation Inputs

The following operation inputs are in the form of Sidetree REST API Operations.

§ Create
{
  "type": "create",
  "suffixData": {
    "deltaHash": "EiCfDWRnYlcD9EGA3d_5Z1AHu-iYqMbJ9nfiqdz5S8VDbg",
    "recoveryCommitment": "EiBfOZdMtU6OBw8Pk879QtZ-2J-9FbbjSZyoaA_bqD4zhA"
  },
  "delta": {
    "updateCommitment": "EiDKIkwqO69IPG3pOlHkdb86nYt0aNxSHZu2r-bhEznjdA",
    "patches": [
      {
        "action": "replace",
        "document": {
          "publicKeys": [
            {
              "id": "publicKeyModel1Id",
              "type": "EcdsaSecp256k1VerificationKey2019",
              "publicKeyJwk": {
                "kty": "EC",
                "crv": "secp256k1",
                "x": "tXSKB_rubXS7sCjXqupVJEzTcW3MsjmEvq1YpXn96Zg",
                "y": "dOicXqbjFxoGJ-K0-GJ1kHYJqic_D_OMuUwkQ7Ol6nk"
              },
              "purposes": [
                "authentication",
                "keyAgreement"
              ]
            }
          ],
          "services": [
            {
              "id": "service1Id",
              "type": "service1Type",
              "serviceEndpoint": "http://www.service1.com"
            }
          ]
        }
      }
    ]
  }
}

§ Update
{
  "type": "update",
  "didSuffix": "EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
  "revealValue": "EiBkRSeixqX-PhOij6PIpuGfPld5Nif5MxcrgtGCw-t6LA",
  "delta": {
    "patches": [
      {
        "action": "add-public-keys",
        "publicKeys": [
          {
            "id": "additional-key",
            "type": "EcdsaSecp256k1VerificationKey2019",
            "publicKeyJwk": {
              "kty": "EC",
              "crv": "secp256k1",
              "x": "aN75CTjy3VCgGAJDNJHbcb55hO8CobEKzgCNrUeOwAY",
              "y": "K9FhCEpa_jG09pB6qriXrgSvKzXm6xtxBvZzIoXXWm4"
            },
            "purposes": [
              "authentication",
              "assertionMethod",
              "capabilityInvocation",
              "capabilityDelegation",
              "keyAgreement"
            ]
          }
        ]
      }
    ],
    "updateCommitment": "EiDOrcmPtfMHuwIWN6YoihdeIPxOKDHy3D6sdMXu_7CN0w"
  },
  "signedData": "eyJhbGciOiJFUzI1NksifQ.eyJ1cGRhdGVLZXkiOnsia3R5IjoiRUMiLCJjcnYiOiJzZWNwMjU2azEiLCJ4Ijoid2Z3UUNKM09ScVZkbkhYa1Q4UC1MZ19HdHhCRWhYM3R5OU5VbnduSHJtdyIsInkiOiJ1aWU4cUxfVnVBblJEZHVwaFp1eExPNnFUOWtQcDNLUkdFSVJsVHBXcmZVIn0sImRlbHRhSGFzaCI6IkVpQ3BqTjQ3ZjBNcTZ4RE5VS240aFNlZ01FcW9EU19ycFEyOVd5MVY3M1ZEYncifQ.RwZK1DG5zcr4EsrRImzStb0VX5j2ZqApXZnuoAkA3IoRdErUscNG8RuxNZ0FjlJtjMJ0a-kn-_MdtR0wwvWVgg"
}

§ Recover
{
  "type": "recover",
  "didSuffix": "EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
  "revealValue": "EiAJ-97Is59is6FKAProwDo870nmwCeP8n5nRRFwPpUZVQ",
  "signedData": "eyJhbGciOiJFUzI1NksifQ.eyJkZWx0YUhhc2giOiJFaUNTem1ZSk0yWGpaWE00a1Q0bGpKcEVGTjVmVkM1QVNWZ3hSekVtMEF2OWp3IiwicmVjb3ZlcnlLZXkiOnsia3R5IjoiRUMiLCJjcnYiOiJzZWNwMjU2azEiLCJ4IjoibklxbFJDeDBleUJTWGNRbnFEcFJlU3Y0enVXaHdDUldzc29jOUxfbmo2QSIsInkiOiJpRzI5Vks2bDJVNXNLQlpVU0plUHZ5RnVzWGdTbEsyZERGbFdhQ004RjdrIn0sInJlY292ZXJ5Q29tbWl0bWVudCI6IkVpQ3NBN1NHTE5lZGE1SW5sb3Fub2tVY0pGejZ2S1Q0SFM1ZGNLcm1ubEpocEEifQ.lxWnrg5jaeCAhYuz1fPhidKw6Z2cScNlEc6SWcs15DtJbrHZFxl5IezGJ3cWdOSS2DlzDl4M1ZF8dDE9kRwFeQ",
  "delta": {
    "patches": [
      {
        "action": "replace",
        "document": {
          "publicKeys": [
            {
              "id": "newKey",
              "type": "EcdsaSecp256k1VerificationKey2019",
              "publicKeyJwk": {
                "kty": "EC",
                "crv": "secp256k1",
                "x": "JUWp0pAMGevNLhqq_Qmd48izuLYfO5XWpjSmy5btkjc",
                "y": "QYaSu1NHYnxR4qfk-RkXb4NQnQf1X3XQCpDYuibvlNc"
              },
              "purposes": [
                "authentication",
                "assertionMethod",
                "capabilityInvocation",
                "capabilityDelegation",
                "keyAgreement"
              ]
            }
          ],
          "services": [
            {
              "id": "serviceId123",
              "type": "someType",
              "serviceEndpoint": "https://www.url.com"
            }
          ]
        }
      }
    ],
    "updateCommitment": "EiD6_csybTfxELBoMgkE9O2BTCmhScG_RW_qaZQkIkJ_aQ"
  }
}

§ Deactivate
{
  "type": "deactivate",
  "didSuffix": "EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
  "revealValue": "EiB-dib5oumdaDGH47TB17Qg1nHza036bTIGibQOKFUY2A",
  "signedData": "eyJhbGciOiJFUzI1NksifQ.eyJkaWRTdWZmaXgiOiJFaUR5T1FiYlpBYTNhaVJ6ZUNrVjdMT3gzU0VSampIOTNFWG9JTTNVb040b1dnIiwicmVjb3ZlcnlLZXkiOnsia3R5IjoiRUMiLCJjcnYiOiJzZWNwMjU2azEiLCJ4IjoiSk1ucF9KOW5BSGFkTGpJNmJfNVU3M1VwSEZqSEZTVHdtc1ZUUG9FTTVsMCIsInkiOiJ3c1QxLXN0UWJvSldPeEJyUnVINHQwVV9zX1lSQy14WXQyRkFEVUNHR2M4In19.ARTZrvupKdShOFNAJ4EWnsuaONKBgXUiwY5Ct10a9IXIp1uFsg0UyDnZGZtJT2v2bgtmYsQBmT6L9kKaaDcvUQ"
}

§ Resolution

§ Create
{
  "@context": "https://w3id.org/did-resolution/v1",
  "didDocument": {
    "id": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
    "@context": [
      "https://www.w3.org/ns/did/v1",
      {
        "@base": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg"
      }
    ],
    "service": [
      {
        "id": "#service1Id",
        "type": "service1Type",
        "serviceEndpoint": "http://www.service1.com"
      }
    ],
    "verificationMethod": [
      {
        "id": "#publicKeyModel1Id",
        "controller": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "publicKeyJwk": {
          "kty": "EC",
          "crv": "secp256k1",
          "x": "tXSKB_rubXS7sCjXqupVJEzTcW3MsjmEvq1YpXn96Zg",
          "y": "dOicXqbjFxoGJ-K0-GJ1kHYJqic_D_OMuUwkQ7Ol6nk"
        }
      }
    ],
    "authentication": [
      "#publicKeyModel1Id"
    ],
    "keyAgreement": [
      "#publicKeyModel1Id"
    ]
  },
  "didDocumentMetadata": {
    "canonicalId": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
    "method": {
      "published": true,
      "recoveryCommitment": "EiBfOZdMtU6OBw8Pk879QtZ-2J-9FbbjSZyoaA_bqD4zhA",
      "updateCommitment": "EiDKIkwqO69IPG3pOlHkdb86nYt0aNxSHZu2r-bhEznjdA"
    }
  }
}
§ Update
{
  "@context": "https://w3id.org/did-resolution/v1",
  "didDocument": {
    "id": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
    "@context": [
      "https://www.w3.org/ns/did/v1",
      {
        "@base": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg"
      }
    ],
    "service": [
      {
        "id": "#service1Id",
        "type": "service1Type",
        "serviceEndpoint": "http://www.service1.com"
      }
    ],
    "verificationMethod": [
      {
        "id": "#publicKeyModel1Id",
        "controller": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "publicKeyJwk": {
          "kty": "EC",
          "crv": "secp256k1",
          "x": "tXSKB_rubXS7sCjXqupVJEzTcW3MsjmEvq1YpXn96Zg",
          "y": "dOicXqbjFxoGJ-K0-GJ1kHYJqic_D_OMuUwkQ7Ol6nk"
        }
      },
      {
        "id": "#additional-key",
        "controller": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "publicKeyJwk": {
          "kty": "EC",
          "crv": "secp256k1",
          "x": "aN75CTjy3VCgGAJDNJHbcb55hO8CobEKzgCNrUeOwAY",
          "y": "K9FhCEpa_jG09pB6qriXrgSvKzXm6xtxBvZzIoXXWm4"
        }
      }
    ],
    "authentication": [
      "#publicKeyModel1Id",
      "#additional-key"
    ],
    "keyAgreement": [
      "#publicKeyModel1Id",
      "#additional-key"
    ],
    "assertionMethod": [
      "#additional-key"
    ],
    "capabilityInvocation": [
      "#additional-key"
    ],
    "capabilityDelegation": [
      "#additional-key"
    ]
  },
  "didDocumentMetadata": {
    "canonicalId": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
    "method": {
      "published": true,
      "recoveryCommitment": "EiBfOZdMtU6OBw8Pk879QtZ-2J-9FbbjSZyoaA_bqD4zhA",
      "updateCommitment": "EiDOrcmPtfMHuwIWN6YoihdeIPxOKDHy3D6sdMXu_7CN0w"
    }
  }
}
§ Recover
{
  "@context": "https://w3id.org/did-resolution/v1",
  "didDocument": {
    "id": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
    "@context": [
      "https://www.w3.org/ns/did/v1",
      {
        "@base": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg"
      }
    ],
    "service": [
      {
        "id": "#serviceId123",
        "type": "someType",
        "serviceEndpoint": "https://www.url.com"
      }
    ],
    "verificationMethod": [
      {
        "id": "#newKey",
        "controller": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "publicKeyJwk": {
          "kty": "EC",
          "crv": "secp256k1",
          "x": "JUWp0pAMGevNLhqq_Qmd48izuLYfO5XWpjSmy5btkjc",
          "y": "QYaSu1NHYnxR4qfk-RkXb4NQnQf1X3XQCpDYuibvlNc"
        }
      }
    ],
    "authentication": [
      "#newKey"
    ],
    "assertionMethod": [
      "#newKey"
    ],
    "capabilityInvocation": [
      "#newKey"
    ],
    "capabilityDelegation": [
      "#newKey"
    ],
    "keyAgreement": [
      "#newKey"
    ]
  },
  "didDocumentMetadata": {
    "canonicalId": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg",
    "method": {
      "published": true,
      "recoveryCommitment": "EiCsA7SGLNeda5InloqnokUcJFz6vKT4HS5dcKrmnlJhpA",
      "updateCommitment": "EiD6_csybTfxELBoMgkE9O2BTCmhScG_RW_qaZQkIkJ_aQ"
    }
  }
}
§ Deactivate
{
  "@context": "https://w3id.org/did-resolution/v1",
  "didDocument": { 
    "id": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg", 
    "@context": [ 
      "https://www.w3.org/ns/did/v1", 
      { "@base": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg" }
    ] 
  },
  "didDocumentMetadata": {
    "deactivated": true,
    "method": {
      "published": true
    },
    "canonicalId": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg"
  }
}
§ Long-Form Response
{
  "@context": "https://w3id.org/did-resolution/v1",
  "didDocument": {
    "id": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljS2V5cyI6W3siaWQiOiJwdWJsaWNLZXlNb2RlbDFJZCIsInB1YmxpY0tleUp3ayI6eyJjcnYiOiJzZWNwMjU2azEiLCJrdHkiOiJFQyIsIngiOiJ0WFNLQl9ydWJYUzdzQ2pYcXVwVkpFelRjVzNNc2ptRXZxMVlwWG45NlpnIiwieSI6ImRPaWNYcWJqRnhvR0otSzAtR0oxa0hZSnFpY19EX09NdVV3a1E3T2w2bmsifSwicHVycG9zZXMiOlsiYXV0aGVudGljYXRpb24iLCJrZXlBZ3JlZW1lbnQiXSwidHlwZSI6IkVjZHNhU2VjcDI1NmsxVmVyaWZpY2F0aW9uS2V5MjAxOSJ9XSwic2VydmljZXMiOlt7ImlkIjoic2VydmljZTFJZCIsInNlcnZpY2VFbmRwb2ludCI6Imh0dHA6Ly93d3cuc2VydmljZTEuY29tIiwidHlwZSI6InNlcnZpY2UxVHlwZSJ9XX19XSwidXBkYXRlQ29tbWl0bWVudCI6IkVpREtJa3dxTzY5SVBHM3BPbEhrZGI4Nm5ZdDBhTnhTSFp1MnItYmhFem5qZEEifSwic3VmZml4RGF0YSI6eyJkZWx0YUhhc2giOiJFaUNmRFdSbllsY0Q5RUdBM2RfNVoxQUh1LWlZcU1iSjluZmlxZHo1UzhWRGJnIiwicmVjb3ZlcnlDb21taXRtZW50IjoiRWlCZk9aZE10VTZPQnc4UGs4NzlRdFotMkotOUZiYmpTWnlvYUFfYnFENHpoQSJ9fQ",
    "@context": [
      "https://www.w3.org/ns/did/v1",
      {
        "@base": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljS2V5cyI6W3siaWQiOiJwdWJsaWNLZXlNb2RlbDFJZCIsInB1YmxpY0tleUp3ayI6eyJjcnYiOiJzZWNwMjU2azEiLCJrdHkiOiJFQyIsIngiOiJ0WFNLQl9ydWJYUzdzQ2pYcXVwVkpFelRjVzNNc2ptRXZxMVlwWG45NlpnIiwieSI6ImRPaWNYcWJqRnhvR0otSzAtR0oxa0hZSnFpY19EX09NdVV3a1E3T2w2bmsifSwicHVycG9zZXMiOlsiYXV0aGVudGljYXRpb24iLCJrZXlBZ3JlZW1lbnQiXSwidHlwZSI6IkVjZHNhU2VjcDI1NmsxVmVyaWZpY2F0aW9uS2V5MjAxOSJ9XSwic2VydmljZXMiOlt7ImlkIjoic2VydmljZTFJZCIsInNlcnZpY2VFbmRwb2ludCI6Imh0dHA6Ly93d3cuc2VydmljZTEuY29tIiwidHlwZSI6InNlcnZpY2UxVHlwZSJ9XX19XSwidXBkYXRlQ29tbWl0bWVudCI6IkVpREtJa3dxTzY5SVBHM3BPbEhrZGI4Nm5ZdDBhTnhTSFp1MnItYmhFem5qZEEifSwic3VmZml4RGF0YSI6eyJkZWx0YUhhc2giOiJFaUNmRFdSbllsY0Q5RUdBM2RfNVoxQUh1LWlZcU1iSjluZmlxZHo1UzhWRGJnIiwicmVjb3ZlcnlDb21taXRtZW50IjoiRWlCZk9aZE10VTZPQnc4UGs4NzlRdFotMkotOUZiYmpTWnlvYUFfYnFENHpoQSJ9fQ"
      }
    ],
    "service": [
      {
        "id": "#service1Id",
        "type": "service1Type",
        "serviceEndpoint": "http://www.service1.com"
      }
    ],
    "verificationMethod": [
      {
        "id": "#publicKeyModel1Id",
        "controller": "did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg:eyJkZWx0YSI6eyJwYXRjaGVzIjpbeyJhY3Rpb24iOiJyZXBsYWNlIiwiZG9jdW1lbnQiOnsicHVibGljS2V5cyI6W3siaWQiOiJwdWJsaWNLZXlNb2RlbDFJZCIsInB1YmxpY0tleUp3ayI6eyJjcnYiOiJzZWNwMjU2azEiLCJrdHkiOiJFQyIsIngiOiJ0WFNLQl9ydWJYUzdzQ2pYcXVwVkpFelRjVzNNc2ptRXZxMVlwWG45NlpnIiwieSI6ImRPaWNYcWJqRnhvR0otSzAtR0oxa0hZSnFpY19EX09NdVV3a1E3T2w2bmsifSwicHVycG9zZXMiOlsiYXV0aGVudGljYXRpb24iLCJrZXlBZ3JlZW1lbnQiXSwidHlwZSI6IkVjZHNhU2VjcDI1NmsxVmVyaWZpY2F0aW9uS2V5MjAxOSJ9XSwic2VydmljZXMiOlt7ImlkIjoic2VydmljZTFJZCIsInNlcnZpY2VFbmRwb2ludCI6Imh0dHA6Ly93d3cuc2VydmljZTEuY29tIiwidHlwZSI6InNlcnZpY2UxVHlwZSJ9XX19XSwidXBkYXRlQ29tbWl0bWVudCI6IkVpREtJa3dxTzY5SVBHM3BPbEhrZGI4Nm5ZdDBhTnhTSFp1MnItYmhFem5qZEEifSwic3VmZml4RGF0YSI6eyJkZWx0YUhhc2giOiJFaUNmRFdSbllsY0Q5RUdBM2RfNVoxQUh1LWlZcU1iSjluZmlxZHo1UzhWRGJnIiwicmVjb3ZlcnlDb21taXRtZW50IjoiRWlCZk9aZE10VTZPQnc4UGs4NzlRdFotMkotOUZiYmpTWnlvYUFfYnFENHpoQSJ9fQ",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "publicKeyJwk": {
          "crv": "secp256k1",
          "kty": "EC",
          "x": "tXSKB_rubXS7sCjXqupVJEzTcW3MsjmEvq1YpXn96Zg",
          "y": "dOicXqbjFxoGJ-K0-GJ1kHYJqic_D_OMuUwkQ7Ol6nk"
        }
      }
    ],
    "authentication": [
      "#publicKeyModel1Id"
    ],
    "keyAgreement": [
      "#publicKeyModel1Id"
    ]
  },
  "didDocumentMetadata": {
    "equivalentId": ["did:sidetree:EiDyOQbbZAa3aiRzeCkV7LOx3SERjjH93EXoIM3UoN4oWg"],
    "method": {
      "published": false,
      "recoveryCommitment": "EiBfOZdMtU6OBw8Pk879QtZ-2J-9FbbjSZyoaA_bqD4zhA",
      "updateCommitment": "EiDKIkwqO69IPG3pOlHkdb86nYt0aNxSHZu2r-bhEznjdA"
    }
  }
}

§ Acknowledgements

Transmute received funding from the United States Department of Homeland Security’s (US DHS) Silicon Valley Innovation Program to contribute to this work item under contracts 70RSAT20T00000003 and 70RSAT20T00000033. This work item does not necessarily reflect the position or policy of the U.S. Government, and no official endorsement should be inferred. This acknowledgement does not imply that DIF supports, or is affiliated with, the acknowledged entity.
