We store a significant amount of sensitive data online, such as personally identifying information (PII), trade secrets, family pictures, and customer information. The data that we store is often not protected in an appropriate manner.

This specification describes a privacy-respecting mechanism for storing, indexing, and retrieving encrypted data at a storage provider. It is often useful when an individual or organization wants to protect data in a way that the storage provider cannot view, analyze, aggregate, or resell the data. This approach also ensures that application data is portable and protected from storage provider data breaches.

This specification is a joint work item of the W3C Credentials Community Group and the Decentralized Identity Foundation. This specification is a combination of and iteration on work done by both of these groups. Input documents, or parts thereof, which have not yet been integrated into the specification may be found in the appendices.

Introduction

We store a significant amount of sensitive data online, such as personally identifying information (PII), trade secrets, family pictures, and customer information. The data that we store is often not protected in an appropriate manner.

Legislation, such as the General Data Protection Regulation (GDPR), incentivizes service providers to better preserve individuals' privacy, primarily through making the providers liable in the event of a data breach. This liability pressure has revealed a technological gap, whereby providers are often not equipped with technology that can suitably protect their customers. Encrypted Data Vaults fill this gap and provide a variety of other benefits.

This specification describes a privacy-respecting mechanism for storing, indexing, and retrieving encrypted data at a storage provider. It is often useful when an individual or organization wants to protect data in a way that the storage provider cannot view, analyze, aggregate, or resell the data. This approach also ensures that application data is portable and protected from storage provider data breaches.

Why Do We Need Encrypted Data Vaults?

Explain why individuals and organizations that want to protect their privacy and trade secrets, and to ensure data portability, will benefit from using this technology. Explain how a standard API for the storage of user data empowers users to "bring their own storage", giving them control of their own information. Explain how applications that are written against a standard API and assume that users will bring their own storage can separate concerns and focus on the functionality of the application, removing the need to deal with storage infrastructure (instead leaving it to a specialist service provider chosen by the user).

Requiring client-side (edge) encryption for all data and metadata at the same time as enabling the user to store data on multiple devices and to share data with others, whilst also having searchable or queryable data, has been historically very difficult to implement in one system. Trade-offs are often made which sacrifice privacy in favor of usability, or vice versa.

Due to a number of maturing technologies and standards, we are hopeful that such trade-offs are no longer necessary, and that it is possible to design a privacy-preserving protocol for encrypted decentralized data storage that has broad practical appeal.

Ecosystem Overview

The problem of decentralized data storage has been approached from various different angles, and personal data stores (PDS), decentralized or otherwise, have a long history in commercial and academic settings. Different approaches have resulted in variations in terminology and architectures. The diagram below shows the types of components that are emerging, and the roles they play. Encrypted Data Vaults fulfill a storage role.

Figure: The roles of different technologies in the encrypted data vaults ecosystem and how they interact.
Roles and interactions

This section describes the roles of the core actors and the relationships between them in an ecosystem where this specification is expected to be useful. A role is an abstraction that might be implemented in many different ways. The separation of roles suggests likely interfaces and protocols for standardization. The following roles are introduced in this specification:

data vault controller
A role an entity might perform by creating, managing, and deleting data vaults. This entity is also responsible for granting and revoking authorization to storage agents to the data vaults that are under its control.
storage agent
A role an entity might perform by creating, updating, and deleting data in a data vault. This entity is typically granted authorization to access a data vault by a data vault controller.
storage provider
A role an entity might perform by providing a raw data storage mechanism to a data vault controller. It is impossible for this entity to see the data that it is storing due to all data being encrypted at rest and in transit to and from the storage provider.

Use Cases

Use cases have been moved to a distinct markdown document.

Deployment topologies

Based on the use cases, we consider the following deployment topologies:

  • Mobile Device Only: The server and the client reside on the same device. The vault is a library providing functionality via a binary API, using local storage to provide an encrypted database.
  • Mobile Device Plus Cloud Storage: A mobile device plays the role of a client, and the server is a remote cloud-based service provider that exposes storage via a network-based API (e.g., REST over HTTPS). Data is not stored on the mobile device.
  • Multiple Devices (Single User) Plus Cloud Storage: When adding more devices managed by a single user, the vault can be used to synchronize data across devices.
  • Multiple Devices (Multiple Users) Plus Cloud Storage: When pairing multiple users with cloud storage, the vault can be used to synchronize data between multiple users with the help of replication and merge strategies.
  • Multi-/Cross-cloud: Some use cases (IoT, machine-to-machine, "Skynet", guardianship) require a non-human or non-functioning actor to delegate KMS/key control to a cloud vault for oversight or human intervention. In some password manager architectures, in biometrically accessed or deployed key material storage, and in some multi-cloud/hybrid-cloud architectures, key material will need to be retrieved from at least one other vault before accessing the vault specified here.

    Keys in control of such an entity might still need to securely store signed credentials or data in a separate vault. Additional diagramming or specifications will be needed to show how this 2-vault solution could be constrained to be secure and feasible, even if non-normative.

  • Self-Hosted and/or Home-based Server: Alice wants to host her own SDS software instance, on her own server.
  • Support Low Power Devices/Non-private computing: To support users without access to private computing resources, the following three components need to be considered:
    1. Secure Storage
    2. Key vault - private key storage and recovery (Key management)
    3. Trusted computing - computational resources which have access to private keys and plain text private data

Requirements

The following sections elaborate on the requirements that have been gathered from the core use cases.

Privacy and multi-party encryption

One of the main goals of this system is ensuring the privacy of an entity's data so that it cannot be accessed by unauthorized parties, including the storage provider.

To accomplish this, the data must be encrypted both while it is in transit (being sent over a network) and while it is at rest (on a storage system).

Since data could be shared with more than one entity, it is also necessary for the encryption mechanism to support encrypting data to multiple parties.
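The JWE and COSE encodings referenced later in this specification support multiple parties by wrapping a single content encryption key (CEK) once per recipient. The stdlib-only Python sketch below illustrates only that envelope pattern; the XOR-based key wrap is a toy stand-in and is not a secure algorithm. Real systems use key agreement and wrapping schemes such as ECDH-ES+A256KW.

```python
import hashlib
import secrets

def _pad(recipient_key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream from the recipient's secret key and a fresh nonce.
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(
            recipient_key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def wrap_for_recipients(cek: bytes, recipient_keys: dict) -> list:
    # One wrapped copy of the content encryption key (CEK) per recipient,
    # mirroring the "recipients" array of a multi-recipient JWE.
    recipients = []
    for kid, key in recipient_keys.items():
        nonce = secrets.token_bytes(16)
        wrapped = bytes(a ^ b for a, b in zip(cek, _pad(key, nonce, len(cek))))
        recipients.append({"kid": kid, "nonce": nonce, "encrypted_key": wrapped})
    return recipients

def unwrap(entry: dict, recipient_key: bytes) -> bytes:
    # A recipient recovers the CEK from their own entry only.
    pad = _pad(recipient_key, entry["nonce"], len(entry["encrypted_key"]))
    return bytes(a ^ b for a, b in zip(entry["encrypted_key"], pad))
```

The data itself is encrypted once under the CEK; adding or removing a recipient only changes the list of wrapped keys, not the ciphertext.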

Sharing and authorization

It is necessary to have a mechanism that enables authorized sharing of encrypted information among one or more entities.

The system is expected to specify one mandatory authorization scheme, but also allow other alternate authorization schemes. Examples of authorization schemes include OAuth2, Web Access Control, and [[ZCAP]]s (Authorization Capabilities).

Identifiers

The system should be identifier agnostic. In general, identifiers that are a form of URN or URL are preferred. While it is presumed that [[DID-CORE]] (Decentralized Identifiers, DIDs) will be used by the system in a few important ways, hard-coding the implementations to DIDs would be an anti-pattern.

Versioning and replication

It is expected that information can be backed up on a continuous basis. For this reason, it is necessary for the system to support at least one mandatory versioning strategy and one mandatory replication strategy, but also allow other alternate versioning and replication strategies.

Metadata and searching

Large volumes of data are expected to be stored using this system, which then need to be efficiently and selectively retrieved. To that end, an encrypted search mechanism is a necessary feature of the system.

It is important for clients to be able to associate metadata with the data such that it can be searched. At the same time, since privacy of both data and metadata is a key requirement, the metadata must be stored in an encrypted state, and service providers must be able to perform those searches in an opaque and privacy-preserving way, without being able to see the metadata.

Protocols

Since this system can reside in a variety of operating environments, it is important that at least one protocol is mandatory, but that other protocols are also allowed by the design. Examples of protocols include HTTP, gRPC, Bluetooth, and various binary on-the-wire protocols. An HTTPS API is defined in the Data vault HTTPS API section of this specification.

Design goals

This section elaborates upon a number of guiding principles and design goals that shape Encrypted Data Vaults.

Layered and modular architecture

A layered architectural approach is used to ensure that the foundation for the system is easy to implement while allowing more complex functionality to be layered on top of the lower foundations.

For example, Layer 1 might contain the mandatory features for the most basic system, Layer 2 might contain useful features for most deployments, Layer 3 might contain advanced features needed by a small subset of the ecosystem, and Layer 4 might contain extremely complex features that are needed by a very small subset of the ecosystem.

Prioritize privacy

This system is intended to protect an entity's privacy. When exploring new features, always ask "How would this impact privacy?". New features that negatively impact privacy are expected to undergo extreme scrutiny to determine if the trade-offs are worth the new functionality.

Push implementation complexity to the client

Servers in this system are expected to provide functionality strongly focused on the storage and retrieval of encrypted data. The more a server knows, the greater the risk to the privacy of the entity storing the data, and the more liability the service provider might have for hosting data. In addition, pushing complexity to the client enables service providers to provide stable server-side implementations while innovation can be carried out by clients.

Terminology

Core Concepts

The following sections outline core concepts, such as encrypted storage, which form the foundation of this specification.

Encrypted Storage

An important consideration of encrypted data stores is which components of the architecture have access to the (unencrypted) data, or who controls the private keys. There are roughly three approaches: storage-side encryption, client-side (edge) encryption, and gateway-side encryption (which is a hybrid of the previous two).

Any data storage systems that let the user store arbitrary data also support client-side encryption at the most basic level. That is, they let the user encrypt data themselves, and then store it. This doesn't mean these systems are optimized for encrypted data however. Querying and access control for encrypted data may be difficult.

Storage-side encryption is usually implemented as whole-disk encryption or filesystem-level encryption. This is widely supported and understood, and any type of hosted cloud storage is likely to use storage-side encryption. In this scenario the private keys are managed by the service provider or controller of the storage server, which may be a different entity than the user who is storing the data. Encrypting the data while it resides on disk is a useful security measure should physical access to the storage hardware be compromised, but does not guarantee that only the original user who stored the data has access.

Conversely, client-side encryption offers a high level of security and privacy, especially if metadata can be encrypted as well. Encryption is done at the individual data object level, usually aided by a keychain or wallet client, so the user has direct access to the private keys. This comes at a cost, however, since the significant responsibility of key management and recovery falls squarely onto the end user. In addition, the question of key management becomes more complex when data needs to be shared.

Gateway-side encryption systems take an approach that combines techniques from storage-side and client-side encryption architectures. These storage systems, typically encountered among multi-server clusters or some "encryption as a platform" cloud service providers, recognize that client-side key management may be too difficult for some users and use cases, and offer to perform encryption and decryption themselves in a way that is transparent to the client application. At the same time, they aim to minimize the number of components (storage servers) that have access to the private decryption keys. As a result, the keys usually reside on "gateway" servers, which encrypt the data before passing it to the storage servers. The encryption/decryption is transparent to the client, and the data is opaque to the storage servers, which can be modular/pluggable as a result. Gateway-side encryption provides some benefits over storage-side systems, but also shares their main drawback: the gateway sysadmin controls the keys, not the user.

Structured Documents

The fundamental unit of storage in data vaults is the encrypted structured document which, when decrypted, provides a data structure that can be expressed in popular syntaxes such as JSON and CBOR. Documents can store structured data and metadata about the structured data. Structured document sizes are limited to 16MB.

Streams

For files larger than 16MB or for raw binary data formats such as audio, video, and office productivity files, a streaming API is provided that enables data to be streamed to/from a data vault. Streams are described using structured documents, but the storage of the data is separated from the structured document using a hashlink to the encrypted content.
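A non-normative sketch of that flow: the client hashes the stream as it writes chunks, then attaches the digest to the stream URI once the final chunk is written. A real [[HASHLINK]] value uses multihash/multibase encoding; a plain hex digest stands in here for brevity, and the upload step is elided.

```python
import hashlib

def write_stream(chunks, base_uri: str) -> str:
    # Hash the content incrementally while "streaming" chunks to the vault;
    # the digest is unknowable until the last chunk has been written.
    h = hashlib.sha256()
    for chunk in chunks:
        # In a real client, each chunk would also be encrypted and uploaded here.
        h.update(chunk)
    # Attach the content hash to the stream identifier, hashlink-style.
    return f"{base_uri}?hl={h.hexdigest()}"
```

Because the digest is computed incrementally, the client never needs to buffer the whole stream in memory before producing the integrity-protected identifier.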

Indexing

Data vaults are expected to store a very large number of documents of varying kinds. This means that it is important to be able to search the documents in a timely way, which creates a challenge for the storage provider as the content is encrypted. Previously this has been worked around with a certain amount of unencrypted metadata attached to the data objects. Another possibility is unencrypted listings of pointers to filtered subsets of data.

In the case of data vaults, an encrypted search scheme enables data vault clients to index metadata while not leaking that metadata to the storage provider.

Architecture

Review this section for language that should be properly normative.

This section describes the architecture of the Encrypted Data Vault protocol, in the form of a client-server relationship. The vault is regarded as the server, and the client acts as the interface used to interact with the vault.

This architecture is layered in nature, where the foundational layer consists of an operational system with minimal features, and where more advanced features are layered on top. Implementations can choose to implement only the foundational layer, or optionally, additional layers consisting of a richer set of features for more advanced use cases.

Server and client responsibilities

The server is assumed to be of low trust, and must have no visibility into the data that it persists. However, even in this model, the server still has a set of minimum responsibilities it must adhere to.

The client is responsible for providing an interface to the server, with bindings for each relevant protocol (HTTP, RPC, or binary over-the-wire protocols), as required by the implementation.

All encryption and decryption of data is done on the client side, at the edges. The data (including metadata) MUST be opaque to the server, and the architecture is designed to prevent the server from being able to decrypt it.

Layer 1 (L1) responsibilities

Layer 1 consists of a client-server system that is capable of encrypting data in transit and at rest.

Server: validate requests (L1)

When a vault client makes a request to store, query, modify, or delete data in the vault, the server validates the request. Since the actual data and metadata in any given request is encrypted, such validation is necessarily limited and largely depends on the protocol and the semantics of the request.

Server: Persist data (L1)

The mechanism a server uses to persist data, such as storage on a local, networked, or distributed file system, is determined by the implementation. The persistence mechanism is expected to adhere to the common expectations of a data storage provider, such as reliable storage and retrieval of data.

Server: Persist global configuration (L1)

A vault has a global configuration that defines the following properties:

  • Stream chunk size
  • Other config metadata

The configuration allows the client to perform capability discovery regarding things like authorization, protocol, and replication mechanisms that are used by the server.

Server: enforcement of authorization policies (L1)

When a client makes a request to store, query, modify, or delete data in the vault, the server enforces any authorization policy that is associated with the request.

Client: encrypted data chunking (L1)

An Encrypted Data Vault is capable of storing many different types of data, including large unstructured binary data. This means that storing a file as a single entry would be challenging for systems that have limits on single record sizes. For example, some databases set the maximum size for a single record to 16MB. As a result, it is necessary that large data is chunked into sizes that are easily managed by a server. It is the responsibility of the client to set the chunk size of each resource and chunk large data into manageable chunks for the server. It is the responsibility of the server to deny requests to store chunks larger than it can handle.

Each chunk is encrypted individually using authenticated encryption. Doing so protects against attacks where an attacking server replaces chunks in a large file and requires the entire file to be downloaded and decrypted by the victim before determining that the file is compromised. Encrypting each chunk with authenticated encryption ensures that a client knows that it has a valid chunk before proceeding to the next one. Note that another authorized client can still perform an attack by doing authenticated encryption on a chunk, but a server is not capable of launching the same attack.
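The chunk-and-verify flow above can be sketched as follows. This is non-normative: a real implementation would use authenticated encryption (e.g., AES-GCM) on each chunk; the stdlib HMAC over plaintext chunks here only illustrates how binding a tag to each chunk and its position lets a client reject a swapped or corrupted chunk immediately, before processing the rest of the file.

```python
import hashlib
import hmac

CHUNK_SIZE = 4  # tiny for illustration; a real client might use 1 MiB

def chunk_and_tag(data: bytes, key: bytes) -> list:
    # Split data and bind each chunk to its position with an authentication tag,
    # so a server cannot reorder or substitute chunks undetected.
    chunks = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        index = (i // CHUNK_SIZE).to_bytes(4, "big")
        tag = hmac.new(key, index + chunk, hashlib.sha256).digest()
        chunks.append({"index": i // CHUNK_SIZE, "chunk": chunk, "tag": tag})
    return chunks

def read_chunks(chunks: list, key: bytes) -> bytes:
    # Verify each chunk before using it, so tampering is detected without
    # first downloading and checking the entire file.
    out = b""
    for entry in chunks:
        index = entry["index"].to_bytes(4, "big")
        expected = hmac.new(key, index + entry["chunk"], hashlib.sha256).digest()
        if not hmac.compare_digest(expected, entry["tag"]):
            raise ValueError(f"chunk {entry['index']} failed authentication")
        out += entry["chunk"]
    return out
```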

Client: Resource structure (L1)

The process of storing encrypted data starts with the creation of a Resource by the client, with the following structure.

Resource:

  • id (required)
  • meta
    • meta.contentType MIME type
  • content - entire payload, or a manifest-like list of hashlinks to individual chunks

If the data is less than the chunk size, it is embedded directly into the content.

Otherwise, the data is sharded into chunks by the client (see next section), and each chunk is encrypted and sent to the server. In this case, content contains a manifest-like listing of URIs to individual chunks (integrity-protected by [[HASHLINK]]).
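A non-normative sketch of this embed-or-manifest decision, using a tiny chunk size and hex digests standing in for real hashlinks:

```python
import hashlib

CHUNK_SIZE = 4  # illustrative only; real chunk sizes are far larger

def build_resource(resource_id: str, data: bytes,
                   content_type: str, base_uri: str) -> dict:
    # Small payloads are embedded directly into content; larger ones become
    # a manifest-like list of integrity-protected chunk URIs.
    meta = {"contentType": content_type}
    if len(data) <= CHUNK_SIZE:
        return {"id": resource_id, "meta": meta, "content": data}
    chunks = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunks.append(f"{base_uri}/chunks/{i // CHUNK_SIZE}?hl={digest}")
    return {"id": resource_id, "meta": meta, "content": {"chunks": chunks}}
```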

Client: Encrypted resource structure (L1)

The client then creates the Encrypted Resource, with the following structure. If the data was sharded into chunks, this is done after the individual chunks are written to the server.

  • id
  • index - encrypted index tags prepared by the client (for use with privacy-preserving querying over encrypted resources)
  • Chunk size (if different from the default in global config)
  • Versioning metadata - such as sequence numbers, Git-like hashes, or other mechanisms
  • Encrypted resource payload - encoded as a JWE [[RFC7516]], CWE [[RFC8152]], or other appropriate mechanism

Layer 2 (L2) responsibilities

Layer 2 consists of a system that is capable of sharing data among multiple entities, of versioning and replication, and of performing privacy-preserving searches in an efficient manner.

Client: Encrypted search indexes (L2)

To enable privacy-preserving querying (where the search index is opaque to the server), the client must prepare a list of encrypted index tags (which are stored in the Encrypted Resource, alongside the encrypted data contents).

Need details about salting and encryption mechanism of index tags.
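Pending those details, one commonly used approach is a keyed-hash blinded index: the client HMACs each attribute/value pair, and the server can match equality queries on the resulting tags without learning the attributes or values. This is a sketch of one possible scheme, not the normative mechanism, and it omits per-attribute salting.

```python
import hashlib
import hmac

def index_tag(hmac_key: bytes, attribute: str, value: str) -> str:
    # Blind an attribute/value pair with a keyed hash. Only holders of the
    # HMAC key can compute the tag, so the server sees opaque identifiers.
    return hmac.new(hmac_key,
                    f"{attribute}={value}".encode(),
                    hashlib.sha256).hexdigest()
```

To query, the client recomputes the tag for the attribute/value it is searching for and asks the server for Encrypted Resources whose index contains that tag.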

Client: Versioning and replication (L2)

A server must support at least one versioning/change control mechanism. Replication is done by the client, not by the server (since the client controls the keys, knows which other servers to replicate to, and so on). If an Encrypted Data Vault implementation aims to provide replication functionality, it MUST also pick a versioning/change control strategy (since replication necessarily involves conflict resolution). Some versioning strategies are implicit ("last write wins", e.g., rsync or uploading a file to a file hosting service), but keep in mind that a replication strategy always implies that some sort of conflict resolution mechanism is involved.
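One simple versioning strategy, sketched non-normatively below, is sequence-based optimistic concurrency: a write must carry exactly the next sequence number, so a stale replica receives a conflict instead of silently overwriting, and conflict resolution (last-write-wins, merge, and so on) runs at the client edge.

```python
class VersionConflict(Exception):
    """Raised when a write does not build on the latest version."""

def check_update(server_sequence: int, proposed_sequence: int) -> int:
    # Optimistic concurrency: the client must have read version N before
    # writing version N+1; anything else is a conflict to resolve client-side.
    if proposed_sequence != server_sequence + 1:
        raise VersionConflict(
            f"expected sequence {server_sequence + 1}, got {proposed_sequence}")
    return proposed_sequence
```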

Client: Sharing with other entities

An individual vault's choice of authorization mechanism determines how a client shares resources with other entities (authorization capability link or similar mechanism).

Layer 3 (L3) responsibilities

Server: Notifications (L3)

It is helpful if data storage providers are able to notify clients when changes to persisted data occurs. A server may optionally implement a mechanism by which clients can subscribe to changes in the vault.

Client: Vault-wide integrity protection (L3)

Vault-wide integrity protection is provided to prevent a variety of storage provider attacks where data is modified in a way that is undetectable, such as if documents are reverted to older versions or deleted. This protection requires that a global catalog of all the resource identifiers that belong to a user, along with the most recent version, is stored and kept up to date by the client. Some clients may store a copy of this catalog locally (and include integrity protection mechanisms such as [[HASHLINK]]) to guard against interference or deletion by the server.
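A minimal non-normative sketch of such a catalog check, assuming the client keeps entries of the form {id: (sequence, digest)} and periodically compares them against what the server reports:

```python
import hashlib

def digest(doc: bytes) -> str:
    # Content digest recorded in the client-side catalog.
    return hashlib.sha256(doc).hexdigest()

def check_vault(catalog: dict, server_listing: dict) -> list:
    # Flag documents the server has deleted, rolled back to an older
    # sequence, or altered without advancing the sequence.
    problems = []
    for doc_id, (seq, dig) in catalog.items():
        if doc_id not in server_listing:
            problems.append(f"{doc_id}: deleted by server")
            continue
        server_seq, server_dig = server_listing[doc_id]
        if server_seq < seq or (server_seq == seq and server_dig != dig):
            problems.append(f"{doc_id}: rolled back or altered")
    return problems
```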

Data Model

The following sections outline the data model for data vaults.

DataVaultConfiguration

Data vault configuration isn't strictly necessary for using the other features of data vaults. This should have its own conformance section/class or potentially even be non-normative.

A data vault configuration specifies the properties a particular data vault will have.

Property Description
sequence A unique counter for the data vault in order to ensure that clients are properly synchronized to the data vault. The value is required and MUST be an unsigned 64-bit number.
controller The entity or cryptographic key that is in control of the data vault. The value is required and MUST be a URI.
invoker The root entities or cryptographic key(s) that are authorized to invoke an authorization capability to modify the data vault's configuration or read or write to it. The value is optional, but if present, MUST be a URI or an array of URIs. When this value is not present, the value of the controller property is used for the same purpose.
delegator The root entities or cryptographic key(s) that are authorized to delegate authorization capabilities to modify the data vault's configuration or read or write to it. The value is optional, but if present, MUST be a URI or an array of URIs. When this value is not present, the value of the controller property is used for the same purpose.
referenceId Used to express an application-specific reference identifier. The value is optional and, if present, MUST be a string.
keyAgreementKey.id An identifier for the key agreement key. The value is required and MUST be a URI. The key agreement key is used to derive a secret that is then used to generate a key encryption key for the receiver.
keyAgreementKey.type The type of key agreement key. The value is required and MUST be or map to a URI.
hmac.id An identifier for the HMAC key. The value is required and MUST be or map to a URI.
hmac.type The type of HMAC key. The value is required and MUST be or map to a URI.
{
  "sequence": 0,
  "controller": "did:example:123456789",
  "referenceId": "my-primary-data-vault",
  "keyAgreementKey": {
    "id": "https://example.com/kms/12345",
    "type": "X25519KeyAgreementKey2019"
  },
  "hmac": {
    "id": "https://example.com/kms/67891",
    "type": "Sha256HmacKey2019"
  }
}
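As a non-normative illustration, a client might sanity-check the required properties from the table above before submitting a configuration. The sketch below simplifies URI validation to a scheme-separator check; it is not a conformance test.

```python
def validate_config(config: dict) -> list:
    # Check the required DataVaultConfiguration properties; return a list
    # of human-readable errors (empty means the basic checks passed).
    errors = []
    seq = config.get("sequence")
    if not isinstance(seq, int) or not (0 <= seq < 2 ** 64):
        errors.append("sequence must be an unsigned 64-bit number")
    controller = config.get("controller")
    if not isinstance(controller, str) or ":" not in controller:
        errors.append("controller must be a URI")
    for key in ("keyAgreementKey", "hmac"):
        entry = config.get(key) or {}
        if not entry.get("id") or not entry.get("type"):
            errors.append(f"{key} requires both id and type")
    return errors
```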
        

StructuredDocument

A structured document is used to store application data as well as metadata about the application data. This information is typically encrypted and then stored on the data vault.

Property Description
id An identifier for the structured document. The value is required and MUST be a Base58-encoded 128-bit random value.
meta Key-value metadata associated with the structured document.
content Key-value content for the structured document.
{
  "id": "urn:uuid:94684128-c42c-4b28-adb0-aec77bf76044",
  "meta": {
    "created": "2019-06-18"
  },
  "content": {
    "message": "Hello World!"
  }
}
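The id requirement above (a Base58-encoded 128-bit random value) can be sketched as follows; `new_document_id` is an illustrative helper name, not part of this specification.

```python
import secrets

# Bitcoin-style Base58 alphabet (no 0, O, I, or l).
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    # Standard Base58: repeated division by 58, with each leading zero
    # byte mapped to a leading '1'.
    num = int.from_bytes(data, "big")
    encoded = ""
    while num > 0:
        num, rem = divmod(num, 58)
        encoded = B58_ALPHABET[rem] + encoded
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + encoded

def new_document_id() -> str:
    # 128 bits of randomness, Base58-encoded, as the table above requires.
    return base58_encode(secrets.token_bytes(16))
```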
        

Streams

Streams can be used to store images, video, backup files, and any other binary data of arbitrary length. This is performed by using the stream property and additional metadata that further identifies the type of stream being stored. The table below provides the metadata to be stored in addition to the values specified in StructuredDocument.

Property Description
meta.chunks Specifies the number of chunks in the stream.
stream.id The identifier for the stream. The stream identifier MUST be a URI that references a stream on the same data vault. Once the stream has been written to the data vault, the content identifier MUST be updated such that it is a valid hashlink. To allow for streaming encryption, the value of the digest for the stream is assumed to be unknowable until after the stream has been written. The hashlink MUST exist as a content hash for the stream that has been written to the data vault.
{
  "id": "urn:uuid:41289468-c42c-4b28-adb0-bf76044aec77",
  "meta": {
    "created": "2019-06-19",
    "contentType": "video/mpeg",
    "chunks": 16
  },
  "stream": {
    "id": "https://example.com/encrypted-data-vaults/zMbxmSDn2Xzz?hl=zb47JhaKJ3hJ5Jkw8oan35jK23289Hp"
  }
}
          

EncryptedDocument

An encrypted document is used to store a structured document in a way that ensures that no entity can read the information without the consent of the data controller.

While the table below is a simple version of an EncryptedDocument, there is no other table that yet describes the indexed property and its subproperties, should it be present on an EncryptedDocument.

Property Description
id An identifier for the encrypted document. The value is required and MUST be a Base58-encoded 128-bit random value.
sequence A unique counter for the data vault in order to ensure that clients are properly synchronized to the data vault. The value is required and MUST be an unsigned 64-bit number.
jwe or cwe A JSON Web Encryption or COSE Encrypted value that, if decoded, results in the corresponding StructuredDocument.

Another example should be added that shows that a Diffie-Hellman key can be identified in the JWE recipients field. This type of key can be used for key agreement on a key wrapping key.

Another section should detail that data vault servers may omit certain fields or certain values in certain fields, such as the recipients field, based on whether or not the entity requesting an EncryptedDocument is authorized to see the field or its values. This can be finely controlled through the use of Authorization Capabilities.

{
  "id":"z19x9iFMnfo4YLsShKAvnJk4L",
  "sequence":0,
  "indexed":[
    {
      "hmac":{
        "id":"did:ex:12345#key1",
        "type":"Sha256HmacKey2019"
      },
      "sequence":0,
      "attributes":[
      ]
    }
  ],
  "jwe":{
    "protected":"eyJlbmMiOiJDMjBQIn0",
    "recipients":[
      {
        "header":{
          "kid":"urn:123",
          "alg":"ECDH-ES+A256KW",
          "epk":{
            "kty":"OKP",
            "crv":"X25519",
            "x":"d7rIddZWblHmCc0mYZJw39SGteink_afiLraUb-qwgs"
          },
          "apu":"d7rIddZWblHmCc0mYZJw39SGteink_afiLraUb-qwgs",
          "apv":"dXJuOjEyMw"
        },
        "encrypted_key":"4PQsjDGs8IE3YqgcoGfwPTuVG25MKjojx4HSZqcjfkhr0qhwqkpUUw"
      }
    ],
    "iv":"FoJ5uPIR6HDPFCtD",
    "ciphertext":"tIupQ-9MeYLdkAc1Us0Mdlp1kZ5Dbavq0No-eJ91cF0R0hE",
    "tag":"TMRcEPc74knOIbXhLDJA_w"
  }
}
        

Data vault HTTPS API

This section introduces the HTTPS API for interacting with data vaults and their contents.

Discovering Service Endpoints

A website may provide service endpoint discovery by embedding JSON-LD in their top-most HTML web page (e.g. at https://example.com/):

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Example Website</title>
    <link rel="stylesheet" href="style.css">
    <script src="script.js"></script>
    <script type="application/ld+json">
{
  "@context": "https://w3id.org/encrypted-data-vaults/v1",
  "id": "https://example.com/",
  "name": "Example Website",
  "dataVaultManagementService": "https://example.com/data-vaults"
}
    </script>
  </head>
  <body>
    <!-- page content -->
  </body>
</html>
        

Service descriptions may also be requested via content negotiation. In the following example a JSON-compatible service description is provided (e.g. curl -H "Accept: application/json" https://example.com/):

{
  "@context": "https://w3id.org/encrypted-data-vaults/v1",
  "id": "https://example.com/",
  "name": "Example Website",
  "dataVaultCreationService": "https://example.com/data-vaults"
}
        

Creating a data vault

A data vault is created by performing an HTTP POST of a DataVaultConfiguration to the dataVaultCreationService. The following HTTP status codes are defined for this service:

HTTP Status Description
201 Data vault creation was successful. The HTTP Location header will contain the URL for the newly created data vault.
400 Data vault creation failed.
409 A duplicate data vault exists.

An example exchange of a data vault creation is shown below:

POST /data-vaults HTTP/1.1
Host: example.com
Content-Type: application/json
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate

{
  "sequence": 0,
  "controller": "did:example:123456789",
  "referenceId": "urn:uuid:abc5a436-21f9-4b4c-857d-1f5569b2600d",
  "keyAgreementKey": {
    "id": "https://example.com/kms/12345",
    "type": "X25519KeyAgreementKey2019"
  },
  "hmac": {
    "id": "https://example.com/kms/67891",
    "type": "Sha256HmacKey2019"
  }
}
        

Explain the purpose of the controller property as the root authority. Explain how Authorization Capabilities can be created and invoked via HTTP signatures to authorize reading and writing from/to data vaults.

If the creation of the data vault was successful, an HTTP 201 status code is expected in return:

HTTP/1.1 201 Created
Location: https://example.com/encrypted-data-vaults/z4sRgBJJLnYy
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Date: Fri, 14 Jun 2019 18:35:33 GMT
Connection: keep-alive
Transfer-Encoding: chunked
        

Creating a Document

A structured document is stored in a data vault by encoding a StructuredDocument as an EncryptedDocument and then performing an HTTP POST to the data vault endpoint created via the data vault creation service. The following HTTP status codes are defined for this service:

HTTP Status Description
201 Structured document creation was successful. The HTTP Location header will contain the URL for the newly created document.
400 Structured document creation failed.

In order to convert a StructuredDocument to an EncryptedDocument an implementer MUST encode the StructuredDocument as a JWE or a COSE Encrypted object. Once the document is encrypted, it can be sent to the document creation service.
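As a non-normative sketch, the EncryptedDocument envelope can be assembled around already-produced JWE parts as shown below. Real key wrapping and content encryption (e.g. A256KW) should come from a JOSE library; the function names here are illustrative, not defined by this specification:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as used throughout JOSE."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_encrypted_document(doc_id, sequence, protected_header,
                            encrypted_key, kid, iv, ciphertext, tag):
    """Assemble the EncryptedDocument envelope around pre-computed JWE
    parts. This sketch only shows the General JWE JSON Serialization
    shape used by the spec's examples; it performs no cryptography."""
    return {
        "id": doc_id,
        "sequence": sequence,
        "jwe": {
            # The protected header is the base64url-encoded compact JSON.
            "protected": b64url(
                json.dumps(protected_header, separators=(",", ":")).encode()),
            "recipients": [{
                "header": {"alg": "A256KW", "kid": kid},
                "encrypted_key": encrypted_key,
            }],
            "iv": iv,
            "ciphertext": ciphertext,
            "tag": tag,
        },
    }
```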

A protocol example of a document creation is shown below:

POST /encrypted-data-vaults/z4sRgBJJLnYy/docs HTTP/1.1
Host: example.com
Content-Type: application/json
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate

{
  "id": "urn:uuid:94684128-c42c-4b28-adb0-aec77bf76044",
  "sequence": 0,
  "jwe": {
    "protected": "eyJlbmMiOiJDMjBQIn0",
    "recipients": [{
      "header": {
        "alg": "A256KW",
        "kid": "https://example.com/kms/zSDn2MzzbxmX"
      },
      "encrypted_key": "OR1vdCNvf_B68mfUxFQVT-vyXVrBembuiM40mAAjDC1-Qu5iArDbug"
    }],
    "iv": "i8Nins2vTI3PlrYW",
    "ciphertext": "Cb-963UCXblINT8F6MDHzMJN9EAhK3I",
    "tag": "pfZO0JulJcrc3trOZy8rjA"
  }
}
        

If the creation of the structured document was successful, an HTTP 201 status code is expected in return:

HTTP/1.1 201 Created
Location: https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/docs/zMbxmSDn2Xzz
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Date: Fri, 14 Jun 2019 18:37:12 GMT
Connection: keep-alive
Transfer-Encoding: chunked
        

Reading a Document

Reading a document from a data vault is performed by retrieving the EncryptedDocument and then decrypting it to a StructuredDocument. The following HTTP status codes are defined for this service:

HTTP Status Description
200 EncryptedDocument retrieval was successful.
400 EncryptedDocument retrieval failed.
404 EncryptedDocument with given id was not found.

In order to convert an EncryptedDocument to a StructuredDocument an implementer MUST decode the EncryptedDocument from a JWE or a COSE Encrypted object. Once the document is decrypted, it can be processed by the web application.

A protocol example of a document retrieval is shown below:

Issue: Explain that the URL path structure is fixed for all data vaults to enable portability and the use of stable URLs (such as DID URLs) to reference specific documents while allowing users to change their data vault service providers. Explain how this enables portability.

GET /encrypted-data-vaults/z4sRgBJJLnYy/docs/zMbxmSDn2Xzz HTTP/1.1
Host: example.com
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate
        

If the retrieval of the encrypted document was successful, an HTTP 200 status code is expected in return:

HTTP/1.1 200 OK
Date: Fri, 14 Jun 2019 18:37:12 GMT
Connection: keep-alive

{
  "id": "urn:uuid:94684128-c42c-4b28-adb0-aec77bf76044",
  "sequence": 0,
  "jwe": {
    "protected": "eyJlbmMiOiJDMjBQIn0",
    "recipients": [{
      "header": {
        "alg": "A256KW",
        "kid": "https://example.com/kms/zSDn2MzzbxmX"
      },
      "encrypted_key": "OR1vdCNvf_B68mfUxFQVT-vyXVrBembuiM40mAAjDC1-Qu5iArDbug"
    }],
    "iv": "i8Nins2vTI3PlrYW",
    "ciphertext": "Cb-963UCXblINT8F6MDHzMJN9EAhK3I",
    "tag": "pfZO0JulJcrc3trOZy8rjA"
  }
}
        

Updating a Document

A structured document is updated in a data vault by encoding the updated StructuredDocument as an EncryptedDocument and then performing an HTTP POST to the document URL obtained during document creation. The following HTTP status codes are defined for this service:

HTTP Status Description
200 Structured document update was successful.
400 Structured document update failed.

In order to convert a StructuredDocument to an EncryptedDocument an implementer MUST encode the StructuredDocument as a JWE or a COSE Encrypted object. Once the document is encrypted, it can be sent to the document update endpoint.
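A non-normative sketch of the sequence handling a client or server might apply when accepting an update. The rule that an update must carry the stored sequence plus one (a simple optimistic-concurrency check) is an assumption of this sketch:

```python
def next_sequence(current: int) -> int:
    """The sequence is bumped by one on every update so that conflicting
    writes can be detected. The exact conflict behavior is an assumption
    of this sketch, not mandated by the specification."""
    return current + 1

def is_valid_update(stored_doc: dict, incoming_doc: dict) -> bool:
    """Accept an update only if it targets the same document id and its
    sequence number immediately follows the stored one."""
    return (incoming_doc["id"] == stored_doc["id"]
            and incoming_doc["sequence"] == stored_doc["sequence"] + 1)
```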

A protocol example of a document update is shown below:

POST  /encrypted-data-vaults/z4sRgBJJLnYy/docs/zMbxmSDn2Xzz HTTP/1.1
Host: example.com
Content-Type: application/json
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate

{
  "id": "urn:uuid:94684128-c42c-4b28-adb0-aec77bf76044",
  "sequence": 1,
  "jwe": {
    "protected": "eyJlbmMiOiJDMjBQIn0",
    "recipients": [{
      "header": {
        "alg": "A256KW",
        "kid": "https://example.com/kms/zSDn2MzzbxmX"
      },
      "encrypted_key": "OR1vdCNvf_B68mfUxFQVT-vyXVrBembuiM40mAAjDC1-Qu5iArDbug"
    }],
    "iv": "i8Nins2vTI3PlrYW",
    "ciphertext": "Cb-963UCXblINT8F6MDHzMJN9EAhK3I",
    "tag": "pfZO0JulJcrc3trOZy8rjA"
  }
}
        

If the update to the encrypted document was successful, an HTTP 200 status code is expected in return:

HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Date: Fri, 14 Jun 2019 18:39:52 GMT
Connection: keep-alive
        

Deleting a Document

A structured document is deleted by performing an HTTP DELETE to the document URL obtained during document creation. The following HTTP status codes are defined for this service:

HTTP Status Description
200 Structured document was deleted successfully.
400 Structured document deletion failed.
404 Structured document was not found.

A protocol example of a document deletion is shown below:

DELETE  /encrypted-data-vaults/z4sRgBJJLnYy/docs/zMbxmSDn2Xzz HTTP/1.1
Host: example.com
        

If the deletion of the encrypted document was successful, an HTTP 200 status code is expected in return:

HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Date: Fri, 14 Jun 2019 18:40:18 GMT
Connection: keep-alive
        

Creating a Stream

This section is out of date, do not implement.

Another design is being considered that would transform streams into a single index document and a collection of documents, each of which contains a chunk of the stream. This would be done to help prevent misuse of a decryption stream prior to its authentication. In order for this approach to be implemented in a Web browser, it also requires certain File or Blob APIs. Further investigation is needed to ensure that support of these APIs would be sufficient for this design approach, as it would be preferred to prevent data misuse and to make better use of native implementations of authenticated encryption modes.

A stream is stored in a data vault by writing a document containing metadata about the stream, encrypting the stream, writing it to a data vault, and then updating the document containing metadata about the stream. The following HTTP status codes are defined for this service:

HTTP Status Description
201 Stream creation was successful. The HTTP Location header will contain the URL for the newly created stream.
400 Stream creation failed.

Implementations first encode the metadata associated with the stream into a StructuredDocument:

{
  "id": "urn:uuid:94684128-c42c-4b28-adb0-aec77bf76044",
  "meta": {
    "created": "2019-06-18",
    "contentType": "video/mpeg",
    "contentLength": 56735817
  },
  "content": {
    "id": "https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/streams/zMbxmSDn2Xzz"
  }
}
        

In this case, the value of content.id is a reference to the stream located at https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/streams/zMbxmSDn2Xzz, which is the location that the stream MUST be written to. This content identifier MUST be updated to include a hashlink once the stream has been written and its digest is known.
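A non-normative sketch of updating the content identifier once the stream's digest is known is shown below. Note that the real Hashlink format uses multibase-encoded multihashes; the plain hex SHA-256 used here is only a stand-in for illustration:

```python
import hashlib

def add_illustrative_hashlink(content_id: str, stream_bytes: bytes) -> str:
    """Append a digest-bearing query parameter to the stream URL once the
    stream has been written. CAUTION: the actual Hashlink format uses a
    multibase-encoded multihash; the plain hex SHA-256 digest here is an
    illustrative stand-in, not the normative encoding."""
    digest = hashlib.sha256(stream_bytes).hexdigest()
    return f"{content_id}?hl={digest}"
```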

The StructuredDocument above is then transformed to an EncryptedDocument and the document creation procedure is executed:

POST /encrypted-data-vaults/z4sRgBJJLnYy/docs HTTP/1.1
Host: example.com
Content-Type: application/json
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate

{
  "id": "urn:uuid:94684128-c42c-4b28-adb0-aec77bf76044",
  "sequence": 0,
  "jwe": {
    "protected": "eyJlbmMiOiJDMjBQIn0",
    "recipients": [{
      "header": {
        "alg": "A256KW",
        "kid": "https://example.com/kms/zSDn2MzzbxmX"
      },
      "encrypted_key": "OR1vdCNvf_B68mfUxFQVT-vyXVrBembuiM40mAAjDC1-Qu5iArDbug"
    }],
    "iv": "i8Nins2vTI3PlrYW",
    "ciphertext": "Cb-963UCXblINT8F6MDHzMJN9EAhK3I",
    "tag": "pfZO0JulJcrc3trOZy8rjA"
  }
}
        

If the creation of the structured document was successful, an HTTP 201 status code is expected in return:

HTTP/1.1 201 Created
Location: https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/docs/zp4H8ekWn
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Date: Fri, 14 Jun 2019 18:37:12 GMT
Connection: keep-alive
Transfer-Encoding: chunked
        

Next, in order to convert a stream to an EncryptedStream an implementer MUST encrypt the stream. Once the stream is encrypted (or as it is encrypted), it can be sent to the stream creation service.

A protocol example of a stream creation is shown below:

POST /encrypted-data-vaults/z4sRgBJJLnYy/streams HTTP/1.1
Host: example.com
Content-Type: application/octet-stream
Transfer-Encoding: chunked
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate

TBD
        

If the creation of the stream was successful, an HTTP 201 status code is expected in return:

HTTP/1.1 201 Created
Location: https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/streams/zMbxmSDn2Xzz
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Date: Fri, 14 Jun 2019 18:37:12 GMT
Connection: keep-alive
Transfer-Encoding: chunked
        

Once a stream is created, the metadata related to the stream can be updated in the data vault using the document update protocol. An example of updating a link to a video file is shown below.

Implementations update the metadata associated with the stream in its StructuredDocument:

{
  "id": "urn:uuid:94684128-c42c-4b28-adb0-aec77bf76044",
  "sequence": 1,
  "meta": {
    "created": "2019-06-18",
    "contentType": "video/mpeg",
    "contentLength": 56735817
  },
  "content": {
    "id": "https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/streams/zMbxmSDn2Xzz?hl=zb47JhaKJ3hJ5Jkw8oan35jK23289Hp",
    "jwe": {
      "protected": "eyJlbmMiOiJDMjBQIn0",
      "recipients": [{
        "header": {
          "alg": "A256KW",
          "kid": "https://example.com/kms/zSDn2MzzbxmX"
        },
        "encrypted_key": "OR1vdCNvf_B68mfUxFQVT-vyXVrBembuiM40mAAjDC1-Qu5iArDbug"
      }],
      "iv": "i8Nins2vTI3PlrYW",
      "tag": "pfZO0JulJcrc3trOZy8rjA"
    }
  }
}
        

The value of content.id MUST be updated to include a hashlink now that the stream has been written and its digest is known.

The StructuredDocument above is then transformed to an EncryptedDocument and the document update procedure is executed:

POST /encrypted-data-vaults/z4sRgBJJLnYy/docs HTTP/1.1
Host: example.com
Content-Type: application/json
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate

{
  "id": "urn:uuid:94684128-c42c-4b28-adb0-aec77bf76044",
  "sequence": 1,
  "jwe": {
    "protected": "eyJlbmMiOiJDMjBQIn0",
    "recipients": [{
      "header": {
        "alg": "A256KW",
        "kid": "https://example.com/kms/zSDn2MzzbxmX"
      },
      "encrypted_key": "OR1vdCNvf_B68mfUxFQVT-vyXVrBembuiM40mAAjDC1-Qu5iArDbug"
    }],
    "iv": "i8Nins2vTI3PlrYW",
    "ciphertext": "Cb-963UCXblINT8F6MDHzMJN9EAhK3I",
    "tag": "pfZO0JulJcrc3trOZy8rjA"
  }
}
        

If the update to the structured document was successful, an HTTP 200 status code is expected in return:

HTTP/1.1 200 OK
Location: https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/docs/zp4H8ekWn
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Date: Fri, 14 Jun 2019 18:37:12 GMT
Connection: keep-alive
Transfer-Encoding: chunked
        

Reading a Stream

This section is out of date, do not implement.

Reading a stream from a data vault is performed by retrieving the associated metadata, which is encrypted as an EncryptedDocument, decoding the hashlink information, retrieving the EncryptedStream, and then decrypting it. The following HTTP status codes are defined for this service:

HTTP Status Description
200 Encrypted stream retrieval was successful.
400 Encrypted stream retrieval failed.
404 Encrypted stream with given id was not found.

In order to convert an EncryptedStream to a stream an implementer MUST decrypt the EncryptedStream using the information provided in the associated EncryptedDocument. Once the stream is decrypted, it can be processed by the web application.

Implementers can perform random seeking in the stream by using the HTTP Range request header.
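As a small non-normative helper, the byte-range value sent when seeking can be computed from an offset and a chunk length (the chunk arithmetic is an assumption of this sketch, not part of the protocol):

```python
def range_header(offset: int, length: int) -> str:
    """Build an HTTP Range header value requesting `length` bytes starting
    at byte `offset`; HTTP byte ranges are inclusive on both ends."""
    return f"bytes={offset}-{offset + length - 1}"
```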

A protocol example of a stream retrieval is shown below:

GET /encrypted-data-vaults/z4sRgBJJLnYy/streams/zMbxmSDn2Xzz HTTP/1.1
Host: example.com
Range: bytes=0-1048575
Accept: application/octet-stream
Accept-Encoding: gzip, deflate
        

If the retrieval of the encrypted stream was successful, an HTTP 200 status code is expected in return:

HTTP/1.1 200 OK
Date: Fri, 14 Jun 2019 18:37:12 GMT
Content-Range: bytes 0-1048575/56735817
Content-Length: 1048576
Connection: keep-alive

...
        

Deleting a Stream

This section is out of date, do not implement.

A stream is deleted by performing an HTTP DELETE to the stream URL obtained during stream creation, followed by an HTTP DELETE of the corresponding metadata document. The following HTTP status codes are defined for this service:

HTTP Status Description
200 Stream was deleted successfully.
400 Stream deletion failed.
404 Stream was not found.

A protocol example of a stream deletion is shown below:

DELETE  /encrypted-data-vaults/z4sRgBJJLnYy/streams/zMbxmSDn2Xzz HTTP/1.1
Host: example.com

If the deletion of the encrypted stream was successful, an HTTP 200 status code is expected in return:

HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Date: Fri, 14 Jun 2019 18:40:18 GMT
Connection: keep-alive
        

Once the stream is deleted, implementations MUST also delete the corresponding metadata document:

DELETE  /encrypted-data-vaults/z4sRgBJJLnYy/docs/zMbxmSDn2Xzz HTTP/1.1
Host: example.com
        

If the deletion of the metadata document was successful, an HTTP 200 status code is expected in return:

HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Date: Fri, 14 Jun 2019 18:40:18 GMT
Connection: keep-alive
        

Creating Encrypted Indexes

It is often useful to search a data vault for structured documents that contain specific metadata. Efficient searching requires the use of search indexes and local access to data. This poses an interesting challenge as the search has to be performed on the storage provider without leaking information that could violate the privacy of the entities that are storing information in the data vault. This section details how encrypted indexes can be created and used to perform efficient searching while protecting the privacy of entities that are storing information in the data vault.

When creating an EncryptedDocument, blinded index properties MAY be used to perform efficient searches. An example of the use of these properties is shown below:

{
  "id": "urn:uuid:698f3fb6-592f-4d22-9e04-462cc4606a23",
  "sequence": 0,
  "indexed": [{
    "sequence": 0,
    "hmac": {
      "id": "https://example.com/kms/z7BgF536GaR",
      "type": "Sha256HmacKey2019"
    },
    "attributes": [{
      "name": "CUQaxPtSLtd8L3WBAIkJ4DiVJeqoF6bdnhR7lSaPloZ",
      "value": "RV58Va4904K-18_L5g_vfARXRWEB00knFSGPpukUBro",
      "unique": true
    }, {
      "name": "DUQaxPtSLtd8L3WBAIkJ4DiVJeqoF6bdnhR7lSaPloZ",
      "value": "QV58Va4904K-18_L5g_vfARXRWEB00knFSGPpukUBro"
    }]
  }],
  "jwe": {
    "protected": "eyJlbmMiOiJDMjBQIn0",
    "recipients": [
      {
        "header": {
          "alg": "A256KW",
          "kid": "https://example.com/kms/z7BgF536GaR"
        },
        "encrypted_key":
          "OR1vdCNvf_B68mfUxFQVT-vyXVrBembuiM40mAAjDC1-Qu5iArDbug"
      }
    ],
    "iv": "i8Nins2vTI3PlrYW",
    "ciphertext": "Cb-963UCXblINT8F6MDHzMJN9EAhK3I",
    "tag": "pfZO0JulJcrc3trOZy8rjA"
  }
}
        

The example above demonstrates the use of both unique and non-unique index values. It enables the storage provider to build efficient indexes over encrypted properties while enabling storage agents to search the information without leaking data that would create privacy concerns.
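A non-normative sketch of how an attribute might be blinded with an HMAC key before being placed in the indexed section is shown below. The exact canonicalization rules an implementation must apply before HMAC'ing are not specified here and are an assumption of this sketch:

```python
import base64
import hashlib
import hmac

def blind(hmac_key: bytes, value: str) -> str:
    """Blind an attribute name or value with HMAC-SHA-256 and base64url-
    encode the result without padding. Canonicalization of the input
    prior to blinding is an assumption left to implementations."""
    mac = hmac.new(hmac_key, value.encode("utf-8"), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(mac).rstrip(b"=").decode()

def blinded_attribute(hmac_key: bytes, name: str, value: str, unique=False):
    """Produce one entry for the `attributes` array of an indexed block."""
    attr = {"name": blind(hmac_key, name), "value": blind(hmac_key, value)}
    if unique:
        attr["unique"] = True
    return attr
```

Because HMAC is deterministic for a given key, the same attribute always blinds to the same token, which is what allows the storage provider to build an index over values it cannot read.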

Issue: Provide instructions and examples for how indexes are blinded using an HMAC key.

Issue: Explain that multiple entities can maintain their own independent indexes (using their own HMAC keys) provided they have been granted this capability. Explain that indexes can be sparse/partial. Explain that indexes have their own sequence number and that it will match the document's sequence number once the document is updated.

Issue: Add a section showing the update index endpoint and how it works.

Searching Encrypted Documents

The contents of a data vault can be searched using encrypted indexes created via the processes described in the previous section. There are two primary ways of searching for encrypted documents. The first is to search for a specific value associated with a specific index. The second is to check whether a specific index exists on a document.
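A non-normative sketch of the matching a server might perform against a document's indexed attributes, following the equals and has query shapes used in this section's examples. Treating each equals clause as a map of blinded names to blinded values is an assumption of this sketch:

```python
def matches(indexed_entry: dict, query: dict) -> bool:
    """Evaluate an `equals` or `has` query against one document's indexed
    attribute list, mirroring what a vault server might do server-side.
    All comparisons happen over blinded (HMAC'd) tokens, so the server
    never sees plaintext names or values."""
    attrs = indexed_entry.get("attributes", [])
    if "has" in query:
        # `has`: every listed attribute name must be indexed.
        names = {a["name"] for a in attrs}
        return all(n in names for n in query["has"])
    if "equals" in query:
        # `equals`: at least one clause must match in full (assumed OR of
        # AND-clauses, each clause a map of blinded name -> blinded value).
        pairs = {(a["name"], a["value"]) for a in attrs}
        return any(all(item in pairs for item in clause.items())
                   for clause in query["equals"])
    return False
```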

The example below demonstrates how to search for a specific value associated with a specific index.

POST /encrypted-data-vaults/z4sRgBJJLnYy/queries HTTP/1.1
Host: example.com
Content-Type: application/json
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate

{
  "index": "DUQaxPtSLtd8L3WBAIkJ4DiVJeqoF6bdnhR7lSaPloZ",
  "equals": [
    {"QV58Va4904K-18_L5g_vfARXRWEB00knFSGPpukUBro":
      "dh327d234h8437hc34f43f43ZXGHDXG"}
  ]
}
        

A successful query will result in a standard HTTP 200 response with a list of identifiers for all encrypted documents that match the query:

HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Date: Fri, 14 Jun 2019 18:45:18 GMT
Connection: keep-alive

["https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/docs/zMbxmSDn2Xzz"]
        

The contents of a data vault can also be searched to see if a certain attribute name is indexed by using the has keyword.

POST /encrypted-data-vaults/z4sRgBJJLnYy/queries HTTP/1.1
Host: example.com
Content-Type: application/json
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate

{
  "has": ["CUQaxPtSLtd8L3WBAIkJ4DiVJeqoF6bdnhR7lSaPloZ"]
}
        

If the query above is successful, an HTTP 200 status code is expected, along with a list of identifiers for all EncryptedDocuments that have the attribute indexed.

HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Date: Fri, 14 Jun 2019 18:45:18 GMT
Connection: keep-alive

["https://example.com/encrypted-data-vaults/z4sRgBJJLnYy/docs/zMbxmSDn2Xzz"]
        

Extension points

Encrypted Data Vaults support a number of extension points:

Privacy Considerations

This section details the general privacy considerations and specific privacy implications of deploying this specification into production environments.

Issue: Write privacy considerations.

Security Considerations

There are a number of security considerations that implementers should be aware of when processing data described by this specification. Ignoring or not understanding the implications of this section can result in security vulnerabilities.

While this section attempts to highlight a broad set of security considerations, it is not a complete list. Implementers are urged to seek the advice of security and cryptography professionals when implementing mission critical systems using the technology outlined in this specification.

Malicious or accidental modification of data

While a service provider is not able to read data in an Encrypted Data Vault, it is possible for a service provider to delete, add, or modify encrypted data. The deletion, addition, or modification of encrypted data can be prevented by keeping a global manifest of data in the data vault.
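A non-normative sketch of such a manifest check, where the client keeps a map from document identifiers to expected digests; the manifest structure and digest choice are assumptions of this sketch:

```python
import hashlib

def manifest_digest(doc: bytes) -> str:
    """Digest of an encrypted document as stored; SHA-256 hex is an
    illustrative choice, not mandated by the specification."""
    return hashlib.sha256(doc).hexdigest()

def verify_against_manifest(manifest: dict, doc_id: str, doc: bytes) -> bool:
    """Detect deletion, addition, or modification by comparing a document
    retrieved from the provider against a client-kept manifest of
    expected digests. An unknown id (possible addition) or a digest
    mismatch (modification) both fail verification."""
    expected = manifest.get(doc_id)
    return expected is not None and expected == manifest_digest(doc)
```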

Compromised vault

An Encrypted Data Vault can be compromised if the data controller (the entity who holds the decryption keys and appropriate authorization credentials) accidentally grants access to an attacker. For example, a victim might accidentally authorize an attacker to the entire vault or mishandle their encryption key. Once an attacker has access to the system, they may modify, remove, or change the vault's configuration.

Data access timing attacks

While it is normally difficult for a server to determine the identity of an entity, or the purpose for which that entity is accessing the Encrypted Data Vault, metadata related to access patterns, rough file sizes, and similar information is inevitably leaked when an entity accesses the vault. The system has been designed to minimize the information it leaks, an approach that protects against many, but not all, surveillance strategies that may be used by servers that are not acting in the best interest of the privacy of the vault's users.

Encrypted data on public networks

When protecting data, it is safest to assume that every encryption scheme will eventually be broken. For this reason, it is inadvisable for servers to store encrypted data on any sort of public storage network.

Unencrypted data on server

While this system goes to great lengths to encrypt content and metadata, a handful of fields cannot be encrypted so that the server can provide the features outlined in this specification. For example, a version number associated with data provides insight into how often the data is modified, and the identifiers associated with encrypted content enable a server to gain knowledge by correlating identifiers across documents. Implementations are advised to minimize the amount of information that is stored in an unencrypted fashion.

Partial matching on encrypted indexes

The encrypted indexes used by this system are designed to maximize privacy. As a result, there are a number of operations that are common in search systems that are not available with encrypted indexes, such as partial matching on encrypted text fields or searches over a scalar range. These features might be added in the future through the use of zero-knowledge encryption schemes.

Threat model for malicious service provider

While it is expected that most service providers are not malicious, it is also important to understand what a malicious service provider can and cannot do. The following attacks are possible given a malicious service provider:

Accessibility Considerations

There are a number of accessibility considerations implementers should be aware of when processing data described in this specification. As with any web standards or protocols implementation, ignoring accessibility issues makes this information unusable to a large subset of the population. It is important to follow accessibility guidelines and standards, such as [[WCAG21]], to ensure all people, regardless of ability, can make use of this data. This is especially important when establishing systems using cryptography, which have historically created problems for assistive technologies.

This section details the general accessibility considerations to take into account when using this data model.

Issue: Write accessibility considerations.

Internationalization Considerations

There are a number of internationalization considerations implementers should be aware of when publishing data described in this specification. As with any web standards or protocols implementation, ignoring internationalization makes it difficult for data to be produced and consumed across a disparate set of languages and societies, which would limit the applicability of the specification and significantly diminish its value as a standard.

This section outlines general internationalization considerations to take into account when using this data model.

Issue: Write i18n considerations.

Identity Hubs

Hubs let you securely store and share data. A Hub is a datastore containing semantic data objects at well-known locations. Each object in a Hub is signed by an identity and accessible via a globally recognized API format that explicitly maps to semantic data objects. Hubs are addressable via unique identifiers maintained in a global namespace.

One DID to Many Hub Instances

A single entity may have one or more instances of a Hub, all of which are addressable via a URI routing mechanism linked to the entity's identifier. Hub instances sync state changes, ensuring the owner can access data and attestations from anywhere, even when offline.

DID Document Service Endpoint Descriptors

There are two different variations of Hub-specific DID Document Service Endpoint descriptors, one that users associate with their DIDs, and another that Hosts use to direct requests to locations where their Hub infrastructure resides.

Users specify their Hub instances (different Hub Hosts) via the UserServiceEndpoint descriptor:

"service": [{
  "type": "IdentityHub",
  "publicKey": "did:foo:123#key-1",
  "serviceEndpoint": {
    "@context": "schema.identity.foundation/hub",
    "@type": "UserServiceEndpoint",
    "instances": ["did:bar:456", "did:zaz:789"]
  }
}]
          

Hosts specify the locations of their Hub offerings via the HostServiceEndpoint descriptor:

"service": [{
  "type": "IdentityHub",
  "publicKey": "did:bar:456#key-1",
  "serviceEndpoint": {
    "@context": "schema.identity.foundation/hub",
    "@type": "HostServiceEndpoint",
    "locations": ["https://hub1.bar.com/.identity", "https://hub2.bar.com/blah/.identity"]
  }
}]
          

Syncing data between Hubs

Hub instances must sync data without requiring master-slave relationships or forcing a single implementation for storage or application logic. This requires a shared replication protocol for broadcasting and resolving changes. The protocol for reproducing Hub state across multiple instances is in the formative phases of definition/selection, but should be relatively straightforward to integrate on top of any NoSQL datastore.

Hub data serialization and export

All Hubs must support the export of their serialized state. This is to ensure the user retains full control over the portability of their data. A later revision to this document will specify the process for invoking this intent and retrieving the serialized data from a Hub instance.

Hub Protocol Schemes

Hub URI Scheme

In addition to the URL path convention for individual Hub instances, it is important that links to an identity owner's data not be encoded with a dependency on a specific Hub instance. To make this possible, we propose the introduction of the following Hub URI scheme:

hub://did:foo:123abc/
          

User Agents that understand this scheme will leverage the Universal Resolver to look up the Hub instances of the target DID and address the Hub endpoints via the Service Endpoints it finds within. This allows the formation of URIs that are not Hub-instance-specific, providing a more natural way to link to a DID's data without tying the link to a particular instance. This also prevents the circulation of dead links across the Web, given that an identity owner can add or remove Hub instances at any time.

Authentication

The process of authenticating requests to Identity Hubs will follow the DIF/W3C DID Auth schemes, which are in the early phases of formation.

The current Identity Hub authentication scheme seeks to accomplish two tasks:

The current authentication scheme is an implementation of DID Auth. This document will outline how to authenticate Hub requests and responses. For full details on the authentication protocol used, and for a reference implementation of the protocol, please refer to the `did-auth-jose` library.

Authenticating a Hub request

Identity Hub requests and responses are signed and encrypted using the DID keys of the sender and the recipient. This protects the message over any transportation medium. All encrypted requests and responses follow the JSON Web Encryption (JWE) standard.

The steps to construct a JWE are as follows. First, construct a JWT. This JWT will be signed by the sender of the Hub request (the `iss`). This JWT must have the following form:

// JWT headers
{
  "alg": "RS256",
  "kid": "did:example:abc123#key-abc",
  "did-requester-nonce": "randomized-string",
  "did-access-token": "eyJhbGciOiJSUzI1N..."
}

// JWT body
{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "WriteRequest",
  "iss": "did:example:abc123",
  ...
}

// JWT signature
uQRqsaky-SeP3m9QPZmTGtRtMoKzyg6wwWF...
          

The JWT body is just the request, whose format is outlined earlier in this document. The header values must be the following:

Header Description
`alg` Standard JWT header. Indicates the algorithm used to sign the JWT.
`kid` Standard JWT header. The value should take the form `{did}#{key-id}`. Another app can take this value, resolve the DID, and find the indicated public key that can be used for signature validation of the commit. Here we have used `did:example:abc123`, because the request is signed with the user's private key.
`did-requester-nonce` A randomly generated string that must be cached on the client side. This string will be used to verify the response from the Hub in the sections below.
`did-access-token` A token that should be cached on the client side and included in each request sent to the Hub. Since we do not have an access token yet, leave this property out on the initial request. Sections below explain how to get an access token.

This JWT must use the typical JWT compact serialization format.
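A non-normative sketch of assembling the compact serialization is shown below. The signature bytes must come from a real RS256 signer keyed with the sender's DID key (e.g. via a JOSE library); a placeholder signature is used here for illustration only:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, per the JOSE family of specs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def compact_jwt(header: dict, body: dict, signature: bytes) -> str:
    """Assemble the compact serialization header.payload.signature.
    NOTE: `signature` must be produced by a real RS256 signer over the
    first two segments; this sketch only handles the serialization."""
    return ".".join([
        b64url(json.dumps(header, separators=(",", ":")).encode()),
        b64url(json.dumps(body, separators=(",", ":")).encode()),
        b64url(signature),
    ])
```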

We can now use this JWT as the plaintext used to construct our JWE. The JWE must have the following format.

// JWE protected header
{
  "alg": "RSA-OAEP-256",
  "kid": "did:example:abc456#abc-123",
  "enc": "A128GCM"
}

// JWE encrypted content encryption key
OKOawDo13gRp2ojaHV7LFpZcgV7T6DVZKTyKOM...

// JWE initialization vector
48V1_ALb6US04U3b...

// JWE plaintext (the JWT from above)
eyJhbGciOiJSUzI1NiIsImtpZCI6InRlc3R...

// JWE authentication tag
XFBoMYUZodetZdv...
          

We strongly recommend using a JOSE library to produce the above JWE. With a library, you should only need to provide the protected headers and the plaintext. The plaintext value should be the JWT constructed above. The header values are:

Header Description
`alg` Standard JWE header. Indicates the algorithm used to encrypt the JWE content encryption key.
`kid` Standard JWE header. The value should take the form `{did}#{key-id}`. Indicates the Hub key used to encrypt the JWE content encryption key. Here we have used `did:example:abc456`, since that is the DID used by the Hub. The DID used here should match the `aud` value in the Hub `WriteRequest`.
`enc` Standard JWE header. Indicates the algorithm used to encrypt the plaintext with the content encryption key, producing the ciphertext and authentication tag.

Finally, you have a signed and encrypted Hub request that can be transmitted to the user's Identity Hub for secure storage.

Caching the access token

To send a successful request to an Identity Hub, you need to include an access token in the `did-access-token` header of the JWE. The access token is a short-lived JWT that can be used across many Hub requests until it expires.

On an initial request to an Identity Hub, you should exclude the `did-access-token` header. When a Hub request does not include this header, the Hub will reject the request and instead return a JWE response (as described in the next section) whose payload is an access token. You should extract the access token from the response and cache it somewhere safe for use in subsequent requests.

Once you've cached the access token, include it in each request in the `did-access-token` JWE header as described above.

Eventually, the access token will expire. Its expiry time can be found in the `exp` claim inside the access token. If you attempt to use an expired access token, the Identity Hub will return an error indicating a new access token is required. To get a new access token, send another hub request without the `did-access-token` header.
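The token lifecycle above can be sketched client-side. `HubClient` and `send_jwe` are hypothetical names introduced here for illustration; `send_jwe` stands in for the real signing, encryption, and transport, and the shape of the token-refresh response is assumed.

```python
import time

class HubClient:
    """Hypothetical client sketch of the did-access-token caching flow."""

    def __init__(self, send_jwe):
        self.send_jwe = send_jwe      # callable(request, access_token) -> response dict
        self.access_token = None      # cached between requests
        self.token_expiry = 0.0       # from the token's `exp` claim

    def request(self, hub_request):
        # Refresh when we have no token, or the cached one has expired.
        if self.access_token is None or time.time() >= self.token_expiry:
            # Omit did-access-token; the Hub rejects the request and
            # returns a fresh access token instead.
            resp = self.send_jwe(hub_request, access_token=None)
            self.access_token = resp["access_token"]
            self.token_expiry = resp["exp"]
        # Retry with the (possibly new) token in the did-access-token header.
        return self.send_jwe(hub_request, access_token=self.access_token)
```

Subsequent calls reuse the cached token until its `exp` passes, at which point the client transparently refreshes it.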

Receiving a hub response

When possible, a hub will respond with a JWE encrypted with the client's DID keys:

eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZHQ00ifQ...
          

This JWE can be decrypted with the client's private key following the JWE standard to reproduce the response's plaintext.

The contents of the JWE will either be a valid hub response or a new access token. A new access token will only be included if the `did-access-token` header was omitted in the request.

Authorization

Access control for data stored in Hubs is currently implemented via a bare-bones permission layer. Access to data can be granted by a Hub owner, and can be restricted to certain types of data. More features to improve control over data access will be added in the future.

The success of a decentralized identity platform depends on users being able to share their data with other people, organizations, apps, and services in a way that respects and protects their privacy. In our decentralized platform, all user information and data reside in the user's Identity Hub. This section outlines the current proposal for Identity Hub authorization.

Scope of the current design

This proposal is a first cut. The intention is to start extremely simple, and extend the model to include more richness over time. We choose to focus on two simple use cases, described below.

Use case 1: Registering for a website
Alice has added some useful data about her wardrobe style to her Hub: her measurements from her tailor, and a list of her favorite clothing brands. When Alice goes to try out a new online clothing retailer, the retailer's website allows her to set up an account using her DID. After signing in with her DID, the retailer's website is able to access Alice's style data. Alice does not have to re-enter her sizes on the site, and the site can give her recommended options based on her brand preferences.
Permission request flow
Use case 2: Reviewing & managing access
Alice learns that one of the websites she visited is making improper use of her personal data. She would like to immediately remove that website's access to her Hub.
Permission denied flow
Out of scope

These use cases and the current Hub authorization system are not sufficient to consider Identity Hubs ready for real-world usage. The current design leaves out several features that have been discussed as being necessary for a minimally viable authorization layer, including:

Features that control what is being granted:

  • How to grant a permission to a specific object by ID, rather than all objects of a certain type.
  • How to grant a permission to a property of some object type, rather than the entire object.
  • How to grant permission to an object type and all of the child object types in its respective schema.
  • How to filter a permission to only:
    • objects created by a specific DID.
    • objects created in a certain time period.
    • objects larger than some byte size.
  • How to grant a permission to a zero-knowledge proof of some object, rather than the entire object.
  • How to grant permission to act as a delegate of a DID when interacting with other Hubs.

Features that control who is being granted access:

  • How to grant a permission to all DIDs, and therefore make some data public.
  • How to create a permission that explicitly denies a DID access to an object.

Features that limit/expand where or when access is granted:

  • How to time-bound permissions, such that a permission expires automatically.
  • How to grant permissions to an app on some devices, but not others.

Features that control why access is granted:

  • How an app can specify why permission is being requested.
  • How a user can specify why permission is being denied.
  • How relying parties and trust providers are reviewed for trustworthiness and integrity.

Features that are related to Hub authorization, but will be addressed at a later time:

  • How to request & send callbacks to notify apps of changes to data and permissions in a Hub.
  • How to authorize the execution of services, or extensions, in a Hub.
  • What format(s) the Hub uses for requests & responses.
  • How to encrypt data in a Hub such that the Hub provider cannot access it.

Clearly, there is a large body of functionality that can be added to Hub authorization over time. This is why this initial document intentionally strives to be as simple as possible. We'll incorporate these things into Hub authorization over time as we receive feedback from early adopters of Identity Hubs.

Authorization Model

Access to data in Identity Hubs is controlled by a special object stored in Hubs called a `PermissionGrant`. The structure of a `PermissionGrant` is:

{
  "owner": "did:example:12345", // the identity owner (granters)'s DID
  "grantee": "did:example:67890", // the grantee's DID
  "context": "schemas.clothing.org", // the data schema context
  "type": "measurements", // the data type
  "allow": "-R--", // permission flags; here read-only (positions: create, read, update, delete)
  ... // additional richness & specificity can be added in the future
}
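The `allow` string packs create/read/update/delete flags positionally, so `-R--` grants read only. A minimal decoder, assuming that positional CRUD interpretation (the spec shows the string but does not formally define the positions):

```python
# Decode the positional `allow` string. ASSUMPTION: the four positions
# map to Create, Read, Update, Delete, and "-" denies that operation.
def parse_allow(allow: str) -> dict:
    if len(allow) != 4:
        raise ValueError("allow string must have exactly four positions")
    return {op: ch.upper() == op for op, ch in zip("CRUD", allow)}

grant = parse_allow("-R--")
# grant["R"] is True; grant["C"], grant["U"], and grant["D"] are False
```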
          
Granting permissions

When a Hub owner grants a permission to another DID, they do so by specifying the data context and type in the permission grant. When permissions span more than one data type, several PermissionGrant objects can be created. For each PermissionGrant, the following object should be written to the `Permissions` interface of the owner's Hub, typically via a user agent:

{
  "@context": "schema.identity.foundation/Hub/",
  "@type": "PermissionGrant",
  "owner": "did:example:12345",
  "grantee": "did:example:67890",
  "context": "schemas.clothing.org",
  "type": "measurements",
  "allow": "-R--"
}
            

Note that the Hub Permissions interface only supports the single PermissionGrant object type. The Hub should reject any requests to create objects of other types in the Permissions interface, barring future updates to the PermissionGrant model.

The response format, and any error conditions, should be consistent with all other requests to Hubs. Upon creation of this permission grant object in a user's Hub, the permission will be propagated to all other Hub instances listed in the user's DID document via the Hub's standard sync & replication protocol. This will ensure that all Hub instances are up-to-date with all new permission grants in a timely manner.

Checking permissions

The following describes the logic implemented by the Hub's authorization layer when a request arrives.

  1. Receive incoming request from client
  2. Determine relevant schema, verb, and client from request
  3. Query for all PermissionGrants whose `context` and `type` match the schema, for the given client DID
  4. Check if any query results allow the verb in question
  5. Return success/failed authorization check

Note that PermissionGrants do not understand or evaluate the structure of a given schema. For instance, if a user grants access to all "https://schema.org/game" objects, they do not implicitly grant access to all "https://schema.org/videogame" objects (which is a child of game in schema.org's hierarchy).
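The check above can be sketched as follows. Note the exact string matching on `context` and `type`, with no schema-hierarchy expansion; `is_authorized` is a hypothetical helper name, not a Hub API.

```python
# Sketch of the Hub's authorization check. Matching is exact: granting
# access to schema.org/game objects does NOT grant schema.org/videogame.
def is_authorized(grants, client_did, context, obj_type, verb):
    """grants: list of PermissionGrant dicts; verb: one of "C", "R", "U", "D"."""
    for grant in grants:
        if (grant["grantee"] == client_did
                and grant["context"] == context
                and grant["type"] == obj_type
                and verb in grant["allow"]):     # e.g. "R" in "-R--"
            return True
    return False
```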

Reviewing & managing permissions

`PermissionGrant` objects can be created, read, modified, and deleted just like any other object in a Hub. To revoke access to data, the Hub owner simply modifies an existing `PermissionGrant` or deletes it entirely. Instructions for reading and writing data in Identity Hubs are available in the API section below.

Requesting permissions

At this time, proposals for how to request access to data in an identity hub via a user agent are still being evaluated. In the future, we will update this document with details on how a client can request access from a user.

API

Because of the sensitive nature of the data being transmitted to Identity Hubs, the Identity Hub request and response API may look a bit different to developers who are used to a traditional REST service API. Most of the differences are based on the high level of security and privacy decentralized identity inherently demands.

Commits

All data in identity hubs is represented as a series of "commits". A commit is similar to a git commit; it represents a change to an object. To write data to an identity hub, you need to construct and send a new commit to the hub. To read data from an identity hub, you need to fetch all commits from the hub. An object's current value can be constructed by applying all its commits in order.
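Assuming the "basic" commit strategy is simple last-write-wins (the spec names the strategy but its semantics are assumed here for illustration), reconstructing an object's current value might look like:

```python
# ASSUMPTION: the "basic" commit strategy is last-write-wins; the spec
# names the strategy but does not define it in this section.
def resolve_object(commits):
    """commits: decoded commit dicts ordered oldest to newest, each with
    an "operation" header and a "payload" body."""
    state = None
    for commit in commits:
        if commit["operation"] in ("create", "update"):
            state = commit["payload"]        # later commits replace earlier state
        elif commit["operation"] == "delete":
            state = None                     # tombstone the object
    return state
```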

The use of commits to represent data in identity hubs offers a few distinct advantages:

  • it facilitates the hub's replication protocol, enabling multiple hub instances to sync data.
  • it creates an auditable history of changes to an object, especially when each commit is signed by a DID.
  • it eases implementation for use cases that need offline data modification and require conflict resolution.

Each commit in a hub is a JWT whose body contains the data to be written to the hub. Here's an example of a deserialized and decoded JWT:

// JWT headers
{
  "alg": "RS256",
  "kid": "did:foo:123abc#key-abc",
  "interface": "Collections",
  "context": "https://schema.org",
  "type": "MusicPlaylist",
  "operation": "create",
  "committed_at": "2018-10-24T18:39:10.10Z",
  "commit_strategy": "basic",
  "sub": "did:bar:456def",

// Example metadata about the object that is intended to be "public"
  "meta": {
    "tags": ["classic rock", "rock", "rock n roll"],
    "cache-intent": "full"
  }
}

// JWT body
{
  "@context": "http://schema.org/",
  "@type": "MusicPlaylist",
  "description": "The best rock of the 60s, 70s, and 80s",
  "tracks": ["..."]
}

// JWT signature
uQRqsaky-SeP3m9QPZmTGtRtMoKzyg6wwWF...
          

The commit is signed by the committer writing the data, in this case did:foo:123abc. To write the commit into a hub, the committer must send a Hub write request.
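The compact JWT is assembled by base64url-encoding the headers and body, then signing the joined segments. This sketch leaves the signature as a placeholder, since producing it requires the committer's RS256 private key:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per RFC 7515."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

headers = {
    "alg": "RS256",
    "kid": "did:foo:123abc#key-abc",
    "interface": "Collections",
    "context": "https://schema.org",
    "type": "MusicPlaylist",
    "operation": "create",
    "commit_strategy": "basic",
    "sub": "did:bar:456def",
}
body = {
    "@context": "http://schema.org/",
    "@type": "MusicPlaylist",
    "description": "The best rock of the 60s, 70s, and 80s",
}

# The value to sign is "<encoded headers>.<encoded body>"; the third
# segment is a placeholder for the committer's RS256 signature.
signing_input = b64url(json.dumps(headers).encode()) + "." + b64url(json.dumps(body).encode())
commit_jwt = signing_input + ".<signature-over-signing-input>"
```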

Write Request & Response Format

Instead of a REST-based scheme where data like the username, object types, and query strings appear in the URL, Identity Hub requests are self-contained message objects that encapsulate everything they need, allowing their contents to be shielded from observing entities during transport.

Each Hub request is a JSON object which is then signed and encrypted as outlined in the authentication section. The outer envelope is signed with the key of the "iss" DID, and encrypted with the Hub's DID key(s).

{
  "iss": "did:foo:123abc",
  "sub": "did:bar:456def",
  "aud": "did:baz:789ghi",
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "WriteRequest",

  // The commit in JSON Serialization Format
  // See: https://tools.ietf.org/html/rfc7515#section-3.1
  "commit": {
    "protected": "ewogICJpbnRlcmZhY2...",

    // Optional metadata information not protected by the JWT signature
    "header": {
      "iss": "did:foo:123abc"
    },

    "payload": "ewogICJAY29udGV4dCI6...",
    "signature": "b7V2UpDPytr-kMnM_YjiQ3E0J2..."
  }
}
          

Each response is also a JSON object, signed and encrypted in the same way as the request. Its contents are:

{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "WriteResponse",
  "developer_message": "completely optional message from the hub",
  "revisions": ["aHashOfTheCommitSubmitted"]
}
          

Object Read Request & Response Format

Object queries address one logical object across its multiple commits. Object query responses do not contain the literal object data, only the associated metadata. Objects may be queried using the following request format:

{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "ObjectQueryRequest",
  "iss": "did:foo:123abc",
  "sub": "did:bar:456def",
  "aud": "did:baz:789ghi",
  "query": {
    "interface": "Collections",
    "context": "http://schema.org",
    "type": "MusicPlaylist",

    // Optional object_id filters
    "object_id": ["3a9de008f526d239..", "a8f3e7..."]
  }
}
          

The response to a query for objects returns a list of object IDs along with the object metadata. The format is:

{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "ObjectQueryResponse",
  "developer_message": "completely optional",
  "objects": [
    {
      // object metadata
      "interface": "Collections",
      "context": "http://schema.org",
      "type": "MusicPlaylist",
      "id": "3a9de008f526d239...",
      "created_by": "did:foo:123abc",
      "created_at": "2018-10-24T18:39:10.10Z",
      "sub": "did:foo:123abc",
      "commit_strategy": "basic",
      "meta": {
        "tags": ["classic rock", "rock", "rock n roll"],
        "cache-intent": "full"
      }
    }
    // ...more objects
  ],

  // potential pagination token
  "skip_token": "ajfl43241nnn1p;u9390"
}
          

Commit Read Request & Response Format

To get the actual data in an object, you must read the commits from the Identity Hub:

{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "CommitQueryRequest",
  "iss": "did:foo:123abc",
  "sub": "did:bar:456def",
  "aud": "did:baz:789ghi",
  "query": {
    "object_id": ["3a9de008f526d239..."],
    "revision": ["abc", "def", ...]
  }
}
          

A response to a query for commits contains a list of commit JWTs:

{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "CommitQueryResponse",
  "developer_message": "completely optional",
  "commits": [
    {
      "protected": "ewogICJpbnRlcmZhY2UiO...",
      "header": {
        "iss": "did:foo:123abc",
        // Hubs may add additional information to the unprotected headers for convenience
        "rev": "aHashOfTheCommit"
      },
      "payload": "ewogICJAY29udGV4dCI6ICdo...",
      "signature": "b7V2UpDPytr-kMnM_YjiQ3E0J2..."
    }
    // ...
  ],

  // potential pagination token
  "skip_token": "ajfl43241nnn1p;u9390"
}
          

Paging

`skip_token` is an opaque token used for continuation of a request.

It may be returned on responses with multiple results; to fetch the next page, add it to the subsequent request's query object:

{
  "iss": "did:foo:123abc",
  "sub": "did:bar:456def",
  "aud": "did:baz:789ghi",
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "ObjectQueryRequest",
  "interface": "Collections",
  "query": {
    "context": "schema.org",
    "type": "MusicPlaylist",
    "skip_token": "ajfl43241nnn1p;u9390"
  }
}
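Putting this together, a client can drain a paged query by copying each response's `skip_token` into the next request until none is returned. `fetch_all_objects` and `send_request` are hypothetical names; `send_request` stands in for the signed and encrypted Hub transport.

```python
# Hypothetical paging loop over the skip_token continuation mechanism.
def fetch_all_objects(send_request, base_request):
    results, skip_token = [], None
    while True:
        # Copy the request so the caller's query object is not mutated.
        request = dict(base_request)
        request["query"] = dict(base_request["query"])
        if skip_token is not None:
            request["query"]["skip_token"] = skip_token
        response = send_request(request)
        results.extend(response.get("objects", []))
        skip_token = response.get("skip_token")
        if skip_token is None:           # no token means no more pages
            return results
```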
          

Interfaces

To facilitate common interactions and data storage, Hubs provide a set of standard interfaces that can be written to:

Profile

Each Hub has a profile object that describes the owning entity. The profile object should use whatever schema and object best represent the entity. To get the profile for a DID, send an object query to the Profile interface:

{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "ObjectQueryRequest",
  "iss": "did:foo:123abc",
  "sub": "did:bar:456def",
  "aud": "did:baz:789ghi",
  "query": {
    "interface": "Profile"
  }
}
          

Permissions

All access and manipulation of identity data is subject to the permissions established by the owning entity. See the Authorization section above for details.

Actions

The Actions interface is for sending a target identity semantically meaningful objects that convey the sender's intent, often including a data payload. The Actions interface is not constrained to simple human-centric communications; rather, it is intended as a universal conduit through which identities can transact all manner of activities, exchanges, and notifications.

The base data format for conveying an action shall be: http://schema.org/Action

Here is a list of examples to show the range of use-cases this interface is intended to support:

  • Human user contacts another with a textual message (ReadAction)
  • Event app sends a request to RSVP for an event (RsvpAction)
  • Voting agency prompts a user to submit a vote (VoteAction)
{
  "@context": "http://schema.org/",
  "@type": "ReadAction",
  "name": "Acme Bank - March 2018 Statement",
  "description": "Your Acme Bank statement for account #1734765",
  "object": PDF_SOURCE
}
          

Stores

The best way to describe Stores is as a 1:1 DID-scoped variant of the W3C DOM's origin-scoped window.localStorage API. The key difference is that this form of persistent, pairwise object storage transcends providers, platforms, and devices. For each storage relationship between the DID owner and external DIDs, the Hub shall create a key-value document-based storage area. The DID owner or external DID can store unstructured JSON data to the document, in relation to the keys they specify. The Hub implementer may choose to limit the available space of the storage document, with the option to expand the storage limit based on criteria the implementer defines.
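As an illustration of the model only (not a Hub API), the pairwise key-value semantics could be sketched in memory like this; `Stores` is a hypothetical class name:

```python
# In-memory illustration of Stores: one key-value document per
# (owner DID, external DID) pairing, holding unstructured JSON-style data.
class Stores:
    def __init__(self):
        self._docs = {}   # (owner_did, external_did) -> dict

    def set(self, owner_did, external_did, key, value):
        self._docs.setdefault((owner_did, external_did), {})[key] = value

    def get(self, owner_did, external_did, key, default=None):
        return self._docs.get((owner_did, external_did), {}).get(key, default)
```

Each DID pairing gets its own isolated document, mirroring how localStorage isolates data per origin.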

Collections

Data discovery has been a problem since the inception of the Web. Most previous attempts to solve this begin with the premise that discovery is about individual entities providing a mapping of their own service-specific API and data schemas. While you can certainly create a common format for expressing different APIs and data schemas, you are left with the same basic issue: a sea of services that can't efficiently interoperate without specific review, effort, and integration. Hubs avoid this issue entirely by recognizing that the problem with data discovery is that it relies on discovery. Instead, Hubs assert the position that locating and retrieving data should be an implicitly knowable process.

Collections provide an interface for accessing data objects across all Hubs, regardless of their implementation. This interface exerts almost no opinion on what data schemas entities use. To do this, the Hub Collection interface allows objects from any schema to be stored, indexed, and accessed in a unified manner.

With Collections, you store, query, and retrieve data based on the very schema and type of data you seek. Here are a few example data objects from a variety of common schemas that entities may write and access via a user's Hub:

Locate any offers a user might want to share with apps (http://schema.org/Offer)

{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "ObjectQueryRequest",
  "iss": "did:foo:123abc",
  "sub": "did:bar:456def",
  "aud": "did:baz:789ghi",
  "query": {
    "interface": "Collections",
    "context": "http://schema.org",
    "type": "Offer"
  }
}
          

Services

Services offer a means to surface custom service calls an identity wishes to expose publicly or in an access-limited fashion. Services should not require the Hub host to directly execute code the service calls describe; service descriptions should link to a URI where execution takes place.

Performing a ServicesRequest to the base Services interface will return an object that contains an entry for every service description the requesting entity is permitted to access.

// request
{
  "iss": "did:foo:123abc",
  "sub": "did:bar:456def",
  "aud": "did:baz:789ghi",
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "ServicesRequest"
}

// response
{
  "@context": "https://schema.identity.foundation/0.1",
  "@type": "ServicesResponse",
  "developer_message": "optional message",
  "services": [{
    // Open API service descriptors
  }]
}
          

All definitions shall conform to the Open API descriptor format.