
FPSF-MPC-001 — MPC System for Disposable Keys

Document Metadata

Field          Value
Spec ID        FPSF-MPC-001
Title          MPC System for Disposable Keys
Version        1.0.0
Status         Draft
Date           2026-03-25
Author         Adalton Reis — reis@fabricpaymentstandards.org
Organization   Fabric Payment Standards Foundation
Contact        specs@fabricpaymentstandards.org
License        Apache-2.0

Table of Contents

  1. Abstract
  2. Scope
  3. Normative Language
  4. Terminology
  5. Trust and Threat Model
  6. Key Taxonomy
  7. System Architecture
  8. Cryptographic Foundations
  9. Network Layer and Transport Security
  10. Identity and Admission Control
  11. Coordinator Protocol
  12. Node Lifecycle
  13. Group Formation Protocol
  14. Distributed Key Generation
  15. Threshold Signature Protocol
  16. Key Share Storage
  17. Disposable Key Lifecycle
  18. Public REST API
  19. Request Authentication
  20. Account Model
  21. Anti-Collusion Mechanisms
  22. Observability and Uptime
  23. Fault Tolerance and Recovery
  24. Data Minimisation and Privacy
  25. Message Formats
  26. Error Codes
  27. Security Considerations
  28. Implementation Guidance
  29. Conformance
  30. Future Work

1. Abstract

This specification defines a multi-party computation (MPC) threshold signature system for generating, managing, and operating disposable Ed25519 signing keys on behalf of users. No single participant node — nor the coordinator — ever holds a complete private key. Private key material exists only as secret shares distributed across a dynamically formed group of participant nodes, and is irrecoverably discarded upon user request.

The system exposes a public REST API through which users may: register an account implicitly on first use; create disposable Ed25519 key pairs via MPC; sign arbitrary messages with a disposable key; and destroy disposable keys.

All API requests are self-authenticated through canonicalized, signed JSON payloads. There is no session state, no username/password, and no bearer token.


2. Scope

This specification covers:

  • System architecture and component responsibilities
  • Key taxonomy and role separation
  • Cryptographic protocols (Pedersen DKG, FROST)
  • Node identity, admission, and lifecycle
  • Group formation and anti-collusion mechanisms
  • The disposable key lifecycle from creation through destruction
  • The public REST API and its authentication model
  • The pseudo-account model
  • Fault tolerance and recovery procedures
  • Security considerations and threat mitigations

This specification does not cover:

  • Integration of disposable keys into specific payment protocols (see FPSF-CPP-001)
  • Regulatory or compliance obligations of system operators
  • Wallet or client-side key management beyond what is necessary for API interaction

3. Normative Language

The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL are to be interpreted as described in RFC 2119.


4. Terminology

Root Key: An Ed25519 key pair held exclusively offline by the user. Its public key serves as the account identifier. Never transmitted to the system.

Sub Key: An Ed25519 key pair held on the user's device. Authorized by the root key. Signs all API requests.

Disposable Key: An Ed25519 key pair generated entirely within the MPC system via DKG. The private key never exists in assembled form.

Node: A participant in the MPC network. Holds shares of disposable keys. Communicates with the Coordinator over mTLS WebSocket.

Coordinator: The central server that admits nodes, orchestrates group formation, dispatches jobs, and monitors uptime.

Group: A temporary set of n nodes assigned to manage one disposable key.

Threshold: The minimum number t of nodes required to produce a valid signature. t ≤ n.

DKG: Distributed Key Generation — the protocol by which a group collectively creates a key pair such that no single node learns the full private key.

FROST: Flexible Round-Optimized Schnorr Threshold Signatures — the signing protocol used. Produces standard Ed25519-compatible signatures.

Share: A node's portion of a distributed private key scalar.

Authorization Token: A signed statement by a root key certifying that a given sub key is authorized to act on the account.

Canonical JSON: JSON serialized per RFC 8785 (JCS). Required for deterministic signing.

mTLS: Mutual TLS — both sides of a connection present and verify X.509 certificates.

CA: Certificate Authority — issues and revokes node X.509 certificates.

VRF: Verifiable Random Function — produces pseudorandom output with a cryptographic proof of correctness. Used for anti-collusion group selection.

Key ID: A UUID v4 assigned by the Coordinator to a disposable key at creation time.

Account ID: The SHA-256 hash of the root key's public key, encoded as lowercase hexadecimal.

5. Trust and Threat Model

5.1 Trusted Parties

The CA / Issuing Institution. Issues node certificates. Trusted not to issue certificates to colluding parties. Its compromise is out of scope.

The Coordinator. Trusted to dispatch jobs fairly and not to permanently bias group selection. Its compromise MUST NOT expose key material — the Coordinator never holds shares.

5.2 Untrusted Parties

Individual Nodes. No single node is trusted with a complete key. Up to t-1 nodes may be compromised without affecting security.

API Callers. Considered untrusted until they present a valid, fresh, self-signed request envelope.

5.3 Threat Mitigations

Node compromise (fewer than t nodes): Threshold property — partial shares are cryptographically useless.

Node collusion: VRF-based group assignment; nodes do not learn each other's identity across groups.

Coordinator compromise: Coordinator never holds key material; audit log is append-only and verifiable.

Replay attacks on API: Nonce and timestamp in every request envelope; nonce cache with TTL.

Root key exfiltration: Root key never contacts the API; all requests are signed by sub keys.

Share exfiltration via network: mTLS with client certificates; inter-node payloads are encrypted point-to-point.

Long-lived key exposure: Disposable keys are destroyed on demand; shares are wiped on destruction.

Side-channel share reconstruction: Shares encrypted at rest; memory-safety enforced by implementation.

6. Key Taxonomy

The system defines exactly three key classes. They are not interchangeable.

┌────────────────────────────────────────────────────────────┐
│ User Domain (off-system)                                   │
│                                                            │
│ Root Key (Ed25519) ──signs──► Sub-Key Authorization Token  │
│   Lives offline                                            │
│                                                            │
│ Sub Key (Ed25519) ──signs──► All API Requests              │
│   Lives on user device                                     │
└────────────────────────────────────────────────────────────┘
                              │ HTTPS REST
                              ▼
┌────────────────────────────────────────────────────────────┐
│ MPC System                                                 │
│                                                            │
│ Disposable Key (Ed25519)                                   │
│   Public key: returned to user on creation                 │
│   Private scalar: held as shares by nodes; never assembled │
│   Identified by Key ID                                     │
└────────────────────────────────────────────────────────────┘

6.1 Root Key

An Ed25519 key pair. The private key is held by the user and MUST NEVER be transmitted anywhere. The public key is registered implicitly on first API call. Its sole function is to sign Sub-Key Authorization Tokens. The system MUST reject any request signed directly by a known root key public key.

6.2 Sub Key

An Ed25519 key pair held on the user's device. Authorized by a root key Authorization Token, which is presented in every API request. Multiple sub keys may be authorized per root key. Sub keys are the sole permitted signers of API requests.

6.3 Disposable Key

An Ed25519 key pair generated by FROST DKG. The private scalar is split into n shares across a group of nodes and MUST NEVER be assembled. The public key is returned to the user on creation. Identified by a system-assigned UUID v4 Key ID. Lifecycle: created → used (zero or more times) → destroyed.


7. System Architecture

7.1 Component Diagram

     ┌────────────────────┐
     │   User / Client    │
     │ (Root Key offline) │
     │ (Sub Key online)   │
     └─────────┬──────────┘
               │ HTTPS REST (RFC 8785 signed JSON)
               ▼
┌─────────────────────────────┐
│         API Gateway         │
│ - TLS termination           │
│ - Canonicalization verify   │
│ - Request validation        │
│ - Rate limiting             │
└──────────────┬──────────────┘
               │ Internal gRPC / HTTP
               ▼
┌─────────────────────────────┐
│         Coordinator         │
│ - Node registry             │
│ - Group formation (VRF)     │
│ - Job dispatch              │
│ - Uptime monitoring         │
│ - Key ID to Group mapping   │
└──────┬──────────────┬───────┘
       │ mTLS WebSocket
       │              │
┌──────┴────────┐   ┌─┴─────────────┐
│    Node A     │   │    Node B     │
│ (share store) │...│ (share store) │
└───────────────┘   └───────────────┘

7.2 Component Responsibilities

API Gateway. Terminates HTTPS. Validates RFC 8785 canonical JSON structure. Verifies sub-key signature over the request body. Verifies Authorization Token (root key signature over sub-key public key). Routes valid requests to the Coordinator. Returns structured error responses.

Coordinator. Maintains a live registry of connected nodes and their health status. Selects node groups using a VRF. Dispatches DKG and signing jobs. Stores the mapping of Key ID to group node IDs, threshold parameters, and public key. Monitors node liveness via heartbeat. Manages node admission and revocation. Produces signed, append-only audit logs.

Participant Nodes. Maintain a persistent mTLS WebSocket connection to the Coordinator. Participate in DKG and FROST signing when selected. Store encrypted key shares. Report health metrics. Wipe shares upon receiving a verified destruction command.

7.3 Persistence Requirements

Entity                              Location                Notes
Node mTLS certificate               Node local disk         Issued by CA
Key shares                          Node encrypted storage  AES-256-GCM; never in plaintext
Key ID to Group mapping             Coordinator database    Authoritative record
Pseudo-account (root pub key hash)  Coordinator database    Hash only; raw public key not stored
Sub-key authorization tokens        Not stored              Verified on each request; not retained
Audit log                           Append-only store       Coordinator signs each entry

8. Cryptographic Foundations

8.1 Signature Scheme

All keys in this system are Ed25519. The threshold signing protocol is FROST (Flexible Round-Optimized Schnorr Threshold Signatures), as specified in IETF draft-irtf-cfrg-frost.

FROST properties relevant to this system:

  • Two-round protocol (Commitment round + Signature round)
  • Produces a standard Ed25519-compatible signature — verifiers need not be FROST-aware
  • Supports t-of-n threshold: any t of n participants can sign; fewer than t learn nothing about the private key
  • Non-interactive aggregation: the Coordinator can aggregate partial signatures without learning the private key

8.2 Threshold Parameters

The threshold (t, n) is configurable per key-creation request, subject to system policy bounds:

  • Minimum t: 2 (single-node signing is disallowed)
  • Minimum n: t + 1 (at least one redundant node)
  • Maximum n: Coordinator policy (recommended: 7–15 for latency; bounded by available live nodes)
  • Default if not specified: (t=3, n=5)

The chosen (t, n) is recorded in Key ID metadata and is immutable after key creation.
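
As a minimal illustration (the spec mandates no implementation language; Python is used throughout these sketches), a coordinator-side policy check could look like the following. `validate_threshold` and `max_n` are hypothetical names, with the maximum taken from the recommended 15-node upper bound above:

```python
def validate_threshold(t: int, n: int, max_n: int = 15) -> None:
    """Reject (t, n) pairs outside the Section 8.2 policy bounds."""
    if t < 2:
        raise ValueError("t must be >= 2: single-node signing is disallowed")
    if n < t + 1:
        raise ValueError("n must be >= t + 1: at least one redundant node")
    if n > max_n:
        raise ValueError(f"n must be <= coordinator policy maximum {max_n}")

# The documented default (t=3, n=5) passes validation.
validate_threshold(3, 5)
```

A real coordinator would additionally cap n by the number of currently live nodes.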

8.3 Distributed Key Generation

DKG uses Pedersen DKG, compatible with FROST. Each node:

  1. Generates a random polynomial of degree t-1.
  2. Broadcasts commitments to polynomial coefficients (Pedersen commitments).
  3. Sends secret shares to every other group member, encrypted point-to-point over the mTLS channel.
  4. Verifies received shares against commitments.
  5. Derives its final share as the sum of received contributions.

The group public key is derived from the broadcasted commitments and is identical to what standard Ed25519 key generation would produce for the corresponding private scalar.
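
The share-verification mechanics can be demonstrated in miniature. The sketch below uses a tiny Schnorr group (multiplicative subgroup mod a small prime) rather than the Ed25519 group, so it is illustrative only; `make_dealer` and `verify_share` are hypothetical helper names:

```python
import random

# Toy parameters: subgroup of order q = 101 inside Z_607^* (607 = 6*101 + 1).
# Real deployments use the Ed25519 group; this is purely illustrative.
q = 101
p = 607
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

def make_dealer(t: int):
    """One node's secret polynomial f(x) of degree t-1, plus its
    coefficient commitments C_k = g^{a_k} (step 2 of Section 8.3)."""
    coeffs = [random.randrange(1, q) for _ in range(t)]
    commits = [pow(g, a, p) for a in coeffs]
    def share(j: int) -> int:  # s_j = f(j) mod q (step 3)
        return sum(a * pow(j, k, q) for k, a in enumerate(coeffs)) % q
    return share, commits

def verify_share(j: int, s: int, commits) -> bool:
    """Step 4: check g^s == prod(C_k^{j^k}), i.e. the share lies on the
    committed polynomial."""
    rhs = 1
    for k, C in enumerate(commits):
        rhs = rhs * pow(C, pow(j, k, q), p) % p
    return pow(g, s, p) == rhs

t, n = 3, 5
share_fn, commits = make_dealer(t)
assert all(verify_share(j, share_fn(j), commits) for j in range(1, n + 1))
```

In the full protocol every node acts as a dealer and each participant sums the verified contributions it receives into its final share.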

8.4 Authorization Token Format

An Authorization Token is an Ed25519 signature by the root key over the following canonical structure:

{
  "version": "1",
  "type": "sub_key_authorization",
  "root_key_pub": "<base64url-encoded root public key>",
  "sub_key_pub": "<base64url-encoded sub key public key>",
  "issued_at": "<ISO 8601 UTC timestamp>",
  "expires_at": "<ISO 8601 UTC timestamp — optional>"
}

This object MUST be serialized using RFC 8785 before signing. The resulting signature is base64url-encoded and included in every API request envelope. The system MUST NOT store the token. Verification is stateless.


9. Network Layer and Transport Security

9.1 External API (Client to API Gateway)

Protocol: HTTPS, TLS 1.3 minimum. No client certificate required (public-facing). Authentication is at the application layer via signed request envelopes (Section 19).

9.2 Internal Network (Coordinator to Nodes)

Protocol: WebSocket over TLS 1.3 (WSS) with mutual TLS (mTLS). Both Coordinator and each Node present X.509 certificates issued by the CA. The Coordinator validates node certificates at connection time and periodically (every 5 minutes) for connected nodes via CRL/OCSP.

Node certificates MUST contain:

  • SubjectPublicKeyInfo: the node's long-term identity key (Ed25519 preferred; RSA-2048 minimum for compatibility)
  • SubjectAltName: opaque node identifier assigned by CA
  • KeyUsage: digitalSignature, keyAgreement
  • Validity period: recommended 90 days

Nodes connect to the Coordinator. The Coordinator MUST NOT initiate connections to nodes.

9.3 WebSocket Message Envelope

All WebSocket messages are binary frames carrying UTF-8 JSON payloads conforming to:

{
  "msg_id": "<UUID v4>",
  "msg_type": "<type string>",
  "sender_node_id": "<opaque node identifier>",
  "timestamp": "<ISO 8601 UTC>",
  "payload": {},
  "sig": "<base64url Ed25519 signature over canonical JSON of all other fields>"
}

All participants MUST verify sig before processing any message. Messages with invalid signatures MUST be silently dropped and the anomaly MUST be logged.

9.4 Connection Management

  • Nodes maintain a persistent WebSocket connection to the Coordinator.
  • Heartbeat: nodes send a PING every 10 seconds; Coordinator expects PONG within 5 seconds.
  • Three consecutive missed heartbeats: node marked DEGRADED.
  • Five consecutive missed heartbeats: node marked OFFLINE.
  • Reconnection: exponential backoff (base 1s, max 60s, ±20% jitter).
  • The Coordinator MUST re-validate the mTLS certificate on reconnect.
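
The reconnection policy above (exponential backoff from 1 s, capped at 60 s, with ±20% jitter) can be sketched as a small helper; `backoff_delay` is a hypothetical name:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0,
                  jitter: float = 0.20) -> float:
    """Delay in seconds before reconnect attempt `attempt` (0-based):
    exponential growth from `base`, capped at `cap`, with +/-20% jitter."""
    delay = min(cap, base * (2 ** attempt))
    return delay * (1 + random.uniform(-jitter, jitter))

# Early attempts grow roughly 1s, 2s, 4s, ... and saturate near 60s.
```

Jitter prevents a fleet of nodes disconnected by the same event from reconnecting in synchronized waves.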

10. Identity and Admission Control

10.1 Node Identity

Each node's identity is its certificate subject, issued by the CA. Process:

  1. Node operator generates a key pair and submits a CSR to the CA.
  2. CA verifies operator identity through out-of-band means.
  3. CA issues certificate with required extensions (Section 9.2).
  4. Node presents certificate on WebSocket connection; Coordinator validates.
  5. Node is added to the node registry in ONLINE state.

10.2 Node Revocation

The CA maintains a CRL and an OCSP responder. The Coordinator checks certificate validity at connection time and every 5 minutes for connected nodes. On revocation detection: node is marked REVOKED, disconnected, and removed from all future group assignments.

10.3 User Sub-Key Authorization

There is no explicit registration step for sub keys. Authorization is carried inline with every request (Section 19). The Coordinator tracks which root key hashes have been seen, creating pseudo-accounts implicitly on first request.


11. Coordinator Protocol

11.1 Job State Machine

Key creation jobs:

PENDING → GROUPS_ASSIGNED → DKG_IN_PROGRESS ─┬─► COMPLETE
                                             └─► FAILED

Signing jobs:

PENDING → GROUP_NOTIFIED → COMMITMENT_ROUND → SIGNATURE_ROUND ─┬─► COMPLETE
                                                               └─► FAILED → RETRY or ABORT

11.2 Job Queue

The Coordinator maintains a persistent job queue ordered by arrival time. Job TTLs: DKG jobs 30 seconds; signing jobs 15 seconds. Expired jobs are marked FAILED. Jobs are retried at most once before failure is returned to the client.

11.3 Internal WebSocket Messages

Message Type       Direction              Description
NODE_REGISTER      Node → Coordinator     Initial handshake after connection
NODE_PING          Node → Coordinator     Heartbeat
NODE_PONG          Coordinator → Node     Heartbeat acknowledgment
NODE_LEAVE         Node → Coordinator     Clean departure notification
JOB_ASSIGN         Coordinator → Node     Assignment to DKG or signing group
JOB_DECLINE        Node → Coordinator     Node cannot accept job
DKG_COMMITMENT     Node → Coordinator     Broadcast DKG polynomial commitments
DKG_SHARE          Node → Node (relayed)  Encrypted share for a specific peer
DKG_COMPLETE       Node → Coordinator     DKG finished successfully on this node
DKG_ABORT          Node → Coordinator     DKG error on this node
SIGN_NONCE_COMMIT  Node → Coordinator     FROST round 1: nonce commitment
SIGN_PARTIAL_SIG   Node → Coordinator     FROST round 2: partial signature
SIGN_ABORT         Node → Coordinator     Signing error on this node
KEY_DESTROY        Coordinator → Node     Instruct node to wipe a specific share
KEY_DESTROY_ACK    Node → Coordinator     Share wiped confirmation
HEALTH_REPORT      Node → Coordinator     CPU, memory, active groups, error counts

12. Node Lifecycle

12.1 States

CONNECTING → ONLINE → DEGRADED → OFFLINE
                │                   │
                ▼                   ▼
             REVOKED          RECONNECTING

State         Meaning
CONNECTING    WebSocket handshake in progress
ONLINE        Fully connected; eligible for group assignment
DEGRADED      Missed 3–4 consecutive heartbeats; not assigned to new groups
OFFLINE       Missed 5+ heartbeats or connection dropped
REVOKED       Certificate revoked; permanently excluded
RECONNECTING  Attempting to re-establish connection

12.2 Concurrency

A node may participate in multiple concurrent groups. The Coordinator limits concurrent group membership per node (recommended maximum: 10) to prevent resource exhaustion. Nodes report current load in HEALTH_REPORT messages.

12.3 Node Departure

On clean disconnect, the node sends NODE_LEAVE. The Coordinator marks it OFFLINE. In-flight jobs involving that node are assessed: if t nodes remain available for the group, the job continues. Otherwise it is aborted and retried with a new group. Shares remain on disk (encrypted) until reconnection or explicit destruction.


13. Group Formation Protocol

13.1 Principles

Groups are ephemeral — formed for a specific key and dissolved conceptually at key destruction. Nodes do not need to know each other's real identities. All inter-node DKG traffic is relayed through the Coordinator using opaque session-scoped handles, preventing nodes from correlating peer identities across groups.

13.2 Group Selection Algorithm

Group selection MUST use a Verifiable Random Function (VRF):

  1. Coordinator generates a random job_seed (32 bytes, CSPRNG).
  2. Coordinator computes vrf_output = VRF_prove(coordinator_private_key, job_seed || key_id).
  3. Coordinator sorts eligible (ONLINE, under load limit) nodes by HMAC(vrf_output, node_id).
  4. Selects the top n nodes from the sorted list.
  5. Publishes job_seed, vrf_output, and vrf_proof to the audit log.

This prevents the Coordinator from selectively biasing groups while maintaining central control. Selection can be retrospectively verified by any auditor.
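
Steps 3–4 (ranking eligible nodes by a keyed hash of the VRF output) can be sketched as follows. The real VRF_prove output is stood in for by an opaque byte string, and `select_group` is a hypothetical helper name:

```python
import hashlib
import hmac

def select_group(vrf_output: bytes, eligible_node_ids, n: int):
    """Rank eligible nodes by HMAC(vrf_output, node_id) and take the
    top n, per steps 3-4 of Section 13.2."""
    ranked = sorted(
        eligible_node_ids,
        key=lambda nid: hmac.new(vrf_output, nid.encode(), hashlib.sha256).digest(),
    )
    return ranked[:n]

nodes = [f"node-{i}" for i in range(12)]
group = select_group(b"\x01" * 32, nodes, 5)
assert len(group) == 5 and set(group) <= set(nodes)
```

Because the ranking is fully determined by the published VRF output, an auditor can re-run this computation against the node registry snapshot and confirm the selection was not hand-picked.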

13.3 Group Metadata

{
  "key_id": "<UUID v4>",
  "account_id": "<hex SHA-256 of root pub key>",
  "threshold": { "t": 3, "n": 5 },
  "group_node_handles": ["<opaque handle>"],
  "group_public_key": "<base64url Ed25519 public key>",
  "created_at": "<ISO 8601>",
  "state": "ACTIVE | SIGNING | DESTROYING | DESTROYED"
}

Node handles are session-scoped opaque tokens that cannot be correlated with node identities by observers.


14. Distributed Key Generation

14.1 Protocol (Pedersen DKG)

Precondition: A group of n nodes has been assigned to a DKG job.

Round 1 — Commitments:

  1. Each node i samples a random polynomial f_i(x) of degree t-1 over the scalar field of Ed25519.
  2. Node i computes Pedersen commitments C_i_k = a_i_k * G for each coefficient a_i_k of f_i, for k = 0..t-1.
  3. Node i broadcasts commitments to all group members via the Coordinator.

Round 2 — Share Distribution:

  1. Each node i computes shares s_i_j = f_i(j) for each peer j.
  2. Node i encrypts s_i_j using an ephemeral ECDH key derived from peer j's public key, then sends via DKG_SHARE (relayed by Coordinator, addressed by session handle).
  3. Each node j decrypts and verifies: s_i_j * G == sum(C_i_k * j^k for k in 0..t-1). On failure, node broadcasts DKG_ABORT.

Completion:

  1. Each node i computes its final share: x_i = sum(s_j_i for all j in group).
  2. Each node computes the group public key: PK = sum(C_j_0 for all j in group).
  3. All nodes broadcast their computed PK; Coordinator checks unanimity.
  4. Coordinator records PK as the disposable key's public key and returns Key ID and public key to the client.

14.2 Abort Conditions

DKG MUST be aborted if:

  • Any node broadcasts DKG_ABORT
  • Fewer than n nodes complete Round 1 within the timeout
  • Computed PK values are not unanimous
  • The Coordinator's job TTL expires

On abort, the Coordinator selects a new group and retries once. On second failure, the key creation request fails.


15. Threshold Signature Protocol

15.1 Overview

Signing uses FROST (IETF draft-irtf-cfrg-frost, latest stable version). Only t of the n group nodes are needed.

15.2 Signer Selection

  1. Coordinator looks up the group for the requested Key ID.
  2. Filters to nodes currently ONLINE.
  3. If fewer than t nodes are online: job fails with INSUFFICIENT_NODES.
  4. Otherwise: selects exactly t nodes and broadcasts JOB_ASSIGN.

15.3 Round 1 — Nonce Commitment

  1. Each signer i generates random nonces (d_i, e_i) and computes commitments (D_i, E_i) = (d_i * G, e_i * G).
  2. Each signer sends {D_i, E_i} to the Coordinator.
  3. Coordinator assembles commitment list L = [(i, D_i, E_i) for all signers] and broadcasts to all signers.

15.4 Round 2 — Partial Signatures

  1. Each signer computes binding factor ρ_i = H(i, message, L) per the FROST specification.
  2. Each signer computes group commitment R = sum(D_i + ρ_i * E_i).
  3. Each signer computes challenge c = H(R, PK, message).
  4. Each signer computes partial signature z_i = d_i + e_i * ρ_i + λ_i * x_i * c where λ_i is the Lagrange coefficient for signer i and x_i is the signer's share.
  5. Each signer sends z_i to the Coordinator.
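
The Lagrange coefficient λ_i in step 4 is what lets any t shares stand in for the full secret: it evaluates the degree t-1 share polynomial at x = 0. A toy computation over a small prime field (not the Ed25519 scalar field; `lagrange_at_zero` is a hypothetical helper):

```python
def lagrange_at_zero(i: int, signers, q: int) -> int:
    """lambda_i = prod_{j != i} j / (j - i) mod q, the Lagrange
    coefficient for evaluating the share polynomial at x = 0."""
    num, den = 1, 1
    for j in signers:
        if j != i:
            num = num * j % q
            den = den * (j - i) % q
    return num * pow(den, -1, q) % q

# Sanity check: reconstruct the "secret" f(0) = 7 of the toy polynomial
# f(x) = 7 + 4x + 9x^2 over GF(101) from t = 3 shares.
q = 101
f = lambda x: (7 + 4 * x + 9 * x * x) % q
signers = [1, 3, 5]
secret = sum(lagrange_at_zero(i, signers, q) * f(i) for i in signers) % q
assert secret == 7
```

In FROST the coefficient multiplies each signer's share inside z_i, so the aggregate z implicitly interpolates the group secret without any party ever computing it directly.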

15.5 Aggregation

  1. Coordinator verifies each partial signature z_i against the signer's share commitment.
  2. Coordinator aggregates: z = sum(z_i).
  3. Final signature: σ = (R, z) — a standard Ed25519-compatible Schnorr signature.
  4. Coordinator verifies σ against PK and the message before returning to the client.

15.6 Nonce Security

  • Nonces MUST be generated fresh for each signing operation.
  • Signers MUST NOT pre-generate and store nonce batches before job assignment, to prevent state-compromise attacks.
  • Nonces MUST be discarded immediately after use.

16. Key Share Storage

16.1 Encryption at Rest

Each node stores key shares encrypted using a key derived from the node's mTLS private key:

  • Storage key: HKDF-SHA-256(node_mtls_private_key, info="share-storage-v1")
  • Encryption: AES-256-GCM
  • AAD: key_id || node_id (prevents shares being transplanted between nodes or keys)
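
The storage-key derivation can be illustrated with a standard RFC 5869 HKDF built from the standard library (the AES-256-GCM encryption step itself would come from a cryptography library and is omitted; the input key bytes here are placeholders, not a real mTLS key):

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32,
                salt: bytes = b"") -> bytes:
    """RFC 5869 HKDF (Extract then Expand) with SHA-256."""
    salt = salt or bytes(hashlib.sha256().digest_size)
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # Extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # Expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Derive the 32-byte AES-256 storage key with the documented info label.
storage_key = hkdf_sha256(b"example-mtls-private-key", b"share-storage-v1")
assert len(storage_key) == 32
```

Binding the AAD to key_id || node_id then ensures a ciphertext copied to another node, or relabeled under another key, fails authentication on decryption.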

16.2 Share Index

Each node maintains a local index:

key_id → {
  encrypted_share,
  threshold_params,
  group_public_key,
  account_id_hash
}

16.3 Share Availability During Node Absence

If a node goes offline while holding shares:

  • Signing can continue if at least t other group nodes are online.
  • If fewer than t nodes are available, signing fails until enough nodes return.
  • Shares are not automatically redistributed. The system relies on n > t for redundancy.

16.4 Share Destruction

On key destruction:

  1. Coordinator broadcasts KEY_DESTROY to all group nodes.
  2. Each online node wipes the share from storage and returns KEY_DESTROY_ACK.
  3. Nodes offline at destruction time MUST wipe the share upon next reconnection, before being admitted to any new groups.
  4. Coordinator tracks outstanding KEY_DESTROY_ACK messages.

17. Disposable Key Lifecycle

CREATE ──► DKG protocol ──► ACTIVE
                              │
                 ┌────────────┤
                 │            │
                 ▼            ▼
               SIGN        DESTROY
            (0 or more)       │
                              ▼
                         DESTROYING
                    (all shares wiped)
                              │
                              ▼
                          DESTROYED

17.1 Creation

Triggered by POST /api/v1/keys. The key transitions from internal PENDING to ACTIVE upon DKG completion. The public key is returned to the user.

17.2 Signing

Triggered by POST /api/v1/keys/:key_id/sign. The key state remains ACTIVE after signing. A key may be used to sign an unlimited number of messages until it is destroyed.

17.3 Retained Metadata

For an active key, the system retains only:

  • Key ID (UUID v4)
  • Account ID hash
  • Group node handles (session-scoped)
  • Group public key (Ed25519 point)
  • Threshold parameters (t, n)
  • Creation timestamp
  • State

No message content, signature history, or user-identifying data beyond the account hash is retained.

17.4 Destruction

Triggered by DELETE /api/v1/keys/:key_id. The Coordinator sets state to DESTROYING, broadcasts KEY_DESTROY to all group nodes, awaits KEY_DESTROY_ACK responses, and marks the key DESTROYED. Key metadata is retained for audit purposes with state DESTROYED.


18. Public REST API

18.1 Base URL

https://{system-domain}/api/v1/

18.2 Content Negotiation

All requests: Content-Type: application/json. All responses: Content-Type: application/json. Encoding: UTF-8.

18.3 Endpoints

POST /api/v1/keys

Create a new disposable key pair.

Request body:

{
  "envelope": {
    "version": "1",
    "action": "create_key",
    "nonce": "<16-byte random, base64url>",
    "timestamp": "<ISO 8601 UTC>",
    "sub_key_pub": "<base64url Ed25519 pub key>",
    "root_key_pub": "<base64url Ed25519 pub key>",
    "authorization": {
      "token": {},
      "token_sig": "<base64url>"
    },
    "params": {
      "threshold_t": 3,
      "threshold_n": 5
    }
  },
  "sig": "<base64url Ed25519 signature by sub key over canonical JSON of envelope>"
}

Response 201:

{
  "key_id": "<UUID v4>",
  "public_key": "<base64url Ed25519 public key>",
  "threshold_t": 3,
  "threshold_n": 5,
  "created_at": "<ISO 8601>"
}

GET /api/v1/keys

List all active disposable keys for the caller's account.

Request: signed envelope in X-MPC-Request header (action: "list_keys").

Response 200:

{
  "keys": [
    {
      "key_id": "<UUID v4>",
      "public_key": "<base64url>",
      "threshold_t": 3,
      "threshold_n": 5,
      "created_at": "<ISO 8601>",
      "state": "ACTIVE"
    }
  ]
}

POST /api/v1/keys/:key_id/sign

Sign a message using a specified disposable key.

Request body:

{
  "envelope": {
    "version": "1",
    "action": "sign",
    "nonce": "<base64url>",
    "timestamp": "<ISO 8601 UTC>",
    "sub_key_pub": "<base64url>",
    "root_key_pub": "<base64url>",
    "authorization": { "token": {}, "token_sig": "<base64url>" },
    "message": "<base64url — raw bytes to be signed>"
  },
  "sig": "<base64url>"
}

Response 200:

{
  "key_id": "<UUID v4>",
  "signature": "<base64url Ed25519 signature over message>",
  "public_key": "<base64url — the disposable key public key>",
  "signed_at": "<ISO 8601>"
}

DELETE /api/v1/keys/:key_id

Destroy a disposable key. Wipes all shares.

Request: signed envelope in X-MPC-Request header (action: "destroy_key").

Response 200:

{
  "key_id": "<UUID v4>",
  "destroyed_at": "<ISO 8601>",
  "ack_count": 4,
  "pending_ack_count": 1
}

pending_ack_count > 0 is informational — it indicates nodes that were offline and will wipe upon reconnection. The key is considered destroyed immediately.


GET /api/v1/keys/:key_id

Retrieve metadata for a single disposable key.

Request: signed envelope in X-MPC-Request header (action: "get_key").

Response 200:

{
  "key_id": "<UUID v4>",
  "public_key": "<base64url>",
  "threshold_t": 3,
  "threshold_n": 5,
  "created_at": "<ISO 8601>",
  "state": "ACTIVE | DESTROYING | DESTROYED"
}

18.4 HTTP Status Codes

Code  Meaning
200   Success
201   Resource created
400   Malformed request
401   Signature verification failed
403   Root key attempted to sign directly; invalid sub key authorization
404   Key ID not found or not owned by caller
409   Key already in DESTROYING or DESTROYED state
429   Rate limit exceeded
500   Internal error
503   Insufficient online nodes

19. Request Authentication

19.1 Overview

Every API request MUST carry a signed request envelope. The system is authentication-by-signature — there are no sessions, API keys, or bearer tokens.

19.2 Envelope Construction

  1. Construct the envelope JSON object with all required fields.
  2. Serialize using RFC 8785 (JCS) canonical JSON.
  3. Sign the canonical bytes using the sub key's Ed25519 private key.
  4. Encode the signature as base64url.
  5. Construct the final request body: { "envelope": <envelope object>, "sig": "<base64url>" }.
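
Step 2 is the step clients most often get wrong. For envelopes like the ones in this spec — string and integer values, nested objects of the same shape — RFC 8785 canonicalization coincides with compact JSON with lexicographically sorted keys, which the sketch below exploits; a full JCS implementation is needed for arbitrary values (floats, exotic strings). The Ed25519 signing call of step 3 would come from a cryptography library and is omitted:

```python
import json

def canonicalize(obj: dict) -> bytes:
    """Approximate RFC 8785 (JCS) output for simple envelopes:
    UTF-8, sorted keys, no insignificant whitespace."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

envelope = {"version": "1", "action": "sign", "nonce": "abc",
            "timestamp": "2026-03-25T12:00:00Z"}
canonical = canonicalize(envelope)
# These bytes are what the sub key signs in step 3.
assert canonical.startswith(b'{"action":"sign"')
```

Because the gateway re-serializes and compares (check 2 in Section 19.3), any client whose canonicalization differs by even one byte will be rejected before signature verification is attempted.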

19.3 Server-Side Verification

The API Gateway MUST perform the following checks in order, rejecting on first failure:

  1. Structure: body is valid JSON with envelope and sig fields.
  2. Canonical form: re-serialize envelope with JCS; it MUST match the received bytes.
  3. Timestamp freshness: envelope.timestamp MUST be within ±5 minutes of server time.
  4. Nonce uniqueness: envelope.nonce MUST NOT have been seen in the last 10 minutes.
  5. Authorization token structure: authorization.token MUST be valid JSON with all required fields.
  6. Authorization token signature: verify authorization.token_sig using root_key_pub over the canonical token.
  7. Sub key binding: authorization.token.sub_key_pub MUST equal envelope.sub_key_pub.
  8. Root key not a signer: root_key_pub MUST NOT itself be a known sub key.
  9. Request signed by sub key: sig MUST be verifiable with sub_key_pub, not root_key_pub.
  10. Request signature: verify sig using sub_key_pub over the canonical envelope.

19.4 Nonce and Replay Protection

Nonces are 16 random bytes (base64url encoded, 22 characters). The Gateway maintains a nonce cache with TTL of 10 minutes. Combined with the 5-minute timestamp window, replay attacks are infeasible.
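
A minimal sketch of the nonce cache; `NonceCache` is a hypothetical name, and a production gateway would back it with a shared store rather than process memory:

```python
import time

class NonceCache:
    """In-memory nonce cache with a 10-minute TTL (Section 19.4)."""
    def __init__(self, ttl_seconds: float = 600.0):
        self.ttl = ttl_seconds
        self._seen = {}

    def check_and_store(self, nonce: str, now=None) -> bool:
        """Return True if the nonce is fresh; False if it is a replay."""
        now = time.time() if now is None else now
        # Evict entries older than the TTL.
        self._seen = {n: t for n, t in self._seen.items()
                      if now - t < self.ttl}
        if nonce in self._seen:
            return False
        self._seen[nonce] = now
        return True

cache = NonceCache()
assert cache.check_and_store("abc", now=0.0) is True
assert cache.check_and_store("abc", now=60.0) is False   # replay within TTL
assert cache.check_and_store("abc", now=700.0) is True   # TTL expired
```

The ±5-minute timestamp window is what makes the 10-minute TTL sufficient: a request old enough to have been evicted from the cache is already stale by the timestamp check.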


20. Account Model

20.1 Account Identifier

account_id = lowercase_hex(SHA-256(canonical_serialization(root_key_pub_bytes)))

A 64-character hex string. The raw public key is never stored — only its hash.
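
The derivation is a single hash; the sketch below treats the canonical serialization as the raw 32 public-key bytes (an assumption — the spec does not elaborate further), and the input bytes are placeholders:

```python
import hashlib

def account_id(root_key_pub: bytes) -> str:
    """account_id = lowercase hex of SHA-256 over the root public key bytes."""
    return hashlib.sha256(root_key_pub).hexdigest()

# Hypothetical key bytes; Ed25519 public keys are 32 bytes.
aid = account_id(bytes(32))
assert len(aid) == 64 and aid == aid.lower()
```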

20.2 Implicit Account Creation

Accounts are created implicitly on the first valid API request from a new root key. On first request:

  1. Gateway verifies the request per Section 19.3.
  2. Coordinator checks whether account_id exists.
  3. If not: Coordinator creates a new pseudo-account record { account_id, first_seen_at }.
  4. The root key public key itself is NOT stored.

20.3 Account Deletion

No explicit account deletion endpoint exists. Implementations MAY garbage-collect accounts with no active keys after a configurable retention period (recommended: 90 days).


21. Anti-Collusion Mechanisms

21.1 VRF-Based Group Selection

Described in Section 13.2. The VRF proof is published in the audit log, making biased selection detectable after the fact.

21.2 Opaque Node Handles

During DKG and signing, nodes are addressed by ephemeral session-scoped handles — random UUIDs assigned per job, destroyed after job completion. Nodes cannot identify each other across sessions.

21.3 Relayed Inter-Node Messages

All DKG and signing inter-node messages are relayed through the Coordinator. Nodes never establish direct connections. Share payloads are encrypted point-to-point; the Coordinator cannot decrypt them.

21.4 Load-Based Assignment Caps

No node may be assigned to more groups than the configured maximum concurrency limit.

21.5 Temporal Separation

Group assignments for consecutive requests from the same account use different VRF seeds, ensuring different group compositions where node pool size permits.


22. Observability and Uptime

22.1 Node Health Reports

Each node sends a HEALTH_REPORT to the Coordinator every 30 seconds:

{
  "node_id": "<opaque>",
  "timestamp": "<ISO 8601>",
  "active_groups": 3,
  "pending_jobs": 1,
  "cpu_percent": 12.4,
  "memory_mb": 256,
  "share_count": 47,
  "error_count_1m": 0,
  "latency_p99_ms": 18
}

22.2 Uptime Metrics

The Coordinator maintains per-node: last_seen, uptime_ratio_24h, uptime_ratio_7d, consecutive_failures, total_jobs_completed, total_jobs_failed.

22.3 Prometheus Metrics

The Coordinator exposes an internal /metrics endpoint:

mpc_nodes_online_total
mpc_nodes_degraded_total
mpc_nodes_offline_total
mpc_dkg_jobs_total{status="success|failure"}
mpc_sign_jobs_total{status="success|failure"}
mpc_job_duration_seconds{type="dkg|sign",quantile="0.5|0.95|0.99"}
mpc_active_keys_total
mpc_destroyed_keys_total

22.4 Audit Log

The Coordinator maintains an append-only, signed audit log. Each entry:

{
  "seq": 12345,
  "timestamp": "<ISO 8601>",
  "event_type": "<string>",
  "account_id": "<hex, if applicable>",
  "key_id": "<UUID, if applicable>",
  "details": {},
  "coordinator_sig": "<base64url signature over canonical JSON of all other fields>"
}

| Event Type | Logged When |
|---|---|
| NODE_CONNECTED | Node completes mTLS handshake |
| NODE_DISCONNECTED | Node goes offline |
| NODE_REVOKED | Node certificate revoked |
| KEY_CREATED | DKG completes successfully |
| KEY_CREATION_FAILED | DKG failed after all retries |
| KEY_SIGNED | Signing job completes |
| KEY_SIGNING_FAILED | Signing job failed |
| KEY_DESTROYED | All shares wiped |
| GROUP_FORMED | VRF selection published |
| ACCOUNT_CREATED | New account_id first seen |
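A minimal sketch of producing `coordinator_sig` over the canonical JSON of all other fields. HMAC-SHA256 stands in for the Coordinator's Ed25519 signature purely because the Python standard library lacks Ed25519; `sign_audit_entry` is illustrative, not normative:

```python
import base64
import hashlib
import hmac
import json

def sign_audit_entry(entry: dict, mac_key: bytes) -> dict:
    # Canonicalize every field except the signature itself. For ASCII
    # string/integer payloads, json.dumps with sorted keys and compact
    # separators matches RFC 8785 output.
    unsigned = {k: v for k, v in entry.items() if k != "coordinator_sig"}
    canonical = json.dumps(unsigned, sort_keys=True,
                           separators=(",", ":"), ensure_ascii=False)
    # Stand-in MAC; a real Coordinator signs with its Ed25519 key.
    tag = hmac.new(mac_key, canonical.encode("utf-8"), hashlib.sha256).digest()
    signed = dict(entry)
    signed["coordinator_sig"] = base64.urlsafe_b64encode(tag).rstrip(b"=").decode("ascii")
    return signed
```

Verification recomputes the same canonical form and compares signatures, which is why the signature field must be excluded from the canonicalized input.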

23. Fault Tolerance and Recovery

23.1 Signing with Fewer Than n Nodes

FROST requires only t signers, not all n. If at least t of the group's n nodes are online, signing proceeds normally. If fewer than t are online, signing fails with INSUFFICIENT_NODES (HTTP 503).
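The availability rule reduces to a one-line check returning the Section 26 error code; a sketch with an illustrative function name:

```python
def can_sign(online_in_group: int, t: int):
    # FROST needs any t of the n group members; return the Section 26
    # error code when the quorum cannot be met, None otherwise.
    return None if online_in_group >= t else "INSUFFICIENT_NODES"
```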

23.2 Node Failure During DKG

If a node fails mid-DKG, the job is marked FAILED. The Coordinator selects a fresh group and retries once. On second failure, key creation fails with HTTP 503.

23.3 Node Failure During Signing

If a selected signer drops after JOB_ASSIGN but before Round 1 completes, the Coordinator aborts and retries with a different set of t nodes from the same group (if available). If fewer than t nodes remain, the job fails with HTTP 503.

23.4 Coordinator Failure

The Coordinator is a single operational point of failure (not a security risk — it never holds key material). For high-availability deployments: deploy behind a load balancer with at least two instances; use a shared database for node registry, key metadata, and job queue. In-flight jobs are lost on Coordinator failure; clients SHOULD retry.

23.5 Offline Node Share Recovery

| Situation | Response |
|---|---|
| n > t nodes remain | Sufficient redundancy; signing continues |
| Exactly t nodes remain | At threshold; recommend key destruction and re-creation |
| Fewer than t nodes remain | Key permanently inaccessible; operators MUST be alerted |

The system SHOULD alert operators when any key drops below t + 1 available nodes.
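The availability table and the t + 1 alert rule can be sketched as follows (function and label names are illustrative, not normative):

```python
def share_availability(available_nodes: int, t: int) -> str:
    # Classify key availability per the table in Section 23.5.
    if available_nodes > t:
        return "ok"            # signing continues
    if available_nodes == t:
        return "at_threshold"  # recommend destroy + re-create
    return "lost"              # key permanently inaccessible

def should_alert(available_nodes: int, t: int) -> bool:
    # Alert operators once fewer than t + 1 nodes remain.
    return available_nodes < t + 1
```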


24. Data Minimisation and Privacy

| Data Item | Retained | Notes |
|---|---|---|
| Root key public key | No | Hash (account_id) only |
| Sub key public key | No | Not stored after request verification |
| Authorization token | No | Verified in-flight; discarded |
| Message content | No | Signed and returned; not stored |
| Signature | No | Returned to caller; not retained |
| Disposable key public key | Yes (while ACTIVE) | Required for key listing |
| Disposable key share | Yes (encrypted at rest) | Wiped on destruction |
| Account ID hash | Yes | Minimum identifier |
| Key creation/destruction timestamps | Yes | Audit log |
| Node identities | Yes | Required for operation; access-controlled |

The system MUST NOT store IP addresses, user-agent strings, or metadata that could enable tracking of individual users beyond the account hash.


25. Message Formats

25.1 RFC 8785 Canonicalization

All JSON objects that are signed MUST be serialized using RFC 8785 (JCS) before signing:

  • Keys sorted lexicographically (Unicode code point order)
  • No insignificant whitespace
  • Numbers serialized in shortest round-trip (ECMAScript) form, with no insignificant trailing zeros
  • Strings escaped per JSON spec
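For payloads limited to strings, integers, booleans, nulls, arrays, and objects, Python's `json.dumps` with sorted keys and compact separators produces RFC 8785-compatible output; a sketch (full JCS additionally pins down floating-point serialization, which `json.dumps` does not guarantee):

```python
import json

def canonicalize(obj) -> bytes:
    # Sorted keys + compact separators + UTF-8 output approximates
    # RFC 8785 (JCS) for payloads without floating-point numbers.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")
```

Signing and verification MUST both operate on this canonical byte string, never on the wire representation as received.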

25.2 Ed25519 Key Encoding

All Ed25519 public keys and signatures are encoded as base64url without padding (RFC 4648 §5). Raw lengths: public key 32 bytes (43 base64url chars); signature 64 bytes (86 base64url chars).
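A sketch of padless base64url encoding and the resulting lengths (helper names are illustrative):

```python
import base64

def b64u_encode(raw: bytes) -> str:
    # base64url without padding, per RFC 4648 §5.
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

def b64u_decode(s: str) -> bytes:
    # Re-add padding before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))
```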

25.3 Timestamps

All timestamps: ISO 8601, UTC, millisecond precision. Example: 2026-03-25T14:32:00.123Z.
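A sketch of emitting this exact format with Python's datetime (the helper name is illustrative):

```python
from datetime import datetime, timezone

def format_timestamp(dt: datetime) -> str:
    # ISO 8601, UTC, millisecond precision, trailing 'Z'.
    dt = dt.astimezone(timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}Z"
```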

25.4 Error Response Format

{
  "error": {
    "code": "INVALID_SIGNATURE",
    "message": "Request signature verification failed",
    "request_id": "<UUID v4>"
  }
}

26. Error Codes

| Code | HTTP | Meaning |
|---|---|---|
| INVALID_JSON | 400 | Request body is not valid JSON |
| NOT_CANONICAL | 400 | Envelope JSON is not RFC 8785 canonical |
| MISSING_FIELD | 400 | Required envelope field absent |
| EXPIRED_TIMESTAMP | 401 | Timestamp outside ±5 minute window |
| REPLAYED_NONCE | 401 | Nonce has been seen before |
| INVALID_SIGNATURE | 401 | Request signature does not verify |
| INVALID_AUTHORIZATION | 401 | Authorization token signature does not verify |
| SUB_KEY_MISMATCH | 401 | Request signer does not match token sub_key_pub |
| ROOT_KEY_SIGNING | 403 | Request was signed by a root key directly |
| KEY_NOT_FOUND | 404 | Key ID does not exist or belongs to another account |
| KEY_DESTROYED | 409 | Key has already been destroyed |
| KEY_BEING_DESTROYED | 409 | Destruction is in progress |
| INSUFFICIENT_NODES | 503 | Not enough online nodes to complete operation |
| DKG_FAILED | 503 | Key generation failed after retries |
| SIGNING_FAILED | 503 | Threshold signing failed |
| COORDINATOR_UNAVAILABLE | 503 | Coordinator temporarily unreachable |
| INTERNAL_ERROR | 500 | Unexpected internal error |
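The timestamp-window and nonce-replay checks map directly onto EXPIRED_TIMESTAMP and REPLAYED_NONCE. A sketch using an in-memory nonce set (a production deployment would use the Redis TTL cache suggested in Section 28):

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)
_seen_nonces = set()  # production: Redis with TTL covering the skew window

def check_envelope(timestamp: datetime, nonce: str, now=None):
    # Return the matching Section 26 error code, or None if checks pass.
    now = now or datetime.now(timezone.utc)
    if abs(now - timestamp) > MAX_SKEW:
        return "EXPIRED_TIMESTAMP"
    if nonce in _seen_nonces:
        return "REPLAYED_NONCE"
    _seen_nonces.add(nonce)
    return None
```

Checking the timestamp before recording the nonce keeps the nonce cache bounded: only nonces inside the ±5 minute window ever need to be remembered.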

27. Security Considerations

27.1 Private Key Never Assembled

The FROST/Pedersen DKG protocol mathematically guarantees that the disposable key's private scalar is never present in any single location. This is the fundamental security property of the system.

27.2 Root Key Isolation

The root key is an offline key. The system is designed so that the root key MUST NEVER interact with the network. Its sole function is to issue Authorization Tokens offline. Even if the MPC system is fully compromised, an attacker gains no ability to extract the root key.

27.3 Sub Key Compromise

If a sub key is compromised, the attacker can make API requests until detected. They cannot forge Authorization Tokens (those require the root key). Mitigation: the root key owner generates a new Authorization Token for a new sub key; the compromised sub key's tokens expire naturally (if expires_at is set).

27.4 Coordinator Compromise

A compromised Coordinator cannot obtain key material (it never holds shares or private keys). It could bias group selection (mitigated by VRF audit log) or deny service. It cannot forge user requests (signing requires user sub keys).

27.5 Timing and Side Channels

Implementations MUST use constant-time comparison for all signature verification and share operations.
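In Python terms, `hmac.compare_digest` provides the required constant-time comparison; a sketch:

```python
import hmac

def equal_ct(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest runs in time independent of the position of
    # the first differing byte, defeating byte-by-byte timing probes
    # against signature or MAC comparison.
    return hmac.compare_digest(a, b)
```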

27.6 Memory Safety

Implementations MUST securely zero memory holding key shares, nonces, and partial signatures after use.

27.7 PKI Security

Node certificate private keys MUST be stored in hardware security modules (HSMs) or TPMs where available. If software-only, keys MUST be encrypted at rest.


28. Implementation Guidance

28.1 Recommended Stack

Coordinator and Nodes: Rust is the recommended implementation language. Rationale: memory safety eliminates buffer-overflow and use-after-free vulnerability classes; a strong cryptography ecosystem (ed25519-dalek, frost-ed25519, rustls); excellent async networking (tokio, tokio-tungstenite).

Alternative: Go. Advantages: simpler concurrency model, fast startup. Disadvantage: GC makes secure memory zeroing harder.

API Gateway: Nginx or Envoy for TLS termination, plus a thin Rust/Go service for request validation.

Coordinator Database: PostgreSQL for node registry, key metadata, and job queue. Redis for nonce cache (TTL-based).

Audit Log: Append-only PostgreSQL table with Coordinator signatures, mirrored to immutable object storage (e.g., S3 with Object Lock) for tamper evidence.

28.2 FROST Library

Use the frost-ed25519 crate (Rust) or the Zcash Foundation's frost-core. These implement the full IETF draft and are under active security review.

28.3 Deployment Topology

Internet
  │
  └─► Load Balancer (TLS termination)
        │
        └─► API Gateway (request validation, auth)
              │
              └─► Coordinator (internal network only)
                    │  (mTLS WebSocket, private network)
             ┌──────┴──────┐
             │             │
          Node 1   ...   Node N   (VPC / private network)

Nodes MUST NOT be publicly accessible. They connect outbound to the Coordinator. Firewall rules MUST enforce this.


29. Conformance

An implementation is considered FPSF-MPC-001 conformant if it:

  • Represents the three-key taxonomy (Root, Sub, Disposable) as defined in Section 6
  • Implements Pedersen DKG as specified in Section 14
  • Implements FROST threshold signing as specified in Section 15
  • Enforces the group formation protocol including VRF selection as specified in Section 13
  • Implements the public REST API as specified in Section 18
  • Enforces the authentication model as specified in Section 19
  • Preserves all privacy and data minimisation requirements of Section 24
  • Applies all security requirements of Section 27

30. Future Work

Key refresh / proactive secret sharing. Periodically re-randomize shares without changing the public key, mitigating long-term share exposure.

Threshold re-configuration. Changing (t, n) for an existing key requires a full re-sharing protocol.

Multi-coordinator federation. Geographically distributed deployments requiring distributed job queue and consensus on group state.

Authorization Token revocation. A mechanism for the user to revoke specific sub keys without using the root key.

Audit log external verification. A Merkle tree published periodically for external verification of audit log integrity.


Normative References

| Reference | Description |
|---|---|
| RFC 2119 | Key words for use in RFCs to Indicate Requirement Levels |
| RFC 8785 | JSON Canonicalization Scheme (JCS) |
| RFC 4648 | The Base16, Base32, and Base64 Data Encodings |
| IETF draft-irtf-cfrg-frost | Flexible Round-Optimized Schnorr Threshold Signatures (FROST) |
| ISO 8601 | Date and time format |

FPSF-MPC-001 v1.0.0 · Draft · Fabric Payment Standards Foundation · Apache-2.0