System Architecture
Why Hyperauth is built the way it is — the choices behind the component pipeline, the three-database model, and the decision to run cryptography inside SQLite.
Hyperauth is an identity system that wants to satisfy two properties that are almost always in tension: it should work when the user is offline, and it should be verifiable by parties who have never met the user. Most identity systems satisfy one of these by sacrificing the other. A server-side database of credentials is trivially verifiable but fails when the network is unavailable or when you want to stop trusting the server. A purely local secret works offline but tells strangers nothing. The architecture of Hyperauth is largely a series of design decisions that try to hold both properties simultaneously.
The Shape of the System
The component pipeline runs left to right across a trust gradient. At the far left sits the Go WASM enclave — the only place where key material ever exists in assembled form. Moving rightward, the TypeScript SDK bridges the enclave's raw exports into promises the rest of the application can consume. React hooks sit above the SDK, turning the multi-step registration and signing operations into observable state machines. The Vault Cloudflare Worker occupies the edge layer: it holds no key material at all, acting instead as a coordinator, session store, and proxy. At the far right sits the Indexer, a Neon Postgres service that mirrors on-chain events for fast off-chain queries.
Nothing in this pipeline is accidental. The Go enclave runs as WASM precisely because the WASM sandbox provides a memory boundary the host JavaScript runtime cannot reach across. The TypeScript SDK does not re-implement cryptography; it routes JSON-serialized inputs into the enclave and routes the outputs back out. The Vault Worker runs at the Cloudflare edge, near the user, because it never needs to touch raw credentials and therefore does not need the security perimeter of a hardened origin server. The Indexer exists because Ethereum does not offer efficient reverse lookups — you cannot ask "which DIDs were registered today" of the chain directly; you can only ask about specific addresses, so off-chain event mirroring is not a shortcut, it is a necessity.
The Enclave and Its Exports
The enclave exposes seventeen functions via //go:wasmexport, but thirteen are the meaningful ones: generate, sign, verify, derive_address, mint_ucan, parse_webauthn, load, lock, unlock, status, export_vault, import_vault, and sync_init. This is not a large API surface. Each function takes JSON in and returns JSON out through the Extism plugin development kit (github.com/extism/go-pdk), which standardizes how the host passes memory into and out of the WASM guest.
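The JSON-in/JSON-out convention can be sketched as a dispatch table. This is illustrative Python rather than the actual Go enclave, and the generate payload shown is a placeholder, not the real schema:

```python
import json

# Illustrative sketch of the enclave's calling convention. The real enclave
# is Go compiled to WASM behind the Extism PDK; the generate() body and its
# payload shape here are placeholders.

def generate(params: dict) -> dict:
    return {"enclave_id": "enc-001", "public_key": "0x(placeholder)"}

EXPORTS = {"generate": generate}

def call_export(name: str, payload: bytes) -> bytes:
    """Every export takes one JSON document in and returns one JSON document out."""
    params = json.loads(payload or b"{}")
    result = EXPORTS[name](params)
    return json.dumps(result).encode()
```

The host never calls into Go types directly; it only moves JSON bytes across the sandbox boundary, which is what keeps the SDK wrapper thin.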
The most important export is generate. It calls stateless.MPCGenerate, which produces an enclave identifier, a public key, and two shares of the private key. The key is never held whole; from the moment of creation it exists only as val_share and user_share — two values that together reconstruct the scalar but individually reveal nothing. The sign export accepts encrypted shares, reassembles the key inside the sandbox for the duration of the signing operation, and then discards it. The key is never persisted in whole form.
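A minimal sketch of the two-share construction, assuming additive sharing modulo the curve group order; the source does not name the exact scheme, so this is one standard way to realize "together reconstruct, individually reveal nothing":

```python
import secrets

# Additive two-party sharing over the secp256k1 group order (an assumption;
# the real MPC scheme may differ).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def split(secret_scalar: int) -> tuple[int, int]:
    """Produce val_share and user_share; val_share is uniformly random,
    so either share alone is statistically independent of the key."""
    val_share = secrets.randbelow(N)
    user_share = (secret_scalar - val_share) % N
    return val_share, user_share

def reassemble(val_share: int, user_share: int) -> int:
    """Reconstruct the scalar transiently, as sign() does inside the sandbox."""
    return (val_share + user_share) % N
```

Because val_share is drawn uniformly at random, each share on its own is indistinguishable from noise; only the sum carries information.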
The lock and unlock exports exist because the enclave maintains a small amount of in-memory state (an initialized flag, a locked flag, a DID string, a last-activity timestamp) managed through a state package. Locking calls state.Clear(), which wipes that in-memory state, and then sets the locked flag. Any subsequent call to exec, query, export_vault, import_vault, or the sync functions will immediately return an error without touching any data. This is the software analog of a locked safe: the contents may exist elsewhere, but this instance refuses to operate.
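The lock semantics can be sketched as a small state machine; the field and method names mirror the description above but are assumptions, not the real state package:

```python
# Sketch of the enclave's in-memory state and lock behavior (names assumed).
class EnclaveState:
    def __init__(self):
        self.initialized = False
        self.did = None
        self.last_activity = None
        self.locked = False

    def clear(self):
        # state.Clear(): wipe all in-memory state
        self.initialized = False
        self.did = None
        self.last_activity = None

    def lock(self):
        self.clear()
        self.locked = True

    def guard(self, op: str):
        # exec, query, export_vault, etc. check this before touching any data
        if self.locked:
            raise RuntimeError(f"enclave locked: refusing {op}")
```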
Cryptography as SQL Functions
The most unusual decision in the architecture is that cryptographic operations — key generation, signing, verification, and share rotation — are registered as custom SQLite functions rather than as ordinary Go functions. The keybase package calls RegisterMPCFunctions at initialization, which wires four callbacks — mpc_generate, mpc_sign, mpc_verify, and mpc_refresh — into the SQLite connection using conn.CreateFunction.
The reason this works, and the reason it is safe in this context, is the threading model of WASM compiled under wasip1. WASM with the wasip1 target is single-threaded: there are no goroutines, no concurrent requests, and no races between the SQLite callback and the rest of the keybase code, so the Go mutex that would ordinarily protect shared state can be skipped. The code comment in functions.go is explicit about this: callbacks must not acquire kb.mu, because a deadlock is possible if the caller already holds it, and that risk is only acceptable because the single-threaded guarantee eliminates the class of races that would otherwise make this pattern dangerous.
The practical benefit of this design is that a DID operation expressed as SQL — something like INSERT INTO accounts SELECT mpc_generate() — becomes atomic from the database's perspective. The cryptographic operation and the state mutation happen inside the same transaction. You cannot get a generated key that fails to be recorded, or a recorded key that was never generated. The SQL transaction and the crypto operation succeed or fail together.
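The pattern is easy to demonstrate with Python's stdlib sqlite3, whose create_function plays the role of conn.CreateFunction in the Go driver; the mpc_generate stand-in below is a placeholder, not the real MPC keygen:

```python
import sqlite3, secrets

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (pubkey TEXT NOT NULL)")

def mpc_generate() -> str:
    # Stand-in for the real keygen callback; returns a hex "public key".
    return secrets.token_hex(32)

# Analogous to conn.CreateFunction in the Go SQLite driver.
conn.create_function("mpc_generate", 0, mpc_generate)

# One transaction: the keygen and the row insert commit or roll back together.
with conn:
    conn.execute("INSERT INTO accounts SELECT mpc_generate()")

count, = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()
```

If anything inside the with-block fails, the transaction rolls back and no half-recorded key survives, which is exactly the atomicity the paragraph above describes.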
The mpc_refresh function deserves special mention because it is the mechanism for share rotation. It reconstructs the private key from the existing shares, splits it again into two new random shares, and updates the database row in place. The public key and all derived addresses remain identical after a refresh — nothing visible to the outside world changes — but the share values that could have been stolen or compromised are replaced. This is proactive rotation of stored key material: a share captured before a refresh cannot be combined with one captured after it, and the user never has to re-register.
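A sketch of the refresh, assuming additive shares and using a toy discrete-log "public key" (the Curve25519 prime with generator 2, not Hyperauth's actual curve) to show that the visible key survives rotation:

```python
import secrets

# Toy discrete-log setup: P and G are illustrative, and shares are additive
# modulo P - 1. The real scheme operates over an elliptic-curve group.
P = 2**255 - 19
G = 2

def pubkey(secret: int) -> int:
    return pow(G, secret, P)

def refresh(val_share: int, user_share: int) -> tuple[int, int]:
    """mpc_refresh: reassemble, re-split with fresh randomness, discard."""
    secret = (val_share + user_share) % (P - 1)
    new_val = secrets.randbelow(P - 1)
    new_user = (secret - new_val) % (P - 1)
    return new_val, new_user
```

Both share values change, but their sum (and therefore the derived public key) does not, so nothing on-chain or in any DID document needs to be updated.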
Three Databases, Three Purposes
The system uses three database systems, and the division of responsibility among them explains much of the trust architecture.
The enclave SQLite is the client-side encrypted database. It runs entirely inside the WASM sandbox. It holds the key material (as shares), the DID, verification methods, linked accounts, and credentials. Nothing in this database is ever transmitted to a server in decrypted form. When the vault is exported, the exported bytes are the encrypted database itself — a blob that is only meaningful to someone who can unlock the enclave.
The Portal D1 is a Cloudflare D1 (SQLite-compatible) database living at the edge. The Vault Worker reads from and writes to it for globally-shared persistent data: the registered_dids table that maps credential IDs to on-chain DIDs, the verified_identifiers table that records attestation results, and the reserved_identifiers table that prevents certain handles from being claimed. D1 does not hold key material. It holds the binding between a passkey credential ID, a human-readable identifier, and an on-chain DID — the minimal metadata needed to let a returning user authenticate.
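The three tables can be sketched as a schema; the column names are assumptions inferred from the purposes described, not the real D1 migrations:

```python
import sqlite3

# Hypothetical D1 schema sketch (D1 is SQLite-compatible). Note what is
# absent: no share values, no private keys — only bindings.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE registered_dids (
    credential_id TEXT PRIMARY KEY,  -- passkey credential ID
    identifier    TEXT NOT NULL,     -- human-readable handle
    did           TEXT NOT NULL      -- on-chain DID
);
CREATE TABLE verified_identifiers (
    identifier  TEXT PRIMARY KEY,
    attested_at INTEGER NOT NULL     -- unix timestamp of the attestation
);
CREATE TABLE reserved_identifiers (
    identifier TEXT PRIMARY KEY      -- handles that may not be claimed
);
""")

# The binding D1 actually stores: credential -> handle -> DID.
conn.execute("INSERT INTO registered_dids VALUES (?, ?, ?)",
             ("cred-abc", "alice", "did:hyper:0x1234"))
```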
The Indexer Postgres is the event-sourced view of the blockchain. It mirrors events emitted by DIDRegistry — DIDRegistered, DIDUpdated, DIDDeactivated, AliasRegistered — into queryable rows. The Vault Worker proxies queries to the Indexer through /api/indexer/, applying an allowlist of permitted paths. The Indexer exists at a different trust level than D1: D1 is a write-authoritative record of what the portal knows; the Indexer is a read-optimized view of what the chain knows. When the two disagree, the chain is correct.
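The "when the two disagree, the chain is correct" rule can be made concrete with a tiny mirroring sketch; SQLite stands in for Postgres here, and the did_state schema and block-number conflict policy are assumptions:

```python
import sqlite3

# Read-optimized view of DID lifecycle events (AliasRegistered omitted).
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE did_state (
    did          TEXT PRIMARY KEY,
    status       TEXT NOT NULL,
    block_number INTEGER NOT NULL
)""")

STATUS = {"DIDRegistered": "active",
          "DIDUpdated": "active",
          "DIDDeactivated": "deactivated"}

def apply_event(name: str, did: str, block: int) -> None:
    # The chain is authoritative: only a later (or equal) block may overwrite,
    # so replaying events out of order converges on the chain's state.
    conn.execute("""
        INSERT INTO did_state (did, status, block_number) VALUES (?, ?, ?)
        ON CONFLICT (did) DO UPDATE SET
            status = excluded.status,
            block_number = excluded.block_number
        WHERE excluded.block_number >= did_state.block_number
    """, (did, STATUS[name], block))

apply_event("DIDRegistered", "did:hyper:0xabc", 10)
apply_event("DIDDeactivated", "did:hyper:0xabc", 12)
apply_event("DIDUpdated", "did:hyper:0xabc", 11)  # stale: arrives out of order
```

The stale block-11 update is discarded, which is the operational meaning of the Indexer being a view rather than a source of truth.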
Why This Builds in a Specific Order
The build dependency graph runs from bottom to top: contracts first, then the enclave, then the SDK, then the React hooks, then the portal and vault applications. Contracts must be compiled before anything else because the ABIs they emit are imported by the TypeScript SDK. The enclave must be compiled to WASM before the SDK can load it, because the SDK's EnclaveClient constructor references the WASM binary path. The React hooks depend on the SDK types. The applications depend on the hooks.
This order matters not just for builds but for understanding where changes propagate. A change to a contract ABI must flow through the SDK, through the hooks, and through any UI that renders the result of a contract call. A change to an enclave export must be reflected in the TypeScript wrapper that calls it. The build order makes the dependency graph visible and enforceable; breaking it produces immediate compilation errors rather than subtle runtime failures discovered in production.
Offline-First, Trust-Minimized, Edge-Native
The three design goals are worth naming directly because they explain tradeoffs that might otherwise seem like complications.
Offline-first means the enclave can generate a DID, produce signatures, and mint UCANs without any network access. The network is required only when the user wants to anchor their DID on-chain, sponsor a transaction, or synchronize across devices. For most cryptographic operations, the network is irrelevant.
Trust-minimized means the Vault Worker does not need to be trusted with secrets. It can be compromised, inspected, or replaced without exposing the user's private key, because the private key never reaches it. The worst a compromised Vault Worker can do is refuse to store session metadata or corrupt the registered DID mapping — neither of which affects the user's ability to sign transactions with the key they already hold.
Edge-native means the Vault Worker runs at Cloudflare's edge, co-located with users around the world, without any central origin server. The Durable Object design means each identifier gets its own isolated instance with its own SQLite storage, enabling per-user rate limiting, session management, and verification code tracking without shared state contention. The enclave WASM binary is served from R2 object storage with aggressive cache headers, meaning the cryptographic module loads from a cache near the user rather than from a distant origin.