Releases: hyperspaceai/agi
v1.4.0 — Mysticeti Consensus (Sui uncertified DAG, 9 blocks/sec)
Hyperspace switches from Narwhal/Bullshark to Sui's Mysticeti consensus. Integrates 5,282 lines of production Rust code from Sui via CGO FFI. 4.5x block-rate improvement with zero stalls. Ships libmysticeti_consensus.so in the lib/ directory.
v1.3.8 — Strict round-quorum (rolling-restart safety)
Eliminates the rolling-restart fork cascade documented in the
2026-04-09 postmortem under "follow-up: rolling restart safety". After
v1.3.7 deployed via auto-update, the chain repeatedly fragmented into
2-validator pairs every time a node restarted. This release fixes the
underlying cause.
Root cause
narwhal.advanceRound() had three "emergency recovery" code paths that
let a validator advance to the next consensus round with 0 or 1
parent certs after a wall-clock timeout:
```go
// Bootstrap (rounds 2-10):
if stuckDuration > 15*time.Second && prevRoundCerts == 0 {
    minCertsForRecovery = 0 // <-- dangerous
}

// Round 11+:
if stuckDuration > 60*time.Second && prevRoundCerts >= 1 {
    minCertsForRecovery = 1 // <-- dangerous
}
if stuckDuration > 120*time.Second && prevRoundCerts == 0 {
    minCertsForRecovery = 0 // <-- very dangerous
}
```

When a validator restarted (auto-update, manual restart, crash) and
hit the bootstrap delay alone — even briefly — these paths would let
it advance through rounds with empty parent sets, producing proposals
that the rest of the network could not link to. From the network's
view, the restarted validator looked like a divergent fork.
The fix
narwhal.go advanceRound(): removed all three emergency-recovery
paths. The new rule is one line:
```go
minCertsForRecovery := f + 1 // 2 of 4 for n=4
```

For n=4 / f=1 that's 2 certs (the BFT minimum for "real progress").
There is no fallback. If quorum cannot form, the validator stalls —
which is the correct BFT behaviour. Liveness without quorum is
impossible by definition; producing rounds in isolation isn't liveness,
it's silent forking.
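The removed fallbacks and the new rule condense to a small sketch; the helper names below are illustrative, not the actual narwhal.go symbols:

```go
package main

import "fmt"

// certQuorum returns the minimum number of previous-round certificates
// required to advance, with no timeout-based fallback. For f tolerated
// faults the release uses f+1 (2 of 4 for n=4).
func certQuorum(f int) int {
	return f + 1
}

// canAdvance reports whether the validator may enter the next round.
// If quorum cannot form it returns false and the caller simply waits:
// the node stalls instead of forking.
func canAdvance(prevRoundCerts, f int) bool {
	return prevRoundCerts >= certQuorum(f)
}

func main() {
	fmt.Println(canAdvance(0, 1)) // false: a restarted node alone cannot advance
	fmt.Println(canAdvance(2, 1)) // true: f+1 certs = real BFT progress
}
```

There is deliberately no branch that lowers the threshold after a wall-clock timeout, which is exactly what the three removed paths did.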
The bootstrap peer-wait in runRounds() is also tightened: max wait
60s → 120s, with a 500ms poll interval, so a validator coming up after
its peers gets more time to discover them before consensus starts.
Trade-off
Liveness is now strictly bounded by quorum. If 2 of 4 validators are
down at the same time, the chain stalls until at least one comes back
(for n=4 the quorum is 3; the local node counts as one, so at least 2
peers must be reachable). This is the correct BFT behaviour.
The previous "advance with 0 certs" path traded safety for liveness in
the wrong direction — it produced a chain that appeared to make
progress but was actually four parallel single-validator chains. This
release restores the safety guarantee.
Upgrade notes
Drop-in upgrade. No chain data wipe required. Auto-updater pulls
this within 5 minutes.
Important: after deploying, do a coordinated restart so that all 4
validators come up with fresh DAG state at roughly the same time. The
strict-quorum rule means a single validator can't bootstrap alone; if
3 of 4 validators come up before the 4th, they'll start producing
together, and the 4th will join cleanly when it's ready.
Verification on testnet
Will be verified on the public 4-validator Hyperspace A1 testnet
immediately after publication. See the post-deploy report in the
release thread.
v1.3.7 — Developer Experience (SDKs, devnet, typed errors)
Biggest DX release since genesis. No consensus changes; drop-in upgrade
from v1.3.6 with no chain data wipe required.
New: typed HSPACE-xxx error taxonomy
Every hspace_* RPC error now returns a structured error.data.code
field with a stable HSPACE-xxx code, not just a loose message string.
Clients can branch on the code instead of substring-matching — this
fixes the exact pain point that turned the 2026-04-09 fork recovery
into a multi-hour debugging session.
```json
{
  "jsonrpc": "2.0", "id": 1,
  "error": {
    "code": -32000,
    "message": "HSPACE-101: channel expired",
    "data": {
      "code": "HSPACE-101",
      "detail": "channel abc123... expired at wall-clock 1775749200"
    }
  }
}
```

Codes:

- HSPACE-100..106 — payment channel lifecycle errors
- HSPACE-200..203 — proof-carrying transaction errors
- HSPACE-300..301 — agent registry errors
- HSPACE-900..999 — invalid params / unsupported / internal
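A client-side sketch of branching on the stable code; the `rpcError` struct and `classify` helper are illustrative, only the JSON shape comes from the example above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// rpcError mirrors the typed error payload shown above.
type rpcError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
	Data    struct {
		Code   string `json:"code"`
		Detail string `json:"detail"`
	} `json:"data"`
}

// classify branches on error.data.code instead of substring-matching
// the human-readable message.
func classify(raw []byte) string {
	var e rpcError
	if err := json.Unmarshal(raw, &e); err != nil {
		return "unparseable"
	}
	switch {
	case strings.HasPrefix(e.Data.Code, "HSPACE-1"):
		return "payment-channel"
	case strings.HasPrefix(e.Data.Code, "HSPACE-2"):
		return "proof-carrying-tx"
	case strings.HasPrefix(e.Data.Code, "HSPACE-3"):
		return "agent-registry"
	default:
		return "protocol"
	}
}

func main() {
	raw := []byte(`{"code":-32000,"message":"HSPACE-101: channel expired","data":{"code":"HSPACE-101","detail":"channel expired"}}`)
	fmt.Println(classify(raw)) // payment-channel
}
```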
New: hyperspace devnet subcommand
One command to spin up a local multi-validator chain — the Hyperspace
equivalent of anvil / hardhat node.
```
hyperspace devnet
# 4 validators start on 127.0.0.1 ports 8545-8548
# chain ID 31337, deterministic keys, Ctrl-C to stop
```

Flags: --validators N, --datadir PATH, --chain-id N, --http-port N,
--p2p-port N, --reset. Deterministic keys mean every run produces the
same validator addresses — convenient for integration tests that want
to reference a known validator.
New: enhanced hyperspace status subcommand
Rich operator-facing node health report:
```
Hyperspace node status — 2026-04-09T23:08:33Z

Local node:
  http://64.227.23.54:8545 block=4189 peers=16 chain=808080
  head hash: 0xa4d7ec0bf69eec98...
  consensus: NarwhalTusk (4 validators)

Drift:
  range: 0 blocks (min 4189, max 4189)

Consensus health:
  block: 4189
  status: CONSISTENT
```
Supports --json for machine parsing, --peer URL to compare arbitrary
validators, and HSPACE_STATUS_PEERS env var for scripting. The
cross-validator hash consistency check is the same one that would have
caught the 2026-04-09 four-way fork on day one.
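The consistency check reduces to comparing the head hash each validator reports at the same height; a simplified sketch (how the hashes are collected, e.g. from the --json output, is left to operator tooling):

```go
package main

import "fmt"

// checkConsistency returns true only when every validator reports the
// same head hash for the same block height; otherwise it flags a fork.
func checkConsistency(heads map[string]string) (bool, string) {
	var first string
	for _, h := range heads {
		if first == "" {
			first = h
		} else if h != first {
			return false, "FORKED"
		}
	}
	return true, "CONSISTENT"
}

func main() {
	heads := map[string]string{
		"val-1": "0xa4d7ec0b", "val-2": "0xa4d7ec0b",
		"val-3": "0xa4d7ec0b", "val-4": "0xa4d7ec0b",
	}
	ok, status := checkConsistency(heads)
	fmt.Println(ok, status) // true CONSISTENT
}
```

A single diverging hash at a shared height is exactly the signature of the 2026-04-09 incident, which is why the check compares hashes rather than block numbers.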
New: official SDKs
First-class TypeScript and Python SDKs in sdk/hyperspace-js/ and sdk/hyperspace-py/:
```typescript
// TypeScript
import { HyperspaceClient, PaymentChannel } from '@hyperspace/sdk'

const client = new HyperspaceClient({ network: 'testnet' })
const channel = await PaymentChannel.open(client, { sender, recipient, deposit: 50_000_000 })
for (let i = 0; i < 1000; i++) await channel.pay(100)
await channel.close()
```

```python
# Python
from hyperspace import HyperspaceClient, PaymentChannel

client = HyperspaceClient(network="testnet")
channel = PaymentChannel.open(client, sender=sender, recipient=recipient, deposit=50_000_000)
for i in range(1000):
    channel.pay(100)
channel.close()
```

Both SDKs:

- Wrap every hspace_* method
- Parse typed HSPACE-xxx errors
- Auto-reopen channels on HSPACE-100 / 101 / 104
- Expose client.consensusHealth() for cross-validator fork detection
- Include MetaMask integration helpers (JS only)
New: example contracts
Three opinionated reference contracts at contracts/:
- SkillRegistry.sol — discoverable catalog of skills an agent offers
- TaskEscrow2.sol — post/claim/deliver escrow with 2% protocol fee
- ReputationOracle.sol — composite score + tier gating for TaskEscrow2
All three follow the existing Governable pattern and ship with a
scripts/deploy-examples.js deploy script.
New: Docker image + Grafana stack
docker/fullnode/Dockerfile — multi-stage production build for
hyperspaceai/node:latest. docker/monitoring/ brings up Prometheus +
Grafana with a 9-panel dashboard including a cross-validator
hash-consistency widget. Full docs at docker/README.md.
New: developer docs
- docs/DEVELOPER_QUICKSTART.md — zero to "deployed contract + open payment channel" in 30 minutes
- docs/TESTNET_MAINNET_PARITY.md — commitment contract tracking what's frozen vs. what will change between testnet and mainnet
Upgrade notes
Drop-in upgrade. No chain data wipe required. Auto-updater will
pick this up automatically; operators manually upgrading can swap the
binary in place and restart:
```shell
curl -L https://github.com/hyperspaceai/agi/releases/download/chain-v1.3.7/hyperspace-agentic-blockchain-linux-amd64.tar.gz -o hs.tar.gz
tar xzf hs.tar.gz
sudo cp hyperspace-agentic-blockchain-linux-amd64/hyperspace-agentic-blockchain /usr/local/bin/
sudo cp hyperspace-agentic-blockchain-linux-amd64/lib/* /usr/local/bin/lib/
sudo systemctl restart hyperspace-agentic-blockchain
```

Existing clients that substring-match error messages will keep working
(the SDKs' parseRpcError has a legacy fallback branch), but should
migrate to HyperspaceErrorCode comparisons over the next release.
v1.3.6 — Remove catch-up wave skipping (determinism fix pt.4)
The final piece of the cross-validator determinism pipeline: the
isCatchingUp code paths in bullshark.tryCommitLocked() that let each
validator unilaterally skip waves once its local DAG advanced 8+ rounds
past the leader round.
Because the DAG advances asynchronously across validators, different
nodes entered catch-up mode at different rounds and skipped different
waves, producing divergent committed sequences — exactly the same class
of fork the v1.3.1–v1.3.5 fixes were addressing, just via a path I
missed in earlier iterations.
Fixed paths
bullshark.go — three former isCatchingUp branches all removed:
- leaderCert == nil + catching up → unconditional skip
  → now falls through to checkLeaderTimeoutLocked (DAG-based view change, identical across validators).
- support >= QuorumSize + catching up → commit bs.committedRound WITHOUT emitting a CommitDecision, effectively dropping the corresponding block on catching-up validators
  → now emits the same commit decision whether catching up or not.
- support < QuorumSize + catching up → unconditional skip
  → now falls through to checkInsufficientSupportTimeoutLocked (DAG-based, identical across validators).
Testnet verification
Coordinated restart on the 4-validator Digital Ocean testnet. Chain
expected to remain in full consensus indefinitely across all validators.
Upgrade notes
Chain data MUST be wiped (consensus semantics change — existing chains
may have committed waves under the old catch-up logic that don't replay
cleanly under the new one).
v1.3.5 — Deterministic epoch-boundary gas limit
Follow-up from v1.3.3, which temporarily froze the block gas limit at the
parent's value because nt.dynamicScaler.GetCurrentGasLimit() was
producing divergent headers across validators.
Gas limit can now grow and shrink under load — but only at epoch boundaries,
and only via a pure function of parent header fields (block number, parent
gas limit, parent gas used). The computation is identical on every validator
because every validator shares the same parent.
Rule
```
blockNumber % EpochLength == 0 (epoch boundary):
    if parent.GasUsed * 2 > parent.GasLimit: raise by parent.GasLimit / 1024
    if parent.GasUsed * 2 < parent.GasLimit: lower by parent.GasLimit / 1024
    (clamped to [MinBlockGas, MaxBlockGas])
otherwise:
    inherit parent.GasLimit verbatim
```
This is EIP-1559-style elasticity, restricted to epoch boundaries
(default: every 100 blocks). Under sustained load the gas limit climbs
~0.1%/epoch until it hits the ceiling; under light load it drops similarly
until the floor.
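Because the rule is a pure function of parent header fields, it can be sketched in a few lines. The EpochLength default of 100 comes from the notes; the Min/MaxBlockGas clamp values below are illustrative, as the real constants aren't stated:

```go
package main

import "fmt"

const (
	epochLength = 100        // default from the release notes
	minBlockGas = 5_000_000  // illustrative clamp values; the real
	maxBlockGas = 60_000_000 // MinBlockGas/MaxBlockGas are not stated
)

// computeDeterministicGasLimit applies the epoch-boundary rule using only
// parent header fields, so every validator derives the identical value.
func computeDeterministicGasLimit(blockNumber, parentGasLimit, parentGasUsed uint64) uint64 {
	if blockNumber%epochLength != 0 {
		return parentGasLimit // non-epoch blocks inherit verbatim
	}
	step := parentGasLimit / 1024
	limit := parentGasLimit
	if parentGasUsed*2 > parentGasLimit {
		limit += step // sustained load: raise ~0.1%
	} else if parentGasUsed*2 < parentGasLimit {
		limit -= step // light load: lower ~0.1%
	}
	if limit < minBlockGas {
		limit = minBlockGas
	}
	if limit > maxBlockGas {
		limit = maxBlockGas
	}
	return limit
}

func main() {
	fmt.Println(computeDeterministicGasLimit(101, 30_000_000, 29_000_000)) // 30000000 (non-epoch: inherit)
	fmt.Println(computeDeterministicGasLimit(200, 30_000_000, 20_000_000)) // 30029296 (epoch: raised by 1/1024)
}
```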
Regression tests
- TestComputeDeterministicGasLimit_NonEpochInheritsParent
- TestComputeDeterministicGasLimit_EpochBoundaryRaises
- TestComputeDeterministicGasLimit_EpochBoundaryLowers
- TestComputeDeterministicGasLimit_Clamps
- TestComputeDeterministicGasLimit_DeterministicAcrossNodes
- TestComputeDeterministicGasLimit_NilParent
Upgrade notes
Drop-in upgrade. No chain data wipe needed. The new formula uses only
parent fields so existing chains continue seamlessly.
v1.3.4 — Auto-updater downgrade protection + bootnode sync fix
Auto-updater: refuse to downgrade
The auto-updater previously compared versions with latest != current and
happily downgraded nodes when the remote "latest" release was older than
the locally installed binary. Observed during the 2026-04-09 four-way fork
incident: a brand-new v1.3.1 hotfix was silently downgraded to v1.3.0
because v1.3.0 was still marked latest on GitHub at that moment,
overriding the manual hotfix.
compareSemver() is now used to only apply updates when
latest > current. If a newer local binary sees an older remote latest
(stale metadata, rollback, mis-tagged release), the updater logs:
Remote 'latest' release is OLDER than currently installed — refusing to downgrade
and leaves the binary alone. Regression test added:
TestCompareSemver_Regression.
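A sketch of the ordering check; the real compareSemver may parse differently, and pre-release tags are not handled here:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compareSemver returns -1, 0, or +1 for a < b, a == b, a > b on simple
// "vX.Y.Z" tags, comparing each numeric component in order.
func compareSemver(a, b string) int {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < 3; i++ {
		var na, nb int
		if i < len(pa) {
			na, _ = strconv.Atoi(pa[i])
		}
		if i < len(pb) {
			nb, _ = strconv.Atoi(pb[i])
		}
		if na != nb {
			if na < nb {
				return -1
			}
			return 1
		}
	}
	return 0
}

func main() {
	latest, current := "v1.3.0", "v1.3.1" // the 2026-04-09 scenario: stale remote "latest"
	if compareSemver(latest, current) <= 0 {
		fmt.Println("refusing to downgrade") // updates apply only when latest > current
	}
}
```

The key change is the strict inequality: `!=` treated any difference as an update, while `>` leaves a newer local binary alone.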
Bootnode sync fix
Bootnodes stayed permanently 100-300 blocks behind validators because
handleBlockBodies deduplicated by gossip-seen-set instead of
chain-membership. The race:
1. Bootnode receives a fresh block via gossip (handleNewBlock).
2. handleNewBlock marks the block as seen BEFORE checking parent existence.
3. Parent is missing → triggers sync → returns without inserting.
4. Sync response arrives later with the parent + the original block in a batch of bodies.
5. handleBlockBodies checks hasSeen(block.Hash) — which returns true because step 2 marked it — and silently drops the entire batch. The bootnode stays stuck forever.
Fix: handleBlockBodies now uses blockchain.HasBlock(block.Hash) —
authoritative chain membership — for the dedup check. The seen set
remains for its original purpose (gossip dedup in handleNewBlock).
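The fixed dedup reduces to consulting authoritative chain membership rather than the gossip-seen set; a simplified sketch with illustrative types:

```go
package main

import "fmt"

// chain abstracts the authoritative block store; HasBlock is the check
// the fix switches to.
type chain struct{ blocks map[string]bool }

func (c *chain) HasBlock(hash string) bool { return c.blocks[hash] }

// handleBlockBodies sketches the fixed path: a block in a sync batch is
// skipped only when it is already IN the chain. The gossip-seen set is
// deliberately not consulted here, so a previously-gossiped (but never
// inserted) block still gets inserted.
func handleBlockBodies(c *chain, batch []string) int {
	inserted := 0
	for _, hash := range batch {
		if c.HasBlock(hash) { // authoritative membership, not hasSeen(hash)
			continue
		}
		c.blocks[hash] = true
		inserted++
	}
	return inserted
}

func main() {
	c := &chain{blocks: map[string]bool{}}
	// "block-B" was gossip-seen earlier but never inserted (missing parent);
	// the sync batch now delivers parent and child together.
	n := handleBlockBodies(c, []string{"block-A", "block-B"})
	fmt.Println(n) // 2: the previously "seen" block is no longer dropped
}
```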
Testnet verification
- Consensus stable across all 4 validators, blocks in sync up to current tip
- Payment economy running
- Bootnode sync lag expected to close within ~1 minute of deploy
Upgrade notes
Drop-in upgrade. No chain data wipe needed.
v1.3.3 — DAG-based view change + deterministic gas limit
Follows up on v1.3.1 (four-way fork fix) by eliminating the last two sources
of cross-validator non-determinism observed on the 4-validator testnet:
1. View change triggers are now DAG-based, not wall-clock based.
Both checkLeaderTimeoutLocked and checkInsufficientSupportTimeoutLocked
now fire when the local DAG has advanced past the leader round by a fixed
round count (8 and 12 rounds respectively), not after N milliseconds of
elapsed wall clock. Because the DAG converges via gossip, all validators
reach the decision point at the same logical moment. Previously, validator
A's timer could fire for wave W while B was still committing W normally,
giving them divergent committed sequences.

2. Block gas limit is now inherited verbatim from the parent block.
Previously the block header used nt.dynamicScaler.GetCurrentGasLimit(),
which adjusts based on locally-observed TPS and therefore differs per
validator. Observed at block 254 on testnet: val-1 produced
gasLimit=30,180,000 while val-4 produced 30,000,000 for the same commit.
Every other header field matched; only gasLimit differed. Now every
validator uses parentBlock.Header.GasLimit, which is identical across
nodes. Dynamic scaling still tracks its internal target for metering, but
header-visible changes need a deterministic epoch mechanism (future work).
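The DAG-based trigger reduces to a round-count comparison; a minimal sketch (the 8/12-round thresholds come from the notes, while the function name is illustrative):

```go
package main

import "fmt"

const (
	leaderTimeoutRounds       = 8  // rounds past leader before view change
	insufficientSupportRounds = 12 // rounds past leader before support timeout
)

// shouldViewChange fires on DAG progress, not wall-clock time: once the
// local DAG has advanced a fixed number of rounds past the leader round,
// every validator reaches the same decision at the same logical moment,
// because the DAG itself converges via gossip.
func shouldViewChange(currentRound, leaderRound uint64) bool {
	return currentRound >= leaderRound+leaderTimeoutRounds
}

func main() {
	fmt.Println(shouldViewChange(10, 5)) // false: only 5 rounds past the leader
	fmt.Println(shouldViewChange(13, 5)) // true: 8 rounds past the leader
}
```

Since the trigger depends only on DAG state that all validators eventually share, no two validators can disagree about whether a view change happened, which is what the wall-clock timers failed to guarantee.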
Testnet verification
All 4 validators on the Digital Ocean testnet now produce identical block
hashes past block 254 (previous divergence point) — the chain stays in
consensus indefinitely with the full determinism pipeline in place.
Upgrade notes
Chain data MUST be wiped on upgrade.
v1.3.24 — Tuned grace period (QuorumSize+1, 200ms)
Grace period now waits for QuorumSize+1 certs (not all), 200ms (not 500ms). Prevents blocking on disconnected validators while still giving cross-region nodes time to submit.
v1.3.23 — Cross-region grace period for Bullshark support
500ms grace period after quorum lets cross-region validators (SFO, AMS) submit certificates before round advancement. Fixes insufficient Bullshark support stalls on geo-distributed networks.
v1.3.22 — Fix 4-way fork from async execution
Reverts the async executionLoop, which caused a determinism bug. Keeps backpressure, scoring, certified timeouts, and checkpoint P2P.