Releases: ArweaveTeam/arweave
Release 2.9.4.1
This release fixes a bug in the mining logic that would cause `replica.2.9` hashrate to drop to zero at block height 1642850. We strongly recommend all miners upgrade to this release as soon as possible - block height 1642850 is estimated to arrive at roughly 11:30 AM UTC on April 4.
If you are not mining, you do not need to upgrade to this release.
This release builds incrementally on the 2.9.4 release and does not include any changes from the 2.9.5-alpha1 release.
Full Changelog: N.2.9.4...N.2.9.4.1
Release 2.9.5-alpha2
This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.
Changes
- Apply the 2.9.4.1 patch to the 2.9.5 branch. More info on Discord.
- Optimization to speed up the collection of peer intervals when syncing. This can improve syncing performance in some situations. Code changes.
- Fix a bug which could cause syncing to occasionally stall out. Code changes
- Bug fixes to address `chunk_not_found` and `sub_chunk_mismatch` errors. Code changes
- Add support for DNS pools (multiple IPs behind a single DNS address). Code changes
- Publish some more protocol values as metrics. Code changes
- Optimize the shutdown process. This should help with, but not fully address, the slow node shutdown issues. Code changes
- Add webhooks for the entire mining solution lifecycle. New `solution` webhook added with multiple states: `solution_rejected`, `solution_stale`, `solution_partial`, `solution_orphaned`, `solution_accepted`, and `solution_confirmed`. Code changes
- Add metrics to allow tracking mining solutions: `mining_solution_failure`, `mining_solution_success`, and `mining_solution_total` (see the example below). Code changes
- Fix a bug where a VDF client might get pinned to a slow or stalled VDF server. Code changes
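To sanity-check the new counters after upgrading, you can scrape the node's Prometheus metrics endpoint. A minimal sketch, assuming the node is running locally on the default HTTP port (1984) and that `curl` and `grep` are available:

```bash
# Query the node's metrics endpoint and filter for the new mining solution counters
curl -s http://localhost:1984/metrics | grep mining_solution
```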
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- BerryCZ
- bigbang
- BloodHunter
- Butcher_
- dlmx
- doesn't stay up late
- edzo
- Iba Shinu
- JF
- lawso2517
- MaSTeRMinD
- MCB
- qq87237850
- Qwinn
- RedMOoN
- smash
- sumimi
- T777
- Thaseus
- Vidiot
- Wednesday
Release 2.9.5-alpha1
This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.
`verify` Tool Improvements
This release contains several improvements to the `verify` tool. Several miners have reported block failures due to invalid or missing chunks. The hope is that the `verify` tool improvements in this release will either allow those errors to be healed, or provide more information about the issue.
New `verify` modes
The `verify` tool can now be launched in `log` or `purge` mode. In `log` mode the tool will log errors but will not flag the chunks for healing. In `purge` mode all bad chunks will be marked as invalid and flagged to be resynced and repacked.
To launch in `log` mode, specify the `verify log` flag. To launch in `purge` mode, specify the `verify purge` flag. Note: `verify true` is no longer valid and will print an error on launch.
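For illustration, a first pass in `log` mode followed by a `purge` pass might look like the following. This is a hedged sketch, not a prescribed command: the `./bin/start` entry point, the `data_dir` path, and the `storage_module` value are placeholders you would replace with your own configuration.

```bash
# First pass: log errors on a single partition without flagging chunks for healing
./bin/start verify log data_dir /your/data/dir storage_module 0,addr.replica.2.9

# After reviewing the logs: flag bad chunks so they are resynced and repacked
./bin/start verify purge data_dir /your/data/dir storage_module 0,addr.replica.2.9
```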
Chunk sampling
The `verify` tool will now sample 1,000 chunks and do a full unpack and validation of each sampled chunk. This sampling mode is intended to give a statistical measure of how much data might be corrupt. To change the number of chunks sampled, use the `verify_samples` option. E.g. `verify_samples 500` will have the node sample 500 chunks.
More invalid scenarios tested
This latest version of the `verify` tool detects several new types of bad data. The first time you run the `verify` tool we recommend launching it in `log` mode and running it on a single partition. This should avoid any surprises due to the more aggressive detection logic. If the results are as you expect, then you can relaunch in `purge` mode to clean up any bad data. In particular, if you've misnamed your `storage_module`, the `verify` tool will invalidate all chunks and force a full repack - running in `log` mode first will allow you to catch this error and rename your `storage_module` before purging all data.
Bug Fixes
- Fix several issues which could cause a node to "desync". Desyncing occurs when a node gets stuck at one block height and stops advancing.
- Reduce the volume of unnecessary network traffic due to a flood of `404` requests when trying to sync chunks from a node which only serves `replica.2.9` data. Note: the benefit of this change will only be seen when most of the nodes in the network upgrade.
- Performance improvements to HTTP handling that should improve performance more generally.
- Add TX polling so that a node will pull missing transactions in addition to receiving them via gossip.
Known issues
- Multi-node configurations have an issue with the new entry-point script for Arweave. A complete description of the problem, a patch, and a procedure can be found here: https://gist.github.com/humaite/de7ac23ec4975518e092868d4b4312ee
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- AraAraTime
- BerryCZ
- bigbang
- BloodHunter
- Butcher_
- dlmx
- dzeto
- edzo
- EvM
- Fox Malder
- Iba Shinu
- JF
- jimmyjoe7768
- lawso2517
- MaSTeRMinD
- MCB
- Methistos
- Michael | Artifact
- qq87237850
- Qwinn
- RedMOoN
- smash
- sumimi
- T777
- Thaseus
- Vidiot
- Wednesday
- wybiacx
What's Changed
Full Changelog: N.2.9.4...N.2.9.5-alpha1
Release 2.9.4
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This release includes several bug fixes. We recommend upgrading, but it's not required. All releases 2.9.1 and higher implement the consensus rule changes for the 2.9 hard fork and should be sufficient to participate in the network.
Note: this release fixes a packing bug that affects any storage module that does not start on a partition boundary. If you have previously packed `replica.2.9` data in a storage module that does not start on a partition boundary, we recommend discarding the previously packed data and repacking the storage module with the 2.9.4 release. This applies only to storage modules that do not start on a partition boundary; all other storage modules are not impacted.
Example of an impacted storage module:
`storage_module 3,1800000000000,addr.replica.2.9`
Example of storage modules that are not impacted:
`storage_module 10,addr.replica.2.9`
`storage_module 2,1800000000000,addr.replica.2.9`
`storage_module 0,3400000000000,addr.replica.2.9`
Other bug fixes and improvements:
- Fix a regression that caused `GET /tx/id/data` to fail (see the example below)
- Fix a regression that could cause a node to get stuck on a single peer while syncing (both `sync_from_local_peers_only` and syncing from the network)
- Limit the resources used to sync the tip data. This may address some memory issues reported by miners.
- Limit the resources used to gossip new transactions. This may address some memory issues reported by miners.
- Allow the node to heal itself after encountering a `not_prepared_yet` error. The error has also been downgraded to a warning.
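If you want to confirm the data endpoint fix on your own node, it can be exercised with a plain HTTP request. A minimal sketch, assuming the node is running locally on the default port 1984; the transaction ID is a placeholder:

```bash
# Fetch the data for a transaction (replace <transaction_id> with a real ID)
curl http://localhost:1984/tx/<transaction_id>/data
```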
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- AraAraTime
- bigbang
- BloodHunter
- Butcher_
- dlmx
- dzeto
- Iba Shinu
- JF
- jimmyjoe7768
- lawso2517
- MaSTeRMinD
- MCB
- Methistos
- qq87237850
- Qwinn
- RedMOoN
- sam
- T777
- U genius
- Vidiot
- Wednesday
What's Changed
Full Changelog: N.2.9.3...N.2.9.4
Release 2.9.3
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
This is a minor release that fixes a few bugs:
- sync and pack stalling
- `ready_for_work` error when `sync_jobs = 0`
- unnecessary entropy generated on storage modules that are smaller than 3.6TB
- remove some overly verbose error logs
What's Changed
Full Changelog: N.2.9.2...N.2.9.3
Release 2.9.2
Arweave 2.9.2
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
Bug Fixes / Improvements
- Fix a bug where the node would not sync new data to disks that were 95-99% full
- Fix a bug causing an error message like `[error] ar_chunk_copy:do_ready_for_work/2:135 event: worker_not_found, module: ar_chunk_copy, call: ready_for_work, store_id: default`
- Fix a bug preventing the node from launching on some old Xeon processors
- Improve the efficiency of sharing newly uploaded data between peers
- Small performance improvement when preparing entropy
- Small performance improvement when syncing from peers
- Add two more checks to the `verify` tool. These checks will identify some scenarios which resulted in a partition having data packed to two formats. In those cases running the `verify` tool should flag the incorrectly packed chunks as invalid so that they can be synced and repacked.
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- AraAraTime
- Butcher_
- dlmx
- dzeto
- JF
- jimmyjoe7768
- lawso2517
- MaSTeRMinD
- Methistos
- Michael | Artifact
- qq87237850
- Qwinn
- RedMOoN
- sam
- some1else
- sumimi
- T777
- U genius
- Vidiot
- Wednesday
What's Changed
Full Changelog: N.2.9.1...N.2.9.2
Release 2.9.1
Arweave 2.9.1
This Arweave node implementation proposes a hard fork that activates at height 1602350, approximately 2025-02-03 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem. Additionally, this release was audited by NCC Group.
Note: with 2.9.1, when enabling the `randomx_large_pages` option you will need to configure 5,000 `HugePages` rather than the 3,500 required for earlier releases.
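For reference, a minimal sketch of one common way to reserve huge pages on Linux; whether you set this via `sysctl` or persist it in `/etc/sysctl.conf`, and the exact value your system needs, depends on your setup and is an assumption here:

```bash
# Reserve 5,000 huge pages (needed when randomx_large_pages is enabled on 2.9.1+)
sudo sysctl -w vm.nr_hugepages=5000

# Confirm the allocation
grep HugePages_Total /proc/meminfo
```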
Replica 2.9 Format
The primary focus of this release is to complete the implementation, validation, and testing of the Replica 2.9 Format introduced in the previous "early adopter" release: 2.9.0-early-adopter. Those release notes are still a good source of information about the Replica 2.9 Format.
With this 2.9.1 release the Replica 2.9 Format is ready for production use. New and existing miners should consider packing or repacking to the `replica.2.9` format.
Note: If you have `replica.2.9` data that was previously packed with the 2.9.0-early-adopter release, please delete it before running 2.9.1. There are changes in 2.9.1 which render it incompatible with previously packed `replica.2.9` data. `spora_2_6` and `composite` data is unaffected.
Benefits of the Replica 2.9 Format
Arweave 2.9’s format enables:
- Miners to read from their drives at a rate of 5 MiB/s (the equivalent of difficulty 10 in Arweave 2.8), without adversely affecting the security of the network. This represents a decrease of 90% from Arweave 2.8, and 97.5% from Arweave 2.7.x. This will allow miners to use the most cost efficient drives to participate in the network, while also lowering pressure on disk I/O during mining operations.
- A ~96.9% decrease in the compute necessary to pack Arweave data when compared to 2.8 `composite.1` and SPoRA_2.6. This decrease also scales approximately linearly for higher packing difficulties. For example, for miners that would have packed with Arweave 2.8 to the difficulty necessary to reach a 5 MB/s read speed (`composite.10`), Arweave 2.9 will require ~99.56% less energy and time. This represents an efficiency improvement of 32x against 2.7.x and 2.8 composite.1, and ~229x for composite.10.
Packing Performance
Arweave packing consists of two phases:
- Entropy generation
- Chunk enciphering
In prior packing formats (e.g. `spora_2_6` and `composite`) those phases were merged: for each chunk a small bit of entropy was generated and then the chunk was enciphered. Historically the entropy generation has been the bottleneck and the main driver of CPU usage.
With `replica.2.9` the phases are separated. Entropy is generated for many chunks, and then that entropy is read and many chunks are enciphered. The entropy generation phase is many times faster than it was for `spora_2_6` and `composite` - in our benchmarks a single node is able to generate entropy for the full weave in ~3 days. The CPU requirements for the enciphering phase are also quite low, as enciphering is now a lightweight XOR operation. The end result is that disk IO is now the main bottleneck when packing to `replica.2.9`.
We have updated the docs to provide guidance on how to approach repacking to `replica.2.9`: Syncing and Packing Guide
We are working on a follow-up release which will attempt to further optimize the disk IO phase of the packing process.
Changes from the 2.9.0-early-adopter release
- Previously there was a limitation which degraded packing performance for non-contiguous storage modules. This has been addressed. You can now pack singular and non-contiguous storage modules with no impact on packing performance.
- All modes of packing to `replica.2.9` are supported, i.e. "sync and pack", "cross-module repack", and "repack-in-place" are all supported. However, you are not yet able to repack from `replica.2.9` to any other format.
- Overall packing performance has improved. Further work is needed to streamline the disk IO during the packing process.
- The `packing_rate` flag is now deprecated and will have no impact. It has been replaced by the `packing_workers` flag, which allows you to set how many concurrent worker threads are used while packing. The default is the number of logical cores in the system.
- The `replica_2_9_workers` flag controls how many storage modules the node will generate entropy for at once. Only one storage module per physical device will have entropy generated at a time. The default is 8, but the optimal value will vary from system to system (see the example below).
- We've updated the Metrics Guide with a new Syncing and Packing Grafana dashboard to better visualize the `replica.2.9` packing process.
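To illustrate how the new flags fit together, a launch command might look like the following. This is a hedged sketch under assumptions: the `./bin/start` entry point, the `data_dir` path, the mining address, and the worker counts are placeholders, not tuning recommendations.

```bash
# Illustrative launch using the new packing flags (values are placeholders)
./bin/start \
  data_dir /your/data/dir \
  mining_addr <your_mining_address> \
  storage_module 0,<your_mining_address>.replica.2.9 \
  packing_workers 16 \
  replica_2_9_workers 4
```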
Support for ECDSA Keys
This release introduces support for ECDSA signing keys. Blocks and transactions now support ECDSA signatures and can be signed with ECDSA keys. RSA keys continue to be supported and remain the default key type.
An upcoming arweave-js release will provide more guidance on using ECDSA keys with the Arweave network.
ECDSA support will activate at the 2.9 hard fork (block height 1602350).
Composite Packing Format Deprecated
The new packing format was discovered as a result of researching an issue (not endangering data, tokens, or consensus) that affects higher difficulty packs of the 2.8 composite scheme. Given this, and the availability of the significantly improved 2.9 packing format, as of block height 1642850 (roughly 2025-04-04 14:00 UTC), data packed to any level of the composite packing format will not produce valid block solutions.
What's Changed
Full Changelog: N.2.8.3...N.2.9.1
Release 2.9.0-early-adopter
Arweave 2.9.0-Early-Adopter Release Notes
This Arweave node implementation proposes a hard fork that activates at height 1602350, approximately 2025-02-03 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem. Additionally, this release was audited by NCC Group.
This 2.9.0 release is an early adopter release. If you do not plan to benchmark and test the new data format, you do not need to upgrade for the 2.9 hard fork yet.
Note: with 2.9.0, when enabling the `randomx_large_pages` option you will need to configure 5,000 `HugePages` rather than the 3,500 required for earlier releases.
Replica 2.9 Packing Format
The Arweave 2.9.0-early-adopter release introduces a new data preparation (‘packing’) format. Starting with this release you can begin to test out this new format. This format brings significant improvements to all of the core metrics of data preparation.
To understand the details, please read the full paper here: https://github.com/ArweaveTeam/arweave/blob/release/N.2.9.0-early-adopter/papers/Arweave2_9.pdf
Additionally, an audit of this mechanism was performed by NCC group and is available to read here (the comments highlighted in this audit have since been remediated): https://github.com/ArweaveTeam/arweave/blob/release/N.2.9.0-early-adopter/papers/NCC_Group_ForwardResearch_E020578_Report_2024-12-06_v1.0.pdf
Arweave 2.9’s format enables:
- Miners to read from their drives at a rate of 5 MiB/s (the equivalent of difficulty 10 in Arweave 2.8), without adversely affecting the security of the network. This represents a decrease of 90% from Arweave 2.8, and 97.5% from Arweave 2.7.x. This will allow miners to use the most cost efficient drives to participate in the network, while also lowering pressure on disk I/O during mining operations.
- A ~96.9% decrease in the compute necessary to pack Arweave data when compared to 2.8 `composite.1` and SPoRA_2.6. This decrease also scales approximately linearly for higher packing difficulties. For example, for miners that would have packed with Arweave 2.8 to the difficulty necessary to reach a 5 MB/s read speed (`composite.10`), Arweave 2.9 will require ~99.56% less energy and time. This represents an efficiency improvement of 32x against 2.7.x and 2.8 composite.1, and ~229x for composite.10.
Replica 2.9 Benchmark Tool
If you'd like to benchmark the performance of the new Replica 2.9 packing format on your own machine you can use the new `./bin/benchmark-2.9` tool. It has two modes:
- Entropy generation, which generates and then discards entropy. This allows you to benchmark the time it takes for your CPU to perform the work component of packing, ignoring any IO-related effects.
  - To use the entropy generation benchmark, run the tool without any `dir` flags.
- Packing, which generates entropy, packs some random data, and then writes it to disk. This provides a more complete benchmark of the time it might take your server to pack data. Note: this benchmark does not include unpacking or reading data (and associated disk seek times).
  - To use the packing benchmark mode, specify one or more output directories using the multi-use `dir` flag.
Usage: benchmark-2.9 [format replica_2_9|composite|spora_2_6] [threads N] [mib N] [dir path1 dir path2 dir path3 ...]
format: format to pack. replica_2_9, composite.1, composite.10, or spora_2_6. Default: replica_2_9.
threads: number of threads to run. Default: 1.
mib: total amount of data to pack in MiB. Default: 1024.
Will be divided evenly between threads, so the final number may be
lower than specified to ensure balanced threads.
dir: directories to pack data to. If left off, benchmark will just simulate
entropy generation without writing to disk.
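For example, a packing benchmark that writes to two disks might be invoked like this (paths and values are illustrative only):

```bash
# Pack 4096 MiB across 8 threads, writing to two output directories
./bin/benchmark-2.9 format replica_2_9 threads 8 mib 4096 dir /mnt/disk1/bench dir /mnt/disk2/bench
```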
Repacking to Replica 2.9
As well as allowing you to run benchmarks, the 2.9.0-early-adopter release also allows you to pack data for the 2.9 format. It has not, however, been fully optimized and tuned for the new entropy distribution scheme. It is included in this build for validation purposes. In our tests, we have observed consistent >=75% reductions in computation requirements (>4x faster packing speeds), but future releases will continue to improve this towards the performance of the benchmarking tool.
To test this functionality, run a node with storage modules configured to use the `<address>.replica.2.9` packing format. `repack_in_place` is not yet supported.
Composite Packing Format Deprecated
The new packing format was discovered as a result of researching an issue (not endangering data, tokens, or consensus) that affects higher difficulty packs of the 2.8 composite scheme. Given this, and the availability of the significantly improved 2.9 packing format, as of block height 1642850 (roughly 2025-04-04 14:00 UTC), data packed to any level of the composite packing format will not produce valid block solutions.
Note: This is an "Early Adopter" release. It implements significant new protocol improvements, but is still in validation. This release is intended for members of the community to try out and benchmark the new data preparation mechanism. You will not need to update your node for 2.9 unless you are interested in testing these features, until shortly before the hard fork height at 1602350 – approximately Feb 3, 2025. As this release is intended for validation purposes, please be aware that there is a possibility that data encoded using its new preparation scheme may need to be repacked before 2.9 activates. The first ‘mainline’ releases for Arweave 2.9 will follow in the coming weeks after community validation has been completed.
Full Changelog: N.2.8.3...N.2.9.0-early-adopter
Release 2.8.3
This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.
Bug fixes
- Fix a performance issue which could cause very low read rates when multiple storage modules were stored on a single disk. The bug had a significant impact on SATA read speeds and hash rates, and a noticeable, but smaller, impact on SAS disks.
- Fix a bug which caused the Mining Performance Report to report incorrectly for some miners. Notably: 0s in the `Ideal` and `Data Size` columns.
- Fix a bug which could cause the `verify` tool to get stuck when encountering an `invalid_iterator` error
- Fix a bug which caused the `verify` tool to fail to launch with the error `reward_history_not_found`
- Fix a performance issue which could cause a node to get backed up during periods of high network transaction volume.
- Add the `packing_difficulty` of a storage module to the `/metrics` endpoint
Community involvement
A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!
Discord users (alphabetical order):
- bigbang
- BloodHunter
- Butcher_
- dzeto
- edzo
- foozoolsanjj
- heavyarms1912
- JF
- MCB
- Methistos
- Mastermind
- Qwinn
- Thaseus
- Vidiot
- a8_ar
- jimmyjoe7768
- lawso2517
- qq87237850
- smash
- sumimi
- T777
- tashilo
- thekitty
- wybiacx
What's Changed
- Add node introspection options by @shizzard in #651
- Release/performance 2.8 by @JamesPiechota, @humaite, @vird, @ldmberman in #656
Full Changelog: N.2.8.2...N.2.8.3
Release 2.8.2
Fixes an issue with peer history validation upon re-joining the network.
Full Changelog: N.2.8.1...N.2.8.2