Sync progress tracking #79
base: v0.40-dev
Conversation
Remove hardcoded localhost addresses from mainnet and testnet seed lists. These were causing unnecessary connection attempts during initial sync.
- Add DetailedSyncProgress struct with performance metrics
  - Headers per second, bytes per second, ETA calculation
  - Sync stage tracking (Connecting, Querying, Downloading, etc.)
- Add SyncStage enum for granular progress states
- Add filter_sync_available field to track peer capabilities
- Add supports_compact_filters() helper to PeerInfo
- Add progress channel to DashSpvClient for real-time updates
- Add is_filter_sync_available() method to check peer support

This enables better progress reporting and performance monitoring for SPV sync operations.
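The progress channel described above can be sketched with standard-library primitives. This is illustrative only: the struct fields and channel wiring here are assumptions, not the crate's actual `DetailedSyncProgress` definition (the real client uses async channels).

```rust
use std::sync::mpsc;

// Hypothetical mirror of the DetailedSyncProgress described above; the
// field names are assumptions, not dash-spv's actual definition.
#[derive(Debug, Clone)]
struct DetailedSyncProgress {
    current_height: u32,
    peer_best_height: u32,
    headers_per_second: f64,
}

fn main() {
    // A channel lets the sync loop publish updates without blocking on a consumer.
    let (tx, rx) = mpsc::channel::<DetailedSyncProgress>();

    // The sync loop would send a snapshot after each batch of headers.
    tx.send(DetailedSyncProgress {
        current_height: 1_000,
        peer_best_height: 2_000,
        headers_per_second: 500.0,
    })
    .unwrap();
    drop(tx); // closing the sender ends the consumer loop below

    // A UI or logging task drains the channel and renders progress.
    for p in rx {
        let pct = p.current_height as f64 / p.peer_best_height as f64 * 100.0;
        println!("{}/{} ({:.1}%)", p.current_height, p.peer_best_height, pct);
    }
}
```

The same shape works with an unbounded tokio channel when the consumer is an async task; the key design point is that a dropped receiver must never fail the sync loop.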
- Add intermediate handshake states for better debugging:
  - VersionReceivedVerackSent: Version received, verack sent
  - VerackReceived: Verack received from peer
- Add tracking flags for version_received, verack_received, version_sent
- Improve logging throughout handshake process
- Better error messages with handshake state information

This makes handshake debugging easier and provides clearer state transitions during peer connection establishment.
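The states and flags listed above suggest a small state machine. The sketch below uses the commit's variant names, but the transition logic itself is an assumption about how the real handshake manager behaves:

```rust
// Hypothetical state machine for the Dash P2P handshake; variant names
// follow the commit message, transition logic is assumed.
#[derive(Debug, Clone, PartialEq)]
enum HandshakeState {
    Init,
    VersionSent,
    VersionReceivedVerackSent, // peer's version received, our verack sent
    VerackReceived,            // peer acknowledged our version
    Complete,
}

struct Handshake {
    state: HandshakeState,
    version_sent: bool,
    version_received: bool,
    verack_received: bool,
}

impl Handshake {
    fn new() -> Self {
        Self {
            state: HandshakeState::Init,
            version_sent: false,
            version_received: false,
            verack_received: false,
        }
    }

    fn on_version_sent(&mut self) {
        self.version_sent = true;
        self.state = HandshakeState::VersionSent;
    }

    fn on_version_received(&mut self) {
        self.version_received = true;
        // Receiving the peer's version triggers our verack.
        self.state = HandshakeState::VersionReceivedVerackSent;
        self.maybe_complete();
    }

    fn on_verack_received(&mut self) {
        self.verack_received = true;
        self.state = HandshakeState::VerackReceived;
        self.maybe_complete();
    }

    // Complete only once version and verack have gone both ways.
    fn maybe_complete(&mut self) {
        if self.version_sent && self.version_received && self.verack_received {
            self.state = HandshakeState::Complete;
        }
    }
}

fn main() {
    let mut hs = Handshake::new();
    hs.on_version_sent();
    hs.on_version_received();
    hs.on_verack_received();
    assert_eq!(hs.state, HandshakeState::Complete);
    println!("handshake complete");
}
```

Making each transition explicit is what enables the "better error messages with handshake state information": a timeout can report exactly which message never arrived.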
- Reduce header sync timeout from 10s to 4s for faster failure detection
- Change status update interval from 5s to 500ms for smoother progress
- Add detailed logging for sync process, including locator info
- Improve empty locator handling for genesis sync

These changes provide more responsive sync behavior and better real-time progress feedback.
- Enhanced status display with better formatting and metrics
- Improved network connection handling and error recovery
- Updated network constants for better peer management
- Enhanced multi-peer connection logic with better peer selection
- Improved filter sync with better error handling
- General sync module improvements for reliability

These changes collectively improve the stability and performance of the SPV client's network and synchronization operations.
Change header sync timeout from 4 seconds to 500 milliseconds for more responsive timeout detection during sync operations.
Walkthrough

The changes introduce detailed synchronization progress reporting, peer capability introspection, and improved handshake state management in the Dash SPV client. The network layer now tracks peer metadata and supports querying for compact filter support. Progress reporting is enhanced with new structures and async channels, while handshake logic is made more robust and testable. Peer connection constants are increased, and logging is improved throughout.
Sequence Diagram(s)

sequenceDiagram
participant Client
participant NetworkManager
participant Peer
Client->>NetworkManager: Query peers with compact filter support
alt Peers support compact filters
NetworkManager-->>Client: List of peers supporting compact filters
Client->>Peer: Start filter header sync
Peer-->>Client: Sync data
Client->>Client: Emit DetailedSyncProgress via channel
else No peers support compact filters
NetworkManager-->>Client: Empty list
Client->>Client: Log warning, skip filter sync
end
sequenceDiagram
participant Client
participant ProgressReceiver
loop Sync Progress Updates
Client->>ProgressReceiver: Send DetailedSyncProgress
end
sequenceDiagram
participant Client
participant Peer
Client->>Peer: Send Version
Peer-->>Client: Send Version
Client->>Peer: Send Verack
Peer-->>Client: Send Verack
alt Both Version and Verack exchanged
Client->>Client: Handshake complete
end
Actionable comments posted: 4
🧹 Nitpick comments (4)
dash-spv/tests/test_handshake_logic.rs (1)

Lines 1-16: Expand test coverage for comprehensive handshake testing.

The basic handshake test is a good start, but consider adding more comprehensive test cases for a robust handshake state machine. Consider adding tests for:

- State transitions (Init → VersionSent → VersionReceived, etc.)
- Error handling and invalid state transitions
- Timeout scenarios
- Message handling in different states

#[test]
fn test_handshake_version_exchange() {
    let mut handshake = HandshakeManager::new(Network::Dash);
    // Test version sent transition
    // handshake.handle_version_sent();
    // assert_eq!(*handshake.state(), HandshakeState::VersionSent);
    // Add more state transition tests...
}

dash-spv/src/sync/filters.rs (1)
Lines 531-536: Good guard logic, but consider using the new method to reduce duplication.

The pre-check logic is excellent - it prevents unnecessary sync attempts when no peers support compact filters and provides clear user feedback. However, this duplicates the same capability check implemented in the new is_filter_sync_available method above. Consider refactoring to use the new method:

-        // Check if any connected peer supports compact filters
-        if !network.has_peer_with_service(dashcore::network::constants::ServiceFlags::COMPACT_FILTERS).await {
+        // Check if any connected peer supports compact filters
+        if !self.is_filter_sync_available(network).await {
             tracing::warn!("⚠️ No connected peers support compact filters (BIP 157/158). Skipping filter synchronization.");
             tracing::warn!("⚠️ To enable filter sync, connect to peers that advertise NODE_COMPACT_FILTERS service bit.");
             return Ok(false); // No sync started
         }

This would improve maintainability and ensure consistency if the capability checking logic needs to be updated in the future.
dash-spv/src/client/mod.rs (1)
Line 717: Address the TODO for bytes tracking.

The bytes_per_second field is hardcoded to 0. Consider implementing actual byte tracking or documenting why it's not currently feasible. Would you like me to help implement byte tracking for downloaded headers and filters?
dash-spv/src/network/connection.rs (1)
Lines 168-186: Consider consolidating logging approaches.

The method correctly updates peer info, but uses both the tracing and log crates for similar messages. Consider standardizing on one logging approach for consistency:

-tracing::info!("Updated peer info for {}: height={}, version={}, services={:?}",
-    self.address,
-    version_msg.start_height,
-    version_msg.version,
-    version_msg.services);
-
-// Also log with standard logging for debugging
-log::info!("PEER_INFO_DEBUG: Updated peer {} with height={}, version={}",
-    self.address, version_msg.start_height, version_msg.version);
+tracing::info!(
+    peer = %self.address,
+    height = version_msg.start_height,
+    version = version_msg.version,
+    services = ?version_msg.services,
+    "Updated peer info"
+);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- dash-spv/src/client/config.rs (0 hunks)
- dash-spv/src/client/mod.rs (10 hunks)
- dash-spv/src/client/status_display.rs (1 hunk)
- dash-spv/src/network/connection.rs (13 hunks)
- dash-spv/src/network/constants.rs (1 hunk)
- dash-spv/src/network/handshake.rs (9 hunks)
- dash-spv/src/network/mod.rs (2 hunks)
- dash-spv/src/network/multi_peer.rs (6 hunks)
- dash-spv/src/sync/filters.rs (2 hunks)
- dash-spv/src/sync/headers.rs (3 hunks)
- dash-spv/src/sync/mod.rs (3 hunks)
- dash-spv/src/types.rs (5 hunks)
- dash-spv/tests/test_handshake_logic.rs (1 hunk)
💤 Files with no reviewable changes (1)
- dash-spv/src/client/config.rs
🔇 Additional comments (30)
dash-spv/src/network/constants.rs (1)
Lines 6-8: Verify the impact of increased peer limits on resource usage.

The peer connection limits have been significantly increased:

- TARGET_PEERS: 3 → 8 (167% increase)
- MAX_PEERS: 5 → 12 (140% increase)

While this can improve sync performance and network resilience, it also increases resource consumption. Please verify that these increases are appropriate for the target deployment environments:

#!/bin/bash
# Check if the increased peer limits are referenced in configuration or documentation
echo "Searching for peer limit references and configuration..."
rg -i "(peer.*limit|max.*peer|target.*peer)" --type md
rg -i "(MIN_PEERS|TARGET_PEERS|MAX_PEERS)" -A 3 -B 3

dash-spv/src/sync/headers.rs (2)
Lines 217-230: Excellent logging improvements for debugging sync issues.

The added logging for tip height, hash information, and sync preparation provides valuable debugging information that will help troubleshoot sync issues.

Lines 296-305: Good addition of request logging for observability.

The logging of header requests with specific hash information will help track sync progress and diagnose issues.
dash-spv/src/sync/filters.rs (1)
Lines 172-176: LGTM! Clean abstraction for peer capability checking.

The method provides a clear, well-documented interface for checking compact filter support across connected peers. The implementation correctly delegates to the network manager and follows established patterns.
dash-spv/src/sync/mod.rs (1)
Lines 346-354: Good addition for filter sync availability tracking.

The implementation consistently checks filter sync availability across all sync methods and includes it in the SyncProgress struct. This provides better visibility into whether peers support compact filters.

Also applies to: 386-391, 409-414
dash-spv/src/network/mod.rs (1)
Lines 73-80: Well-implemented peer introspection methods.

The new methods provide essential peer capability queries for filter sync availability checks. The implementation correctly handles:
- Optional connection state
- Service flags conversion from u64 to ServiceFlags type
- Appropriate return values for disconnected state
Also applies to: 222-258
dash-spv/src/client/mod.rs (5)
Lines 54-55: Progress tracking channels properly implemented.

The unbounded channels provide a good mechanism for external progress monitoring without blocking the sync process.

Also applies to: 261-262, 278-279

Lines 65-69: Appropriate error handling for progress emission.

Good use of let _ = sender.send(progress) to avoid failing sync operations if the progress receiver has been dropped.

Lines 695-706: Well-designed sync stage determination.

The sync stage logic comprehensively covers all states with clear, mutually exclusive conditions.
Line 494: Consider the performance impact of more frequent status updates.

The status update interval was reduced from 5 seconds to 500ms (10x more frequent). While this provides better responsiveness, ensure that update_status_display() is lightweight enough to handle this frequency without impacting sync performance.

Lines 1823-1825: Good addition of genesis block storage verification.

Logging the stored tip height after genesis initialization helps confirm successful storage and aids debugging.
dash-spv/src/types.rs (10)
Lines 3-3: LGTM!

The import is necessary for the new timing-related fields in DetailedSyncProgress.

Lines 34-36: Good addition for tracking filter sync capability.

The new filter_sync_available field is properly initialized and will help track whether connected peers support compact filters (BIP 157/158).

Also applies to: 62-62

Lines 71-128: Well-designed detailed sync progress tracking.

The DetailedSyncProgress struct provides comprehensive sync metrics with proper edge case handling:

- calculate_percentage() correctly guards against division by zero and caps at 100%
- calculate_eta() properly handles edge cases like zero speed and no remaining work

The struct will enable rich progress reporting to clients.
Lines 95-105: Comprehensive sync stage enumeration.

The SyncStage enum effectively captures all phases of the sync process, including failure states with error messages for debugging.

Lines 275-284: Correct implementation of compact filter support detection.

The method properly checks the COMPACT_FILTERS service flag to determine if a peer supports BIP 157/158, with appropriate handling of the optional services field.
Lines 3-3: LGTM!

The Duration import is necessary for the new timing-related fields in DetailedSyncProgress.

Lines 34-36: LGTM!

The new filter_sync_available field is properly added to the struct and correctly initialized to false in the Default implementation.

Also applies to: 62-62

Lines 95-105: LGTM!

The SyncStage enum provides comprehensive coverage of synchronization phases with appropriate data for each stage.

Lines 107-128: LGTM with good defensive programming.

The calculation methods handle edge cases well:

- calculate_percentage() handles zero peer height and caps at 100%
- calculate_eta() checks for non-positive headers per second and uses saturating_sub to prevent underflow
: LGTM!The
supports_compact_filters()
method correctly:
- Handles the
Option<u64>
services field- Uses proper service flag conversion and checking for BIP 157/158 support
- Returns
false
for missing services datadash-spv/src/network/connection.rs (3)
18-30
: Good refactoring to unify connection state management.Using a single
Arc<Mutex<ConnectionState>>
to protect both the stream and read buffer simplifies concurrent access control and reduces potential race conditions.
79-84
: Appropriate read timeout configuration.Setting a 15ms read timeout instead of non-blocking mode is a good choice that prevents busy-waiting while maintaining responsive message handling.
Also applies to: 128-133
284-291
: Proper handling of timeout conditions.The code correctly treats both
WouldBlock
andTimedOut
errors as "no data available" conditions, returningOk(None)
to allow the caller to retry.dash-spv/src/network/multi_peer.rs (3)
142-142
: Correct API usage update.The method properly passes the
network
parameter toTcpConnection::connect
, aligning with the updated connection API.
514-574
: Well-implemented filter support checking.The code correctly identifies messages that require compact filter support and ensures they're only sent to capable peers. The error handling provides clear feedback when no suitable peers are available.
906-973
: Comprehensive peer capability introspection methods.The new methods provide valuable peer querying capabilities:
get_peer_best_height()
correctly finds the maximum height with detailed logging- Service flag checking methods properly use the dashcore API
- All methods handle edge cases appropriately
The debug logging will be helpful for troubleshooting sync issues.
dash-spv/src/network/handshake.rs (3)
23-26
: Enhanced handshake state tracking for better reliability.The additional states and tracking flags make the handshake flow more explicit and easier to debug. This will help identify handshake failures more precisely.
Also applies to: 37-39, 50-52
140-147
: Good addition of peer-initiated handshake support.The code correctly handles the case where a peer initiates the handshake by sending their version first, improving compatibility with different peer implementations.
91-94
: Improved handshake flow without artificial delays.Good improvement to remove artificial delays and rely on the socket read timeout instead. This makes the handshake more responsive while still handling messages that arrive in quick succession.
Also applies to: 98-101
@@ -56,6 +56,7 @@ impl<'a> StatusDisplay<'a> {
             headers_synced: false, // TODO: Implement
             filter_headers_synced: false, // TODO: Implement
             masternodes_synced: false, // TODO: Implement
+            filter_sync_available: false, // TODO: Get from network manager
🛠️ Refactor suggestion
Complete the filter sync availability implementation.
The filter_sync_available field is currently hardcoded to false with a TODO comment. This placeholder implementation could mislead users about actual filter sync capabilities. Would you like me to help implement the network manager integration to properly query filter sync availability from connected peers?
🤖 Prompt for AI Agents
In dash-spv/src/client/status_display.rs at line 59, the field
filter_sync_available is hardcoded to false, which does not reflect the actual
filter sync status. To fix this, integrate the network manager to query the
current filter sync availability from connected peers and assign the result to
filter_sync_available instead of the hardcoded false value.
@@ -146,7 +146,7 @@ impl HeaderSyncManager {
             // More aggressive timeout when no peers
             std::time::Duration::from_secs(5)
         } else {
-            std::time::Duration::from_secs(10)
+            std::time::Duration::from_millis(500)
🛠️ Refactor suggestion
Reconsider the aggressive timeout reduction.
The timeout has been reduced from 10 seconds to 500ms (95% reduction). This could cause false timeouts in high-latency network conditions or when peers are legitimately slow to respond.
Consider a more moderate timeout value (e.g., 2-5 seconds) or implement adaptive timeout based on network conditions:
- std::time::Duration::from_millis(500)
+ std::time::Duration::from_secs(2)
🤖 Prompt for AI Agents
In dash-spv/src/sync/headers.rs at line 149, the timeout duration was
aggressively reduced from 10 seconds to 500 milliseconds, which may cause false
timeouts under high latency or slow peer responses. Adjust the timeout to a more
moderate value between 2 to 5 seconds to balance responsiveness and reliability,
or implement an adaptive timeout mechanism that adjusts based on observed
network conditions.
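The adaptive-timeout idea raised above can be realized by tracking an exponential moving average of observed response times and timing out at a multiple of it, clamped to sane bounds. Everything here (alpha, multiplier, bounds) is an assumption, not a value from the codebase:

```rust
use std::time::Duration;

// Adaptive timeout sketch: EMA of response times, clamped to [2s, 10s].
struct AdaptiveTimeout {
    avg_response_ms: f64,
}

impl AdaptiveTimeout {
    fn new() -> Self {
        Self { avg_response_ms: 1_000.0 } // optimistic starting estimate
    }

    // Blend each new sample into the running average (alpha = 0.2).
    fn record(&mut self, response: Duration) {
        let ms = response.as_millis() as f64;
        self.avg_response_ms = 0.8 * self.avg_response_ms + 0.2 * ms;
    }

    // Allow 3x the average response time, but never below 2s or above 10s.
    fn current(&self) -> Duration {
        let ms = (self.avg_response_ms * 3.0).clamp(2_000.0, 10_000.0);
        Duration::from_millis(ms as u64)
    }
}

fn main() {
    let mut t = AdaptiveTimeout::new();
    assert_eq!(t.current(), Duration::from_secs(3)); // 1000ms avg * 3

    for _ in 0..50 {
        t.record(Duration::from_millis(100)); // fast peers observed
    }
    assert_eq!(t.current(), Duration::from_secs(2)); // clamped at the floor
    println!("timeout now {:?}", t.current());
}
```

The floor protects against the false-timeout problem the review describes: even on a consistently fast network, the timeout never drops to a value a legitimately slow peer cannot meet.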
 pub async fn disconnect(&mut self) -> NetworkResult<()> {
-    if let Some(stream) = self.write_stream.take() {
-        let _ = stream.shutdown(std::net::Shutdown::Both);
+    if let Some(state_arc) = self.state.take() {
+        if let Ok(state_mutex) = Arc::try_unwrap(state_arc) {
+            let state = state_mutex.into_inner();
+            let _ = state.stream.shutdown(std::net::Shutdown::Both);
+        }
     }
-    self.read_stream = None;
     self.connected_at = None;

     tracing::info!("Disconnected from peer {}", self.address);

     Ok(())
 }
Handle Arc::try_unwrap failure case.
The Arc::try_unwrap at line 156 will fail if there are other references to the connection state (e.g., from ongoing async operations). In this case, the TCP stream won't be properly shut down.
Consider forcing the shutdown even when unwrap fails:
-if let Some(state_arc) = self.state.take() {
- if let Ok(state_mutex) = Arc::try_unwrap(state_arc) {
- let state = state_mutex.into_inner();
- let _ = state.stream.shutdown(std::net::Shutdown::Both);
- }
-}
+if let Some(state_arc) = self.state.take() {
+ // Try to get exclusive access first
+ match Arc::try_unwrap(state_arc) {
+ Ok(state_mutex) => {
+ let state = state_mutex.into_inner();
+ let _ = state.stream.shutdown(std::net::Shutdown::Both);
+ }
+ Err(state_arc) => {
+ // If we can't unwrap, forcefully shutdown through the mutex
+ if let Ok(state) = state_arc.try_lock() {
+ let _ = state.stream.shutdown(std::net::Shutdown::Both);
+ }
+ }
+ }
+}
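The failure mode behind this suggestion is easy to demonstrate: `Arc::try_unwrap` only succeeds when the call site holds the sole reference. This standalone sketch uses std's `Mutex` and a `String` in place of the async mutex and TCP stream:

```rust
use std::sync::{Arc, Mutex};

// Returns which shutdown path ran; the String stands in for the TCP stream.
fn shutdown(state_arc: Arc<Mutex<String>>) -> &'static str {
    match Arc::try_unwrap(state_arc) {
        Ok(mutex) => {
            // Sole owner: take the state out and shut down directly.
            let _stream = mutex.into_inner().unwrap();
            "owned: shut down directly"
        }
        Err(shared) => {
            // Another task still holds a clone; lock and shut down in place.
            if let Ok(_stream) = shared.try_lock() {
                "shared: shut down through mutex"
            } else {
                "shared: lock contended, shutdown skipped"
            }
        }
    }
}

fn main() {
    // Sole owner: try_unwrap succeeds.
    let solo = Arc::new(Mutex::new(String::from("stream")));
    assert_eq!(shutdown(solo), "owned: shut down directly");

    // A second clone is alive: try_unwrap fails, the fallback path runs.
    let shared = Arc::new(Mutex::new(String::from("stream")));
    let _clone = Arc::clone(&shared);
    assert_eq!(shutdown(shared), "shared: shut down through mutex");
    println!("ok");
}
```

Without the `Err` branch, the second case would silently leave the stream open, which is exactly the bug the suggested diff closes.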
🤖 Prompt for AI Agents
In dash-spv/src/network/connection.rs around lines 154 to 166, the current code
only shuts down the TCP stream if Arc::try_unwrap succeeds, but this can fail if
other references exist, leaving the stream open. Modify the code to handle the
failure case by accessing the stream through the Arc's inner mutex even when
try_unwrap fails, and force the shutdown of the TCP stream in both cases to
ensure proper disconnection.
/// Detailed sync progress with performance metrics.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DetailedSyncProgress {
    /// Current state
    pub current_height: u32,
    pub peer_best_height: u32,
    pub percentage: f64,

    /// Performance metrics
    pub headers_per_second: f64,
    pub bytes_per_second: u64,
    pub estimated_time_remaining: Option<Duration>,

    /// Detailed status
    pub sync_stage: SyncStage,
    pub connected_peers: usize,
    pub total_headers_processed: u64,
    pub total_bytes_downloaded: u64,

    /// Timing
    pub sync_start_time: SystemTime,
    pub last_update_time: SystemTime,
}
🛠️ Refactor suggestion
Consider the consistency between stored and calculated values.
The DetailedSyncProgress struct stores both percentage and estimated_time_remaining as fields, but also provides methods to calculate these values. This could lead to inconsistency if the stored values aren't kept in sync with the underlying data.
Consider either:
- Making these fields private and only exposing them through the calculation methods
- Adding documentation clarifying when these fields should be updated
- Removing the stored fields and only using the calculation methods
pub struct DetailedSyncProgress {
/// Current state
pub current_height: u32,
pub peer_best_height: u32,
- pub percentage: f64,
/// Performance metrics
pub headers_per_second: f64,
pub bytes_per_second: u64,
- pub estimated_time_remaining: Option<Duration>,
/// Detailed status
pub sync_stage: SyncStage,
pub connected_peers: usize,
pub total_headers_processed: u64,
pub total_bytes_downloaded: u64,
/// Timing
pub sync_start_time: SystemTime,
pub last_update_time: SystemTime,
}
🤖 Prompt for AI Agents
In dash-spv/src/types.rs between lines 71 and 93, the DetailedSyncProgress
struct stores both percentage and estimated_time_remaining as public fields
while also having methods to calculate these values, risking inconsistency. To
fix this, make these fields private and provide public getter methods that
return the calculated values, or remove these stored fields entirely and rely
solely on the calculation methods to ensure consistency. Additionally, update
any code that accesses these fields directly to use the new methods.
This pull request introduces several enhancements and refactorings to the dash-spv project, focusing on improving synchronization progress tracking, refining TCP connection management, and updating network configurations. The changes streamline functionality, improve reliability, and add new features for progress reporting and peer management.

Synchronization Progress Tracking Enhancements:

- Detailed sync progress reporting in DashSpvClient, including metrics such as headers per second, estimated time remaining, and sync stage. This is implemented using a new DetailedSyncProgress type and progress channels (progress_sender and progress_receiver). [1] [2] [3]

TCP Connection Management Refactor:

- TcpConnection now uses a unified ConnectionState struct protected by a Mutex, simplifying socket access and ensuring thread safety. [1] [2]
- Peer info is updated from the VersionMessage, and peer details are logged for debugging purposes.

Network Configuration Updates:

- Updates for Network::Dash and Network::Testnet to improve reliability.

Codebase Improvements:

- Updates to StatusDisplay.