
Conversation

@LudeeD
Contributor

@LudeeD LudeeD commented Jan 6, 2026

No description provided.

@LudeeD LudeeD requested review from Alw3ys and dduuukk as code owners January 6, 2026 11:15
@coderabbitai

coderabbitai bot commented Jan 6, 2026

Walkthrough

Implements atomic downloads using .part temporary files: downloads, resumes, ETAG reads/writes, and verifications all operate on the .part file, which is atomically renamed to the final file on success. Adds pre-download validation that removes zero-byte local packages and skips downloading when size and ETAG match.
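
As a rough illustration only, the core of that pattern looks like the sketch below; the function and variable names are illustrative, not the PR's own, and it assumes tokio is available:

use std::path::{Path, PathBuf};

// Minimal sketch of the .part-then-rename pattern (illustrative names only).
async fn finalize_atomically(final_path: &Path, body: &[u8]) -> std::io::Result<()> {
    // All bytes land next to the final file, under a .part suffix.
    let part_path = PathBuf::from(format!("{}.part", final_path.display()));
    tokio::fs::write(&part_path, body).await?;
    // rename(2) on the same filesystem is atomic: readers never observe a
    // half-written final file, and a crash mid-download leaves only the .part.
    tokio::fs::rename(&part_path, final_path).await?;
    Ok(())
}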

Changes

  • Atomic download & resume (smithd/src/downloader/download.rs): All download/resume writes target a .part file; ETAG is read/written on the .part; pre-download checks compare size and ETAG and can short-circuit; the completed .part is atomically renamed to the final file; the .part is cleaned up on multiple error/force-stop/ETAG/size-mismatch paths.
  • Local package validation & logging (smithd/src/utils/network.rs): Added private async validate_package_file(path) to check existence and non-zero size. Removed prior .tmp handling. get_package() now validates existing files, deletes 0-byte files before downloading, logs fetch attempts, and proceeds with download when needed.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant DownloadMgr as Download Manager
    participant FS as File System
    participant Server

    Caller->>DownloadMgr: request download(package)
    DownloadMgr->>FS: stat final file & read ETAG (if exists)
    DownloadMgr->>Server: HEAD -> remote size & ETAG
    alt Local matches remote (size & ETAG)
        DownloadMgr-->>Caller: success (skip download)
    else
        DownloadMgr->>FS: remove/cleanup final or existing `.part`
        DownloadMgr->>FS: open/create `.part` (store/read ETAG)
        loop fetch chunks / resume
            DownloadMgr->>Server: GET (with Range if resuming)
            Server-->>DownloadMgr: chunk data
            DownloadMgr->>FS: write chunk to `.part`
        end
        alt all bytes received & verified
            rect rgb(200,240,200)
                DownloadMgr->>FS: verify `.part` size & ETAG
                DownloadMgr->>FS: rename `.part` -> final (atomic)
                DownloadMgr-->>Caller: success
            end
        else error / ETAG mismatch / force-stop
            rect rgb(250,220,220)
                DownloadMgr->>FS: cleanup `.part`
                DownloadMgr-->>Caller: error
            end
        end
    end
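
As a hedged sketch of the flow in this diagram (assuming the reqwest and anyhow crates, which are not confirmed by the PR; all names are illustrative):

use std::path::Path;
use tokio::io::AsyncWriteExt;

// Sketch only: HEAD for the remote size, optional Range resume into the .part
// file, then verification and an atomic rename. Error handling is simplified.
async fn fetch_resumable(url: &str, final_path: &Path) -> anyhow::Result<()> {
    let client = reqwest::Client::new();
    let part_path = format!("{}.part", final_path.display());

    // HEAD -> remote size (the ETAG handling from the diagram is omitted here).
    let head = client.head(url).send().await?;
    let remote_len: u64 = head
        .headers()
        .get(reqwest::header::CONTENT_LENGTH)
        .and_then(|v| v.to_str().ok()?.parse().ok())
        .unwrap_or(0);

    // Resume from however much of the .part file already exists.
    let offset = tokio::fs::metadata(&part_path)
        .await
        .map(|m| m.len())
        .unwrap_or(0);
    let mut req = client.get(url);
    if offset > 0 {
        req = req.header(reqwest::header::RANGE, format!("bytes={}-", offset));
    }

    let mut file = tokio::fs::OpenOptions::new()
        .create(true)
        .append(true)
        .open(&part_path)
        .await?;
    let mut resp = req.send().await?.error_for_status()?;
    while let Some(chunk) = resp.chunk().await? {
        file.write_all(&chunk).await?;
    }
    file.flush().await?;
    drop(file);

    // Verify before the atomic rename; on mismatch, discard the .part file.
    if tokio::fs::metadata(&part_path).await?.len() != remote_len {
        tokio::fs::remove_file(&part_path).await?;
        anyhow::bail!("size mismatch after download");
    }
    tokio::fs::rename(&part_path, final_path).await?;
    Ok(())
}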

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check: ❓ Inconclusive. No pull request description was provided by the author, making it impossible to assess relevance to the changeset. Resolution: add a pull request description explaining the problem, solution, and impact of the 0-byte download fix.
✅ Passed checks (1 passed)
  • Title check: ✅ Passed. The title 'fix: problems with 0 byte downloads' directly addresses the core issue being fixed (handling 0-byte download failures), making it clear and specific.
✨ Finishing touches
  • 📝 Generate docstrings

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI Agents
In @smithd/src/downloader/download.rs:
- Around line 165-180: The pre-download ETAG check relies on xattr::get and a
later tokio::fs::rename preserving extended attributes, which only holds when
the .part file and final file are on the same filesystem; document this
assumption or add an explicit same-filesystem check. Update the code around the
xattr::get(local_path, "user.etag") check to either (a) add a clear comment
stating that this relies on same-filesystem rename semantics for
tokio::fs::rename and that cross-filesystem moves will drop xattrs, or (b)
implement an explicit check comparing the device IDs of the .part file and final
path (use Metadata via std::os::unix::fs::MetadataExt::st_dev / st_ino or
equivalent) before trusting preserved xattrs and, if they differ, fall back to a
safe alternative (e.g., treat as not matching and proceed to download or
re-compute/store ETAG). Ensure references to local_path, etag, xattr::get and
tokio::fs::rename are updated accordingly.

In @smithd/src/utils/network.rs:
- Around line 128-143: The file uses tracing macros warn! and error! but only
info was imported; add the missing imports for warn and error from the tracing
crate (so the macros are in scope) to the top of smithd/src/utils/network.rs
where info is imported; ensure no other tracing macros are used without import
(check for debug!, trace!, etc.) and recompile to confirm the missing-import
errors are resolved, referencing the warn! usage in the package removal warning
and the error! usage within the tokio::fs::remove_file().await.inspect_err
closure.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 467cebf and ba1d059.

📒 Files selected for processing (2)
  • smithd/src/downloader/download.rs
  • smithd/src/utils/network.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (11)
smithd/src/utils/network.rs (2)

103-113: LGTM! Validation logic is sound.

The validation correctly identifies 0-byte files as invalid while treating non-existent files the same way (returning Ok(false)). The comment about files without etag being potentially valid provides useful context for the design decision.


145-155: LGTM! Download flow is clear.

The logging and download initiation logic is straightforward and correctly structured.

smithd/src/downloader/download.rs (9)

73-75: LGTM! Clean .part file setup.

The temporary file naming convention is clear and follows common patterns for atomic file operations.


82-85: LGTM! Appropriate cleanup on recursion limit.

The cleanup of the .part file when the recursion limit is exceeded is correct, and using inspect_err for logging with let _ = is the right pattern for best-effort cleanup.


112-118: LGTM! Resume detection correctly targets .part file.

The change to check for the .part file's existence and size is appropriate for the atomic download workflow.


186-230: LGTM! Resume logic with ETAG validation is robust.

The implementation correctly handles multiple scenarios (see the sketch after this list):

  • Early finalization when .part file is complete (lines 191-198)
  • ETAG validation to ensure file integrity
  • Appropriate cleanup on mismatches or missing ETAGs
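
A hedged sketch of that decision flow, assuming the xattr crate the downloader already uses; the helper names and the ResumePlan enum are illustrative, not the PR's:

use std::path::Path;

enum ResumePlan {
    FinalizeExisting,
    ResumeFrom(u64),
    StartFresh,
}

// Illustrative only: decide what to do with an existing .part file given the
// remote size and ETag reported by the server.
async fn plan_resume(
    part_path: &Path,
    remote_len: u64,
    remote_etag: Option<&str>,
) -> std::io::Result<ResumePlan> {
    let part_len = match tokio::fs::metadata(part_path).await {
        Ok(m) => m.len(),
        Err(_) => return Ok(ResumePlan::StartFresh), // no .part on disk yet
    };

    // The stored ETag travels with the .part file as a user xattr.
    let stored_etag = xattr::get(part_path, "user.etag")
        .ok()
        .flatten()
        .map(|raw| String::from_utf8_lossy(&raw).into_owned());

    match (stored_etag.as_deref(), remote_etag) {
        // ETags agree: finalize a complete .part, or resume a shorter one.
        (Some(a), Some(b)) if a == b => {
            if part_len == remote_len {
                Ok(ResumePlan::FinalizeExisting)
            } else if part_len < remote_len {
                Ok(ResumePlan::ResumeFrom(part_len))
            } else {
                // An oversized .part cannot be valid; discard it.
                tokio::fs::remove_file(part_path).await?;
                Ok(ResumePlan::StartFresh)
            }
        }
        // Mismatched or missing ETag: the .part cannot be trusted.
        _ => {
            tokio::fs::remove_file(part_path).await?;
            Ok(ResumePlan::StartFresh)
        }
    }
}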

267-281: LGTM! File opening logic correctly targets .part file.

Both the resume (append) and new download (create) paths correctly operate on the .part file, with proper ETAG attribute initialization.


305-312: LGTM! Proper cleanup on force_stop.

The sequence correctly flushes and drops the file handle before cleaning up the .part file, ensuring no resource leaks.


329-338: LGTM! Consistent cleanup on chunk errors.

The cleanup logic mirrors the force_stop path and correctly handles file closure before removal.


359-397: LGTM! Atomic finalization with proper verification.

The verification and finalization logic correctly:

  • Validates file size against expected content length
  • Handles 0-byte downloads with retry (addressing the PR objective)
  • Performs atomic finalization via rename after the file handle is closed
  • Provides clear error messages for debugging

343-344: LGTM! Explicit file handle cleanup before rename.

Dropping the file handle after flushing and before the rename operation is important for ensuring the file is fully closed, especially on Windows where open file handles can prevent rename operations.
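
A minimal sketch of that ordering (illustrative names; assumes tokio):

use tokio::io::AsyncWriteExt;

// Flush buffered bytes, close the handle, then rename. The explicit drop
// matters most on platforms (notably Windows) where an open handle can block
// the rename.
async fn close_then_rename(
    mut file: tokio::fs::File,
    part_path: &std::path::Path,
    final_path: &std::path::Path,
) -> std::io::Result<()> {
    file.flush().await?; // every buffered byte reaches the .part file
    drop(file);          // release the handle before touching the path
    tokio::fs::rename(part_path, final_path).await
}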

Comment on lines +165 to +180
// Check if final file already exists and is complete
if let Ok(final_metadata) = tokio::fs::metadata(local_path).await {
    if final_metadata.len() == content_length {
        if let Ok(Some(stored)) = xattr::get(local_path, "user.etag") {
            let stored_etag = String::from_utf8_lossy(&stored);
            if let Some(current_etag) = etag {
                if stored_etag.as_ref() == current_etag {
                    info!("File already fully downloaded at {}", local_path);
                    stats.success = true;
                    stats.bytes_downloaded = content_length;
                    return Ok(stats);
                }
            }
        }
    }
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cd /repo && wc -l smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 113


🏁 Script executed:

cd /repo && sed -n '160,200p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 113


🏁 Script executed:

cd /repo && sed -n '270,290p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 113


🏁 Script executed:

cd /repo && sed -n '385,400p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 113


🏁 Script executed:

cd /repo && grep -n "xattr::" smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 113


🏁 Script executed:

wc -l smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 95


🏁 Script executed:

sed -n '160,200p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 1788


🏁 Script executed:

sed -n '270,290p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 711


🏁 Script executed:

sed -n '385,400p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 611


🏁 Script executed:

grep -n "xattr::" smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 287


🏁 Script executed:

find smithd -name "*.rs" -type f | xargs grep -l "test.*download\|download.*test" | head -5

Repository: Teton-ai/smith

Length of output: 89


🏁 Script executed:

grep -n "TODO\|FIXME\|NOTE" smithd/src/downloader/download.rs | head -10

Repository: Teton-ai/smith

Length of output: 40


🏁 Script executed:

find . -name "*.md" -o -name "*.txt" | xargs grep -l "platform\|xattr\|filesystem" 2>/dev/null | head -5

Repository: Teton-ai/smith

Length of output: 40


🏁 Script executed:

grep -A5 "xattr" smithd/Cargo.toml

Repository: Teton-ai/smith

Length of output: 145


🏁 Script executed:

find smithd/src/downloader -name "*test*" -o -name "tests"

Repository: Teton-ai/smith

Length of output: 40


🏁 Script executed:

git log --oneline --all -- smithd/src/downloader/download.rs | head -10

Repository: Teton-ai/smith

Length of output: 321


🏁 Script executed:

web_search
Research: Does tokio::fs::rename preserve extended attributes (xattr) on Unix systems?

Repository: Teton-ai/smith

Length of output: 280


🏁 Script executed:

web_search
xattr Rust crate 1.6.1 documentation and behavior with file operations

Repository: Teton-ai/smith

Length of output: 152


🏁 Script executed:

sed -n '175,195p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 878


🏁 Script executed:

sed -n '265,285p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 676


🏁 Script executed:

grep -n "xattr" smithd/src/downloader/download.rs -B2 -A2

Repository: Teton-ai/smith

Length of output: 783


🌐 Web query:

Does tokio::fs::rename preserve extended attributes xattr on Unix systems

💡 Result:

Yes. tokio::fs::rename is just an async wrapper around std::fs::rename (it calls std::fs::rename), which on Unix maps to the rename(2) syscall that moves the directory entry to the same inode. Extended attributes (xattr) are stored on the inode, so a same-filesystem rename preserves them. (docs.rs)


🌐 Web query:

xattr Rust crate behavior with file rename operations POSIX

💡 Result:

Short answer: The Rust xattr crates (xattr / xattrs / extattr) expose the platform's extended-attribute syscalls and do not implement special rename semantics themselves — xattrs are tied to the filesystem inode, so a POSIX-style rename() that keeps the same inode (rename within the same filesystem/path) preserves xattrs; moving/renaming that creates a new inode (e.g., copy-then-remove across filesystems) will not preserve them unless you explicitly copy the xattrs. See the crate docs for the API (path vs. File/FileExt) and the OS manual describing xattr behavior and filesystem limits. [1][2][3]

Sources:

  • xattr crate docs (API note: works on paths and File/FileExt). [1]
  • Linux xattr manual: xattrs are per-file (inode) and not part of POSIX; filesystem-dependent behavior and limits. [2]
  • Practical notes: mv preserves xattrs if target filesystem supports them; copying between filesystems may lose xattrs unless explicitly preserved. [3]

🏁 Script executed:

sed -n '150,410p' smithd/src/downloader/download.rs | head -100

Repository: Teton-ai/smith

Length of output: 3983


🏁 Script executed:

sed -n '1,50p' smithd/src/downloader/download.rs

Repository: Teton-ai/smith

Length of output: 1316


Extended attributes are preserved during same-filesystem rename; document the filesystem requirement.

The pre-download check reads the ETAG from the final file's extended attributes (line 168), relying on the fact that tokio::fs::rename preserves xattr when moving the .part file to the final filename (lines 194, 392). On Unix systems, same-filesystem rename(2) operations do preserve xattr since they maintain the same inode. However, if the .part file and final file are on different filesystems, extended attributes will not be preserved and the check will fail.

Since .part is typically derived from the same path as the final filename, this is unlikely in normal operation. Consider adding a comment or documentation noting that this code assumes same-filesystem operations, or consider adding an explicit check if cross-filesystem scenarios need to be supported.
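
If the explicit check were preferred over a comment, a minimal Unix-only sketch (hypothetical helper, assuming std::os::unix::fs::MetadataExt as the prompt below suggests) could look like:

#[cfg(unix)]
async fn same_filesystem(
    part_path: &std::path::Path,
    final_dir: &std::path::Path,
) -> std::io::Result<bool> {
    use std::os::unix::fs::MetadataExt;
    // Compare device IDs: rename(2) only keeps the same inode (and therefore
    // its xattrs) when source and destination are on the same filesystem.
    // Pass the destination's parent directory, since the final file may not
    // exist yet.
    let a = tokio::fs::metadata(part_path).await?;
    let b = tokio::fs::metadata(final_dir).await?;
    Ok(a.dev() == b.dev())
}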

🤖 Prompt for AI Agents
In @smithd/src/downloader/download.rs around lines 165 - 180, The pre-download
ETAG check relies on xattr::get and a later tokio::fs::rename preserving
extended attributes, which only holds when the .part file and final file are on
the same filesystem; document this assumption or add an explicit same-filesystem
check. Update the code around the xattr::get(local_path, "user.etag") check to
either (a) add a clear comment stating that this relies on same-filesystem
rename semantics for tokio::fs::rename and that cross-filesystem moves will drop
xattrs, or (b) implement an explicit check comparing the device IDs of the .part
file and final path (use Metadata via std::os::unix::fs::MetadataExt::st_dev /
st_ino or equivalent) before trusting preserved xattrs and, if they differ, fall
back to a safe alternative (e.g., treat as not matching and proceed to download
or re-compute/store ETAG). Ensure references to local_path, etag, xattr::get and
tokio::fs::rename are updated accordingly.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI Agents
In @smithd/src/utils/network.rs:
- Around line 128-132: The info! log message in the method that checks
local_package_path (the branch using Self::validate_package_file) is misleading
because it says "downloader will check for resume" yet the function returns
immediately; update the log in that branch (the block that calls
Self::validate_package_file(&local_package_path).await?) to clearly state the
existing valid package will be used as-is (or that no downloader action will be
taken) so the message matches the actual control flow.
🧹 Nitpick comments (2)
smithd/src/utils/network.rs (2)

103-113: Consider simplifying return type or handling metadata errors explicitly.

The function returns Result<bool> but never returns an Err variant—all metadata errors are silently converted to Ok(false). This masks genuine I/O issues (permission denied, disk errors) that might warrant different handling than a missing file.

Consider either:

  1. Returning bool directly if errors should be treated as "invalid file"
  2. Propagating metadata errors to the caller for explicit handling
🔎 Option 1: Simplify to return bool
-    async fn validate_package_file(path: &std::path::Path) -> Result<bool> {
+    async fn validate_package_file(path: &std::path::Path) -> bool {
         // Check if file exists and has content
         let metadata = match tokio::fs::metadata(path).await {
             Ok(m) => m,
-            Err(_) => return Ok(false),
+            Err(_) => return false,
         };
 
         // Only reject obviously broken 0-byte files
         // Files without etag may still be valid (downloaded before .part implementation)
-        Ok(metadata.len() > 0)
+        metadata.len() > 0
     }

And update the caller on line 130:

-            if Self::validate_package_file(&local_package_path).await? {
+            if Self::validate_package_file(&local_package_path).await {
🔎 Option 2: Propagate metadata errors
     async fn validate_package_file(path: &std::path::Path) -> Result<bool> {
         // Check if file exists and has content
-        let metadata = match tokio::fs::metadata(path).await {
-            Ok(m) => m,
-            Err(_) => return Ok(false),
-        };
+        let metadata = tokio::fs::metadata(path).await?;
 
         // Only reject obviously broken 0-byte files
         // Files without etag may still be valid (downloaded before .part implementation)
         Ok(metadata.len() > 0)
     }

And update the caller to handle NotFound explicitly:

-            if Self::validate_package_file(&local_package_path).await? {
+            match Self::validate_package_file(&local_package_path).await {
+                Ok(true) => {
                     info!("Package already exists and is valid, downloader will check for resume");
                     return Ok(());
-            } else {
+                }
+                Ok(false) => {
                     warn!(
                         "Package file {} exists but is 0 bytes (broken download), removing for re-download",
                         package_name
                     );
                     // ... rest of removal logic
+                }
+                Err(e) if e.downcast_ref::<std::io::Error>().map(|io| io.kind()) == Some(std::io::ErrorKind::NotFound) => {
+                    // File doesn't exist, proceed with download
+                }
+                Err(e) => return Err(e), // Propagate other errors
+            }

134-142: Consider whether failed removal should block download.

If removing the invalid 0-byte file fails (line 140), the entire operation fails due to ?. In practice, the download might succeed by overwriting the file, making this error handling overly strict.

🔎 Alternative: Log error and proceed with download
                 tokio::fs::remove_file(&local_package_path)
                     .await
-                    .inspect_err(|e| error!("Failed to remove invalid package file: {}", e))?;
-                info!("Removed 0-byte package, proceeding with download");
+                    .inspect_err(|e| error!("Failed to remove invalid package file: {}, attempting download anyway", e))
+                    .ok();
+                info!("Proceeding with download (0-byte package removal {} )", 
+                      if tokio::fs::metadata(&local_package_path).await.is_err() { "succeeded" } else { "failed but continuing" });
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ba1d059 and 208db82.

📒 Files selected for processing (1)
  • smithd/src/utils/network.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (2)
smithd/src/utils/network.rs (2)

6-6: LGTM! Missing tracing imports are now resolved.

The addition of error and warn imports addresses the previous compilation failure. All tracing macros used in the file are now properly imported.


145-155: No action required - .part file handling is properly implemented.

The downloader implementation comprehensively addresses all concerns:

  1. Existing .part files are detected and validated via ETAG before resuming (lines 186-223)
  2. ETAG and size are validated before resume; if ETAG mismatches or server provides no ETag, the .part file is removed and download restarts
  3. Atomic rename operations finalize downloads (lines 391-392), and ETAG-based validation prevents race conditions—even stale .part files are correctly detected and either resumed or discarded based on ETAG validity

The downloader uses extended attributes (xattr) to persistently store file ETags, enabling reliable validation across invocations.
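
For reference, persisting and reading an ETag with the xattr crate looks roughly like the sketch below (the user.etag key matches the snippet quoted earlier; the helper names are illustrative):

// Store the server-reported ETag on the file's inode so it survives restarts
// and, on a same-filesystem rename, moves with the file.
fn store_etag(path: &std::path::Path, etag: &str) -> std::io::Result<()> {
    xattr::set(path, "user.etag", etag.as_bytes())
}

// Read it back; None if the attribute is missing or unreadable.
fn load_etag(path: &std::path::Path) -> Option<String> {
    xattr::get(path, "user.etag")
        .ok()
        .flatten()
        .map(|raw| String::from_utf8_lossy(&raw).into_owned())
}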

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI Agents
In @smithd/src/utils/network.rs:
- Around line 128-136: Whitespace/formatting in the block handling
local_package_path validation doesn't match rustfmt rules; run cargo fmt to
reformat the file and ensure the if/else and warn! macro lines conform to style,
then re-run cargo fmt -- --check in CI. Specifically reformat the block around
local_package_path.exists(),
Self::validate_package_file(&local_package_path).await?, and the warn!
invocation referencing package_name so rustfmt will fix the alignment and
spacing issues before merging.
🧹 Nitpick comments (1)
smithd/src/utils/network.rs (1)

138-140: Consider more graceful error handling for file removal.

If remove_file fails (e.g., due to permissions or file locks), the function immediately returns an error and doesn't attempt the download. Consider whether continuing with the download attempt (which might overwrite the file) or returning a more specific error message would be more robust.

🔎 Alternative: continue with download on removal failure
-                tokio::fs::remove_file(&local_package_path)
-                    .await
-                    .inspect_err(|e| error!("Failed to remove invalid package file: {}", e))?;
+                if let Err(e) = tokio::fs::remove_file(&local_package_path).await {
+                    warn!("Could not remove invalid package file ({}), download will attempt to overwrite", e);
+                }
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 208db82 and fdfc5a6.

📒 Files selected for processing (1)
  • smithd/src/utils/network.rs
🧰 Additional context used
🪛 GitHub Actions: test
smithd/src/utils/network.rs

[error] 128-136: cargo fmt --check failed: code formatting mismatch detected. The log shows a multi-line formatting change for a log statement. Run 'cargo fmt' (or 'cargo fmt -- --check' again after formatting) to fix the code style issues.


[info] 128-128: Diff in formatting: original single-line info! macro changed to a multi-line invocation.

🔇 Additional comments (3)
smithd/src/utils/network.rs (3)

6-6: LGTM! Missing tracing imports addressed.

The addition of error and warn to the tracing imports correctly addresses the previous review feedback.


103-113: LGTM! Validation logic is sound.

The validate_package_file helper correctly checks for file existence and non-zero size. The approach of only rejecting 0-byte files (rather than requiring ETAGs) maintains backward compatibility with packages downloaded before the .part implementation.


145-155: LGTM! Download path is clear and correct.

The comment and logging accurately reflect the control flow. At this point, the file either doesn't exist or was invalid and removed, so the download proceeds as expected.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
smithd/src/utils/network.rs (1)

103-113: Consider differentiating error types in validation.

The function returns Ok(false) for all metadata errors (line 107), which conflates "file does not exist" with permission errors or other I/O failures. While downstream error handling will eventually surface issues during file removal (line 141-143), explicitly distinguishing error types would improve diagnostics.

🔎 Optional: Differentiate "not found" from other errors
 async fn validate_package_file(path: &std::path::Path) -> Result<bool> {
     // Check if file exists and has content
     let metadata = match tokio::fs::metadata(path).await {
         Ok(m) => m,
-        Err(_) => return Ok(false),
+        Err(e) if e.kind() == std::io::ErrorKind::NotFound => return Ok(false),
+        Err(e) => return Err(e.into()),
     };
 
     // Only reject obviously broken 0-byte files
     // Files without etag may still be valid (downloaded before .part implementation)
     Ok(metadata.len() > 0)
 }

This allows genuine I/O errors to propagate while treating "not found" as a validation failure.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fdfc5a6 and 8f3dc37.

📒 Files selected for processing (1)
  • smithd/src/utils/network.rs
🔇 Additional comments (3)
smithd/src/utils/network.rs (3)

148-158: LGTM! Download flow is clear and well-structured.

The comment and log message clearly indicate this path executes for files that don't exist or were removed due to validation failure. The download call passes the final package path to download_blocking, which aligns with the architecture where the downloader itself handles .part temporary files and atomic operations (as mentioned in the AI summary).


6-6: LGTM! Missing tracing macros now imported.

The addition of error and warn to the tracing imports directly addresses the previous critical issue. Both macros are now properly imported and used in the validation logic throughout the file.


128-146: Excellent validation and cleanup logic!

The implementation correctly:

  • Validates existing packages before downloading (performance optimization)
  • Removes 0-byte broken files (directly addresses the PR objective)
  • Logs clearly at each decision point with appropriate levels (info for success, warn for removal)
  • Handles removal errors gracefully with inspect_err before propagating

The log message on lines 132-133 properly reflects the early-return behavior (addresses previous review feedback).

Regarding the TOCTOU window: the .part file implementation in the downloader provides adequate protection. Downloads are written atomically to .package_name.part and only renamed to the final filename after verification and successful completion. Even if a race condition causes the validated file to be deleted before use, the downloader will simply re-download it using the same atomic .part mechanism, preventing data corruption.

Member

@Alw3ys Alw3ys left a comment

LGTM
