
feat(drive): add +push shortcut for one-way local → Drive mirror#709

Open
fangshuyu-768 wants to merge 1 commit into main from feat/drive-push

Conversation


@fangshuyu-768 fangshuyu-768 commented Apr 29, 2026

Summary

Adds drive +push, a one-way local → Drive mirror command. It walks --local-dir, recursively lists --folder-token, and mirrors the local subdirectory structure (including empty dirs) onto Drive; for each rel_path it uploads new files and, per --if-exists, either overwrites already-present files (carrying the existing file_token on upload_all for protocol-level versioning) or skips them. With --delete-remote --yes, any Drive type=file entry absent locally is removed; online docs and shortcuts are never touched.

This is the third of the three P1 sync-disk commands (4.1 +status is in #692; 4.2 +pull is in #696; 4.3 +push is this PR).

Output shape

{
  "summary": {
    "uploaded": 0,
    "skipped": 0,
    "failed": 0,
    "deleted_remote": 0
  },
  "items": [
    {"rel_path": "...", "file_token": "...", "action": "folder_created"},
    {"rel_path": "...", "file_token": "...", "action": "uploaded",       "size_bytes": 0},
    {"rel_path": "...", "file_token": "...", "action": "overwritten",    "version": "...", "size_bytes": 0},
    {"rel_path": "...", "file_token": "...", "action": "skipped",        "size_bytes": 0},
    {"rel_path": "...",                       "action": "failed",        "size_bytes": 0, "error": "..."},
    {"rel_path": "...", "file_token": "...", "action": "deleted_remote"},
    {"rel_path": "...", "file_token": "...", "action": "delete_failed",  "error": "..."}
  ]
}

folder_created items do not bump summary.uploaded — that field counts files only. Pre-existing remote folders cache-hit and stay silent (no items[] entry).

Design notes

  • Overwrite via file_token on upload_all. The overwrite path passes the existing file_token as a form field on POST /open-apis/drive/v1/files/upload_all, and the response is expected to carry a non-empty version (or data_version) which is surfaced as items[].version. This mirrors the markdown +overwrite contract.
  • API rollout caveat. The file_token-on-upload_all overwrite parameter and the matching version field on the response may not be exposed on every Drive tenant yet. The implementation is written against the protocol; on tenants where the field has not shipped, --if-exists=overwrite surfaces a structured api_error per file (overwrite for "..." succeeded but no version was returned by upload_all) and the rest of the push (new uploads, folder creation, deletes) keeps running. Callers can downgrade to --if-exists=skip until the backend catches up. The unit test TestDrivePushOverwriteWithoutVersionFails pins this behavior.
  • --delete-remote is bound to --yes in Validate, same pattern as +pull's --delete-local: without --yes, the command refuses upfront — no listing, no upload, no delete — so a stray flag never silently deletes anything. With --yes, uploads run first, then any remote type=file absent locally is DELETEd.
  • type=file only. Lark native cloud docs (docx/sheet/bitable/mindnote/slides) and shortcuts are never uploaded against, overwritten, or deleted by --delete-remote. drivePushListRemote filters them out of remoteFiles, so they are unreachable from both the upload loop (no overwrite) and the delete loop (not iterated). When a local regular file shares a rel_path with a native cloud doc, the local file is uploaded as a sibling type=file and the native doc is left untouched — Drive permits same-name entries of different types in one folder. Pinned by TestDrivePushDeleteRemoteSkipsOnlineDocs and TestDrivePushUploadsSiblingWhenRemoteSameNameIsNativeDoc, and verified live against a folder pre-populated with a native sheet and docx.
  • Local directory structure is mirrored (including empty dirs) via POST /open-apis/drive/v1/files/create_folder. A per-execution folderCache is seeded from the remote listing so existing sub-folders are reused; newly created folders surface as items[].action="folder_created".
  • Token preservation on overwrite failure. The overwrite-failure branch in Execute prefers the file_token returned by the API and falls back to entry.FileToken only when the response carried none — this matters for the missing-version partial-success case where bytes already landed and the response carries a non-empty token alongside the error.
  • --local-dir is funneled through validate.SafeLocalFlagPath then walked from a canonical absolute root, same threat model as feat(drive): add +status shortcut for content-hash diff #692/feat(drive): add +pull shortcut for one-way Drive → local mirror #696. WalkDir's default child-symlink policy keeps traversal inside the validated subtree.
  • Scopes: drive:drive.metadata:readonly + drive:file:upload + space:folder:create. The broader drive:drive is disabled by enterprise policy in some tenants; this narrower set follows the precedent set by feat(drive): add +status shortcut for content-hash diff #692/feat(drive): add +pull shortcut for one-way Drive → local mirror #696. space:document:delete is intentionally NOT in the default set even though --delete-remote needs it: the framework's pre-flight scope check (runner.go: checkShortcutScopes) runs unconditionally before Validate / dry-run, so declaring it would block every plain push and dry-run for callers that haven't granted delete. The skill ref calls out the scope so users running mirror sync can grant it upfront and avoid a half-synced state.
  • Multipart for files >20MB falls back to the existing 3-step upload_prepare / upload_part / upload_finish path. The fd is opened once before the chunking loop and reused via io.NewSectionReader per block (saves one Open + path validation per chunk vs. the legacy reopen-per-block pattern in drive_upload.go). The current multipart upload_finish contract does not return version, so items[].version is empty for large-file overwrites; the file_token and action="overwritten" are still produced correctly.
  • Listing helper is duplicated locally (drivePushListRemote) instead of being lifted from drive_status.go / drive_pull.go because both feat(drive): add +status shortcut for content-hash diff #692 and feat(drive): add +pull shortcut for one-way Drive → local mirror #696 are still open. Once they merge, all three copies should be consolidated. TODO marker left in drive_push.go.

Test plan

Static checks

  • go build ./... clean
  • go vet ./... clean
  • gofmt clean
  • golangci-lint run --new-from-rev=origin/main ./shortcuts/drive/... — 0 issues (only the same nolint:forbidigo pattern used in +status/+pull for the local walker, with comment)
  • node scripts/skill-format-check/index.js skills passes
  • go test $(go list ./... | grep -v cli_e2e) -count=1 — all packages green

Unit tests (go test ./shortcuts/drive/... -run TestDrivePush)

  • TestDrivePushUploadsAndCreatesParents — happy path with a nested subfolder; verifies create_folder runs once and both files reach Drive
  • TestDrivePushOverwritesWhenIfExistsOverwrite — overwrite path sends file_token in the form body and propagates version to items[].version
  • TestDrivePushSkipsWhenIfExistsSkip — pre-existing remote file is preserved without an upload_all call; summary.skipped counts it
  • TestDrivePushDeleteRemoteRequiresYes — --delete-remote without --yes is rejected upfront
  • TestDrivePushDeleteRemoteSkipsOnlineDocs — docx / shortcut entries that share a rel_path with a missing local file are NOT deleted; only type=file orphans are
  • TestDrivePushRejectsAbsoluteLocalDir — error message references --local-dir
  • TestDrivePushRejectsBadIfExistsEnum — framework enum guard kicks in
  • TestDrivePushOverwriteWithoutVersionFails — pins the behavior on tenants where the overwrite-version field hasn't shipped yet (structured api_error per item, summary.failed=1)
  • TestDrivePushReusesExistingRemoteFolder — when a remote folder exists at the target rel_path, upload parent_node reuses its token instead of calling create_folder
  • TestDrivePushUploadsSiblingWhenRemoteSameNameIsNativeDoc — a remote Lark native cloud doc (sheet/docx/...) at the same rel_path as a local regular file is never overwritten and never reached by --delete-remote; the local file is uploaded as a sibling type=file
  • TestDrivePushMirrorsEmptyDirectories — empty local dirs surface as folder_created instead of being silently dropped
  • TestDrivePushUploadsLargeFileViaMultipart — exercises the 3-step upload_prepare / two upload_part / upload_finish flow on a file one byte over the single-part threshold; pins the shared-fd refactor and asserts upload_prepare body omits file_token for fresh uploads
  • TestDrivePushHelpersRelPath — pins the path utilities (drivePushParentRel / drivePushSplitRel / drivePushJoinRel)
  • TestShortcutsIncludesExpectedCommands updated to require +push

E2E dry-run tests (tests/cli_e2e/drive/drive_push_dryrun_test.go)

  • TestDrive_PushDryRun — request shape: GET /open-apis/drive/v1/files + folder_token + description includes both list and upload phrasing
  • TestDrive_PushDryRunRejectsAbsoluteLocalDir — Validate runs under --dry-run; --local-dir surfaced in error
  • TestDrive_PushDryRunRejectsDeleteRemoteWithoutYes — the --delete-remote guard works under dry-run too
  • TestDrive_PushDryRunRejectsMissingFolderToken — cobra required-flag enforcement

Manual end-to-end against a real Drive folder

Six-phase walkthrough on a freshly created Drive folder with a fixture of 6 files across 2 nested directories + 2 empty directories (one of which is itself a nested empty dir), including a multilingual filename 中文.txt, a binary file with PNG header bytes, and a 0-byte file.

  1. Fresh push. Result: 5 files uploaded, 4 folders created — folder_created surfaces every newly minted directory including the nested empty empty-parent/empty-child. The 0-byte file is rejected by the Drive backend with 1061002 params error; drive +upload against the same backend returns the identical error, so this is a backend constraint rather than a +push regression. The failure surfaces in items[] as failed with the full structured error and log_id; other files are unaffected and summary.failed=1.
  2. Idempotent re-push (--if-exists=skip). No local changes. Result: skipped=5, uploaded=0. Empty folders cache-hit silently (no re-create, no items[] entry), confirming the folderCache contract holds across runs.
  3. Overwrite. Locally edit docs/changelog.md and re-push without --if-exists. Result: all 5 type=file entries with action="overwritten"; each surfaces a non-empty distinct version field, and every file_token is preserved (the overwrite-token-stability invariant is verified live, not just unit-tested). drive +download of the modified file round-trips the new content byte-for-byte.
  4. Mirror-sync safety guard. Run --delete-remote without --yes. Result: rejected upfront in Validate, zero API calls issued.
  5. Mirror sync (--delete-remote --yes). Locally remove readme.txt, add docs/new-doc.md, re-push. Result: summary.uploaded=5 (1 new + 4 overwrites), summary.deleted_remote=1 (readme.txt), summary.failed=0. A follow-up drive files list confirms readme.txt is gone from remote and docs/new-doc.md is present, alongside the 4 overwritten files and the 4 mirrored folders.
  6. Dry-run against the real folder. Renders the standard GET /files plan envelope; zero side-effecting requests issued.

Manual end-to-end with native cloud documents present in the target folder

Separate live walkthrough on a folder pre-populated with a Lark native sheet (titled report, type=sheet) and a Lark native docx (titled minutes, type=docx), both imported via drive +import. Confirms the unit-test mock string values (type: "sheet" / type: "docx") match what GET /open-apis/drive/v1/files actually returns, and pins the same-name-conflict path against a real backend.

  1. Push with same-name conflicts. Local fixture: data.csv, notes.md (no remote conflict) + report and minutes (regular files whose names collide with the native sheet and docx). Result: summary.uploaded=4, failed=0; all four files emerge as action="uploaded" with brand-new file_tokens, none of which match the sheet's or docx's tokens. A follow-up drive files list shows the folder now holds 6 entries — the original sheet (token unchanged) + the original docx (token unchanged) + 4 newly uploaded type=file siblings, including two same-named pairs (report-sheet alongside report-file, minutes-docx alongside minutes-file). Drive permits same-name siblings of different types.
  2. --delete-remote --yes with same-name conflict. Locally remove the regular report file and re-push with --delete-remote --yes. Result: summary.deleted_remote=1 deletes the type=file report (Zlrbb…) by token, the native sheet report (XiH2s…) is left strictly alone. The drive files list afterward confirms the sheet still has its original token, and the docx is also intact. Pins that the orphan-deletion loop iterates remoteFiles (type=file only) by token rather than by name.

Related

Summary by CodeRabbit

  • New Features

    • Added drive +push to mirror a local directory to a Drive folder (dry-run supported)
    • Creates missing remote folders, preserves empty-folder semantics, and uploads in deterministic order
    • Large-file multipart upload supported; overwrite vs skip via --if-exists
  • Bug Fixes / Safety

    • Validates local path is inside cwd and rejects unsafe inputs; protects native cloud docs
    • --delete-remote requires --yes
  • Documentation

    • User guide and reference for drive +push
  • Tests

    • New unit, e2e, and CLI tests covering behaviors and validations


coderabbitai Bot commented Apr 29, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a new drive +push shortcut that mirrors a local directory into a Drive folder: validates inputs, traverses local files/dirs, lists remote items, creates missing folders, uploads files (single-part or multipart) with overwrite/skip semantics, optionally deletes remote orphans, and emits a structured summary.

Changes

Cohort / File(s) Summary
Core Implementation
shortcuts/drive/drive_push.go
New DrivePush shortcut with flags, validation (safe paths, resource-name folder token), local tree walk, recursive remote listing, rel_path→folder_token caching, folder creation, sorted uploads (single-part or multipart), --if-exists handling (skip/overwrite with file_token), optional --delete-remote deletions, and structured summary/items output.
Unit Tests
shortcuts/drive/drive_push_test.go
Comprehensive tests covering mirroring flows, parent-folder creation, overwrite vs skip semantics (including version handling), multipart upload prepare/part/finish, deletion gating (--yes), native-doc/shortcut exclusions, error cases, and rel-path helper behavior.
Registry & Shortcut Tests
shortcuts/drive/shortcuts.go, shortcuts/drive/shortcuts_test.go
Registers DrivePush in Shortcuts() and updates tests to expect the "+push" command.
CLI E2E Dry-run Tests
tests/cli_e2e/drive/drive_push_dryrun_test.go
Adds --dry-run e2e tests validating emitted Drive HTTP calls and flag validation rejections (absolute --local-dir, missing --folder-token, --delete-remote without --yes).
Documentation
skills/lark-drive/SKILL.md, skills/lark-drive/references/lark-drive-push.md
Adds +push docs describing behavior, flags, guarded destructive delete, overwrite/skip semantics, output schema, multipart notes, OAuth scope guidance, and path/symlink constraints with examples.

Sequence Diagram

sequenceDiagram
    participant User as CLI User
    participant Validator as Input Validator
    participant LocalFS as Local Filesystem
    participant DriveAPI as Drive API
    participant Cache as Token Cache
    participant Output as Result Output

    User->>Validator: run `drive +push` with flags
    Validator->>Validator: validate & canonicalize inputs
    Validator->>LocalFS: walk local dir -> collect files & dirs
    Validator->>DriveAPI: list remote folder recursively
    DriveAPI-->>Validator: remote files & folders
    Validator->>Cache: resolve/create parent folder tokens as needed
    loop per local file (sorted rel_path)
        Validator->>DriveAPI: decide upload or skip/overwrite
        alt upload single-part
            Validator->>DriveAPI: `upload_all` (new or with `file_token`)
        else multipart
            Validator->>DriveAPI: `upload_prepare`
            loop upload parts
                Validator->>DriveAPI: `upload_part`
            end
            Validator->>DriveAPI: `upload_finish`
        end
        DriveAPI-->>Validator: confirm upload (may return version)
    end
    alt --delete-remote enabled
        Validator->>DriveAPI: delete orphan remote files (DELETE)
        DriveAPI-->>Validator: deletion responses
    end
    Validator->>Output: emit summary and items[]
    Output-->>User: display structured result

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

size/XL

Suggested reviewers

  • liujinkun2025
  • liangshuo-1

Poem

🐰 I hopped through folders, sniffed each file's name,
I packed them up tidy, no two paths the same.
I skipped with care, I overwrote on cue,
I only purged orphans when you said "do."
A rabbit's push—mirrored, neat, and true 🥕

🚥 Pre-merge checks (5 passed)

  • Title check ✅: The title "feat(drive): add +push shortcut for one-way local → Drive mirror" clearly and concisely summarizes the main change: adding a drive +push command for mirroring local directories to Drive.
  • Description check ✅: The description covers all required template sections: a detailed Summary, Changes listing the files modified, a thorough Test Plan (unit, e2e, and manual testing), and Related Issues linking to companion PRs.
  • Docstring coverage ✅: 89.66%, above the 80.00% threshold.
  • Linked Issues check ✅: Skipped; no linked issues were found for this pull request.
  • Out of Scope Changes check ✅: Skipped; no linked issues were found for this pull request.




github-actions Bot added labels domain/ccm (PR touches the ccm domain) and size/L (Large or sensitive change across domains or core paths) on Apr 29, 2026

codecov Bot commented Apr 29, 2026

Codecov Report

❌ Patch coverage is 65.81197% with 120 lines in your changes missing coverage. Please review.
✅ Project coverage is 64.15%. Comparing base (4181925) to head (74caa7f).

Files with missing lines Patch % Lines
shortcuts/drive/drive_push.go 65.71% 74 Missing and 46 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #709      +/-   ##
==========================================
+ Coverage   64.14%   64.15%   +0.01%     
==========================================
  Files         504      505       +1     
  Lines       44285    44636     +351     
==========================================
+ Hits        28406    28637     +231     
- Misses      13411    13485      +74     
- Partials     2468     2514      +46     



github-actions Bot commented Apr 29, 2026

🚀 PR Preview Install Guide

🧰 CLI update

npm i -g https://pkg.pr.new/larksuite/cli/@larksuite/cli@74caa7fe5f0a183f8534772753953c038426f961

🧩 Skill update

npx skills add larksuite/cli#feat/drive-push -y -g

fangshuyu-768 (Collaborator, Author) commented:

Addressed the three review findings in 2e77ce2:

1. Missing space:folder:create (drive_push.go:81, :409)

Added it to the default Scopes so the framework's pre-flight check (runner.go: checkShortcutScopes) catches missing grants before any upload, instead of failing mid-push with half-synced state. Comment in code explains the rationale.

2. space:document:delete blocking plain pushes / dry-run (drive_push.go:81)

Removed it from default Scopes. The pre-flight runs unconditionally before Validate / dry-run, so declaring it always meant any caller without delete grant couldn't even dry-run. Now --delete-remote --yes relies on the runtime DELETE call to surface missing_scope from the backend. The skill ref calls out the scope explicitly with a "grant it before you run mirror sync" recommendation so callers don't end up half-synced — the same UX win the pre-check would have provided, just shifted into docs.

3. Empty directories silently dropped (drive_push.go:274, :205)

Walker now also returns dirs []string (every non-root directory rel_path encountered, shallow-first). Execute pre-creates each one through the existing drivePushEnsureFolder cache before the upload loop runs. Newly created folders show up as items[].action = "folder_created"; existing remote folders stay silent (cache hit). summary.uploaded is unchanged — that field still only counts files. Pinned by the new TestDrivePushMirrorsEmptyDirectories.

All checks rerun locally:

  • go build / go vet / gofmt clean
  • golangci-lint run --new-from-rev=origin/main ./shortcuts/drive/... — 0 issues
  • go test ./shortcuts/drive/ -count=1 — 11 push tests + the rest of the drive package green
  • go test ./tests/cli_e2e/drive/ -run 'TestDrive_PushDryRun' — 4 dry-run tests green
  • skill format check passes

fangshuyu-768 (Collaborator, Author) commented:

Round 2 of review addressed in 80681b2:

1. Multipart re-opened the file per block (drive_push.go:547)

fileio.File already implements io.ReaderAt, so the helper now Opens once before the chunking loop, defers Close(), and feeds each block through a fresh io.NewSectionReader(partFile, offset, partSize). One Open + one path validation per multipart upload, instead of one per block. Note: the same pattern exists in drive_upload.go's multipart helper (where this copy originated). Left for a follow-up to avoid scope creep here, but happy to fold it in if preferred.

2. Failed overwrite dropped the returned file_token

Comment added explaining the overwrite-token-stability invariant (in-place overwrite preserves the existing token, so entry.FileToken is normally still authoritative). The failed branch now prefers the freshly returned token (token) and falls back to entry.FileToken — important for the missing-version partial-success path in drivePushUploadAll where the bytes already landed and the response carries a non-empty token alongside the error.

3. Multipart had no unit test

Added TestDrivePushUploadsLargeFileViaMultipart:

  • os.Truncate mints a file one byte over MaxDriveMediaUploadSinglePartSize so the second chunk exercises the "remaining < block_size" tail
  • Mocks upload_prepare (block_num=2) → two upload_part calls → upload_finish
  • Asserts items[].file_token=tok_large and that upload_prepare's body omits file_token (proves the fresh-upload branch was taken, not the overwrite branch)
  • The single-shared-fd refactor is implicitly exercised by the two-part flow — two upload_part calls with separate offsets through one fd

Local checks rerun: build, vet, gofmt, golangci-lint (0 issues), go test ./shortcuts/drive/ -count=1 (12 push tests + rest of drive green), full non-e2e suite green, push dry-run e2e green.

coderabbitai Bot left a comment

🧹 Nitpick comments (1)
shortcuts/drive/drive_push.go (1)

417-472: Consider adding a depth limit for recursive folder listing.

The recursive drivePushListRemote has no explicit depth bound. While Drive likely has practical limits, extremely deep folder structures could cause stack exhaustion. This mirrors existing behavior in +pull/+status, so it's not a blocker, but worth noting for future hardening.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@shortcuts/drive/drive_push.go`:
- Around line 417-472: The recursive function drivePushListRemote currently has
no depth limit; add a depth parameter (e.g., depth int) and a constant max depth
(e.g., drivePushMaxDepth) and check at the start of drivePushListRemote to stop
recursion when depth >= max (either return no children or a specific error);
when recursing into subfolders, pass depth+1; update all call sites (including
initial callers like the top-level drive push handler) to pass 0 as the initial
depth and ensure the new max depth constant is defined near related symbols
(drivePushListRemote, drivePushListPageSize, drivePushJoinRel).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5d0f3f0e-acb8-40f5-bce5-614f4c6e6d23

📥 Commits

Reviewing files that changed from the base of the PR and between a75aa8f and 80681b2.

📒 Files selected for processing (3)
  • shortcuts/drive/drive_push.go
  • shortcuts/drive/drive_push_test.go
  • skills/lark-drive/references/lark-drive-push.md

coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
shortcuts/drive/drive_push_test.go (1)

720-817: This test doesn't actually pin the single-open optimization.

The current assertions prove the multipart request sequence, but a regression back to reopen-per-block would still pass because nothing here counts or constrains runtime.FileIO().Open calls. If the shared-fd reuse is important, please instrument Open and assert it happens exactly once.

As per coding guidelines, "Every behavior change must have an accompanying test".

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@shortcuts/drive/drive_push_test.go`:
- Around line 337-339: The test is checking for the wrong JSON key; update the
assertion in drive_push_test.go to search for the serialized field name used by
drivePushItem: replace the check for `"token": "tok_link"` with a check for
`"file_token": "tok_link"` so the test will catch regressions that leak the
shortcut token into items[]. Locate the assertion that inspects the output
string (the existing strings.Contains(...) call) and change the substring to
`"file_token": "tok_link"`, keeping the same failure message and behavior.

---

Nitpick comments:
In `@shortcuts/drive/drive_push_test.go`:
- Around line 720-817: TestDrivePushUploadsLargeFileViaMultipart currently
verifies multipart flow but doesn't assert reuse of the shared fd; wrap or mock
runtime.FileIO().Open to count invocations before calling mountAndRunDrive and
assert the counter equals 1 after the run to ensure drivePushUploadMultipart
reused the open file descriptor; restore the original implementation afterwards
to avoid test pollution and fail the test if Open was called more than once.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d2eaada4-2bc4-4b15-bb8f-dc28d4872a3d

📥 Commits

Reviewing files that changed from the base of the PR and between a75aa8f and 881b877.

📒 Files selected for processing (3)
  • shortcuts/drive/drive_push.go
  • shortcuts/drive/drive_push_test.go
  • skills/lark-drive/references/lark-drive-push.md
✅ Files skipped from review due to trivial changes (1)
  • skills/lark-drive/references/lark-drive-push.md

Comment thread on shortcuts/drive/drive_push_test.go (outdated)

coderabbitai Bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
shortcuts/drive/drive_push_test.go (1)

337-339: ⚠️ Potential issue | 🟡 Minor

Fix the shortcut-token leak assertion key.

This check looks for "token": "tok_link", but items[] serialize tokens under file_token. As written, the test can miss the regression it intends to catch.

💡 Suggested change
-	if strings.Contains(out, `"token": "tok_link"`) {
+	if strings.Contains(out, `"file_token": "tok_link"`) {
 		t.Errorf("shortcut tok_link must not appear in items: %s", out)
 	}
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/cli_e2e/drive/drive_push_dryrun_test.go`:
- Around line 96-98: The test currently only asserts result.ExitCode != 0 for
Validate-stage rejects; change these to assert result.ExitCode == 2 (the
Validate failure code) and on failure include the combined stdout/stderr in the
failure message; locate the checks using the result variable in
drive_push_dryrun_test.go (the blocks that currently do `if result.ExitCode == 0
{ ... }` around the Validate path) and replace them with an explicit `if
result.ExitCode != 2 { t.Fatalf("expected exit=2 (Validate), got
exit=%d\noutput:\n%s", result.ExitCode, result.Stdout+result.Stderr) }` style
assertion so the test locks in Validate-classification and surfaces logs.

---

Duplicate comments:
In `@shortcuts/drive/drive_push_test.go`:
- Around line 337-339: The assertion is checking for the wrong JSON key: change
the contains check that looks for `"token": "tok_link"` to instead search for
the serialized key used in items, `"file_token": "tok_link"` (i.e., update the
string in the if-condition that inspects the test output variable `out` so the
test asserts that `"file_token": "tok_link"` does not appear).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e44d7529-b096-45b9-9dde-f97137ab341b

📥 Commits

Reviewing files that changed from the base of the PR and between 881b877 and f88f16e.

📒 Files selected for processing (7)
  • shortcuts/drive/drive_push.go
  • shortcuts/drive/drive_push_test.go
  • shortcuts/drive/shortcuts.go
  • shortcuts/drive/shortcuts_test.go
  • skills/lark-drive/SKILL.md
  • skills/lark-drive/references/lark-drive-push.md
  • tests/cli_e2e/drive/drive_push_dryrun_test.go
✅ Files skipped from review due to trivial changes (3)
  • shortcuts/drive/shortcuts_test.go
  • shortcuts/drive/shortcuts.go
  • skills/lark-drive/references/lark-drive-push.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • skills/lark-drive/SKILL.md

Comment on lines +96 to +98
if result.ExitCode == 0 {
t.Fatalf("absolute --local-dir must be rejected, got exit=0\nstdout:\n%s", result.Stdout)
}

⚠️ Potential issue | 🟡 Minor

Assert the exact Validate failure exit code (2).

Line 96, Line 135, and Line 170 only check exit != 0. These cases exercise Validate-stage rejection and should lock in exit == 2, otherwise regressions in failure classification can slip through.

💡 Suggested change
-	if result.ExitCode == 0 {
-		t.Fatalf("absolute --local-dir must be rejected, got exit=0\nstdout:\n%s", result.Stdout)
-	}
+	if result.ExitCode != 2 {
+		t.Fatalf("absolute --local-dir must be rejected with exit=2, got exit=%d\nstdout:\n%s\nstderr:\n%s", result.ExitCode, result.Stdout, result.Stderr)
+	}
...
-	if result.ExitCode == 0 {
-		t.Fatalf("--delete-remote without --yes must be rejected, got exit=0\nstdout:\n%s", result.Stdout)
-	}
+	if result.ExitCode != 2 {
+		t.Fatalf("--delete-remote without --yes must be rejected with exit=2, got exit=%d\nstdout:\n%s\nstderr:\n%s", result.ExitCode, result.Stdout, result.Stderr)
+	}
...
-	if result.ExitCode == 0 {
-		t.Fatalf("missing --folder-token must be rejected, got exit=0\nstdout:\n%s", result.Stdout)
-	}
+	if result.ExitCode != 2 {
+		t.Fatalf("missing --folder-token must be rejected with exit=2, got exit=%d\nstdout:\n%s\nstderr:\n%s", result.ExitCode, result.Stdout, result.Stderr)
+	}

Based on learnings: In larksuite/cli E2E dry-run tests, Validate-callback failures should assert specific exit code 2 and inspect combined stdout/stderr for the message.

Also applies to: 135-137, 170-172

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/cli_e2e/drive/drive_push_dryrun_test.go` around lines 96 - 98, The test
currently only asserts result.ExitCode != 0 for Validate-stage rejects; change
these to assert result.ExitCode == 2 (the Validate failure code) and on failure
include the combined stdout/stderr in the failure message; locate the checks
using the result variable in drive_push_dryrun_test.go (the blocks that
currently do `if result.ExitCode == 0 { ... }` around the Validate path) and
replace them with an explicit `if result.ExitCode != 2 { t.Fatalf("expected
exit=2 (Validate), got exit=%d\noutput:\n%s", result.ExitCode,
result.Stdout+result.Stderr) }` style assertion so the test locks in
Validate-classification and surfaces logs.

@fangshuyu-768
Collaborator Author

Round 3 of review addressed in e7d3ed7 (force-pushed; same single commit feat(drive): add +push shortcut for one-way local → Drive mirror).

1. Inline: shortcut-token assertion key was checking the wrong JSON name ("token": vs the actual serialized "file_token":), so it would silently pass any regression. Replaced with a bare tok_link substring check (matching the notes.docx assertion two lines above). Strictly stronger than the original suggestion: catches leaks anywhere in the JSON envelope, not just items[].file_token.

2. Multipart fd-reuse pin: added a countingOpenProvider wrapper around the test factory's FileIOProvider that tracks FileIO.Open invocations. TestDrivePushUploadsLargeFileViaMultipart now asserts opens == 1 after the run completes (block_num=2 in the stub, so a regression to reopen-per-block would land at 2). Verified the assertion actually fires by temporarily reverting the shared-fd refactor in drivePushUploadMultipart — the test failed with FileIO.Open invocations = 2, want exactly 1, and now passes again with the real implementation.

3. Recursive folder listing depth limit (nitpick): intentionally not changing here. The bot's own note says it's "not a blocker, mirrors existing behavior in +pull/+status" — divergence would just leave +push out of step with #692/#696, and Drive's practical folder nesting depth is far below any stack-exhaustion threshold. If we want to harden this, it should be a single follow-up PR that lifts the listing helper into a shared package and adds the bound consistently across all three commands.

Local checks rerun: build, vet, gofmt, golangci-lint (0 issues), full non-e2e suite green, go test ./shortcuts/drive/ -count=1 (13 push tests + rest of drive green).


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
tests/cli_e2e/drive/drive_push_dryrun_test.go (1)

96-98: ⚠️ Potential issue | 🟡 Minor

Assert exit == 2 for these Validate failures.

These are Validate-stage rejects, so exit != 0 is too loose. A regression that routes them through a different failure path would still pass.

Based on learnings: In larksuite/cli E2E dry-run tests, Validate-callback failures should assert specific exit code 2 and inspect combined stdout/stderr for the message.

Also applies to: 135-137, 170-172

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/cli_e2e/drive/drive_push_dryrun_test.go` around lines 96 - 98, This
test currently only asserts non-zero exit for Validate-stage rejects; change the
assertion to require exit == 2 and check the combined stdout+stderr contains the
expected validation message. Update the three occurrences that inspect
result.ExitCode (the one using result.Stdout at lines ~96-98 and the similar
checks at ~135-137 and ~170-172) to assert result.ExitCode == 2, and when
failing print both result.Stdout and result.Stderr (or their concatenation) so
the failure message and validation error are visible; keep references to the
existing variables result.ExitCode, result.Stdout, and result.Stderr when making
the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@shortcuts/drive/drive_push_test.go`:
- Around line 468-496: The test currently stubs the upload_all response with
"file_token": "tok_keep" which matches the original token and leaves the test
blind to regressions; change the stubbed token to a distinct value (e.g.
"tok_new") in the POST /open-apis/drive/v1/files/upload_all stub and add an
assertion that the CLI output contains the returned token in items[].file_token
(check stdout for the new token string) so the test pins the returned token and
fails if the code wrongly re-uses entry.FileToken on failure. Ensure you update
the stub value (currently "tok_keep") and add the corresponding stdout contains
check for that token.

In `@shortcuts/drive/drive_push.go`:
- Around line 572-592: The code currently aborts on non-zero larkCode and
returns empty tokens, dropping any returned data.file_token; extract data :=
result["data"].(map[string]interface{}) and token := common.GetString(data,
"file_token") before returning the ErrAPI in the larkCode != 0 branch so you
preserve and return token (e.g. return token, "", output.ErrAPI(...,
result["error"])); this keeps overwrite-token stability used by Execute and
avoids falling back to entry.FileToken when upload_all reports an error but
still provides a file_token.

---

Duplicate comments:
In `@tests/cli_e2e/drive/drive_push_dryrun_test.go`:
- Around line 96-98: This test currently only asserts non-zero exit for
Validate-stage rejects; change the assertion to require exit == 2 and check the
combined stdout+stderr contains the expected validation message. Update the
three occurrences that inspect result.ExitCode (the one using result.Stdout at
lines ~96-98 and the similar checks at ~135-137 and ~170-172) to assert
result.ExitCode == 2, and when failing print both result.Stdout and
result.Stderr (or their concatenation) so the failure message and validation
error are visible; keep references to the existing variables result.ExitCode,
result.Stdout, and result.Stderr when making the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 045af397-01c7-4fe0-91a8-ddaeada86346

📥 Commits

Reviewing files that changed from the base of the PR and between f88f16e and e7d3ed7.

📒 Files selected for processing (7)
  • shortcuts/drive/drive_push.go
  • shortcuts/drive/drive_push_test.go
  • shortcuts/drive/shortcuts.go
  • shortcuts/drive/shortcuts_test.go
  • skills/lark-drive/SKILL.md
  • skills/lark-drive/references/lark-drive-push.md
  • tests/cli_e2e/drive/drive_push_dryrun_test.go
✅ Files skipped from review due to trivial changes (2)
  • shortcuts/drive/shortcuts.go
  • skills/lark-drive/references/lark-drive-push.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • shortcuts/drive/shortcuts_test.go
  • skills/lark-drive/SKILL.md

Comment thread: shortcuts/drive/drive_push.go
Comment on lines +572 to +592
if larkCode := int(common.GetFloat(result, "code")); larkCode != 0 {
msg, _ := result["msg"].(string)
return "", "", output.ErrAPI(larkCode, fmt.Sprintf("upload failed: [%d] %s", larkCode, msg), result["error"])
}
data, _ := result["data"].(map[string]interface{})
token := common.GetString(data, "file_token")
if token == "" {
return "", "", output.Errorf(output.ExitAPI, "api_error", "upload failed: no file_token returned")
}
version := common.GetString(data, "version")
if version == "" {
// Some backends return the version under data_version; accept either
// per the markdown +overwrite contract.
version = common.GetString(data, "data_version")
}
if existingToken != "" && version == "" {
// The protocol guarantees a non-empty version on overwrite. If the
// deployed backend hasn't shipped the field yet we surface the gap
// rather than report a phantom success — callers can downgrade to
// --if-exists=skip in the meantime.
return token, "", output.Errorf(output.ExitAPI, "api_error", "overwrite for %q succeeded but no version was returned by upload_all", file.RelPath)

⚠️ Potential issue | 🟠 Major

Preserve file_token on non-zero upload_all responses.

If the backend returns code != 0 and data.file_token, this path drops the token and the caller falls back to the stale entry.FileToken. That defeats the overwrite-error token-stability handling added in Execute.

Suggested fix
  var result map[string]interface{}
  if err := json.Unmarshal(apiResp.RawBody, &result); err != nil {
  	return "", "", output.Errorf(output.ExitAPI, "api_error", "upload failed: invalid response JSON: %v", err)
  }
+ data, _ := result["data"].(map[string]interface{})
+ token := common.GetString(data, "file_token")
  if larkCode := int(common.GetFloat(result, "code")); larkCode != 0 {
  	msg, _ := result["msg"].(string)
- 	return "", "", output.ErrAPI(larkCode, fmt.Sprintf("upload failed: [%d] %s", larkCode, msg), result["error"])
+ 	return token, "", output.ErrAPI(larkCode, fmt.Sprintf("upload failed: [%d] %s", larkCode, msg), result["error"])
  }
- data, _ := result["data"].(map[string]interface{})
- token := common.GetString(data, "file_token")
  if token == "" {
  	return "", "", output.Errorf(output.ExitAPI, "api_error", "upload failed: no file_token returned")
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@shortcuts/drive/drive_push.go` around lines 572 - 592, The code currently
aborts on non-zero larkCode and returns empty tokens, dropping any returned
data.file_token; extract data := result["data"].(map[string]interface{}) and
token := common.GetString(data, "file_token") before returning the ErrAPI in the
larkCode != 0 branch so you preserve and return token (e.g. return token, "",
output.ErrAPI(..., result["error"])); this keeps overwrite-token stability used
by Execute and avoids falling back to entry.FileToken when upload_all reports an
error but still provides a file_token.

Mirrors a local directory onto a Drive folder: walks --local-dir,
recursively lists --folder-token, mirrors local subdirectory structure
(including empty dirs) onto Drive via create_folder, and for each
rel_path uploads new files, overwrites already-present files, or skips
them per --if-exists. With --delete-remote --yes, any Drive type=file
entry absent locally is removed; Lark native cloud docs (docx/sheet/
bitable/mindnote/slides) and shortcuts are never overwritten or deleted.

Overwrite hits POST /open-apis/drive/v1/files/upload_all with the
existing file_token in the form body and the response's `version` is
propagated to items[].version, mirroring the markdown +overwrite
contract. Files >20MB fall back to the 3-step
upload_prepare/upload_part/upload_finish path with a single shared fd
reused via io.NewSectionReader per block.

Output is a {summary, items[]} envelope; items[].action is one of
uploaded / overwritten / skipped / folder_created / deleted_remote /
failed / delete_failed.

--delete-remote is bound to --yes upfront in Validate, same pattern as
+pull's --delete-local: a stray flag never silently deletes anything.
Path safety reuses the canonical-root walk + SafeInputPath mechanics
from the sibling +status / +pull commands.

Scopes: drive:drive.metadata:readonly + drive:file:upload +
space:folder:create. space:document:delete is intentionally NOT in the
default set — the framework's pre-flight scope check would otherwise
block plain pushes and dry-runs for callers that haven't granted delete;
--delete-remote --yes relies on the runtime DELETE call to surface
missing_scope. The skill ref calls out the scope so users running
mirror sync can grant it upfront.

13 unit tests cover the upload/overwrite/skip/delete matrix, online-doc
protection, same-name conflict between local file and native cloud doc,
empty-directory mirroring, multipart, scope/path validation, and helper
correctness. 4 dry-run e2e tests pin the request shape.

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (2)
shortcuts/drive/drive_push.go (1)

568-576: ⚠️ Potential issue | 🟠 Major

Preserve data.file_token on non-zero upload_all responses.

The larkCode != 0 branch still returns empty strings, so a partial overwrite that returns data.file_token loses the newer token and Line 244 falls back to the stale entry.FileToken. That breaks the token-stability handling this command is trying to preserve after partial-success overwrite errors.

Suggested fix
  var result map[string]interface{}
  if err := json.Unmarshal(apiResp.RawBody, &result); err != nil {
  	return "", "", output.Errorf(output.ExitAPI, "api_error", "upload failed: invalid response JSON: %v", err)
  }
+ data, _ := result["data"].(map[string]interface{})
+ token := common.GetString(data, "file_token")
  if larkCode := int(common.GetFloat(result, "code")); larkCode != 0 {
  	msg, _ := result["msg"].(string)
- 	return "", "", output.ErrAPI(larkCode, fmt.Sprintf("upload failed: [%d] %s", larkCode, msg), result["error"])
+ 	return token, "", output.ErrAPI(larkCode, fmt.Sprintf("upload failed: [%d] %s", larkCode, msg), result["error"])
  }
- data, _ := result["data"].(map[string]interface{})
- token := common.GetString(data, "file_token")
  if token == "" {
  	return "", "", output.Errorf(output.ExitAPI, "api_error", "upload failed: no file_token returned")
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@shortcuts/drive/drive_push.go` around lines 568 - 576, When handling non-zero
larkCode in the upload response, preserve and return any newer data.file_token
instead of returning empty strings; before calling output.ErrAPI in the larkCode
!= 0 branch, extract data := result["data"].(map[string]interface{}) (or use
common.GetString/common helper) and read fileToken :=
data["file_token"].(string) and return that token in the function's file-token
return position rather than "" so the caller (which currently falls back to
entry.FileToken) receives the updated token after partial-success overwrite
errors; update the larkCode != 0 return to include that extracted fileToken
alongside the error return.
tests/cli_e2e/drive/drive_push_dryrun_test.go (1)

96-98: ⚠️ Potential issue | 🟡 Minor

Assert Validate-stage failures with exit code 2.

These checks are too loose right now. All three cases are expected to fail before DryRun completes, so exit != 0 won't catch a regression where the failure surfaces through a different path or exit classification.

Suggested fix
-	if result.ExitCode == 0 {
-		t.Fatalf("absolute --local-dir must be rejected, got exit=0\nstdout:\n%s", result.Stdout)
-	}
+	if result.ExitCode != 2 {
+		t.Fatalf("absolute --local-dir must be rejected with exit=2, got exit=%d\nstdout:\n%s\nstderr:\n%s", result.ExitCode, result.Stdout, result.Stderr)
+	}
...
-	if result.ExitCode == 0 {
-		t.Fatalf("--delete-remote without --yes must be rejected, got exit=0\nstdout:\n%s", result.Stdout)
-	}
+	if result.ExitCode != 2 {
+		t.Fatalf("--delete-remote without --yes must be rejected with exit=2, got exit=%d\nstdout:\n%s\nstderr:\n%s", result.ExitCode, result.Stdout, result.Stderr)
+	}
...
-	if result.ExitCode == 0 {
-		t.Fatalf("missing --folder-token must be rejected, got exit=0\nstdout:\n%s", result.Stdout)
-	}
+	if result.ExitCode != 2 {
+		t.Fatalf("missing --folder-token must be rejected with exit=2, got exit=%d\nstdout:\n%s\nstderr:\n%s", result.ExitCode, result.Stdout, result.Stderr)
+	}

Based on learnings: In larksuite/cli E2E dry-run tests, if a shortcut’s validation fails inside the Validate callback, the CLI must exit with code 2 and tests should assert that exact code while checking combined stdout/stderr for the message.

Also applies to: 135-137, 170-172

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/cli_e2e/drive/drive_push_dryrun_test.go` around lines 96 - 98, Update
the assertions that currently check only result.ExitCode != 0 to require the
specific Validate-stage exit code 2: replace the loose checks around
result.ExitCode (in drive_push_dryrun_test.go where result is examined at the
three spots) with an assertion that result.ExitCode == 2, and also assert that
the combined output (result.Stdout+result.Stderr) contains the expected
validation failure message reported by the Validate callback for the DryRun
path; ensure you adjust the messages/assert helpers that reference result to
look at both Stdout and Stderr together and to expect exit code 2 for these
Validate-stage failures.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@shortcuts/drive/drive_push.go`:
- Around line 568-576: When handling non-zero larkCode in the upload response,
preserve and return any newer data.file_token instead of returning empty
strings; before calling output.ErrAPI in the larkCode != 0 branch, extract data
:= result["data"].(map[string]interface{}) (or use common.GetString/common
helper) and read fileToken := data["file_token"].(string) and return that token
in the function's file-token return position rather than "" so the caller (which
currently falls back to entry.FileToken) receives the updated token after
partial-success overwrite errors; update the larkCode != 0 return to include
that extracted fileToken alongside the error return.

In `@tests/cli_e2e/drive/drive_push_dryrun_test.go`:
- Around line 96-98: Update the assertions that currently check only
result.ExitCode != 0 to require the specific Validate-stage exit code 2: replace
the loose checks around result.ExitCode (in drive_push_dryrun_test.go where
result is examined at the three spots) with an assertion that result.ExitCode ==
2, and also assert that the combined output (result.Stdout+result.Stderr)
contains the expected validation failure message reported by the Validate
callback for the DryRun path; ensure you adjust the messages/assert helpers that
reference result to look at both Stdout and Stderr together and to expect exit
code 2 for these Validate-stage failures.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5b077c1e-4614-4a0d-a37a-5fa1d9f93c11

📥 Commits

Reviewing files that changed from the base of the PR and between e7d3ed7 and 74caa7f.

📒 Files selected for processing (7)
  • shortcuts/drive/drive_push.go
  • shortcuts/drive/drive_push_test.go
  • shortcuts/drive/shortcuts.go
  • shortcuts/drive/shortcuts_test.go
  • skills/lark-drive/SKILL.md
  • skills/lark-drive/references/lark-drive-push.md
  • tests/cli_e2e/drive/drive_push_dryrun_test.go
✅ Files skipped from review due to trivial changes (4)
  • shortcuts/drive/shortcuts_test.go
  • shortcuts/drive/shortcuts.go
  • skills/lark-drive/references/lark-drive-push.md
  • skills/lark-drive/SKILL.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • shortcuts/drive/drive_push_test.go


Labels

domain/ccm: PR touches the ccm domain
size/L: Large or sensitive change across domains or core paths
