perf(repo): add benchmark infrastructure with CodSpeed #98
zrosenbauer wants to merge 12 commits into main from
Conversation
Enables running the sync engine without starting dev server or building. Useful for CI pipelines and benchmarking. Co-Authored-By: Claude <noreply@anthropic.com>
Root-level benchmarks/ package following es-toolkit pattern. Not included in vitest.workspace.ts — runs via pnpm bench only. Co-Authored-By: Claude <noreply@anthropic.com>
Provides exec helper for CLI benchmarks, fixture path constants for examples/simple and examples/kitchen-sink, and a deterministic 250-file project generator for stress testing. Co-Authored-By: Claude <noreply@anthropic.com>
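A deterministic generator of this kind can be sketched roughly as follows; `makePage` and `generateFixtureSketch` are illustrative names, not the actual helpers/fixtures.ts API:

```typescript
import fs from 'node:fs'
import path from 'node:path'

// Illustrative sketch only: the real helpers/fixtures.ts API may differ.
// Content is derived purely from the index, so repeated runs produce
// byte-identical fixtures (a stable benchmark input).
const makePage = (idx: number): { slug: string; body: string } => ({
  slug: `page-${String(idx).padStart(3, '0')}`,
  body: `# Page ${idx}\n\nDeterministic content for page ${idx}.\n`,
})

const generateFixtureSketch = (dir: string, count: number): string[] => {
  fs.mkdirSync(dir, { recursive: true })
  return Array.from({ length: count }, (_, i) => {
    const page = makePage(i)
    const target = path.join(dir, `${page.slug}.md`)
    fs.writeFileSync(target, page.body)
    return target
  })
}
```

Because nothing here depends on timestamps or randomness, a 250-file stress fixture generated this way is identical across CI runs.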
Benchmarks shell out to the built zpress Node script against small (examples/simple), medium (examples/kitchen-sink), and large (250-file generated) fixture projects. Co-Authored-By: Claude <noreply@anthropic.com>
Imports sync() directly from @zpress/core for granular measurement without CLI/process overhead. Tests against all three fixture tiers. Co-Authored-By: Claude <noreply@anthropic.com>
Runs vitest benchmarks in walltime mode on codspeed-macro runner. Triggers on push to main, PRs, and manual dispatch. Co-Authored-By: Claude <noreply@anthropic.com>
- Generate large fixture inside repo (.bench-fixtures/) instead of /tmp/ so the CLI can resolve workspace deps - Add .bench-fixtures/ to .gitignore - Exclude benchmarks/ from oxlint (test infrastructure, not prod code) - Add superpowers output location to AGENTS.md - Apply formatter fixes to bench files Co-Authored-By: Claude <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds a new benchmarks workspace with Vitest benchmark suites (src/**/*.bench.ts) for CLI (build, setup, sync) and code sync, plus helper modules (helpers/exec.ts, helpers/fixtures.ts) that generate deterministic fixtures. Adds benchmarks/package.json, benchmarks/tsconfig.json, and benchmarks/vitest.config.ts. Updates root package.json with bench and bench:run scripts, adds .bench-fixtures/ to .gitignore, registers benchmarks in pnpm-workspace.yaml, adds a new sync CLI command, and a CI workflow .github/workflows/ci-benchmarks.yml.

🚥 Pre-merge checks: ✅ 3 passed
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmarks/src/cli/build.bench.ts`:
- Around line 16-27: The benchmarks reuse the same fixture directories (bench
calls invoking runCli with FIXTURES.small, FIXTURES.medium, largeFixture.dir),
causing incremental/warm-state artifacts (manifests/page-count) to bias timings;
change each bench to run against a fresh, clean workspace by copying the fixture
into a temporary directory (or removing build artifacts/manifests and any state
files) before calling runCli, or ensure runCli is invoked with a
flag/environment to disable incremental optimizations (so
skipMtimeOptimization-related state does not persist) so every iteration
measures a cold build.
In `@benchmarks/src/cli/setup.bench.ts`:
- Around line 9-29: The benchmark uses a shared mutable let tempDir created in
beforeEach/afterEach which runs once per task in Vitest bench mode, so move
setup and teardown into the bench callback so each iteration gets a fresh
directory: remove beforeEach/afterEach, create a const tempDir inside the
bench('fresh project', ...) callback, write the package.json there, call
runCli(['setup'], tempDir) and then rmSync the tempDir at the end of that
callback; this removes the mutable let and ensures per-invocation isolation
while keeping references to tempDir, bench, runCli, beforeEach/afterEach obvious
for locating the change.
In `@benchmarks/src/cli/sync.bench.ts`:
- Around line 6-14: Replace the mutable module-scoped let largeFixture by
encapsulating fixture creation in a small factory/accessor so state stays
internal: create a getLargeFixture factory that lazily calls
generateLargeFixture() and returns the fixture (cached inside the closure), then
call getLargeFixture() from beforeAll and use its cleanup from afterAll;
reference the existing symbols largeFixture, generateLargeFixture, beforeAll,
and afterAll when implementing the factory to keep the lifecycle behavior
identical but avoid exposing a mutable let.
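A lazy cached accessor along the lines suggested above might be sketched like this; `Fixture` and `makeLazyFixture` are hypothetical names, and the real `generateLargeFixture` is represented by an injected generator:

```typescript
type Fixture = { dir: string; cleanup: () => void }

// Closure-cached accessor: the fixture is generated at most once, and the
// mutable cache stays internal instead of a module-scoped `let`.
const makeLazyFixture = (generate: () => Fixture) => {
  const cache: { value?: Fixture } = {}
  return {
    get: (): Fixture => (cache.value ??= generate()),
    cleanup: (): void => {
      cache.value?.cleanup()
      cache.value = undefined
    },
  }
}

// Usage in the bench file (sketch):
//   const largeFixture = makeLazyFixture(generateLargeFixture)
//   beforeAll(() => { largeFixture.get() })
//   afterAll(() => { largeFixture.cleanup() })
```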
In `@benchmarks/src/code/sync.bench.ts`:
- Around line 17-23: Bench currently silently skips work if loadConfig(... )
returns no config; update the benchmark functions (bench('small project'), and
similarly for 'medium' and 'large') to fail fast instead of no-op: after const
[, config] = await loadConfig(paths.repoRoot) check for a missing config and
throw a clear error (e.g. throw new Error(`Failed to load config for
${FIXTURES.small}`) or use an assertion) so the benchmark fails loudly rather
than silently returning, then proceed to await sync(config, { paths, quiet: true
}) when present.
- Around line 6-14: The duplicated lifecycle pattern using the mutable
largeFixture with beforeAll/afterAll (largeFixture, generateLargeFixture,
beforeAll, afterAll) should be extracted into a shared fixture-management
utility: create a helper (e.g., useFixture or withFixture) that takes a
generator function like generateLargeFixture and returns a stable fixture handle
plus automatic setup/teardown functions, then replace the direct
let/beforeAll/afterAll usage in sync.bench.ts with a call to that helper to
reduce repetition and centralize cleanup logic.
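The fail-fast check from the first code/sync.bench.ts comment could be factored as a small helper, assuming `loadConfig` returns an `[error, config]` tuple as the excerpts suggest (`requireConfig` is a hypothetical name):

```typescript
type Config = { name: string }
type Result<T> = [Error | undefined, T | undefined]

// Fail loudly when config loading produced no usable config, instead of
// letting the benchmark silently no-op and report a fast iteration.
const requireConfig = ([error, config]: Result<Config>, root: string): Config => {
  if (error || !config) {
    throw new Error(`Failed to load benchmark config for ${root}: ${error?.message ?? 'no config'}`)
  }
  return config
}

// In the bench callback (sketch):
//   const config = requireConfig(await loadConfig(paths.repoRoot), paths.repoRoot)
//   await sync(config, { paths, quiet: true })
```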
In `@benchmarks/src/helpers/fixtures.ts`:
- Line 2: The file contains an unused import statement "import os from
'node:os'"; remove that import (the identifier "os" and its import line) from
the module (fixtures helper) so no unused symbols remain, and run the linter or
type-check to confirm no other references to "os" exist.
In `@packages/cli/src/commands/sync.ts`:
- Around line 19-20: The intro/outro messages are printed unconditionally; wrap
the calls to ctx.log.intro('zpress sync') and the corresponding
ctx.log.outro(...) so they only run when the quiet flag is not set (e.g. check
!ctx.args.quiet or !ctx.flags.quiet depending on how CLI flags are accessed in
this command). Modify the start-of-command and end-of-command code paths to gate
those two calls on the absence of --quiet so quiet mode suppresses both intro
and outro output.
- Around line 27-31: The CLI is not catching exceptions thrown by sync(config, {
paths, quiet: ctx.args.quiet }) so thrown errors bypass the intended error
handling; wrap the call to sync in a try/catch around the existing result
handling, call ctx.log.error with the caught error (or its message) and then
process.exit(1) on exception, while preserving the existing result.error branch;
update the block that currently invokes sync(...) and checks result.error to
instead try { const result = await sync(...); if (result.error) {
ctx.log.error(result.error); process.exit(1); } } catch (err) {
ctx.log.error(err); process.exit(1); } ensuring you reference the sync call,
ctx.log.error, and process.exit in the change.
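Taken together, the two suggestions for the sync command might look like this sketch; the `Ctx` shape and the injected `sync`/`exit` parameters are assumptions for illustration, not the real CLI types:

```typescript
type Ctx = {
  args: { quiet?: boolean }
  log: {
    intro: (msg: string) => void
    outro: (msg: string) => void
    error: (err: unknown) => void
  }
}

// Gate intro/outro on --quiet and wrap sync in try/catch so both result
// errors and thrown exceptions exit with code 1.
const runSyncCommand = async (
  ctx: Ctx,
  sync: () => Promise<{ error?: string }>,
  exit: (code: number) => void,
): Promise<void> => {
  if (!ctx.args.quiet) ctx.log.intro('zpress sync')
  try {
    const result = await sync()
    if (result.error) {
      ctx.log.error(result.error)
      exit(1)
      return
    }
  } catch (err) {
    ctx.log.error(err)
    exit(1)
    return
  }
  if (!ctx.args.quiet) ctx.log.outro('Done')
}
```

Injecting `exit` rather than calling `process.exit` directly also makes the command testable.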
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: f1fc4d15-21b3-4d81-ade9-d01ffb991851
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml, !pnpm-lock.yaml
📒 Files selected for processing (16)
- .github/workflows/ci-benchmarks.yml
- .gitignore
- AGENTS.md
- benchmarks/package.json
- benchmarks/src/cli/build.bench.ts
- benchmarks/src/cli/setup.bench.ts
- benchmarks/src/cli/sync.bench.ts
- benchmarks/src/code/sync.bench.ts
- benchmarks/src/helpers/exec.ts
- benchmarks/src/helpers/fixtures.ts
- benchmarks/tsconfig.json
- benchmarks/vitest.config.ts
- package.json
- packages/cli/src/commands/sync.ts
- packages/cli/src/index.ts
- pnpm-workspace.yaml
Congrats! CodSpeed is installed 🎉
You will start to see performance impacts in the reports once the benchmarks are run from your default branch.
Replace hardcoded tiers and example project references with a single
generateFixture({ sections, directories, files }) function. All fixtures
are generated at bench time and cleaned up after. No dependency on
example projects.
Tiers: small (~50), medium (~150), large (~300), xl (~750)
Co-Authored-By: Claude <noreply@anthropic.com>
Actionable comments posted: 4
♻️ Duplicate comments (4)
benchmarks/src/code/sync.bench.ts (2)
29-32: ⚠️ Potential issue | 🟠 Major

Do not silently no-op when config load fails.

The current `if (config) { ... }` pattern allows skipped work to appear as successful benchmark iterations.

Proposed fix:

```diff
- const [, config] = await loadConfig(paths.repoRoot)
- if (config) {
-   await sync(config, { paths, quiet: true })
- }
+ const [configErr, config] = await loadConfig(paths.repoRoot)
+ if (configErr || !config) {
+   return Promise.reject(
+     configErr ?? new Error(`Failed to load benchmark config: ${paths.repoRoot}`),
+   )
+ }
+ await sync(config, { paths, quiet: true })
```

Also applies to: 37-40, 45-48, 53-56
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/code/sync.bench.ts` around lines 29 - 32, The current pattern silently skips benchmark work when loadConfig returns a falsy config; update the logic around loadConfig(...) and sync(...) so a missing config is treated as an error instead of a no-op: after calling loadConfig(paths.repoRoot) validate the returned config and if it's falsy throw or return a non-zero-failing error (or call a test/bench harness fail helper) with a clear message (e.g., "failed to load config") before calling sync(paths, { paths, quiet: true }); apply the same change to the other occurrences that call loadConfig and then conditionally call sync so benchmarks do not report successful iterations when config load fails.
7-10: 🛠️ Refactor suggestion | 🟠 Major

Use immutable fixture access instead of module-scoped `let`.

Lines 7-10 keep mutable shared state; align with repo TS constraints by encapsulating the fixture lifecycle in an immutable accessor utility.

As per coding guidelines: "Never use classes, loops, `let`, or `throw` — instead use `map`, `filter`, `reduce`, pattern matching, and Result/Option types".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/code/sync.bench.ts` around lines 7 - 10, Module-scoped mutable variables small, medium, large, xl create shared mutable state; replace them with an immutable accessor utility that encapsulates lifecycle and returns the fixtures immutably (e.g., provide a getFixtures() or makeFixtures() function that constructs the four GeneratedFixture instances with const/local scope and returns an object { small, medium, large, xl } or an array, and update callers to use that accessor instead of the module-level lets); ensure no use of let/classes/loops and keep the accessor pure/functional so fixtures are accessed immutably.

benchmarks/src/cli/build.bench.ts (1)
26-41: ⚠️ Potential issue | 🟠 Major

Build benchmarks currently measure warm/incremental runs.

Reusing the same workspace allows prior artifacts/manifests to affect later iterations, so the numbers are not cold-build representative.

Proposed direction:

```diff
+const runColdBuild = (cwd: string): void => {
+  runCli(['clean'], cwd)
+  runCli(['build', '--no-check', '--quiet'], cwd)
+}
@@
 bench('small (~50 files)', () => {
-  runCli(['build', '--no-check', '--quiet'], small.dir)
+  runColdBuild(small.dir)
 })
@@
 bench('medium (~150 files)', () => {
-  runCli(['build', '--no-check', '--quiet'], medium.dir)
+  runColdBuild(medium.dir)
 })
@@
 bench('large (~300 files)', () => {
-  runCli(['build', '--no-check', '--quiet'], large.dir)
+  runColdBuild(large.dir)
 })
@@
 bench('xl (~750 files)', () => {
-  runCli(['build', '--no-check', '--quiet'], xl.dir)
+  runColdBuild(xl.dir)
 })
```

Based on learnings: `packages/core/src/sync/index.ts` uses prior manifest/page-count state (skipMtimeOptimization), so reusing benchmark directories can under-measure later iterations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/cli/build.bench.ts` around lines 26 - 41, The benchmarks are reusing the same fixture directories so later iterations measure warm/incremental builds; change each bench case to run against a fresh workspace per iteration by copying the fixture into a new temporary directory (or creating a fresh temp dir) before calling runCli. Update the bench cases that reference small.dir, medium.dir, large.dir, xl.dir so they clone the fixture into a temp path and pass that temp path to runCli (or alternatively clear build/artifact files in the fixture before each run); ensure the preparation happens inside the bench callback so each iteration gets an isolated cold workspace.

benchmarks/src/cli/sync.bench.ts (1)
7-10: 🛠️ Refactor suggestion | 🟠 Major

Replace module-scoped mutable fixture bindings.

Lines 7-10 use `let` state for fixtures. Prefer an immutable fixture manager/closure accessor to keep lifecycle state internal.

As per coding guidelines: "Never use classes, loops, `let`, or `throw` — instead use `map`, `filter`, `reduce`, pattern matching, and Result/Option types".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/cli/sync.bench.ts` around lines 7 - 10, The four module-scoped mutable bindings (small, medium, large, xl) should be replaced with an immutable fixture accessor/manager so lifecycle stays internal: remove the top-level let declarations and implement a single factory or closure function (e.g., makeFixtures or getFixtures) that initializes GeneratedFixture instances inside its scope and returns a readonly object/map { small, medium, large, xl } or accessor functions; update callers in this file to call that accessor (or destructure its returned const) so all state is const and encapsulated, and ensure any setup/teardown remains inside the factory or test lifecycle hooks rather than module scope.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmarks/src/cli/build.bench.ts`:
- Around line 7-10: Replace the module-level mutable variables small, medium,
large, xl with an immutable factory pattern: remove the top-level let
declarations for GeneratedFixture and add a pure factory function (e.g.,
createFixtures or getFixtures) that constructs and returns a const object {
small, medium, large, xl } containing the GeneratedFixture instances; update any
test/benchmark setup to call this factory and use the returned const fixtures,
ensuring initialization/cleanup remains inside the factory or setup scope and no
top-level let is used.
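The factory pattern this prompt describes might be sketched like this, with `generateFixture` injected as a parameter so the snippet stays self-contained (`createFixtures` is a hypothetical name):

```typescript
type GeneratedFixture = { dir: string; cleanup: () => void }

// Build all four tiers in one place and return a frozen const object,
// replacing the four module-level `let` bindings.
const createFixtures = (generate: (files: number) => GeneratedFixture) =>
  Object.freeze({
    small: generate(50),
    medium: generate(150),
    large: generate(300),
    xl: generate(750),
  })

// In beforeAll (sketch): const fixtures = createFixtures(n => generateFixture({ files: n }))
// In afterAll (sketch):  Object.values(fixtures).forEach(f => f.cleanup())
```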
In `@benchmarks/src/cli/sync.bench.ts`:
- Around line 26-41: The benchmarks currently measure warm incremental sync
because repeated bench iterations reuse the persisted .zpress state; add a small
helper named runColdSync that calls runCli(['clean'], cwd) before calling
runCli(['sync', '--quiet'], cwd) and replace the current runCli(['sync',
'--quiet'], ...) invocations in each bench case with runColdSync(...) so every
iteration performs a cold full sync (ensuring .zpress is cleaned between
iterations).
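The proposed `runColdSync` helper is essentially two CLI calls in sequence; sketched here with `runCli` injected, and noting that the `clean` subcommand is the reviewer's proposal and may not exist in the CLI yet:

```typescript
// Clean persisted .zpress state before each timed sync so every
// iteration measures a cold run rather than an incremental fast path.
const makeRunColdSync =
  (runCli: (args: string[], cwd: string) => void) =>
  (cwd: string): void => {
    runCli(['clean'], cwd)
    runCli(['sync', '--quiet'], cwd)
  }
```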
In `@benchmarks/src/code/sync.bench.ts`:
- Around line 27-57: Bench loops reuse fixture dirs so sync() runs in warm-state
(manifests in .generated/manifest.json), skewing results; update the benchmark
to either remove/clean the output manifest between iterations or run a separate
"cold" benchmark. Modify the bench calls that call createPaths(...) and
loadConfig(...) so each iteration deletes or recreates the fixture output (e.g.,
remove .generated/manifest.json or the entire output dir) before calling
sync(config, { paths, quiet: true }), or add a new set of benches that
explicitly prepare a cold state prior to invoking sync; ensure changes reference
the existing functions bench, createPaths, loadConfig, sync and the afterAll
cleanup to avoid double-cleanup.
In `@benchmarks/src/helpers/fixtures.ts`:
- Around line 134-178: Remove the mutable accumulator totalFiles and replace it
with a derived constant computed from the dimensions (uniqueSectionNames.length
* dirNames.length * files); update any downstream use of totalFiles to use this
new constant (e.g., fileCount) instead of incrementing inside the nested
iteration that writes files (the block using uniqueSectionNames.forEach,
dirNames.forEach and the Array.from file generator where fs.writeFileSync is
called); ensure you eliminate the let totalFiles declaration and any increments
(totalFiles += 1) and keep all other behavior (title/slug/content generation and
write) unchanged.
---
Duplicate comments:
In `@benchmarks/src/cli/build.bench.ts`:
- Around line 26-41: The benchmarks are reusing the same fixture directories so
later iterations measure warm/incremental builds; change each bench case to run
against a fresh workspace per iteration by copying the fixture into a new
temporary directory (or creating a fresh temp dir) before calling runCli. Update
the bench cases that reference small.dir, medium.dir, large.dir, xl.dir so they
clone the fixture into a temp path and pass that temp path to runCli (or
alternatively clear build/artifact files in the fixture before each run); ensure
the preparation happens inside the bench callback so each iteration gets an
isolated cold workspace.
In `@benchmarks/src/cli/sync.bench.ts`:
- Around line 7-10: The four module-scoped mutable bindings (small, medium,
large, xl) should be replaced with an immutable fixture accessor/manager so
lifecycle stays internal: remove the top-level let declarations and implement a
single factory or closure function (e.g., makeFixtures or getFixtures) that
initializes GeneratedFixture instances inside its scope and returns a readonly
object/map { small, medium, large, xl } or accessor functions; update callers in
this file to call that accessor (or destructure its returned const) so all state
is const and encapsulated, and ensure any setup/teardown remains inside the
factory or test lifecycle hooks rather than module scope.
In `@benchmarks/src/code/sync.bench.ts`:
- Around line 29-32: The current pattern silently skips benchmark work when
loadConfig returns a falsy config; update the logic around loadConfig(...) and
sync(...) so a missing config is treated as an error instead of a no-op: after
calling loadConfig(paths.repoRoot) validate the returned config and if it's
falsy throw or return a non-zero-failing error (or call a test/bench harness
fail helper) with a clear message (e.g., "failed to load config") before calling
sync(paths, { paths, quiet: true }); apply the same change to the other
occurrences that call loadConfig and then conditionally call sync so benchmarks
do not report successful iterations when config load fails.
- Around line 7-10: Module-scoped mutable variables small, medium, large, xl
create shared mutable state; replace them with an immutable accessor utility
that encapsulates lifecycle and returns the fixtures immutably (e.g., provide a
getFixtures() or makeFixtures() function that constructs the four
GeneratedFixture instances with const/local scope and returns an object { small,
medium, large, xl } or an array, and update callers to use that accessor instead
of the module-level lets); ensure no use of let/classes/loops and keep the
accessor pure/functional so fixtures are accessed immutably.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: a5527411-6e24-4e16-9e68-0a0d52cca5ca
📒 Files selected for processing (4)
- benchmarks/src/cli/build.bench.ts
- benchmarks/src/cli/sync.bench.ts
- benchmarks/src/code/sync.bench.ts
- benchmarks/src/helpers/fixtures.ts
generateFixture({ files: 750 }) now takes total file count and
auto-derives sections/directories. Optional sections override.
All bench files loop over shared TIERS constant instead of
duplicating fixture setup.
Co-Authored-By: Claude <noreply@anthropic.com>
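One plausible way to auto-derive the layout from a total file count is sketched below; the actual heuristic in fixtures.ts may well differ:

```typescript
// Hypothetical derivation: pick section count near the cube root of the
// total so the tree stays roughly balanced, then split the remainder
// across directories per section.
const deriveLayout = (files: number): { sections: number; dirsPerSection: number } => {
  const sections = Math.max(1, Math.round(Math.cbrt(files)))
  const dirsPerSection = Math.max(1, Math.round(Math.sqrt(files / sections)))
  return { sections, dirsPerSection }
}
```

With this shape, `generateFixture({ files: 750 })` would need no explicit sections/directories arguments, and an optional `sections` override can simply bypass the derivation.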
♻️ Duplicate comments (2)
benchmarks/src/helpers/fixtures.ts (1)
142-203: 🧹 Nitpick | 🔵 Trivial

Mutable state and `forEach` loops violate immutability guidelines.

Lines 142-143, 151 use `let` with mutation, and lines 145, 157 use `forEach` loops. Per coding guidelines: "Never use classes, loops, `let`, or `throw`."

That said, given this is benchmark fixture scaffolding with heavy filesystem side effects and a non-trivial remainder-distribution algorithm, this pattern is pragmatically acceptable. The mutation is bounded within the function scope and not observable externally.

If you do want to align with guidelines, the file distribution logic could be refactored to use `reduce` with an accumulator that tracks `remaining` and `globalIdx`, returning the final count.
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/helpers/fixtures.ts` around lines 142 - 203, The code mutates local variables (remaining, globalIdx) and uses forEach loops (over sectionNames and dirNames) which violates the immutability guideline; refactor the distribution logic into a pure reducer: replace the outer sectionNames.forEach and inner dirNames.forEach with Array.prototype.reduce (or a single reduce over sectionNames that returns an accumulator {remaining, globalIdx, actions}) that computes sectionFiles and dirFiles without mutating outer-scope lets, and produce file write actions (using the same target generation logic and LOREM_PARAGRAPHS usage) from the accumulator; specifically target the variables and identifiers remaining, globalIdx, sectionNames, sectionCount, dirsPerSection, dirNames, sectionFiles, dirFiles and the file-writing block (fs.writeFileSync) so the final implementation returns or executes writes from an immutable accumulator rather than incrementing globals.

benchmarks/src/code/sync.bench.ts (1)
19-28: ⚠️ Potential issue | 🟡 Minor

Silent skip on fixture or config failure masks benchmark issues.

Lines 21-27 silently skip work if either `fixture` is undefined or `config` is falsy. If fixture generation or config loading fails, benchmarks will report fast times without actually measuring sync performance.

```diff
 bench(`${tier.name} (~${tier.files} files)`, async () => {
   const fixture = fixtures.get(tier.name)
-  if (fixture) {
-    const paths = createPaths(fixture.dir)
-    const [, config] = await loadConfig(paths.repoRoot)
-    if (config) {
-      await sync(config, { paths, quiet: true })
-    }
+  if (!fixture) {
+    throw new Error(`Fixture not found for tier: ${tier.name}`)
   }
+  const paths = createPaths(fixture.dir)
+  const [error, config] = await loadConfig(paths.repoRoot)
+  if (error || !config) {
+    throw new Error(`Failed to load config: ${error}`)
+  }
+  await sync(config, { paths, quiet: true })
 })
```
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/code/sync.bench.ts` around lines 19 - 28, The benchmark currently silently skips work when a fixture is missing or when loadConfig returns a falsy config; update the bench callback so it fails loudly instead of returning early: inside the function that calls fixtures.get(tier.name) and loadConfig(paths.repoRoot) (the bench block using bench, fixtures, createPaths, loadConfig, and sync), throw or rethrow an Error when fixture is undefined or when config is falsy (include identifying details like tier.name and paths.repoRoot in the message) so the test run fails and exposes setup/config issues rather than measuring an empty no-op.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@benchmarks/src/code/sync.bench.ts`:
- Around line 19-28: The benchmark currently silently skips work when a fixture
is missing or when loadConfig returns a falsy config; update the bench callback
so it fails loudly instead of returning early: inside the function that calls
fixtures.get(tier.name) and loadConfig(paths.repoRoot) (the bench block using
bench, fixtures, createPaths, loadConfig, and sync), throw or rethrow an Error
when fixture is undefined or when config is falsy (include identifying details
like tier.name and paths.repoRoot in the message) so the test run fails and
exposes setup/config issues rather than measuring an empty no-op.
In `@benchmarks/src/helpers/fixtures.ts`:
- Around line 142-203: The code mutates local variables (remaining, globalIdx)
and uses forEach loops (over sectionNames and dirNames) which violates the
immutability guideline; refactor the distribution logic into a pure reducer:
replace the outer sectionNames.forEach and inner dirNames.forEach with
Array.prototype.reduce (or a single reduce over sectionNames that returns an
accumulator {remaining, globalIdx, actions}) that computes sectionFiles and
dirFiles without mutating outer-scope lets, and produce file write actions
(using the same target generation logic and LOREM_PARAGRAPHS usage) from the
accumulator; specifically target the variables and identifiers remaining,
globalIdx, sectionNames, sectionCount, dirsPerSection, dirNames, sectionFiles,
dirFiles and the file-writing block (fs.writeFileSync) so the final
implementation returns or executes writes from an immutable accumulator rather
than incrementing globals.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 72327611-2424-4bf0-ba5f-284bf43d2849
📒 Files selected for processing (4)
- benchmarks/src/cli/build.bench.ts
- benchmarks/src/cli/sync.bench.ts
- benchmarks/src/code/sync.bench.ts
- benchmarks/src/helpers/fixtures.ts
Moves loadConfig() out of the bench callback so only sync() is measured. Reduces async overhead in the hot path — CodSpeed warns that async profiling can lose stack traces across the event loop. Co-Authored-By: Claude <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmarks/src/code/sync.bench.ts`:
- Around line 33-38: The bench callback silently returns when a tier's prepared
fixture is missing (prepared.get(tier.name) is falsy), leading to misleading
no-op benchmark runs; change the bench body (where prepared.get, tier, and sync
are used) to explicitly throw or fail when p is undefined (e.g., throw new Error
with context including tier.name) so the benchmark exits loudly instead of
skipping work.
- Around line 16-25: Replace the imperative for...of in the beforeAll hook with
a functional Promise.all over TIERS: map TIERS to an async function that calls
generateFixture({ files: tier.files }), createPaths(fixture.dir), and await
loadConfig(paths.repoRoot); if loadConfig returns an undefined config then throw
a descriptive Error (include tier.name) rather than silently skipping; otherwise
call prepared.set(tier.name, { fixture, config, paths }). Await Promise.all on
the mapped promises so all tiers run in parallel and failures surface explicitly
from beforeAll.
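The parallel preparation described above could be sketched as a small helper, with the per-tier fixture/config work injected so the snippet is self-contained (`prepareTiers` is a hypothetical name):

```typescript
type Tier = { name: string; files: number }
type Prepared = { dir: string; config: { name: string } }

// Map TIERS to promises and await them together; a tier whose preparation
// yields nothing rejects loudly instead of being silently skipped.
const prepareTiers = async (
  tiers: Tier[],
  prepare: (tier: Tier) => Promise<Prepared | undefined>,
): Promise<Map<string, Prepared>> => {
  const entries = await Promise.all(
    tiers.map(async (tier) => {
      const p = await prepare(tier)
      if (!p) throw new Error(`Failed to prepare benchmark fixture for tier: ${tier.name}`)
      return [tier.name, p] as const
    }),
  )
  return new Map(entries)
}
```

Any single rejection fails the whole `beforeAll`, which is exactly the loud-failure behavior the review asks for.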
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 2bf416fd-2cb3-4f8b-a70f-4877dbac67b3
📒 Files selected for processing (1)
benchmarks/src/code/sync.bench.ts
Replace forEach + bench() with describe.each(TIERS) + bench(). Cleaner pattern that leverages vitest's built-in parameterization. Co-Authored-By: Claude <noreply@anthropic.com>
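The describe.each pattern can be illustrated with local stand-ins; in the real suite `describe.each` and `bench` come from vitest, which also performs the `$name`/`$files` title interpolation shown here:

```typescript
type Tier = { name: string; files: number }

const TIERS: Tier[] = [
  { name: 'small', files: 50 },
  { name: 'medium', files: 150 },
]

const registered: string[] = []

// Stand-in for vitest's describe.each: one suite per tier, with $name and
// $files interpolated into the title.
const describeEach =
  (tiers: Tier[]) =>
  (title: string, fn: (tier: Tier) => void): void =>
    tiers.forEach((tier) => {
      registered.push(title.replace('$name', tier.name).replace('$files', String(tier.files)))
      fn(tier)
    })

describeEach(TIERS)('sync() - $name (~$files files)', (_tier) => {
  // bench('sync', () => { ... }) would be registered here in the real suite
})
```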
Actionable comments posted: 2
♻️ Duplicate comments (3)
benchmarks/src/code/sync.bench.ts (1)
16-24: ⚠️ Potential issue | 🟠 Major

Prevent silent benchmark no-ops and switch to functional tier preparation.

Line 21 and Line 34 currently allow silent skips. That can hide fixture/config failures and produce misleading benchmark timings. Also, Line 17 uses `for...of`, which violates the repo TS rule set.

♻️ Proposed refactor:

```diff
 beforeAll(async () => {
-  for (const tier of TIERS) {
-    const fixture = generateFixture({ files: tier.files })
-    const paths = createPaths(fixture.dir)
-    const [, config] = await loadConfig(paths.repoRoot)
-    if (config) {
-      prepared.set(tier.name, { fixture, config, paths })
-    }
-  }
+  await Promise.all(
+    TIERS.map(async (tier) => {
+      const fixture = generateFixture({ files: tier.files })
+      const paths = createPaths(fixture.dir)
+      const [error, config] = await loadConfig(paths.repoRoot)
+
+      if (error || !config) {
+        fixture.cleanup()
+        return Promise.reject(
+          new Error(`Failed to prepare sync benchmark fixture for tier: ${tier.name}`)
+        )
+      }
+
+      prepared.set(tier.name, { fixture, config, paths })
+      return Promise.resolve()
+    })
+  )
 })
@@
 describe.each(TIERS)('sync() (code) — $name (~$files files)', (tier) => {
   bench('sync', async () => {
     const p = prepared.get(tier.name)
-    if (p) {
-      await sync(p.config, { paths: p.paths, quiet: true })
-    }
+    if (!p) {
+      return Promise.reject(
+        new Error(`Missing prepared fixture for benchmark tier: ${tier.name}`)
+      )
+    }
+    await sync(p.config, { paths: p.paths, quiet: true })
   })
 })
```

As per coding guidelines, "Never use classes, loops, `let`, or `throw` — instead use `map`, `filter`, `reduce`..." and "Use `map`, `filter`, `reduce` for data transformations..."

Also applies to: 33-36
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/code/sync.bench.ts` around lines 16 - 24, The current beforeAll uses a for...of loop and silently skips tiers when loadConfig returns no config, which violates the repo rules and can hide failures; replace the loop with a functional pipeline (e.g., TIERS.map -> Promise.all) that for each tier calls generateFixture, createPaths and await loadConfig(paths.repoRoot), then explicitly fail (throw or processLogger.error + fail) if any tier yields a falsy config instead of silently skipping, and finally populate prepared via prepared.set(tier.name, { fixture, config, paths }) for each successful entry; reference symbols: beforeAll, TIERS, generateFixture, createPaths, loadConfig, and prepared.set.

benchmarks/src/cli/build.bench.ts (1)
17-23: ⚠️ Potential issue | 🟠 Major

Benchmark currently mixes cold and warm build timings.

Line 21 reruns `build` in the same directory, so later iterations can reuse persisted state/artifacts and under-measure cold build cost.

Suggested fix:

```diff
+const runColdBuild = (cwd: string): void => {
+  runCli(['clean'], cwd)
+  runCli(['build', '--no-check', '--quiet'], cwd)
+}
+
 describe.each(TIERS)('zpress build (cli) — $name (~$files files)', (tier) => {
   bench('build', () => {
     const fixture = fixtures.get(tier.name)
     if (fixture) {
-      runCli(['build', '--no-check', '--quiet'], fixture.dir)
+      runColdBuild(fixture.dir)
     }
   })
 })
```

```bash
#!/bin/bash
# Verify bench calls do not reset state before each iteration.
rg -nP --type=ts -C2 "bench\\('build'|runCli\\(\\['build'" benchmarks/src/cli/build.bench.ts
rg -nP --type=ts -C2 "runCli\\(\\['clean'" benchmarks/src/cli/build.bench.ts
```

Based on learnings: `packages/core/src/sync/index.ts` uses persisted manifest/page-count state (skipMtimeOptimization), so reused directories can bias timings toward warm runs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/cli/build.bench.ts` around lines 17-23: The benchmark is mixing cold and warm builds because bench('build') calls runCli(['build', ...], fixture.dir) repeatedly against the same fixture, so persisted state (see packages/core sync optimizations) biases later iterations; update the describe.each/bench block so each iteration uses a fresh clean workspace or resets state before running build (e.g., copy the fixture to a temp dir per iteration or invoke a clean step before runCli), referencing the fixtures.get(...) result and runCli(['build', ...], fixture.dir) to locate where to implement the reset/copy logic.

benchmarks/src/cli/sync.bench.ts (1)
17-23: ⚠️ Potential issue | 🟠 Major

Sync benchmark reuses persisted state across iterations.
Line 21 runs `sync` repeatedly in the same fixture directory, so later iterations can hit incremental fast paths and skew results.

Suggested fix
```diff
+const runColdSync = (cwd: string): void => {
+  runCli(['clean'], cwd)
+  runCli(['sync', '--quiet'], cwd)
+}
+
 describe.each(TIERS)('zpress sync (cli) — $name (~$files files)', (tier) => {
   bench('sync', () => {
     const fixture = fixtures.get(tier.name)
     if (fixture) {
-      runCli(['sync', '--quiet'], fixture.dir)
+      runColdSync(fixture.dir)
     }
   })
 })
```

Based on learnings: `packages/core/src/sync/index.ts` intentionally uses prior manifest/page-count state for optimization, so benchmark directory reuse can bias toward warm incremental timings.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmarks/src/cli/sync.bench.ts` around lines 17 - 23, The benchmark currently reuses the same fixture directory for each iteration (describe.each over TIERS calling bench that invokes runCli(['sync','--quiet'], fixture.dir)), causing warm incremental behavior; modify the bench setup so each iteration runs against a fresh copy of the fixture (e.g., create a temporary directory and copy fixture.dir into it or clean/remove persistent state files like manifests/pages before calling runCli) so runCli('sync') always sees a cold state and results are not biased by prior runs.
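The "fresh copy per iteration" idea suggested above could be sketched as a small helper. This is an illustrative sketch, not code from the PR; `withFreshFixture` is a hypothetical name, and per-iteration copy overhead may be nontrivial for the 250-file tier:

```typescript
import { cpSync, mkdtempSync, rmSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// Hypothetical helper: copy a fixture into a throwaway temp directory so
// each benchmark iteration starts from a cold state, with no manifest or
// page-count state persisted from a previous run.
const withFreshFixture = (fixtureDir: string): { dir: string; cleanup: () => void } => {
  const dir = mkdtempSync(join(tmpdir(), 'zpress-bench-'))
  cpSync(fixtureDir, dir, { recursive: true })
  return { dir, cleanup: () => rmSync(dir, { recursive: true, force: true }) }
}
```

A bench callback would then run the CLI against the copy and call `cleanup()` afterwards. One caveat: this PR moved the generated fixture into `.bench-fixtures/` inside the repo precisely so the CLI can resolve workspace deps, so a copy placed under `/tmp` could reintroduce that problem and may need to live inside the repo instead.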
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmarks/src/cli/build.bench.ts`:
- Around line 9-15: Replace the imperative forEach mutations in
beforeAll/afterAll with declarative transformations: in beforeAll use
TIERS.map(...) to produce entries and construct the fixtures Map (or call
fixtures = new Map(entries)) by mapping each tier to [tier.name,
generateFixture({ files: tier.files })] instead of fixtures.set(...); in
afterAll replace fixtures.forEach(f => f.cleanup()) with
Array.from(fixtures.values()).map(f => f.cleanup()) or use fixtures.forEach but
invoked via a mapped array to avoid direct mutation/looping; update references
to the fixtures Map accordingly and keep the lifecycle hooks as beforeAll and
afterAll while removing use of forEach/side-effectful loops and let/throws.
In `@benchmarks/src/cli/sync.bench.ts`:
- Around line 9-15: The benchmark lifecycle uses imperative forEach with side
effects which violates the repo's immutable/declarative guideline; replace those
mutations by using functional operations: in beforeAll replace
TIERS.forEach(...) with TIERS.map(tier => fixtures.set(tier.name,
generateFixture({ files: tier.files }))) (or build a new Map from TIERS via
reduce) so fixtures is populated immutably, and in afterAll replace
fixtures.forEach(f => f.cleanup()) with Array.from(fixtures.values()).map(f =>
f.cleanup()) (or reduce to trigger cleanup without using forEach) referencing
the existing beforeAll, afterAll, TIERS, fixtures, generateFixture, and cleanup
symbols.
---
Duplicate comments:
In `@benchmarks/src/cli/build.bench.ts`:
- Around line 17-23: The benchmark is mixing cold and warm builds because
bench('build') calls runCli(['build', ...], fixture.dir) repeatedly against the
same fixture so persisted state (see packages/core sync optimizations) biases
later iterations; update the describe.each/bench block so each iteration uses a
fresh clean workspace or resets state before running build (e.g., copy the
fixture to a temp dir per iteration or invoke a clean step before runCli),
referencing the fixtures.get(...) result and runCli(['build', ...], fixture.dir)
to locate where to implement the reset/copy logic.
In `@benchmarks/src/cli/sync.bench.ts`:
- Around line 17-23: The benchmark currently reuses the same fixture directory
for each iteration (describe.each over TIERS calling bench that invokes
runCli(['sync','--quiet'], fixture.dir)), causing warm incremental behavior;
modify the bench setup so each iteration runs against a fresh copy of the
fixture (e.g., create a temporary directory and copy fixture.dir into it or
clean/remove persistent state files like manifests/pages before calling runCli)
so runCli('sync') always sees a cold state and results are not biased by prior
runs.
In `@benchmarks/src/code/sync.bench.ts`:
- Around line 16-24: The current beforeAll uses a for...of loop and silently
skips tiers when loadConfig returns no config, which violates the repo rules and
can hide failures; replace the loop with a functional pipeline (e.g., TIERS.map
-> Promise.all) that for each tier calls generateFixture, createPaths and await
loadConfig(paths.repoRoot), then explicitly fail (throw or processLogger.error +
fail) if any tier yields a falsy config instead of silently skipping, and
finally populate prepared via prepared.set(tier.name, { fixture, config, paths
}) for each successful entry; reference symbols: beforeAll, TIERS,
generateFixture, createPaths, loadConfig, and prepared.set.
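The two CLI prompts above ask for the same refactor: build the fixtures Map declaratively instead of mutating it with forEach/set. A minimal standalone sketch of that shape, where the `Tier`/`Fixture` types, the tier values, and `makeFixture` are illustrative stand-ins for the repo's `TIERS` and `generateFixture`, not the actual code:

```typescript
type Tier = { name: string; files: number }
type Fixture = { files: number; cleanup: () => void }

// Illustrative stand-ins for the repo's TIERS and generateFixture.
const TIERS: Tier[] = [
  { name: 'small', files: 10 },
  { name: 'medium', files: 60 },
  { name: 'large', files: 250 },
]
const makeFixture = (files: number): Fixture => ({ files, cleanup: () => {} })

// beforeAll shape: the Map is built in one expression, with no let, no
// loop, and no incremental fixtures.set(...) mutation.
const fixtures = new Map(TIERS.map((tier) => [tier.name, makeFixture(tier.files)] as const))

// afterAll shape: materialize the values and map over them to clean up,
// as the review's own prompt suggests.
const cleanupAll = (): void[] => Array.from(fixtures.values()).map((f) => f.cleanup())
```

Whether `map` used purely for side effects is actually cleaner than `forEach` here is debatable; the sketch just follows the wording of the prompts.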
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 057599ed-c893-49d5-85b5-ada43b13bfc9
📒 Files selected for processing (3)
- benchmarks/src/cli/build.bench.ts
- benchmarks/src/cli/sync.bench.ts
- benchmarks/src/code/sync.bench.ts
- setup.bench.ts: inline setup/cleanup into bench callback for per-iteration isolation, remove mutable let
- code/sync.bench.ts: replace for...of with Promise.all, throw on config/fixture failure instead of silent skip
- sync.ts: gate intro/outro messages on --quiet flag

Co-Authored-By: Claude <noreply@anthropic.com>
Summary
- `zpress sync` CLI command for benchmarking sync in isolation
- `benchmarks/` package (es-toolkit pattern) with vitest bench + `@codspeed/vitest-plugin`
- `sync`, `build`, `setup` against small/medium/large fixture projects
- `sync()` imported directly from `@zpress/core` for granular measurement
- `codspeed-macro` runner, triggers on push/PR/dispatch

Changes
- `sync` command in `packages/cli`
- `benchmarks/` package
- `.github/workflows/ci-benchmarks.yml` for CodSpeed
- `bench`/`bench:run` scripts, workspace registration, lint exclusion

Test plan
- `pnpm build` passes
- `pnpm check` passes (typecheck + lint + format)
- `zpress sync` works against kitchen-sink example
- `pnpm bench:run` executes all benchmarks successfully