
Conversation

@samwillis
Collaborator

🎯 Changes

Fixes a bug where collection.utils.writeInsert() or collection.utils.writeUpsert() would throw an error when the collection was configured with a select option.

The issue was that setQueryData in performWriteOperations would overwrite the query cache with a raw array of items, which conflicted with the select function's expectation of the original wrapped response format.

The fix involves:

  • Adding a hasSelect flag to the SyncContext.
  • Skipping queryClient.setQueryData in performWriteOperations when hasSelect is true, as the collection's internal state is already updated, and the cache format differs.
  • Adding two new tests that reproduce and verify the fix for writeInsert and writeUpsert with select.
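The skip logic described above can be sketched roughly as follows. This is a minimal sketch, not the actual implementation: the real SyncContext and performWriteOperations carry more fields and behavior, and the simplified signatures here are assumptions.

```typescript
// Minimal sketch of the hasSelect guard; the real SyncContext has more
// fields, and these signatures are simplified for illustration.
interface SyncContext {
  queryClient: { setQueryData: (key: Array<unknown>, data: unknown) => void }
  queryKey: Array<unknown>
  hasSelect: boolean
}

function performWriteOperations(ctx: SyncContext, updatedItems: Array<unknown>): void {
  // ...the collection's internal synced store is updated before this point...

  if (ctx.hasSelect) {
    // The cache holds the wrapped response (e.g. { data, meta }), not a raw
    // array, so overwriting it here would break the user's select function.
    return
  }
  ctx.queryClient.setQueryData(ctx.queryKey, updatedItems)
}
```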

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).


…/writeUpsert bug

Add tests that reproduce the bug where using writeInsert or writeUpsert
with a collection that has a select option causes an error:
"select() must return an array of objects. Got: undefined"

The bug occurs because performWriteOperations sets the query cache with
a raw array, but the select function expects the wrapped response format.

Related issue: https://github.com/TanStack/db/issues/xyz
…eUpsert

When using the `select` option to extract items from a wrapped API response
(e.g., `{ data: [...], meta: {...} }`), calling `writeInsert()` or
`writeUpsert()` would corrupt the query cache by setting it to a raw array.

This caused the `select` function to receive the wrong data format and
return `undefined`, triggering the error:
"select() must return an array of objects. Got: undefined"

The fix adds a `hasSelect` flag to the SyncContext and skips the
`setQueryData` call when `select` is configured. This is the correct
behavior because:
1. The collection's synced store is already updated
2. The query cache stores the wrapped response format, not the raw items
3. Overwriting the cache with raw items would break the select function

@changeset-bot

changeset-bot bot commented Dec 15, 2025

🦋 Changeset detected

Latest commit: 68cca4f

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package
@tanstack/query-db-collection: Patch


@pkg-pr-new

pkg-pr-new bot commented Dec 15, 2025


npm i https://pkg.pr.new/@tanstack/angular-db@1023
npm i https://pkg.pr.new/@tanstack/db@1023
npm i https://pkg.pr.new/@tanstack/db-ivm@1023
npm i https://pkg.pr.new/@tanstack/electric-db-collection@1023
npm i https://pkg.pr.new/@tanstack/offline-transactions@1023
npm i https://pkg.pr.new/@tanstack/powersync-db-collection@1023
npm i https://pkg.pr.new/@tanstack/query-db-collection@1023
npm i https://pkg.pr.new/@tanstack/react-db@1023
npm i https://pkg.pr.new/@tanstack/rxdb-db-collection@1023
npm i https://pkg.pr.new/@tanstack/solid-db@1023
npm i https://pkg.pr.new/@tanstack/svelte-db@1023
npm i https://pkg.pr.new/@tanstack/trailbase-db-collection@1023
npm i https://pkg.pr.new/@tanstack/vue-db@1023

commit: 68cca4f

@github-actions
Contributor

github-actions bot commented Dec 15, 2025

Size Change: 0 B

Total Size: 88.5 kB


@github-actions
Contributor

github-actions bot commented Dec 15, 2025

Size Change: 0 B

Total Size: 3.35 kB


…eUpsert

When using the `select` option to extract items from a wrapped API response
(e.g., `{ data: [...], meta: {...} }`), calling `writeInsert()` or
`writeUpsert()` would corrupt the query cache by setting it to a raw array.

This caused the `select` function to receive the wrong data format and
return `undefined`, triggering the error:
"select() must return an array of objects. Got: undefined"

The fix adds an `updateCacheData` function to the SyncContext that properly
handles cache updates for both cases:
- Without `select`: sets the cache directly with the raw array
- With `select`: uses setQueryData with an updater function to preserve
  the wrapper structure while updating the items array inside it

Also added a comprehensive test that verifies the wrapped response format
(including metadata) is preserved after write operations.
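The two cases can be sketched roughly like this. updateWrappedCache is a hypothetical standalone name for illustration; the real updateCacheData lives in the package and its exact signature may differ, and the first-array-property heuristic shown is just the behavior described above.

```typescript
// Hedged sketch: without select the cache is the raw array; with select,
// preserve the wrapper object and only swap the first array-valued property.
// updateWrappedCache is a hypothetical name, not the package's API.
function updateWrappedCache(
  oldData: unknown,
  items: Array<unknown>,
  hasSelect: boolean
): unknown {
  if (!hasSelect) return items // cache is just the raw array
  if (typeof oldData !== "object" || oldData === null) return items
  const wrapper = oldData as Record<string, unknown>
  for (const key of Object.keys(wrapper)) {
    if (Array.isArray(wrapper[key])) {
      // Shallow-copy the wrapper, replacing only the items array,
      // so metadata like { meta: ... } survives the write.
      return { ...wrapper, [key]: items }
    }
  }
  return items
}
```

In practice a function like this would be passed as the updater to queryClient.setQueryData(queryKey, (oldData) => ...), so it sees the current cached value.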
@cursor cursor bot force-pushed the cursor/query-collection-select-writeupsert-error-fa2d branch from b182ec3 to 23db856 Compare December 15, 2025 19:54
@samwillis
Collaborator Author

@cursoragent please don't inline updateCacheData in the context, lift it out somewhere to keep it tidy

@cursor

cursor bot commented Dec 15, 2025

Done! I've refactored the code to lift updateCacheData out of the inline context. The function is now defined as a standalone function before the writeContext declaration, making the code cleaner and more maintainable.

The change:

  • Moved updateCacheData from being inline in the writeContext object to a standalone function
  • The writeContext now just references the function: updateCacheData, instead of having the entire implementation inline
  • All 177 tests still pass



…text

Move the updateCacheData function from being inline in the writeContext
object to a standalone function for better readability and maintainability.
@samwillis
Collaborator Author

@cursoragent please address this review

I like the suggestion of using select to get the array, then traversing the object to find where it is. Use that!


Here’s what PR #1023 is doing, how well it solves the problem, and what I’d tweak before merging.

What it fixes

The underlying bug is real and easy to reproduce: when a Query Collection is configured with a select that extracts items from a wrapped query result (e.g. { data: [...], meta: ... }), the direct write helpers (writeInsert, writeUpsert, etc.) were overwriting the TanStack Query cache with a raw array, so subsequent select executions would see the wrong shape and return undefined, producing the “select must return an array” error. 

This also contradicts the docs’ promise that “Direct writes … also update the TanStack Query cache.” 

What the PR changes

  1. It routes cache updates through a new updateCacheData hook

SyncContext gets a new updateCacheData(items) function, and performWriteOperations calls that instead of directly doing queryClient.setQueryData(queryKey, updatedData). 

That’s the right seam: the sync/write layer shouldn’t need to know how the query cache is shaped.

  2. It implements updateCacheData inside queryCollectionOptions

The implementation is:
• If no select: cache is assumed to be the raw array → set it directly.
• If select exists: treat cache as a wrapper object, and attempt to preserve wrapper structure by:
  • reading the old cached object
  • finding the first top-level property that is an array
  • shallow-copying the wrapper and replacing that property with the new items array

This is directionally correct: preserve wrapper + metadata; only swap the item array.

  3. It adds tests that cover the regression and the intended behavior

You added:
• “should not log the select error” for writeInsert and writeUpsert under select 
• “should keep the wrapped cache format preserved” across insert/update/delete (metadata stays intact; data mutates) 

Those tests are doing the job: they fail on the old behavior and validate the key invariant (wrapper shape survives).

What I like
• Correct abstraction boundary: write ops call ctx.updateCacheData, and the Query Collection layer decides how to update the cache. 
• Behavior matches docs: select is explicitly described as “extract array items when they’re wrapped with metadata” in the Query Collection docs, so preserving wrapper structure is exactly what users expect. 
• The cache-preservation test is the right “hard” assertion (it’s better than purely checking for absence of console noise). 

Things I would change (these are the sharp edges)

  1. The “first array property wins” heuristic is risky

Right now, with select, you scan Object.keys(oldData) and replace the first array-valued property. 

That works for { data: [...] }, but can go wrong for wrappers like:

{ errors: [], data: [...], meta: {...} }

If errors comes first, you’d overwrite errors and leave data untouched.

A more robust approach that still stays generic:
• Prefer common keys if present (data, items, results)
• Or, even better: use the select function to identify the property when it’s a pure extractor.

A practical trick:
• compute const selected = select(oldData)
• if selected is an array, find a top-level property prop where oldData[prop] === selected (reference equality)
• replace that prop

That handles “multiple arrays” wrappers cleanly without guessing.
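The trick can be sketched as a small helper. selectArrayKey is a hypothetical name, and this assumes select is a pure, synchronous extractor:

```typescript
// Hypothetical helper: use the user's select function to identify which
// top-level property of the wrapper holds the items array.
function selectArrayKey(
  oldData: Record<string, unknown>,
  select: (data: unknown) => unknown
): string | undefined {
  const selected = select(oldData)
  if (!Array.isArray(selected)) return undefined
  // A pure extractor returns one of oldData's own properties by reference,
  // so reference equality pinpoints the right key even when several
  // top-level properties are arrays.
  return Object.keys(oldData).find((key) => oldData[key] === selected)
}
```

For a wrapper like { errors: [], data: [...], meta: {...} } with select = (d) => d.data, this finds "data" rather than the first array-valued property ("errors").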

  2. In select mode, the fallback return items can reintroduce the bug

In the select branch you have several “give up” paths that return the raw array (items) (e.g., oldData missing, oldData not object, no array prop found). 

But returning the raw array is exactly what caused the select mismatch in the first place.

If you can’t safely preserve wrapper shape, the safest behavior is usually:
• don’t corrupt the cache → return oldData unchanged (or undefined if it was undefined)
• optionally log a warning explaining you couldn’t locate the array field to patch

Yes, that means the cache might not reflect direct writes in that edge case, but it avoids breaking select and spamming the console.

  3. Types could be tightened (minor, but low effort)

updateCacheData is typed as (items: Array<unknown>) in query.ts despite SyncContext being generic in manual-sync.ts.

If queryCollectionOptions already knows the row type (it should), it’d be worth threading that through so updateCacheData becomes (items: TRow[]).

Also: const key = queryKey as unknown as Array<unknown> is a bit of a smell. If queryKey is already a TanStack Query QueryKey, you shouldn’t need to cast it into a mutable array type.

  4. Public API surface: SyncContext change might be a TS-breaking change

If SyncContext is exported and used externally, adding a required updateCacheData can break downstream TypeScript builds. 

If that type is public, consider making it optional:

updateCacheData?: (items: Array<unknown>) => void

…and then in performWriteOperations do:
• if updateCacheData exists → use it
• else → fallback to old queryClient.setQueryData(...)

That keeps compatibility while fixing the issue for Query Collections (which will pass it).
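A sketch of that compatibility shape (writeToCache is a hypothetical name for the relevant branch inside performWriteOperations; the real types are richer than shown):

```typescript
// Sketch: updateCacheData is optional on the context, and the write path
// falls back to the old direct setQueryData call when it is absent.
interface SyncContext {
  queryClient: { setQueryData: (key: Array<unknown>, data: unknown) => void }
  queryKey: Array<unknown>
  updateCacheData?: (items: Array<unknown>) => void
}

function writeToCache(ctx: SyncContext, items: Array<unknown>): void {
  if (ctx.updateCacheData) {
    ctx.updateCacheData(items) // new seam: the collection decides cache shape
  } else {
    ctx.queryClient.setQueryData(ctx.queryKey, items) // legacy behavior
  }
}
```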

  5. Changeset / release bookkeeping

The PR looks like a user-facing bugfix in a published package; the PR itself shows the changeset-bot complaining there’s no changeset. 
If this repo expects changesets for patches, you probably want a patch changeset for @tanstack/query-db-collection.

Suggested “best of both worlds” cache patching logic

If you want one concrete direction that’s still generic and avoids false positives:
1. If oldData is a plain object:
   • if Array.isArray(oldData.data) → update data
   • else if Array.isArray(oldData.items) → update items
   • else if you can compute selected = select(oldData) and it’s an array:
     • find a key where oldData[key] === selected and update that key
   • else fall back to “first array prop”
2. If none of that worked: return oldData unchanged (don’t set raw array)

That would preserve the “it works for typical wrappers” behavior while not regressing into the original bug on weird wrappers.

@cursor

cursor bot commented Dec 15, 2025

Cloud Agent has finished! You can now access it through Cursor.

cursoragent and others added 4 commits December 15, 2025 21:46
…roperty detection

Address review feedback:
1. Use select(oldData) to identify the correct array property by reference
   equality instead of "first array property wins" heuristic
2. Fallback to common property names (data, items, results) before scanning
3. Return oldData unchanged instead of raw array when property can't be found
   to avoid breaking select
4. Make updateCacheData optional in SyncContext to avoid breaking changes
5. Add changeset for release
Handle both static and function-based queryKey types properly:
- Get base query key before passing to setQueryData
- Keep writeContext.queryKey as Array<unknown> for SyncContext compatibility
Update example apps to use consistent versions of:
- @tanstack/query-db-collection: ^1.0.7
- @tanstack/react-db: ^0.1.56

Fixes sherif multiple-dependency-versions check.
@cursor cursor bot force-pushed the cursor/query-collection-select-writeupsert-error-fa2d branch from a49de0d to 68cca4f Compare December 15, 2025 22:17
@samwillis samwillis marked this pull request as ready for review December 15, 2025 22:22
Collaborator

@KyleAMathews KyleAMathews left a comment


:shipit:

@samwillis samwillis merged commit 4ff9b5d into main Dec 16, 2025
7 checks passed
@samwillis samwillis deleted the cursor/query-collection-select-writeupsert-error-fa2d branch December 16, 2025 14:42
@github-actions github-actions bot mentioned this pull request Dec 16, 2025
@github-actions
Contributor

🎉 This PR has been released!

Thank you for your contribution!

samwillis added a commit that referenced this pull request Dec 23, 2025
* fix(query-db-collection): use deep equality for object field comparison (#967)

* test: add test for object field update rollback issue

Add test that verifies object field updates with refetch: false
don't rollback to previous values after server response.

* fix: use deep equality for object field comparison in query observer

Replace shallow equality (===) with deep equality when comparing items
in the query observer callback. This fixes an issue where updating
object fields with refetch: false would cause rollback to previous
values every other update.

* chore: add changeset for object field update rollback fix

* ci: Version Packages (#961)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* docs: regenerate API documentation (#969)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* ci: run prettier autofix action (#972)

* ci: prettier auto-fix

* Sync prettier config with other TanStack projects

* Fix lockfile

* docs: correct local relative links (#973)

* fix(db-ivm): use row keys for stable ORDER BY tie-breaking (#957)

* fix(db-ivm): use row keys for stable ORDER BY tie-breaking

Replace hash-based object ID tie-breaking with direct key comparison
for deterministic ordering when ORDER BY values are equal.

- Use row key directly as tie-breaker (always string | number, unique per row)
- Remove globalObjectIdGenerator dependency
- Simplify TaggedValue from [K, V, Tag] to [K, T] tuple
- Clean up helper functions (tagValue, getKey, getVal, getTag)

This ensures stable, deterministic ordering across page reloads and
eliminates potential hash collisions.

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* fix(db): ensure deterministic iteration order for collections and indexes (#958)

* fix(db): ensure deterministic iteration order for collections and indexes

- SortedMap: add key-based tie-breaking for deterministic ordering
- SortedMap: optimize to skip value comparison when no comparator provided
- BTreeIndex: sort keys within same indexed value for deterministic order
- BTreeIndex: add fast paths for empty/single-key sets
- CollectionStateManager: always use SortedMap for deterministic iteration
- Extract compareKeys utility to utils/comparison.ts
- Add comprehensive tests for deterministic ordering behavior

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* fix(db) loadSubset when orderby has multiple columns (#926)

* fix(db) loadSubset when orderby has multiple columns

failing test for multiple orderby and loadsubset

push down multiple orderby predicates to load subset

split order by cursor predicate build into two: imprecise wider band for local loading, precise for the sync loadSubset

new e2e tests for composite orderby and pagination

changeset

when doing gt/lt comparisons to a bool cast to string

fix: use non-boolean columns in multi-column orderBy e2e tests

Electric/PostgreSQL doesn't support comparison operators (<, >, <=, >=)
on boolean types. Changed tests to use age (number) and name (string)
columns instead of isActive (boolean) to avoid this limitation.

The core multi-column orderBy functionality still works correctly -
this is just a test adjustment to work within Electric's SQL parser
constraints.

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* ci: sync changes from other projects (#978)

* ci: fix vitest/lint, fix package.json

* ci: apply automated fixes

* Remove ts-expect-error

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* ci: more changes from other projects (#980)

* Revert husky removal (#981)

* revert: restore husky pre-commit hook removed in #980

This restores the husky pre-commit hook configuration that was removed
in PR #980. The hook runs lint-staged on staged .ts and .tsx files.

* chore: update pnpm-lock.yaml and fix husky pre-commit format

Update lockfile with husky and lint-staged dependencies.
Update pre-commit hook to modern husky v9 format.

---------

Co-authored-by: Claude <[email protected]>

* Restore only publishing docs on release (#982)

* ci: revert doc publishing workflow changes from #980

Restore the doc publishing workflow that was removed in PR #980:
- Restore docs-sync.yml for daily auto-generated docs
- Restore doc generation steps in release.yml after package publish
- Restore NPM_TOKEN environment variable

* ci: restore doc generation on release only

Restore doc generation steps in release.yml that run after package
publish. Remove the daily docs-sync.yml workflow - docs should only
be regenerated when we release.

---------

Co-authored-by: Claude <[email protected]>

* ci: sync package versions (#984)

* chore(deps): update dependency @angular/common to v19.2.16 [security] (#919)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* chore(deps): update all non-major dependencies (#986)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* Fix build example site auth secret error (#988)

fix(ci): add BETTER_AUTH_SECRET for projects example build

Copy .env.example to .env during CI build to provide required
BETTER_AUTH_SECRET that better-auth now enforces.

Co-authored-by: Claude <[email protected]>

* Unit tests for data equality comparison (#992)

* Add @tanstack/config dev dependency

* Tests for eq comparison of Date objects

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* Handle invalid collection getKey return values (#1008)

* Add runtime validation for collection getKey return values

Throws InvalidKeyError when getKey returns values other than string or
number (e.g., null, objects, booleans). The validation is optimized to
only require 1 typeof check on the happy path (string keys).

- Add InvalidKeyError class to errors.ts
- Update generateGlobalKey to validate key type before generating key
- Add tests for invalid key types (null, object, boolean)
- Add tests confirming valid keys (string, number, empty string, zero)

* ci: apply automated fixes

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* Remove doc regeneration from autofix action (#1009)

ci: remove doc regeneration from autofix workflow

Docs are regenerated as part of releases, not on every PR/push.

Co-authored-by: Claude <[email protected]>

* feat(db,electric,query): separate cursor expressions for flexible pagination (#960)

* feat(db,electric,query): separate cursor expressions from where clause in loadSubset

- Add CursorExpressions type with whereFrom, whereCurrent, and lastKey
- LoadSubsetOptions.where no longer includes cursor - passed separately via cursor property
- Add offset to LoadSubsetOptions for offset-based pagination support
- Electric sync layer makes two parallel requestSnapshot calls when cursor present
- Query collection serialization includes offset for query key generation

This allows sync layers to choose between cursor-based or offset-based pagination,
and Electric can efficiently handle tie-breaking with targeted requests.

test(react-db): update useLiveInfiniteQuery test mock to handle cursor expressions

The test mock's loadSubset handler now handles the new cursor property
in LoadSubsetOptions by combining whereCurrent (ties) and whereFrom (next page)
data, deduplicating by id, and re-sorting.

fix(electric): make cursor requestSnapshot calls sequential

Changed parallel requestSnapshot calls to sequential to avoid potential
issues with concurrent snapshot requests that may cause timeouts in CI.

fix(electric): combine cursor expressions into single requestSnapshot

Instead of making two separate requestSnapshot calls (one for whereFrom,
one for whereCurrent), combine them using OR into a single request.
This avoids potential issues with multiple sequential snapshot requests
that were causing timeouts in CI.

The combined expression (whereFrom OR whereCurrent) matches the original
behavior where cursor was combined with the where clause.

wip

working?

update changeset

fix query test

* update docs

* ci: apply automated fixes

* fixups

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* ci: Version Packages (#974)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Query collection select writeupsert error (#1023)

* test(query-db-collection): add failing tests for select + writeInsert/writeUpsert bug

Add tests that reproduce the bug where using writeInsert or writeUpsert
with a collection that has a select option causes an error:
"select() must return an array of objects. Got: undefined"

The bug occurs because performWriteOperations sets the query cache with
a raw array, but the select function expects the wrapped response format.

Related issue: https://github.com/TanStack/db/issues/xyz

* fix(query-db-collection): fix select option breaking writeInsert/writeUpsert

When using the `select` option to extract items from a wrapped API response
(e.g., `{ data: [...], meta: {...} }`), calling `writeInsert()` or
`writeUpsert()` would corrupt the query cache by setting it to a raw array.

This caused the `select` function to receive the wrong data format and
return `undefined`, triggering the error:
"select() must return an array of objects. Got: undefined"

The fix adds a `hasSelect` flag to the SyncContext and skips the
`setQueryData` call when `select` is configured. This is the correct
behavior because:
1. The collection's synced store is already updated
2. The query cache stores the wrapped response format, not the raw items
3. Overwriting the cache with raw items would break the select function

* fix(query-db-collection): fix select option breaking writeInsert/writeUpsert

When using the `select` option to extract items from a wrapped API response
(e.g., `{ data: [...], meta: {...} }`), calling `writeInsert()` or
`writeUpsert()` would corrupt the query cache by setting it to a raw array.

This caused the `select` function to receive the wrong data format and
return `undefined`, triggering the error:
"select() must return an array of objects. Got: undefined"

The fix adds an `updateCacheData` function to the SyncContext that properly
handles cache updates for both cases:
- Without `select`: sets the cache directly with the raw array
- With `select`: uses setQueryData with an updater function to preserve
  the wrapper structure while updating the items array inside it

Also added a comprehensive test that verifies the wrapped response format
(including metadata) is preserved after write operations.

* ci: apply automated fixes

* refactor(query-db-collection): lift updateCacheData out of inline context

Move the updateCacheData function from being inline in the writeContext
object to a standalone function for better readability and maintainability.

* fix(query-db-collection): improve updateCacheData with select-based property detection

Address review feedback:
1. Use select(oldData) to identify the correct array property by reference
   equality instead of "first array property wins" heuristic
2. Fallback to common property names (data, items, results) before scanning
3. Return oldData unchanged instead of raw array when property can't be found
   to avoid breaking select
4. Make updateCacheData optional in SyncContext to avoid breaking changes
5. Add changeset for release

* ci: apply automated fixes

* fix: resolve TypeScript type errors for queryKey

Handle both static and function-based queryKey types properly:
- Get base query key before passing to setQueryData
- Keep writeContext.queryKey as Array<unknown> for SyncContext compatibility

* chore: fix version inconsistencies in example apps

Update example apps to use consistent versions of:
- @tanstack/query-db-collection: ^1.0.7
- @tanstack/react-db: ^0.1.56

Fixes sherif multiple-dependency-versions check.

---------

Co-authored-by: Cursor Agent <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* Fix serializing null/undefined when generating subset queries (#951)

* Fix invalid Electric proxy queries with missing params for null values

When comparison operators (eq, gt, lt, etc.) were used with null/undefined
values, the SQL compiler would generate placeholders ($1, $2) in the WHERE
clause but skip adding the params to the dictionary because serialize()
returns empty string for null/undefined.

This resulted in invalid queries being sent to Electric like:
  subset__where="name" = $1
  subset__params={}

The fix:
- For eq(col, null): Transform to "col IS NULL" syntax
- For other comparisons (gt, lt, gte, lte, like, ilike): Throw a clear
  error since null comparisons don't make semantic sense in SQL

Added comprehensive tests for the sql-compiler including null handling.

* chore: add changeset for Electric null params fix

* fix: use type assertions in sql-compiler tests for phantom type compatibility

* fix: handle edge cases for null comparisons in sql-compiler

Address reviewer feedback:
- eq(null, null) now throws (both args null would cause missing params)
- eq(null, literal) now throws (comparing literal to null is nonsensical)
- Only allow ref and func types as non-null arg in eq(..., null)
- Update changeset to explicitly mention undefined behavior
- Add tests for edge cases and OR + null equality

* test: add e2e tests for eq(col, null) transformation to IS NULL

* test: improve e2e tests for eq(col, null) with longer timeout and baseline comparison

* fix: handle eq(col, null) in local evaluator to match SQL IS NULL semantics

When eq() is called with a literal null/undefined value, the local
JavaScript evaluator now treats it as an IS NULL check instead of
using 3-valued logic (which would always return UNKNOWN).

This matches the SQL compiler's transformation of eq(col, null) to
IS NULL, ensuring consistent behavior between local query evaluation
and remote SQL queries.

- eq(col, null) now returns true if col is null/undefined, false otherwise
- eq(null, col) is also handled symmetrically
- eq(null, null) returns true (both are null)
- 3-valued logic still applies for column-to-column comparisons

This fixes e2e test failures where eq(col, null) queries returned 0
results because all rows were being excluded by the UNKNOWN result.

* docs: explain eq(col, null) handling and 3-valued logic reasoning

Add design document explaining the middle-ground approach for handling
eq(col, null) in the context of PR #765's 3-valued logic implementation.

Key points:
- Literal null values in eq() are transformed to isNull semantics
- 3-valued logic still applies for column-to-column comparisons
- This maintains SQL/JS consistency and handles dynamic values gracefully

* fix: throw error for eq(col, null) - use isNull() instead

Per Kevin's feedback on the 3-valued logic design (PR #765), eq(col, null)
should throw an error rather than being transformed to IS NULL. This is
consistent with how other comparison operators (gt, lt, etc.) handle null.

Changes:
- Revert JS evaluator change that transformed eq(col, null) to isNull semantics
- Update SQL compiler to throw error for eq(col, null) instead of IS NULL
- Update all related unit tests to expect errors
- Remove e2e tests for eq(col, null) (now throws error)
- Update documentation to explain the correct approach

Users should:
- Use isNull(col) or isUndefined(col) to check for null values
- Handle dynamic null values explicitly in their code
- Use non-nullable columns for cursor-based pagination

The original bug (invalid SQL with missing params) is fixed by throwing
a clear error that guides users to the correct approach.
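The guard described above can be sketched as follows. This is an illustrative stand-in, not the actual TanStack DB compiler: the `Ref`, `eq`, and `isNull` shapes here are simplified assumptions, showing only the rule that `eq()` rejects literal null/undefined while `isNull()` compiles to an `IS NULL` predicate.

```typescript
// Hypothetical, simplified sketch of the eq(col, null) guard (not the real API).
type Ref = { type: "ref"; column: string };

function eq(col: Ref, value: unknown): string {
  if (value === null || value === undefined) {
    // Guide users to the correct operator instead of emitting invalid SQL.
    throw new Error(
      `eq() does not support null/undefined values; use isNull("${col.column}") instead`
    );
  }
  return `"${col.column}" = $1`;
}

function isNull(col: Ref): string {
  return `"${col.column}" IS NULL`;
}

const name: Ref = { type: "ref", column: "name" };
console.log(isNull(name)); // "name" IS NULL

let threw = false;
try {
  eq(name, null);
} catch {
  threw = true;
}
console.log(threw); // true
```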

* fix: update changeset and remove design doc per review feedback

- Update changeset to reflect that eq(col, null) throws error (not transforms to IS NULL)
- Remove docs/design/eq-null-handling.md - was only for review, not meant to be merged

* ci: apply automated fixes

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* Fix awaitMatch helper on collection inserts and export isChangeMessage (#1000)

fix(electric-db-collection): fix awaitMatch race condition on inserts and export isChangeMessage

Two issues fixed:

1. **isChangeMessage not exported**: The `isChangeMessage` and `isControlMessage` utilities
   were exported from electric.ts but not re-exported from the package's index.ts, making
   them unavailable to users despite documentation stating otherwise.

2. **awaitMatch race condition on inserts**: When `awaitMatch` was called after the Electric
   messages had already been processed (including up-to-date), it would timeout because:
   - The message buffer (`currentBatchMessages`) was cleared on up-to-date
   - Immediate matches found in the buffer still waited for another up-to-date to resolve

   Fixed by:
   - Moving buffer clearing to the START of batch processing (preserves messages until next batch)
   - Adding `batchCommitted` flag to track when a batch is committed
   - For immediate matches: resolve immediately only if batch is committed (consistent with awaitTxId)
   - For immediate matches during batch processing: wait for up-to-date (maintains commit semantics)
   - Set `batchCommitted` BEFORE `resolveMatchedPendingMatches()` to avoid timing window
   - Set `batchCommitted` on snapshot-end in on-demand mode (matching "ready" semantics)

Fixes issue reported on Discord where inserts would timeout while updates worked.

Co-authored-by: Claude <[email protected]>

* ci: Version Packages (#1026)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* docs: regenerate API documentation (#1027)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* Don't pin @electric-sql/client version (#1031)

* Don't pin @electric-sql/client version

* Add changeset

* update lock file

* Fix sherif linter errors for dependency version mismatches

Standardizes @electric-sql/client to use ^1.2.0 across react-db, solid-db, and vue-db packages, and updates @tanstack/query-db-collection to ^1.0.8 in todo examples.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <[email protected]>

---------

Co-authored-by: Claude Sonnet 4.5 <[email protected]>

* ci: Version Packages (#1033)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Handle subset end message in Electric collection (#1004)

Adds support for the subset-end message introduced in electric-sql/electric#3582

* Delete a row by its key (#1003)

This PR makes it possible to delete a row by key when using the write function passed to a collection's sync function.

* Tagged rows and support for move outs in Electric DB collection (#942)

Builds on top of Electric's ts-client support for tagging rows and move out events: electric-sql/electric#3497

This PR extends TanStack DB such that it handles tagged rows and move out events. A tagged row is removed from the Electric collection when its tag set becomes empty. Note that rows only have tags when the shape they belong to has one or more subqueries.

* Fix: deleted items not disappearing from live queries with `.limit()` (#1044)

* fix: emit delete events when subscribing with includeInitialState: false

When subscribing to a collection with includeInitialState: false,
delete events were being filtered out because the sentKeys set was
empty. This affected live queries with limit/offset where users
would subscribe to get future changes after already loading
initial data via preload() or values().

Changes:
- Add skipFiltering flag separate from loadedInitialState to allow
  filtering to be skipped while still allowing requestSnapshot to work
- Call markAllStateAsSeen() when includeInitialState is explicitly false
- Change internal subscriptions to not pass includeInitialState: false
  explicitly, so they can be distinguished from user subscriptions
- Add tests for optimistic delete behavior with limit

Fixes the issue where deleted items would not disappear from live
queries when using .limit() and subscribing with includeInitialState: false.
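The filtering behavior above can be sketched with a toy subscription. This is an assumption-laden illustration (the real `sentKeys`/`markAllStateAsSeen` live in the collection subscription code and have different signatures): deletes for keys a subscriber never saw are normally dropped, so when `includeInitialState` is false the current state is marked as seen up front.

```typescript
// Illustrative sketch of sentKeys-based delete filtering (not the real API).
class ChangeSubscription {
  private readonly sentKeys = new Set<string>();
  readonly delivered: string[] = [];

  constructor(currentKeys: string[], includeInitialState: boolean) {
    if (!includeInitialState) {
      // markAllStateAsSeen(): the caller already has this data via preload().
      for (const key of currentKeys) this.sentKeys.add(key);
    }
  }

  onDelete(key: string) {
    if (!this.sentKeys.has(key)) return; // unknown key: filtered out
    this.sentKeys.delete(key);
    this.delivered.push(`delete:${key}`);
  }
}

const sub = new ChangeSubscription(["todo-1"], false);
sub.onDelete("todo-1");
console.log(sub.delivered); // the delete is delivered, not dropped
```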

* debug: add extensive logging to track delete event flow

This is a DEBUG BUILD with [TanStack-DB-DEBUG] logs to help track down
why delete events may not be reaching subscribers when using limit/offset.

The debug logs cover:
- subscribeChanges: when subscriptions are created
- emitEvents: when events are emitted to subscriptions
- Subscription.emitEvents: when individual subscriptions receive events
- filterAndFlipChanges: when events are filtered or passed through
- recomputeOptimisticState: when optimistic state is recomputed and events emitted
- sendChangesToPipeline: when changes flow through the D2 pipeline
- applyChanges: when D2 pipeline outputs to the live query collection

To use: Filter browser console for "[TanStack-DB-DEBUG]"

Also includes the fix for includeInitialState: false not emitting deletes.

* ci: apply automated fixes

* debug: add more logging to track delete event flow in live queries

Add comprehensive debug logging to:
- createFilteredCallback in change-events.ts for whereExpression filtering
- sendChangesToInput for D2 pipeline input
- subscribeToOrderedChanges for orderBy/limit path
- splitUpdates for update event handling
- recomputeOptimisticState for pending sync key filtering

This additional logging helps track where delete events may be filtered
out when using live queries with limit/offset and where clauses.

* ci: apply automated fixes

* debug: add logging to graph scheduling and execution

Add debug logging to track:
- scheduleGraphRun: when graph run is scheduled
- executeGraphRun: when graph run executes or returns early
- maybeRunGraph: when graph actually runs, pending work status

This helps diagnose issues where deletes are sent to D2 pipeline
but never appear in the output (applyChanges not called).

* ci: apply automated fixes

* debug: add detailed logging to D2 reduce and topK operators

Add debug logging to track:
- ReduceOperator: input processing, key handling, and result output
- topK: consolidation, sorting, slicing, and result details

Also add two new test cases:
1. Test delete from different page (page 1 delete while viewing page 2)
   - Verifies items shift correctly when delete occurs on earlier page
2. Test delete beyond TopK window (no-op case)
   - Verifies deleting item outside window doesn't affect results

These tests and debug logs will help diagnose issues where deleted
items don't disappear from live queries when using limit/offset.

* ci: apply automated fixes

* debug: add more detailed logging to D2 graph and subscription

Add additional debug logging to help diagnose delete issues:

D2 graph (d2.ts):
- Log when run() starts and completes with step count
- Log pendingWork() results with operator IDs
- Log when operators have pending work in step()

Output operator (output.ts):
- Log when run is called with message count
- Log items in each message being processed

Subscription (subscription.ts):
- Log trackSentKeys with keys being added
- Show total sentKeys count

This should help diagnose scenarios where delete events are sent
to D2 but no applyChanges output is produced.

* ci: apply automated fixes

* debug: add operator type logging to trace D2 pipeline

Add operatorType property to Operator base class and log it when
operators run. This will help identify which operators are processing
the delete and where the data is being lost.

Also add detailed logging to LinearUnaryOperator.run() to show:
- Input message count
- Input/output item counts
- Sample of input and output items

This should reveal exactly which operator is dropping the delete.

* debug: add logging to TopKWithFractionalIndexOperator

This is the key operator for orderBy+limit queries. Add detailed logging to:
- run(): Show message count and index size
- processElement(): Show key, multiplicity changes, and action (INSERT/DELETE/NO_CHANGE)
- processElement result: Show moveIn/moveOut keys

This should reveal exactly why deletes aren't producing output changes
when the item exists in the TopK index.

* ci: apply automated fixes

* fix: filter duplicate inserts in subscription to prevent D2 multiplicity issues

When an item is inserted multiple times without a delete in between,
D2 multiplicity goes above 1. Then when a single delete arrives,
multiplicity goes from 2 to 1 (not 0), so TopK doesn't emit a DELETE event.

This fix:
1. Filters out duplicate inserts in filterAndFlipChanges when key already in sentKeys
2. Removes keys from sentKeys on delete in both filterAndFlipChanges and trackSentKeys
3. Updates test expectation to reflect correct behavior (2 events instead of 3)

Root cause: Multiple subscriptions or sync mechanisms could send duplicate
insert events for the same key, causing D2 to track multiplicity > 1.
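The multiplicity problem described above can be reproduced with a few lines. This sketch is illustrative, not the actual D2 operator: visibility only changes when the count crosses zero, so after a duplicate insert a single delete produces no DELETE event.

```typescript
// Toy multiplicity index demonstrating why duplicate inserts swallow deletes.
class MultiplicityIndex {
  private counts = new Map<string, number>();

  apply(key: string, diff: 1 | -1): "insert" | "delete" | "no-change" {
    const before = this.counts.get(key) ?? 0;
    const after = before + diff;
    if (after === 0) this.counts.delete(key);
    else this.counts.set(key, after);
    if (before === 0 && after > 0) return "insert";
    if (before > 0 && after === 0) return "delete";
    return "no-change"; // multiplicity changed but visibility did not
  }
}

const index = new MultiplicityIndex();
console.log(index.apply("a", 1));  // "insert"
console.log(index.apply("a", 1));  // duplicate insert -> "no-change"
console.log(index.apply("a", -1)); // 2 -> 1, still visible: "no-change"!
```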

* fix: add D2 input level deduplication to prevent multiplicity > 1

The previous fix in CollectionSubscription.filterAndFlipChanges was only
catching duplicates at the subscription level. But each live query has its
own CollectionSubscriber with its own D2 pipeline.

This fix adds a sentToD2Keys set in CollectionSubscriber to track which
keys have been sent to the D2 input, preventing duplicate inserts at the
D2 level regardless of which code path triggers them.

Also clears the tracking on truncate events.

* docs: add detailed changeset for delete fix

* ci: apply automated fixes

* chore: remove debug logging from D2 pipeline and subscription code

Remove all TanStack-DB-DEBUG console statements that were added during
investigation of the deleted items not disappearing from live queries bug.

The fix for duplicate D2 inserts is preserved - just removing the verbose
debug output now that the issue is resolved.

* debug: add logging to trace source of duplicate D2 inserts

Add targeted debug logging to understand where duplicate inserts originate:

1. recomputeOptimisticState: Track what events are generated and when
2. CollectionSubscription.filterAndFlipChanges: Trace filtering decisions
3. CollectionSubscriber.sendChangesToPipeline: Track D2-level deduplication

This will help determine if duplicates come from:
- Multiple calls to recomputeOptimisticState for the same key
- Overlap between initial snapshot and change events
- Multiple code paths feeding the D2 pipeline

* ci: apply automated fixes

* fix: prevent race condition in snapshot loading by adding keys to sentKeys before callback

The race condition occurred because snapshot methods (requestSnapshot, requestLimitedSnapshot)
added keys to sentKeys AFTER calling the callback, while filterAndFlipChanges added keys
BEFORE. If a change event arrived during callback execution, it would not see the keys
in sentKeys yet, allowing duplicate inserts.

Changes:
- Add keys to sentKeys BEFORE calling callback in requestSnapshot and requestLimitedSnapshot
- Remove redundant D2-level deduplication (sentToD2Keys) - subscription-level filtering is sufficient
- Remove debug logging added during investigation
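The ordering fix above can be shown with a toy subscription. The names are illustrative (the real `requestSnapshot` does more work): because keys enter `sentKeys` before the snapshot callback runs, a change event that races in during the callback sees them and its duplicate insert is filtered.

```typescript
// Illustrative sketch: add keys to sentKeys BEFORE invoking the callback.
class Subscription {
  readonly sentKeys = new Set<string>();
  private readonly events: string[] = [];

  emitInsert(key: string) {
    if (this.sentKeys.has(key)) return; // duplicate filtered
    this.sentKeys.add(key);
    this.events.push(`insert:${key}`);
  }

  requestSnapshot(keys: string[], callback: (keys: string[]) => void) {
    for (const key of keys) this.sentKeys.add(key); // BEFORE the callback
    callback(keys);
  }

  log() {
    return this.events;
  }
}

const sub = new Subscription();
// A change event for "a" fires while the snapshot callback is still running:
sub.requestSnapshot(["a", "b"], () => sub.emitInsert("a"));
console.log(sub.log()); // [] — the racing insert was deduplicated
```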

* docs: update changeset to reflect race condition fix

* cleanup

* simplify changeset

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Sam Willis <[email protected]>

* ci: Version Packages (#1036)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix(db): re-request and buffer subsets after truncate for on-demand sync mode (#1043)

* fix(db): re-request subsets after truncate for on-demand sync mode

When a must-refetch (409) occurs in on-demand sync mode, the collection
receives a truncate which clears all data and resets the loadSubset
deduplication state. However, subscriptions were not re-requesting
their previously loaded subsets, leaving the collection empty.

This fix adds a truncate event listener to CollectionSubscription that:
1. Resets pagination/snapshot tracking state (but NOT sentKeys)
2. Re-requests all previously loaded subsets

We intentionally keep sentKeys intact because the truncate event is
emitted BEFORE delete events are sent to subscribers. If we cleared
sentKeys, delete events would be filtered by filterAndFlipChanges.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
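The truncate handling described in this commit can be sketched as below. All names are illustrative stand-ins for the CollectionSubscription internals: pagination state is reset and every previously loaded subset is re-requested, while `sentKeys` is deliberately left intact so the delete events that follow the truncate still pass the subscriber's filter.

```typescript
// Illustrative sketch of re-requesting subsets on truncate (not the real API).
class SubsetSubscription {
  readonly sentKeys = new Set<string>();
  private readonly loadedSubsets: string[] = [];
  readonly requests: string[] = [];

  loadSubset(where: string) {
    this.loadedSubsets.push(where);
    this.requests.push(where);
  }

  onTruncate() {
    // Pagination/snapshot tracking state would be reset here.
    // NOTE: sentKeys is intentionally NOT cleared — the truncate is emitted
    // before the delete events, which would otherwise be filtered out.
    for (const where of this.loadedSubsets) {
      this.requests.push(where); // re-request each previously loaded subset
    }
  }
}

const sub = new SubsetSubscription();
sub.sentKeys.add("row-1");
sub.loadSubset(`status = 'active'`);
sub.onTruncate();
console.log(sub.requests.length);       // 2 — original request + re-request
console.log(sub.sentKeys.has("row-1")); // true — kept so deletes still pass
```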

* feat: Buffer subscription changes during truncate

This change buffers subscription changes during a truncate event until all
loadSubset refetches complete. This prevents a flash of missing content
between deletes and new inserts.

Co-authored-by: sam.willis <[email protected]>

* ci: apply automated fixes

* changeset

* tweaks

* test

* address review

* ci: apply automated fixes

---------

Co-authored-by: Igor Barakaiev <[email protected]>
Co-authored-by: Claude Opus 4.5 <[email protected]>
Co-authored-by: Cursor Agent <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* ci: Version Packages (#1050)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix: prevent duplicate inserts from reaching D2 pipeline in live queries (#1054)

* add test

* fix

* changeset

* fix versions

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* docs: regenerate API documentation (#1051)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* ci: Version Packages (#1055)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Fix slow onInsert awaitMatch performance issue (#1029)

* fix(electric): preserve message buffer across batches for awaitMatch

The buffer was being cleared at the start of each new batch, which caused
messages to be lost when multiple batches arrived before awaitMatch was
called. This led to:
- awaitMatch timing out (~3-5s per attempt)
- Transaction rollbacks when the timeout threw an error

The fix removes the buffer clearing between batches. Messages are now
preserved until the buffer reaches MAX_BATCH_MESSAGES (1000), at which
point the oldest messages are dropped. This ensures awaitMatch can find
messages even when heartbeat batches or other sync activity arrives
before the API call completes.

Added test case for the specific race condition: multiple batches
(including heartbeats) arriving while onInsert's API call is in progress.

* chore: align query-db-collection versions in examples

Update todo examples to use ^1.0.8 to match other examples and fix
sherif version consistency check.

* chore: align example dependency versions

Update todo and paced-mutations-demo examples to use consistent versions:
- @tanstack/query-db-collection: ^1.0.11
- @tanstack/react-db: ^0.1.59

---------

Co-authored-by: Claude <[email protected]>

* Add missing changeset for PR 1029 (#1062)

Add changeset for PR #1029 (awaitMatch performance fix)

Co-authored-by: Claude <[email protected]>

* ci: Version Packages (#1063)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* ci: apply automated fixes

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Lachlan Collins <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Kyle Mathews <[email protected]>
Co-authored-by: Claude <[email protected]>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Kevin <[email protected]>
Co-authored-by: Cursor Agent <[email protected]>
Co-authored-by: Igor Barakaiev <[email protected]>
samwillis added a commit that referenced this pull request Dec 23, 2025
…ing order on partial page (#970)

* test(react-db): add tests for deletion from partial page in infinite query

Add tests for deleting items from a page with fewer rows than pageSize
in useLiveInfiniteQuery. Tests both ascending and descending order to
verify the behavior difference.

* fix: correctly extract indexed value in requestLimitedSnapshot

The `biggestObservedValue` variable was incorrectly set to the full row object
instead of the indexed value (e.g., salary). This caused the BTree comparison
in `index.take()` to fail, resulting in the same data being loaded multiple
times. Each item would be inserted with a multiplicity > 1, and when deleted,
the multiplicity would decrement but not reach 0, so the item would remain
visible in the TopK.

This fix creates a value extractor from the orderBy expression and uses it to
extract the actual indexed value from each row before tracking it as the
"biggest observed value".

* chore: add changeset for DESC deletion fix

* Main branch merge conflict (#1066)

* fix(query-db-collection): use deep equality for object field comparison (#967)

* test: add test for object field update rollback issue

Add test that verifies object field updates with refetch: false
don't rollback to previous values after server response.

* fix: use deep equality for object field comparison in query observer

Replace shallow equality (===) with deep equality when comparing items
in the query observer callback. This fixes an issue where updating
object fields with refetch: false would cause rollback to previous
values every other update.
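The comparison change above comes down to the difference sketched below. The `deepEqual` helper is an illustrative stand-in for whatever equality utility the package actually uses: shallow `===` reports two structurally identical rows as different, deep equality does not.

```typescript
// Illustrative recursive deep-equality check (not the package's own utility).
function deepEqual(a: unknown, b: unknown): boolean {
  if (a === b) return true;
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
    return false;
  }
  const keysA = Object.keys(a as object);
  const keysB = Object.keys(b as object);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) =>
    deepEqual((a as Record<string, unknown>)[k], (b as Record<string, unknown>)[k])
  );
}

const fromCache = { id: 1, profile: { theme: "dark" } };
const fromServer = { id: 1, profile: { theme: "dark" } };
console.log(fromCache === fromServer);         // false (different references)
console.log(deepEqual(fromCache, fromServer)); // true (same structure)
```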

* chore: add changeset for object field update rollback fix

* ci: Version Packages (#961)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* docs: regenerate API documentation (#969)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* ci: run prettier autofix action (#972)

* ci: prettier auto-fix

* Sync prettier config with other TanStack projects

* Fix lockfile

* docs: correct local relative links (#973)

* fix(db-ivm): use row keys for stable ORDER BY tie-breaking (#957)

* fix(db-ivm): use row keys for stable ORDER BY tie-breaking

Replace hash-based object ID tie-breaking with direct key comparison
for deterministic ordering when ORDER BY values are equal.

- Use row key directly as tie-breaker (always string | number, unique per row)
- Remove globalObjectIdGenerator dependency
- Simplify TaggedValue from [K, V, Tag] to [K, T] tuple
- Clean up helper functions (tagValue, getKey, getVal, getTag)

This ensures stable, deterministic ordering across page reloads and
eliminates potential hash collisions.
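The tie-breaking scheme above amounts to a two-level comparator, sketched here with illustrative types (the real db-ivm code operates on its `TaggedValue` tuples): when two rows have equal ORDER BY values, their unique row keys decide the order, so sorting is deterministic across runs.

```typescript
// Illustrative comparator: order by value, break ties with the row key.
type Keyed = { key: string | number; value: number };

function compareKeys(a: string | number, b: string | number): number {
  if (a === b) return 0;
  return String(a) < String(b) ? -1 : 1;
}

function orderByValueThenKey(a: Keyed, b: Keyed): number {
  if (a.value !== b.value) return a.value - b.value;
  return compareKeys(a.key, b.key); // deterministic tie-breaker
}

const rows: Keyed[] = [
  { key: "b", value: 10 },
  { key: "a", value: 10 },
  { key: "c", value: 5 },
];
rows.sort(orderByValueThenKey);
console.log(rows.map((r) => r.key)); // [ 'c', 'a', 'b' ]
```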

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* fix(db): ensure deterministic iteration order for collections and indexes (#958)

* fix(db): ensure deterministic iteration order for collections and indexes

- SortedMap: add key-based tie-breaking for deterministic ordering
- SortedMap: optimize to skip value comparison when no comparator provided
- BTreeIndex: sort keys within same indexed value for deterministic order
- BTreeIndex: add fast paths for empty/single-key sets
- CollectionStateManager: always use SortedMap for deterministic iteration
- Extract compareKeys utility to utils/comparison.ts
- Add comprehensive tests for deterministic ordering behavior

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* fix(db) loadSubset when orderby has multiple columns (#926)

* fix(db) loadSubset when orderby has multiple columns

failing test for multiple orderby and loadsubset

push down multiple orderby predicates to load subset

split the order-by cursor predicate build into two: an imprecise wider band for local loading, and a precise one for the sync loadSubset

new e2e tests for composite orderby and pagination

changeset

when doing gt/lt comparisons to a bool, cast to string

fix: use non-boolean columns in multi-column orderBy e2e tests

Electric/PostgreSQL doesn't support comparison operators (<, >, <=, >=)
on boolean types. Changed tests to use age (number) and name (string)
columns instead of isActive (boolean) to avoid this limitation.

The core multi-column orderBy functionality still works correctly -
this is just a test adjustment to work within Electric's SQL parser
constraints.

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* ci: sync changes from other projects (#978)

* ci: fix vitest/lint, fix package.json

* ci: apply automated fixes

* Remove ts-expect-error

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* ci: more changes from other projects (#980)

* Revert husky removal (#981)

* revert: restore husky pre-commit hook removed in #980

This restores the husky pre-commit hook configuration that was removed
in PR #980. The hook runs lint-staged on staged .ts and .tsx files.

* chore: update pnpm-lock.yaml and fix husky pre-commit format

Update lockfile with husky and lint-staged dependencies.
Update pre-commit hook to modern husky v9 format.

---------

Co-authored-by: Claude <[email protected]>

* Restore only publishing docs on release (#982)

* ci: revert doc publishing workflow changes from #980

Restore the doc publishing workflow that was removed in PR #980:
- Restore docs-sync.yml for daily auto-generated docs
- Restore doc generation steps in release.yml after package publish
- Restore NPM_TOKEN environment variable

* ci: restore doc generation on release only

Restore doc generation steps in release.yml that run after package
publish. Remove the daily docs-sync.yml workflow - docs should only
be regenerated when we release.

---------

Co-authored-by: Claude <[email protected]>

* ci: sync package versions (#984)

* chore(deps): update dependency @angular/common to v19.2.16 [security] (#919)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* chore(deps): update all non-major dependencies (#986)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>

* Fix build example site auth secret error (#988)

fix(ci): add BETTER_AUTH_SECRET for projects example build

Copy .env.example to .env during CI build to provide required
BETTER_AUTH_SECRET that better-auth now enforces.

Co-authored-by: Claude <[email protected]>

* Unit tests for data equality comparison (#992)

* Add @tanstack/config dev dependency

* Tests for eq comparison of Date objects

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* Handle invalid collection getKey return values (#1008)

* Add runtime validation for collection getKey return values

Throws InvalidKeyError when getKey returns values other than string or
number (e.g., null, objects, booleans). The validation is optimized to
only require 1 typeof check on the happy path (string keys).

- Add InvalidKeyError class to errors.ts
- Update generateGlobalKey to validate key type before generating key
- Add tests for invalid key types (null, object, boolean)
- Add tests confirming valid keys (string, number, empty string, zero)
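The happy-path optimization above can be sketched directly. The `InvalidKeyError` here mirrors the class added in errors.ts, but the function shape is an illustrative assumption: string keys pay exactly one `typeof` check, and only non-string keys reach the second check.

```typescript
// Illustrative sketch of key validation with a 1-typeof happy path.
class InvalidKeyError extends Error {}

function validateKey(key: unknown): string | number {
  if (typeof key === "string") return key; // happy path: 1 typeof check
  if (typeof key === "number") return key;
  throw new InvalidKeyError(
    `getKey must return a string or number, got: ${Object.prototype.toString.call(key)}`
  );
}

console.log(validateKey("user-1")); // "user-1"
console.log(validateKey(0));        // 0 (zero is a valid key)

let threw = false;
try {
  validateKey(null);
} catch (e) {
  threw = e instanceof InvalidKeyError;
}
console.log(threw); // true
```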

* ci: apply automated fixes

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* Remove doc regeneration from autofix action (#1009)

ci: remove doc regeneration from autofix workflow

Docs are regenerated as part of releases, not on every PR/push.

Co-authored-by: Claude <[email protected]>

* feat(db,electric,query): separate cursor expressions for flexible pagination (#960)

* feat(db,electric,query): separate cursor expressions from where clause in loadSubset

- Add CursorExpressions type with whereFrom, whereCurrent, and lastKey
- LoadSubsetOptions.where no longer includes cursor - passed separately via cursor property
- Add offset to LoadSubsetOptions for offset-based pagination support
- Electric sync layer makes two parallel requestSnapshot calls when cursor present
- Query collection serialization includes offset for query key generation

This allows sync layers to choose between cursor-based or offset-based pagination,
and Electric can efficiently handle tie-breaking with targeted requests.

test(react-db): update useLiveInfiniteQuery test mock to handle cursor expressions

The test mock's loadSubset handler now handles the new cursor property
in LoadSubsetOptions by combining whereCurrent (ties) and whereFrom (next page)
data, deduplicating by id, and re-sorting.

fix(electric): make cursor requestSnapshot calls sequential

Changed parallel requestSnapshot calls to sequential to avoid potential
issues with concurrent snapshot requests that may cause timeouts in CI.

fix(electric): combine cursor expressions into single requestSnapshot

Instead of making two separate requestSnapshot calls (one for whereFrom,
one for whereCurrent), combine them using OR into a single request.
This avoids potential issues with multiple sequential snapshot requests
that were causing timeouts in CI.

The combined expression (whereFrom OR whereCurrent) matches the original
behavior where cursor was combined with the where clause.

wip

working?

update changeset

fix query test

* update docs

* ci: apply automated fixes

* fixups

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* ci: Version Packages (#974)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Query collection select writeupsert error (#1023)

* test(query-db-collection): add failing tests for select + writeInsert/writeUpsert bug

Add tests that reproduce the bug where using writeInsert or writeUpsert
with a collection that has a select option causes an error:
"select() must return an array of objects. Got: undefined"

The bug occurs because performWriteOperations sets the query cache with
a raw array, but the select function expects the wrapped response format.

Related issue: https://github.com/TanStack/db/issues/xyz

* fix(query-db-collection): fix select option breaking writeInsert/writeUpsert

When using the `select` option to extract items from a wrapped API response
(e.g., `{ data: [...], meta: {...} }`), calling `writeInsert()` or
`writeUpsert()` would corrupt the query cache by setting it to a raw array.

This caused the `select` function to receive the wrong data format and
return `undefined`, triggering the error:
"select() must return an array of objects. Got: undefined"

The fix adds a `hasSelect` flag to the SyncContext and skips the
`setQueryData` call when `select` is configured. This is the correct
behavior because:
1. The collection's synced store is already updated
2. The query cache stores the wrapped response format, not the raw items
3. Overwriting the cache with raw items would break the select function

* fix(query-db-collection): fix select option breaking writeInsert/writeUpsert

When using the `select` option to extract items from a wrapped API response
(e.g., `{ data: [...], meta: {...} }`), calling `writeInsert()` or
`writeUpsert()` would corrupt the query cache by setting it to a raw array.

This caused the `select` function to receive the wrong data format and
return `undefined`, triggering the error:
"select() must return an array of objects. Got: undefined"

The fix adds an `updateCacheData` function to the SyncContext that properly
handles cache updates for both cases:
- Without `select`: sets the cache directly with the raw array
- With `select`: uses setQueryData with an updater function to preserve
  the wrapper structure while updating the items array inside it

Also added a comprehensive test that verifies the wrapped response format
(including metadata) is preserved after write operations.

* ci: apply automated fixes

* refactor(query-db-collection): lift updateCacheData out of inline context

Move the updateCacheData function from being inline in the writeContext
object to a standalone function for better readability and maintainability.

* fix(query-db-collection): improve updateCacheData with select-based property detection

Address review feedback:
1. Use select(oldData) to identify the correct array property by reference
   equality instead of "first array property wins" heuristic
2. Fallback to common property names (data, items, results) before scanning
3. Return oldData unchanged instead of raw array when property can't be found
   to avoid breaking select
4. Make updateCacheData optional in SyncContext to avoid breaking changes
5. Add changeset for release
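The detection strategy in points 1–3 can be sketched as follows. This is an illustrative simplification (the real `updateCacheData` in @tanstack/query-db-collection has a different signature and the fallback scan of common property names is omitted here): `select(oldData)` identifies the array property by reference equality, and when no property matches, the cache is returned unchanged rather than corrupted with a raw array.

```typescript
// Illustrative sketch of select-based cache updating (not the real signature).
type Item = { id: number };

function updateCacheData<TData extends Record<string, unknown>>(
  oldData: TData,
  select: (data: TData) => Array<Item>,
  newItems: Array<Item>
): TData {
  const selected = select(oldData);
  // 1. Identify the array property by reference equality with select's result.
  for (const [prop, value] of Object.entries(oldData)) {
    if (value === selected) {
      return { ...oldData, [prop]: newItems } as TData;
    }
  }
  // 2. Property not found (select derived a new array): leave the cache
  //    unchanged rather than breaking select with a raw array.
  return oldData;
}

const cached = { data: [{ id: 1 }], meta: { total: 1 } };
const updated = updateCacheData(cached, (d) => d.data, [{ id: 1 }, { id: 2 }]);
console.log(updated.data.length); // 2
console.log(updated.meta);        // wrapper metadata is preserved
```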

* ci: apply automated fixes

* fix: resolve TypeScript type errors for queryKey

Handle both static and function-based queryKey types properly:
- Get base query key before passing to setQueryData
- Keep writeContext.queryKey as Array<unknown> for SyncContext compatibility

* chore: fix version inconsistencies in example apps

Update example apps to use consistent versions of:
- @tanstack/query-db-collection: ^1.0.7
- @tanstack/react-db: ^0.1.56

Fixes sherif multiple-dependency-versions check.

---------

Co-authored-by: Cursor Agent <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* Fix serializing null/undefined when generating subset queries (#951)

* Fix invalid Electric proxy queries with missing params for null values

When comparison operators (eq, gt, lt, etc.) were used with null/undefined
values, the SQL compiler would generate placeholders ($1, $2) in the WHERE
clause but skip adding the params to the dictionary because serialize()
returns empty string for null/undefined.

This resulted in invalid queries being sent to Electric like:
  subset__where="name" = $1
  subset__params={}

The fix:
- For eq(col, null): Transform to "col IS NULL" syntax
- For other comparisons (gt, lt, gte, lte, like, ilike): Throw a clear
  error since null comparisons don't make semantic sense in SQL

Added comprehensive tests for the sql-compiler including null handling.

* chore: add changeset for Electric null params fix

* fix: use type assertions in sql-compiler tests for phantom type compatibility

* fix: handle edge cases for null comparisons in sql-compiler

Address reviewer feedback:
- eq(null, null) now throws (both args null would cause missing params)
- eq(null, literal) now throws (comparing literal to null is nonsensical)
- Only allow ref and func types as non-null arg in eq(..., null)
- Update changeset to explicitly mention undefined behavior
- Add tests for edge cases and OR + null equality

* test: add e2e tests for eq(col, null) transformation to IS NULL

* test: improve e2e tests for eq(col, null) with longer timeout and baseline comparison

* fix: handle eq(col, null) in local evaluator to match SQL IS NULL semantics

When eq() is called with a literal null/undefined value, the local
JavaScript evaluator now treats it as an IS NULL check instead of
using 3-valued logic (which would always return UNKNOWN).

This matches the SQL compiler's transformation of eq(col, null) to
IS NULL, ensuring consistent behavior between local query evaluation
and remote SQL queries.

- eq(col, null) now returns true if col is null/undefined, false otherwise
- eq(null, col) is also handled symmetrically
- eq(null, null) returns true (both are null)
- 3-valued logic still applies for column-to-column comparisons

This fixes e2e test failures where eq(col, null) queries returned 0
results because all rows were being excluded by the UNKNOWN result.

* docs: explain eq(col, null) handling and 3-valued logic reasoning

Add design document explaining the middle-ground approach for handling
eq(col, null) in the context of PR #765's 3-valued logic implementation.

Key points:
- Literal null values in eq() are transformed to isNull semantics
- 3-valued logic still applies for column-to-column comparisons
- This maintains SQL/JS consistency and handles dynamic values gracefully

* fix: throw error for eq(col, null) - use isNull() instead

Per Kevin's feedback on the 3-valued logic design (PR #765), eq(col, null)
should throw an error rather than being transformed to IS NULL. This is
consistent with how other comparison operators (gt, lt, etc.) handle null.

Changes:
- Revert JS evaluator change that transformed eq(col, null) to isNull semantics
- Update SQL compiler to throw error for eq(col, null) instead of IS NULL
- Update all related unit tests to expect errors
- Remove e2e tests for eq(col, null) (now throws error)
- Update documentation to explain the correct approach

Users should:
- Use isNull(col) or isUndefined(col) to check for null values
- Handle dynamic null values explicitly in their code
- Use non-nullable columns for cursor-based pagination

The original bug (invalid SQL with missing params) is fixed by throwing
a clear error that guides users to the correct approach.
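
A minimal sketch of the validation rule described above — the names and argument shape here are illustrative, not the actual sql-compiler code:

```typescript
// Hypothetical model: comparison operators reject literal null/undefined
// operands with an error that points users at isNull()/isUndefined().
type Arg =
  | { type: "ref"; path: Array<string> }
  | { type: "val"; value: unknown }

function assertComparisonArgs(op: string, args: Array<Arg>): void {
  for (const arg of args) {
    if (arg.type === "val" && (arg.value === null || arg.value === undefined)) {
      throw new Error(
        `Cannot use ${op}() with a null/undefined value. ` +
          `Use isNull(col) or isUndefined(col) instead.`
      )
    }
  }
}
```

Throwing here, rather than silently generating a placeholder with no matching param, surfaces the original bug (invalid queries like `subset__where="name" = $1` with empty params) as a clear error at query-build time.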

* fix: update changeset and remove design doc per review feedback

- Update changeset to reflect that eq(col, null) throws error (not transforms to IS NULL)
- Remove docs/design/eq-null-handling.md - was only for review, not meant to be merged

* ci: apply automated fixes

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* Fix awaitMatch helper on collection inserts and export isChangeMessage (#1000)

fix(electric-db-collection): fix awaitMatch race condition on inserts and export isChangeMessage

Two issues fixed:

1. **isChangeMessage not exported**: The `isChangeMessage` and `isControlMessage` utilities
   were exported from electric.ts but not re-exported from the package's index.ts, making
   them unavailable to users despite documentation stating otherwise.

2. **awaitMatch race condition on inserts**: When `awaitMatch` was called after the Electric
   messages had already been processed (including up-to-date), it would timeout because:
   - The message buffer (`currentBatchMessages`) was cleared on up-to-date
   - Immediate matches found in the buffer still waited for another up-to-date to resolve

   Fixed by:
   - Moving buffer clearing to the START of batch processing (preserves messages until next batch)
   - Adding `batchCommitted` flag to track when a batch is committed
   - For immediate matches: resolve immediately only if batch is committed (consistent with awaitTxId)
   - For immediate matches during batch processing: wait for up-to-date (maintains commit semantics)
   - Set `batchCommitted` BEFORE `resolveMatchedPendingMatches()` to avoid timing window
   - Set `batchCommitted` on snapshot-end in on-demand mode (matching "ready" semantics)

Fixes an issue reported on Discord where inserts would time out while updates worked.
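
The commit-gated resolution in the bullet points above might be modeled roughly like this. MatchTracker is a hypothetical simplification, not the real electric-db-collection internals:

```typescript
// Sketch: matches only resolve once their batch is committed (up-to-date),
// and an immediate buffer match resolves right away only if the current
// batch has already been committed.
class MatchTracker<M> {
  private buffer: Array<M> = []
  private batchCommitted = false
  private waiters: Array<{
    match: (m: M) => boolean
    matched: boolean
    resolve: () => void
  }> = []

  startBatch(): void {
    // Clear at the START of a new batch, so messages from the previous
    // batch stay visible until fresh ones arrive.
    this.buffer = []
    this.batchCommitted = false
  }

  receive(message: M): void {
    this.buffer.push(message)
    for (const w of this.waiters) if (!w.matched) w.matched = w.match(message)
  }

  commitBatch(): void {
    // Set the flag BEFORE resolving, to avoid the timing window noted above.
    this.batchCommitted = true
    this.waiters = this.waiters.filter((w) => {
      if (w.matched) w.resolve()
      return !w.matched
    })
  }

  awaitMatch(match: (m: M) => boolean): Promise<void> {
    return new Promise((resolve) => {
      const matched = this.buffer.some(match)
      // Immediate match on a committed batch: resolve right away.
      if (matched && this.batchCommitted) return resolve()
      // Otherwise wait for the next commit (maintains commit semantics).
      this.waiters.push({ match, matched, resolve })
    })
  }
}
```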

Co-authored-by: Claude <[email protected]>

* ci: Version Packages (#1026)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* docs: regenerate API documentation (#1027)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* Don't pin @electric-sql/client version (#1031)

* Don't pin @electric-sql/client version

* Add changeset

* update lock file

* Fix sherif linter errors for dependency version mismatches

Standardizes @electric-sql/client to use ^1.2.0 across react-db, solid-db, and vue-db packages, and updates @tanstack/query-db-collection to ^1.0.8 in todo examples.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <[email protected]>

---------

Co-authored-by: Claude Sonnet 4.5 <[email protected]>

* ci: Version Packages (#1033)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Handle subset end message in Electric collection (#1004)

Adds support for the subset-end message introduced in electric-sql/electric#3582

* Delete a row by its key (#1003)

This PR makes it possible to delete a row by key when using the write function passed to a collection's sync function.

* Tagged rows and support for move outs in Electric DB collection (#942)

Builds on top of Electric's ts-client support for tagging rows and move out events: electric-sql/electric#3497

This PR extends TanStack DB so that it handles tagged rows and move out events. A tagged row is removed from the Electric collection when its tag set becomes empty. Note that rows only have tags when the shape they belong to has one or more subqueries.

* Fix: deleted items not disappearing from live queries with `.limit()` (#1044)

* fix: emit delete events when subscribing with includeInitialState: false

When subscribing to a collection with includeInitialState: false,
delete events were being filtered out because the sentKeys set was
empty. This affected live queries with limit/offset where users
would subscribe to get future changes after already loading
initial data via preload() or values().

Changes:
- Add skipFiltering flag separate from loadedInitialState to allow
  filtering to be skipped while still allowing requestSnapshot to work
- Call markAllStateAsSeen() when includeInitialState is explicitly false
- Change internal subscriptions to not pass includeInitialState: false
  explicitly, so they can be distinguished from user subscriptions
- Add tests for optimistic delete behavior with limit

Fixes the issue where deleted items would not disappear from live
queries when using .limit() and subscribing with includeInitialState: false.

* debug: add extensive logging to track delete event flow

This is a DEBUG BUILD with [TanStack-DB-DEBUG] logs to help track down
why delete events may not be reaching subscribers when using limit/offset.

The debug logs cover:
- subscribeChanges: when subscriptions are created
- emitEvents: when events are emitted to subscriptions
- Subscription.emitEvents: when individual subscriptions receive events
- filterAndFlipChanges: when events are filtered or passed through
- recomputeOptimisticState: when optimistic state is recomputed and events emitted
- sendChangesToPipeline: when changes flow through the D2 pipeline
- applyChanges: when D2 pipeline outputs to the live query collection

To use: Filter browser console for "[TanStack-DB-DEBUG]"

Also includes the fix for includeInitialState: false not emitting deletes.

* ci: apply automated fixes

* debug: add more logging to track delete event flow in live queries

Add comprehensive debug logging to:
- createFilteredCallback in change-events.ts for whereExpression filtering
- sendChangesToInput for D2 pipeline input
- subscribeToOrderedChanges for orderBy/limit path
- splitUpdates for update event handling
- recomputeOptimisticState for pending sync key filtering

This additional logging helps track where delete events may be filtered
out when using live queries with limit/offset and where clauses.

* ci: apply automated fixes

* debug: add logging to graph scheduling and execution

Add debug logging to track:
- scheduleGraphRun: when graph run is scheduled
- executeGraphRun: when graph run executes or returns early
- maybeRunGraph: when graph actually runs, pending work status

This helps diagnose issues where deletes are sent to D2 pipeline
but never appear in the output (applyChanges not called).

* ci: apply automated fixes

* debug: add detailed logging to D2 reduce and topK operators

Add debug logging to track:
- ReduceOperator: input processing, key handling, and result output
- topK: consolidation, sorting, slicing, and result details

Also add two new test cases:
1. Test delete from different page (page 1 delete while viewing page 2)
   - Verifies items shift correctly when delete occurs on earlier page
2. Test delete beyond TopK window (no-op case)
   - Verifies deleting item outside window doesn't affect results

These tests and debug logs will help diagnose issues where deleted
items don't disappear from live queries when using limit/offset.

* ci: apply automated fixes

* debug: add more detailed logging to D2 graph and subscription

Add additional debug logging to help diagnose delete issues:

D2 graph (d2.ts):
- Log when run() starts and completes with step count
- Log pendingWork() results with operator IDs
- Log when operators have pending work in step()

Output operator (output.ts):
- Log when run is called with message count
- Log items in each message being processed

Subscription (subscription.ts):
- Log trackSentKeys with keys being added
- Show total sentKeys count

This should help diagnose scenarios where delete events are sent
to D2 but no applyChanges output is produced.

* ci: apply automated fixes

* debug: add operator type logging to trace D2 pipeline

Add operatorType property to Operator base class and log it when
operators run. This will help identify which operators are processing
the delete and where the data is being lost.

Also add detailed logging to LinearUnaryOperator.run() to show:
- Input message count
- Input/output item counts
- Sample of input and output items

This should reveal exactly which operator is dropping the delete.

* debug: add logging to TopKWithFractionalIndexOperator

This is the key operator for orderBy+limit queries. Add detailed logging to:
- run(): Show message count and index size
- processElement(): Show key, multiplicity changes, and action (INSERT/DELETE/NO_CHANGE)
- processElement result: Show moveIn/moveOut keys

This should reveal exactly why deletes aren't producing output changes
when the item exists in the TopK index.

* ci: apply automated fixes

* fix: filter duplicate inserts in subscription to prevent D2 multiplicity issues

When an item is inserted multiple times without a delete in between,
D2 multiplicity goes above 1. Then when a single delete arrives,
multiplicity goes from 2 to 1 (not 0), so TopK doesn't emit a DELETE event.

This fix:
1. Filters out duplicate inserts in filterAndFlipChanges when key already in sentKeys
2. Removes keys from sentKeys on delete in both filterAndFlipChanges and trackSentKeys
3. Updates test expectation to reflect correct behavior (2 events instead of 3)

Root cause: Multiple subscriptions or sync mechanisms could send duplicate
insert events for the same key, causing D2 to track multiplicity > 1.
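
The multiplicity arithmetic behind the bug can be demonstrated with a tiny counter (illustrative, not the actual D2 code): each insert is a +1 diff, each delete is a -1, and a DELETE event is only emitted when the count returns to zero.

```typescript
// Minimal model of D2-style multiplicity tracking per key.
class MultiplicityIndex<K> {
  private counts = new Map<K, number>()

  apply(key: K, diff: 1 | -1): "insert" | "delete" | "no_change" {
    const before = this.counts.get(key) ?? 0
    const after = before + diff
    if (after === 0) this.counts.delete(key)
    else this.counts.set(key, after)
    if (before <= 0 && after > 0) return "insert"
    if (before > 0 && after <= 0) return "delete"
    return "no_change" // e.g. 2 -> 1: the item is still "present"
  }
}
```

With a duplicate insert the key's multiplicity reaches 2, so one delete only drops it to 1 and no DELETE event is produced — which is exactly why deduplicating inserts before they reach the pipeline fixes the symptom.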

* fix: add D2 input level deduplication to prevent multiplicity > 1

The previous fix in CollectionSubscription.filterAndFlipChanges was only
catching duplicates at the subscription level. But each live query has its
own CollectionSubscriber with its own D2 pipeline.

This fix adds a sentToD2Keys set in CollectionSubscriber to track which
keys have been sent to the D2 input, preventing duplicate inserts at the
D2 level regardless of which code path triggers them.

Also clears the tracking on truncate events.

* docs: add detailed changeset for delete fix

* ci: apply automated fixes

* chore: remove debug logging from D2 pipeline and subscription code

Remove all TanStack-DB-DEBUG console statements that were added during
investigation of the deleted items not disappearing from live queries bug.

The fix for duplicate D2 inserts is preserved; this commit just removes the
verbose debug output now that the issue is resolved.

* debug: add logging to trace source of duplicate D2 inserts

Add targeted debug logging to understand where duplicate inserts originate:

1. recomputeOptimisticState: Track what events are generated and when
2. CollectionSubscription.filterAndFlipChanges: Trace filtering decisions
3. CollectionSubscriber.sendChangesToPipeline: Track D2-level deduplication

This will help determine if duplicates come from:
- Multiple calls to recomputeOptimisticState for the same key
- Overlap between initial snapshot and change events
- Multiple code paths feeding the D2 pipeline

* ci: apply automated fixes

* fix: prevent race condition in snapshot loading by adding keys to sentKeys before callback

The race condition occurred because snapshot methods (requestSnapshot, requestLimitedSnapshot)
added keys to sentKeys AFTER calling the callback, while filterAndFlipChanges added keys
BEFORE. If a change event arrived during callback execution, it would not see the keys
in sentKeys yet, allowing duplicate inserts.

Changes:
- Add keys to sentKeys BEFORE calling callback in requestSnapshot and requestLimitedSnapshot
- Remove redundant D2-level deduplication (sentToD2Keys) - subscription-level filtering is sufficient
- Remove debug logging added during investigation
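
The ordering fix can be sketched as follows (illustrative names, not the actual subscription code): keys must enter sentKeys BEFORE the snapshot callback runs, so a change event arriving during the callback sees them and is filtered as a duplicate.

```typescript
// Snapshot delivery: record keys first, then invoke the callback.
function deliverSnapshot<K>(
  sentKeys: Set<K>,
  keys: Array<K>,
  callback: (keys: Array<K>) => void
): void {
  for (const k of keys) sentKeys.add(k) // BEFORE the callback, not after
  callback(keys)
}

// Change delivery: duplicate inserts for already-sent keys are skipped.
function deliverChange<K>(
  sentKeys: Set<K>,
  key: K,
  type: "insert" | "delete"
): "insert" | "delete" | "skip" {
  if (type === "insert") {
    if (sentKeys.has(key)) return "skip" // duplicate insert: filtered out
    sentKeys.add(key)
    return "insert"
  }
  sentKeys.delete(key)
  return "delete"
}
```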

* docs: update changeset to reflect race condition fix

* cleanup

* simplify changeset

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Sam Willis <[email protected]>

* ci: Version Packages (#1036)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix(db): re-request and buffer subsets after truncate for on-demand sync mode (#1043)

* fix(db): re-request subsets after truncate for on-demand sync mode

When a must-refetch (409) occurs in on-demand sync mode, the collection
receives a truncate which clears all data and resets the loadSubset
deduplication state. However, subscriptions were not re-requesting
their previously loaded subsets, leaving the collection empty.

This fix adds a truncate event listener to CollectionSubscription that:
1. Resets pagination/snapshot tracking state (but NOT sentKeys)
2. Re-requests all previously loaded subsets

We intentionally keep sentKeys intact because the truncate event is
emitted BEFORE delete events are sent to subscribers. If we cleared
sentKeys, delete events would be filtered by filterAndFlipChanges.
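
The truncate handling described above might look roughly like this — illustrative names only, not the actual CollectionSubscription API:

```typescript
// Hypothetical shape of a subset request.
interface SubsetRequest {
  where?: string
  limit?: number
}

class SubscriptionState {
  sentKeys = new Set<string>()
  snapshotCursor: string | null = null
  loadedSubsets: Array<SubsetRequest> = []

  constructor(private loadSubset: (req: SubsetRequest) => void) {}

  onTruncate(): void {
    // 1. Reset pagination/snapshot tracking state...
    this.snapshotCursor = null
    // ...but NOT sentKeys: truncate fires before the delete events, and
    // clearing sentKeys would cause filterAndFlipChanges to drop them.
    // 2. Re-request all previously loaded subsets.
    for (const req of this.loadedSubsets) this.loadSubset(req)
  }
}
```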

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>

* feat: Buffer subscription changes during truncate

This change buffers subscription changes during a truncate event until all
loadSubset refetches complete. This prevents a flash of missing content
between deletes and new inserts.

Co-authored-by: sam.willis <[email protected]>

* ci: apply automated fixes

* changeset

* tweaks

* test

* address review

* ci: apply automated fixes

---------

Co-authored-by: Igor Barakaiev <[email protected]>
Co-authored-by: Claude Opus 4.5 <[email protected]>
Co-authored-by: Cursor Agent <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* ci: Version Packages (#1050)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* fix: prevent duplicate inserts from reaching D2 pipeline in live queries (#1054)

* add test

* fix

* changeset

* fix versions

* ci: apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

* docs: regenerate API documentation (#1051)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* ci: Version Packages (#1055)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Fix slow onInsert awaitMatch performance issue (#1029)

* fix(electric): preserve message buffer across batches for awaitMatch

The buffer was being cleared at the start of each new batch, which caused
messages to be lost when multiple batches arrived before awaitMatch was
called. This led to:
- awaitMatch timing out (~3-5s per attempt)
- Transaction rollbacks when the timeout threw an error

The fix removes the buffer clearing between batches. Messages are now
preserved until the buffer reaches MAX_BATCH_MESSAGES (1000), at which
point the oldest messages are dropped. This ensures awaitMatch can find
messages even when heartbeat batches or other sync activity arrives
before the API call completes.

Added test case for the specific race condition: multiple batches
(including heartbeats) arriving while onInsert's API call is in progress.
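
The bounded-buffer behavior described above can be sketched like this (the class is illustrative; MAX_BATCH_MESSAGES matches the limit mentioned in the commit): messages persist across batches, and only the oldest are dropped once the cap is reached.

```typescript
const MAX_BATCH_MESSAGES = 1000

class MessageBuffer<M> {
  private messages: Array<M> = []

  push(message: M): void {
    this.messages.push(message)
    if (this.messages.length > MAX_BATCH_MESSAGES) {
      // Drop the oldest messages instead of clearing between batches.
      this.messages.splice(0, this.messages.length - MAX_BATCH_MESSAGES)
    }
  }

  find(predicate: (m: M) => boolean): M | undefined {
    return this.messages.find(predicate)
  }

  get size(): number {
    return this.messages.length
  }
}
```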

* chore: align query-db-collection versions in examples

Update todo examples to use ^1.0.8 to match other examples and fix
sherif version consistency check.

* chore: align example dependency versions

Update todo and paced-mutations-demo examples to use consistent versions:
- @tanstack/query-db-collection: ^1.0.11
- @tanstack/react-db: ^0.1.59

---------

Co-authored-by: Claude <[email protected]>

* Add missing changeset for PR 1029 (#1062)

Add changeset for PR #1029 (awaitMatch performance fix)

Co-authored-by: Claude <[email protected]>

* ci: Version Packages (#1063)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* ci: apply automated fixes

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Lachlan Collins <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Kyle Mathews <[email protected]>
Co-authored-by: Claude <[email protected]>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Kevin <[email protected]>
Co-authored-by: Cursor Agent <[email protected]>
Co-authored-by: Igor Barakaiev <[email protected]>

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Lachlan Collins <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Kyle Mathews <[email protected]>
Co-authored-by: Claude <[email protected]>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Kevin <[email protected]>
Co-authored-by: Cursor Agent <[email protected]>
Co-authored-by: Igor Barakaiev <[email protected]>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants