Replies: 1 comment
have you seen https://tanstack.com/query/latest/docs/reference/streamedQuery
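For context, `streamedQuery` is a helper that turns an AsyncIterable into a query function, accumulating the chunks received so far into the query's data. A minimal sketch based on the linked reference page (the `experimental_` export name reflects its status at the time of writing, and `fetchTokens` is an assumed chunk source, not a real SDK call):

```ts
import {
  experimental_streamedQuery as streamedQuery,
  useQuery,
} from '@tanstack/react-query'

// Assumed helper: yields completion chunks from some AI endpoint.
declare function fetchTokens(prompt: string): AsyncIterable<string>

function useStreamedChat(prompt: string) {
  return useQuery({
    queryKey: ['chat', prompt],
    // streamedQuery drains the AsyncIterable and exposes the chunks
    // received so far as an array, updating data as each one arrives.
    queryFn: streamedQuery({
      queryFn: () => fetchTokens(prompt),
      // Controls what happens to already-received chunks when the
      // query refetches (per the linked reference).
      refetchMode: 'reset',
    }),
  })
}
```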
Proposal: Add built-in support for AI streaming queries (partial responses, token-level updates, abort, and resume)
Description:
Modern AI APIs (OpenAI, Anthropic, Google, AWS, Vercel AI SDK, etc.) use streaming responses that deliver output chunk-by-chunk. Many production AI apps need to:
- render partial responses as they arrive
- apply token-level updates to the UI
- abort an in-flight stream
- resume a stream after interruption
TanStack Query works well for REST/JSON, but AI streaming introduces unique challenges.
Problems:
- no built-in way to reduce incoming chunks into query data
- AbortController wiring for cancelling a stream must be done by hand
- partial chunks can end up in the cache alongside the final output
- refetch semantics for a stream (restart vs. append) are undefined
Proposal:
Introduce a new “streaming query mode” with:
- Built-in chunk reducer: `(chunk, prev) => next`
- Native AbortController integration
- Cache only final output, not partial chunks
- Streaming refetch policy
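To make the shape concrete, here is one possible option surface for such a mode. Everything below is hypothetical: none of these names (`streamFn`, `reduceChunk`, `cachePolicy`, `refetchMode` with these values) exist in TanStack Query today.

```ts
// Hypothetical option shape for the proposed streaming query mode.
// None of these fields exist in TanStack Query today.
interface StreamingQueryOptions<TChunk, TData> {
  queryKey: readonly unknown[]
  // Opens the stream; the signal provides native AbortController
  // integration, so cancelling the query cancels the stream.
  streamFn: (ctx: { signal: AbortSignal }) => AsyncIterable<TChunk>
  // Built-in chunk reducer: (chunk, prev) => next
  reduceChunk: (chunk: TChunk, prev: TData | undefined) => TData
  // Cache only the final value, never intermediate partials.
  cachePolicy: 'final-only' | 'partials'
  // Streaming refetch policy: restart the stream or append to it.
  refetchMode: 'restart' | 'append'
}

// Assumed chunk source for the example.
declare function openAiTokens(signal: AbortSignal): AsyncIterable<string>

// Example: fold streamed tokens into a single string.
const chatOptions: StreamingQueryOptions<string, string> = {
  queryKey: ['chat', 'prompt-123'],
  streamFn: ({ signal }) => openAiTokens(signal),
  reduceChunk: (chunk, prev = '') => prev + chunk,
  cachePolicy: 'final-only',
  refetchMode: 'restart',
}
```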
Why this matters:
Most modern AI apps depend on streaming. Without first-class support, teams must build custom fetchEventSource or ReadableStream handlers. Native streaming query support would make TanStack Query a top-tier choice for AI applications.
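As an illustration of that workaround, here is a minimal hand-rolled ReadableStream handler inside an ordinary queryFn (the /api/completion endpoint and plain-text response framing are assumptions):

```ts
import { useQuery } from '@tanstack/react-query'

function useCompletion(prompt: string) {
  return useQuery({
    queryKey: ['completion', prompt],
    queryFn: async ({ signal }) => {
      // Aborting the query aborts the fetch, and the stream with it.
      const res = await fetch('/api/completion', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt }),
        signal,
      })
      if (!res.ok || !res.body) throw new Error(`HTTP ${res.status}`)

      // Hand-rolled chunk loop that every streaming consumer rewrites.
      const reader = res.body.getReader()
      const decoder = new TextDecoder()
      let text = ''
      while (true) {
        const { done, value } = await reader.read()
        if (done) break
        text += decoder.decode(value, { stream: true })
      }
      return text
    },
  })
}
```

Note that `data` only resolves once the stream finishes, so token-level UI updates still require extra state outside the cache, which is precisely the gap first-class streaming support would close.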