
feat: add streaming responses for AI assistant (resolves #56) #78

Merged
Atharv777 merged 5 commits into kentuckyfriedcode:main from lohit-40:feature/streaming-responses-issue-56 on Apr 20, 2026

Conversation

@lohit-40 (Contributor)

Summary

Resolves #56
The backend (server/agent.py) already emits text/event-stream Server-Sent Events (SSE). This PR wires up the frontend to consume the stream incrementally so users see tokens as they arrive rather than waiting for the full response.

Changes

hooks/useStream.ts (new)

  • Custom React hook that consumes SSE via the Fetch API's ReadableStream
  • Parses frames: output, input_required, error
  • Exposes { lines, isStreaming, inputRequired, error, startStream, stopStream, reset }
  • Supports GET (no body) and POST (with context_payload)
  • Uses AbortController for clean teardown on unmount or stopStream()
  • Zero new npm dependencies
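The hook's core behaviour can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the frame names (output, input_required, error) come from the description above, while the function names and parsing details are assumptions.

```typescript
// Hypothetical sketch of the SSE handling a hook like useStream performs.
type Frame =
  | { type: 'output'; text: string }
  | { type: 'input_required' }
  | { type: 'error'; message: string };

// Split a decoded SSE chunk into frames, skipping malformed JSON
// (the test list below mentions a "malformed JSON skip" case).
function parseSseChunk(chunk: string): Frame[] {
  const frames: Frame[] = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue; // ignore blanks and comments
    try {
      frames.push(JSON.parse(line.slice('data: '.length)) as Frame);
    } catch {
      // Malformed JSON: drop the frame rather than aborting the stream.
    }
  }
  return frames;
}

// Consume loop: fetch + getReader, with an AbortSignal so the caller
// (e.g. a useEffect cleanup) can tear the connection down cleanly.
async function readSse(
  url: string,
  onFrame: (f: Frame) => void,
  signal: AbortSignal,
): Promise<void> {
  const res = await fetch(url, { signal });
  if (!res.ok || !res.body) throw new Error(`HTTP ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    parseSseChunk(decoder.decode(value, { stream: true })).forEach(onFrame);
  }
}
```

Inside the hook, each output frame would append to the lines state, and aborting the controller rejects the pending read, which is what makes stopStream() and unmount teardown cheap.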

components/StreamingMessage.tsx (new)

  • Renders accumulated stream lines as incremental p elements
  • Shows a blinking cursor (|) while isStreaming === true
  • Displays an accessible error badge (role="alert") on stream failure
  • Uses aria-live="polite" so screen readers announce new tokens
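The states the component renders can be modelled as a pure function. This helper is purely illustrative: the prop and field names are assumptions, not the component's real API, but the behaviours (cursor only while streaming, role="alert" badge on error, polite live region) mirror the bullets above.

```typescript
// Hypothetical helper modelling what StreamingMessage renders.
interface StreamingViewProps {
  lines: string[];
  isStreaming: boolean;
  error: string | null;
}

function streamingViewState({ lines, isStreaming, error }: StreamingViewProps) {
  return {
    paragraphs: lines,            // one p element per accumulated line
    showCursor: isStreaming,      // blinking "|" only while streaming
    ariaLive: 'polite' as const,  // announce new tokens without interrupting
    errorBadge: error ? { role: 'alert' as const, text: error } : null,
  };
}
```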

tests/components/streaming.test.tsx (new)

  • 30 tests covering useStream and StreamingMessage
  • jsdom-compatible fake ReadableStream mocks
  • Covers: initial state, output frames, input_required, error frames, HTTP errors, network errors, null body, stopStream, reset, POST vs GET, consecutive streams, malformed JSON skip
  • Component: cursor visibility, error badge, aria attributes, className, long responses, streaming -> complete transition
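A jsdom-compatible fake stream of the kind described might look like this. It is a sketch under the assumption that the tests feed SSE strings through a mocked fetch; the PR's actual mocks may differ.

```typescript
// Hypothetical test helper: turn a list of SSE strings into a
// ReadableStream of encoded bytes, one chunk per string.
function streamFromChunks(chunks: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  let i = 0;
  return new ReadableStream<Uint8Array>({
    pull(controller) {
      if (i < chunks.length) {
        controller.enqueue(encoder.encode(chunks[i++]));
      } else {
        controller.close(); // reader.read() then resolves with done: true
      }
    },
  });
}
```

In a test this would typically back a mocked fetch, e.g. resolving to an object like { ok: true, body: streamFromChunks([...]) }, so the hook under test never touches the network.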

tests/components/chat.test.tsx (modified -- additive only)

  • 8 new streaming-specific tests in a new describe block
  • All 5 original tests untouched

tests/setup.ts (modified)

  • Polyfills TextEncoder / TextDecoder for jsdom
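Such a polyfill might look like the following sketch (the actual setup file may differ); jsdom omits TextEncoder/TextDecoder, while Node's util module provides compatible implementations.

```typescript
// Hypothetical tests/setup.ts sketch: borrow Node's TextEncoder and
// TextDecoder from 'util' when the jsdom environment lacks them.
import { TextEncoder as NodeTextEncoder, TextDecoder as NodeTextDecoder } from 'util';

if (typeof globalThis.TextEncoder === 'undefined') {
  (globalThis as any).TextEncoder = NodeTextEncoder;
}
if (typeof globalThis.TextDecoder === 'undefined') {
  (globalThis as any).TextDecoder = NodeTextDecoder;
}
```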

Test Results

Test Suites: 6 passed, 6 total
Tests: 52 passed, 52 total

Acceptance Criteria

| Criterion | Status |
| --- | --- |
| AI responses render progressively | YES: useStream emits tokens via the lines array as frames arrive |
| No blocking until full response | YES: ReadableStream never buffers; each data: frame renders as it arrives |
| Errors during streaming handled gracefully | YES: error state propagated to the StreamingMessage error badge |
| Works for short and long responses | YES: same code path regardless of response length |
| No server changes required | YES: backend SSE already correct; frontend additions only |

@Atharv777 (Collaborator)

Hey @lohit-40, can you please share screenshot/video of the updated UI of the streaming tokens? The code looks good to me.

@lohit-40 (Contributor, Author)

calliopeide_streaming_pr78_demo_1776500936414 Is this OK? Let me know if anything else is needed.

@Atharv777 (Collaborator)

Hey @lohit-40, as a brownie point, can you fix the line-break issue too? As of now, each word appears on a new line. You can use a formatting library if you want, and if you produce robustly formatted output, we'll give you double the points for a single PR.

@lohit-40 (Contributor, Author)

Can you check now? Let me know if anything else is needed.

@Atharv777 (Collaborator)

Perfect, thanks @lohit-40

@Atharv777 (Collaborator)

Hey, the frontend tests are failing.

@lohit-40 (Contributor, Author)

Let me check, give me 5 minutes.

@lohit-40 (Contributor, Author)

Can you check now? I feel there shouldn't be any more hiccups!

@Atharv777 Atharv777 merged commit 515d878 into kentuckyfriedcode:main Apr 20, 2026
4 checks passed


Development

Successfully merging this pull request may close these issues: [AI] Add streaming responses for assistant (real-time output)
