[AI] Add streaming responses for assistant (real-time output) #56

@Atharv777

Description

Summary

Implement streaming responses for the AI assistant so users see output as it is generated, in real time, instead of waiting for the full response.

Why this matters

Streaming improves:

  • perceived performance
  • user experience
  • responsiveness of the assistant

This is especially important for longer outputs like contract generation.

Scope

  • Modify backend to support streaming responses
  • Update frontend to consume streamed data
  • Render tokens incrementally in chat UI
  • Handle stream interruptions and errors
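
A minimal backend sketch of the first scope item, assuming the server can expose a generator-based endpoint emitting Server-Sent Events (the actual framework used in server/start.py is not specified in this issue, and the names `sse_format` and `stream_completion` are hypothetical):

```python
import json
from typing import Iterator

def sse_format(token: str) -> str:
    """Wrap a single token as a Server-Sent Events data frame."""
    return f"data: {json.dumps({'token': token})}\n\n"

def stream_completion(tokens: Iterator[str]) -> Iterator[str]:
    """Yield each model token as an SSE frame, then a done marker.

    `tokens` stands in for the model's incremental output. An error
    raised mid-stream is reported to the client as a final error
    frame instead of silently terminating the connection.
    """
    try:
        for token in tokens:
            yield sse_format(token)
        yield "data: [DONE]\n\n"
    except Exception as exc:  # surface mid-stream failures to the client
        yield f"data: {json.dumps({'error': str(exc)})}\n\n"
```

Most Python web frameworks can serve such a generator directly as a chunked or SSE response, so the agent code only needs to yield tokens as they arrive.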

Acceptance Criteria

  • AI responses appear progressively in UI
  • No blocking until full response completes
  • Errors during streaming are handled gracefully
  • Works for both short and long responses
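
The consuming side of these criteria can be sketched as follows. The real consumer lives in the JSX frontend, but the parsing and error-handling logic is shown here in Python for illustration; `consume_stream` is a hypothetical name, and the frame format matches the hedged backend sketch above (one SSE `data:` frame per token, ending in `[DONE]`):

```python
import json
from typing import Iterable, Optional, Tuple

def consume_stream(frames: Iterable[str]) -> Tuple[str, Optional[str]]:
    """Accumulate streamed token frames into display text.

    Returns (text_so_far, error). A server-reported error frame or a
    stream that ends without the [DONE] marker is reported as an error
    string instead of raising, so the UI can render the partial output
    alongside a notice rather than losing everything.
    """
    text, done = "", False
    for frame in frames:
        payload = frame.removeprefix("data: ").strip()
        if payload == "[DONE]":
            done = True
            break
        event = json.loads(payload)
        if "error" in event:
            return text, event["error"]
        text += event.get("token", "")
    return text, None if done else "stream ended unexpectedly"
```

Because text accumulates frame by frame, the UI can re-render after every token, which is what makes short and long responses alike appear progressively.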

Files Involved

  • server/start.py
  • server/agent.py
  • pages/app/index.jsx
  • components/* (chat UI)

Difficulty

Medium

Labels: ai backend frontend enhancement
