feat: add streaming AI chat responses via Server-Sent Events #80
Open
woydarko wants to merge 5 commits into kentuckyfriedcode:main from
Conversation
Closes kentuckyfriedcode#48

## What was added

### server/utils/contract_templates.py (new)

Template registry and generator with 4 starter templates:

- hello_world: minimal greeting contract (beginner)
- token: fungible token with initialize/transfer/balance (intermediate)
- nft: non-fungible token with mint/transfer/owner_of/token_uri (intermediate)
- governance: DAO proposal + voting with quorum threshold (advanced)

Functions: list_templates(), get_template(id), generate_template(id, path, name)

Each template generates: Cargo.toml, src/lib.rs, README.md

### server/routes/template_routes.py (new)

Three endpoints registered under /api/templates/:

- GET /api/templates - Lists all 4 templates with id, name, description, difficulty, tags
- GET /api/templates/<template_id> - Returns metadata for one template; 404 with available list if not found
- POST /api/templates/generate
  - Validates session ownership and resolves instance_dir
  - Validates project_name with regex (alphanumeric, hyphens, underscores)
  - Blocks path traversal
  - Generates template files into session workspace
  - Returns files_created list and project_path

### server/start.py

Registered templates_bp blueprint

### server/tests/test_contract_templates.py (new)

28 tests covering all components:

- list_templates: count, required fields, expected IDs
- get_template: valid/invalid IDs, all templates retrievable
- generate_template: all 4 templates, Cargo.toml content, src/lib.rs has soroban_sdk, README mentions template name, raises for unknown template, raises if path exists, files_created list correct
- GET /api/templates: 200, returns 4, required fields
- GET /api/templates/<id>: 200 valid, 404 unknown, available list on 404
- POST /api/templates/generate: missing fields, session not found, path traversal blocked, invalid project_name, unknown template 404, successful generation with files on disk

## Results

Tests: 88 passed, 0 failed (full suite)
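The registry pattern described above can be sketched roughly as follows. This is a hedged illustration, not the PR's actual code: only two of the four templates are shown, the template metadata and file contents are placeholders, and the function names simply mirror the ones listed in the commit message.

```python
# Illustrative sketch of the template registry/generator API described above.
# Metadata and file bodies are placeholders, not the real contract templates.
from pathlib import Path

TEMPLATES = {
    "hello_world": {
        "name": "Hello World",
        "description": "Minimal greeting contract",
        "difficulty": "beginner",
        "tags": ["starter"],
    },
    "token": {
        "name": "Token",
        "description": "Fungible token with initialize/transfer/balance",
        "difficulty": "intermediate",
        "tags": ["token"],
    },
}

def list_templates():
    """Return all templates with their id merged into the metadata."""
    return [{"id": tid, **meta} for tid, meta in TEMPLATES.items()]

def get_template(template_id):
    """Look up one template; raise for unknown IDs."""
    if template_id not in TEMPLATES:
        raise KeyError(f"unknown template: {template_id}")
    return TEMPLATES[template_id]

def generate_template(template_id, path, name):
    """Generate the template's files into path/name; refuse to overwrite."""
    meta = get_template(template_id)  # raises for unknown template
    project = Path(path) / name
    if project.exists():
        raise FileExistsError(str(project))  # raises if path exists
    (project / "src").mkdir(parents=True)
    files_created = ["Cargo.toml", "src/lib.rs", "README.md"]
    for rel in files_created:
        (project / rel).write_text(f"# {meta['name']} placeholder\n")
    return files_created
```

A route handler would then call `generate_template(template_id, instance_dir, project_name)` after validating the session and project name, returning the `files_created` list in the JSON response.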
Closes kentuckyfriedcode#56

## What was added

### server/routes/streaming_chat_routes.py (new)

Three endpoints registered under /api/chat/:

POST /api/chat/stream
- Streams AI responses token-by-token via Server-Sent Events
- Validates session ownership before starting the stream
- Persists the user message to the DB before streaming
- Persists the complete AI response to the DB after stream_end
- Each stream gets a unique UUID stream_id for cancellation
- SSE frame types: stream_start, token, stream_end, error
- Response headers: Cache-Control: no-cache, X-Accel-Buffering: no
- Uses Gemini 2.0 Flash with stream=True for progressive token delivery
- Optional system_prompt override per request
- Optional conversation history for multi-turn context

POST /api/chat/stream/stop
- Cancels an in-progress stream by stream_id
- Sets a threading.Event that the generator checks on each chunk

GET /api/chat/stream/active
- Lists active stream IDs (debug endpoint)

### server/start.py

Registered streaming_chat_bp blueprint

### server/tests/test_streaming_chat.py (new)

18 tests covering all endpoints:

- SSE helper: bytes output, event type, double newline format
- Stream registry: register/deregister, safe deregister of nonexistent
- POST /api/chat/stream: missing session_id, missing message, empty message, no JSON body, session not found, successful stream returns text/event-stream, no-cache headers
- POST /api/chat/stream/stop: missing stream_id, nonexistent stream, active stream cancellation
- GET /api/chat/stream/active: returns list and count

## Results

Tests: 128 passed, 0 failed (18 new + all existing)
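The core mechanism here — a stream registry keyed by UUID, a threading.Event checked per chunk, and SSE frames — can be sketched without the Flask and Gemini specifics. Everything below is illustrative: `ACTIVE_STREAMS`, `sse`, `stream_tokens`, and `stop_stream` are assumed names standing in for the PR's actual implementation, and the token source is a plain iterable rather than a Gemini stream.

```python
# Minimal sketch of the stream registry + per-chunk cancellation pattern,
# with the Flask response and Gemini client stubbed out.
import json
import threading
import uuid

ACTIVE_STREAMS = {}  # stream_id -> threading.Event

def sse(event, data):
    """Format one SSE frame: event line, JSON data line, blank line."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n".encode()

def stream_tokens(chunks):
    """Yield SSE frames for each chunk, checking the stop event every time."""
    stream_id = str(uuid.uuid4())
    stop = threading.Event()
    ACTIVE_STREAMS[stream_id] = stop  # register before first frame
    try:
        yield sse("stream_start", {"stream_id": stream_id})
        parts = []
        for text in chunks:
            if stop.is_set():  # client requested cancellation mid-stream
                break
            parts.append(text)
            yield sse("token", {"text": text, "stream_id": stream_id})
        yield sse("stream_end", {
            "stream_id": stream_id,
            "stopped": stop.is_set(),
            "full_text": "".join(parts),
        })
    finally:
        ACTIVE_STREAMS.pop(stream_id, None)  # always deregister

def stop_stream(stream_id):
    """Signal the generator for stream_id to stop; False if unknown."""
    event = ACTIVE_STREAMS.get(stream_id)
    if event is None:
        return False
    event.set()
    return True
```

In the Flask route, a generator like this would be wrapped in a streaming `Response` with `mimetype="text/event-stream"` and the no-cache headers listed above; the `finally` block guarantees the registry is cleaned up even if the client disconnects.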
The test was failing in the full suite because server.models was stubbed with MagicMock by other test files, causing db.create_all() to not register the RefreshToken model — resulting in 'no such table: refresh_tokens'.

Fix: purge all server.* modules at the start of test_refresh_token_type_validation and explicitly import server.models before db.create_all() so SQLAlchemy registers every table, including refresh_tokens. Also expanded the conftest.py stub cleanup list to include database and db_utils.

Tests: 129 passed, 0 failed
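The purge step described above can be sketched like this. The helper name is hypothetical and the module names mirror the commit message; the point is simply that stale (possibly MagicMock-stubbed) entries must leave sys.modules before the real models are imported.

```python
# Sketch of the module-purge fix: drop any cached server.* modules so a
# fresh import registers the real SQLAlchemy models before create_all().
import sys

def purge_server_modules():
    """Remove 'server' and every 'server.*' entry from the module cache."""
    stale = [m for m in sys.modules if m == "server" or m.startswith("server.")]
    for name in stale:
        del sys.modules[name]

# Inside the test, purge first, then import the real models so every table
# (including refresh_tokens) is registered on the metadata:
#
#     purge_server_modules()
#     import server.models  # noqa: F401  (side effect: model registration)
#     db.create_all()
```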
Summary
Closes #56
Adds POST /api/chat/stream — a Server-Sent Events endpoint that streams Gemini AI responses token-by-token to the client, so users see output progressively instead of waiting for the full response. Especially important for longer outputs like contract generation.

## What was added

### server/routes/streaming_chat_routes.py (new)

POST /api/chat/stream
- Unique stream_id per request for cancellation support
- Calls Gemini with stream=True — each chunk yields an SSE token frame; the assembled text is returned in stream_end
- Optional history list for multi-turn conversation context
- Optional system_prompt override per request
- Response headers: Cache-Control: no-cache, X-Accel-Buffering: no, Transfer-Encoding: chunked

SSE frame types:
- stream_start — {"stream_id": "uuid"}
- token — {"text": "...", "stream_id": "uuid"}
- stream_end — {"stream_id": "uuid", "stopped": bool, "full_text": "..."}
- error — {"message": "..."}

POST /api/chat/stream/stop
- Cancels an in-progress stream by stream_id
- Sets a threading.Event that the generator checks on each chunk

GET /api/chat/stream/active
- Lists active stream IDs (debug endpoint)

### server/start.py

Registered streaming_chat_bp blueprint

### server/tests/test_streaming_chat.py (new)

18 tests covering all components:
- POST /api/chat/stream: missing session_id/message, empty message, no JSON body, session not found, successful stream returns text/event-stream, no-cache headers
- POST /api/chat/stream/stop: missing stream_id, nonexistent stream, active stream cancellation with event.is_set() verified
- GET /api/chat/stream/active: returns list and count

## Results
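On the client side, the frame shapes documented in this PR can be consumed with a small parser. This is a hypothetical sketch assuming the `event:`/`data:` framing shown above (event name line, JSON data line, blank-line separator); `parse_sse` is an illustrative name, not part of the PR.

```python
# Illustrative client-side parser for the SSE frames documented above:
# split a text/event-stream body on blank lines, decode each frame's
# event name and JSON payload.
import json

def parse_sse(body: str):
    """Yield (event, data) pairs from a raw SSE response body."""
    for frame in body.split("\n\n"):
        event, data = None, None
        for line in frame.splitlines():
            if line.startswith("event: "):
                event = line[len("event: "):]
            elif line.startswith("data: "):
                data = json.loads(line[len("data: "):])
        if event is not None:  # skip the trailing empty fragment
            yield event, data
```

A consumer would append `data["text"]` to the UI on each `token` event and stop rendering on `stream_end` or `error`.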