Conversation
📝 Walkthrough

This change introduces logic that emits an extra newline after streaming completes for specific nodes (Solve_Writer, Explain). It adds a new constant definition, trailing-newline handling across both message-generation paths (v1 and v2), and emission tracking to prevent duplicates.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/service/service.py`:
- Around line 1084-1095: Add v1-style deduplication to the v2 path by
introducing a per-request set (e.g., streaming_newline_emitted) in
message_generator_v2's top-level state and update the on_chat_model_end handler
to only emit the trailing newline for a given (active_llm_message_id,
active_llm_node) once: check (active_llm_message_id, active_llm_node) against
streaming_newline_emitted before emitting to STREAMING_TRAILING_NEWLINE_NODES,
add the tuple to the set after emitting, and ensure the set is reset
appropriately when a new request lifecycle starts so duplicates across
sequential on_chat_model_start/on_chat_model_end pairs are prevented.
- Around line 443-453: The v1 trailing-newline emit block (the branch using
STREAMING_TRAILING_NEWLINE_NODES, streaming_newline_emitted,
user_input.stream_tokens and node_name) lacks the same "skip_stream" tag check
present in the v2 on_chat_model_end path; update this block to first inspect the
node's tag list (e.g., the node's tags or tag_list used elsewhere) and skip
emitting the newline if "skip_stream" is present, so the logic matches the v2
handler and avoids emitting trailing newlines for nodes marked with skip_stream.
```python
# Emit a newline token after a streaming node completes
if (
    user_input.stream_tokens
    and node_name in STREAMING_TRAILING_NEWLINE_NODES
    and node_name not in streaming_newline_emitted
):
    streaming_newline_emitted.add(node_name)
    newline_payload = json.dumps(
        {"type": "token", "content": "\n\n"}, ensure_ascii=False
    )
    yield f"data: {newline_payload}\n\n"
```
🧩 Analysis chain
🏁 Scripts executed (repository: Team-Proovy/Proovy-ai):

```shell
rg -n "skip_stream" --type py -C 3
rg -n "STREAMING_TRAILING_NEWLINE_NODES" --type py -C 3
rg -n "Solve_Writer\|Explain" --type py -C 2
# Check the context around line 548 for the stream_mode == "messages" handler
sed -n '540,560p' src/service/service.py
# Check lines 443-453 in detail along with surrounding context
sed -n '430,465p' src/service/service.py
# Search for the on_chat_model_end handler mentioned in the comment
rg -n "on_chat_model_end" --type py -C 5
fd -t f "\.py$" | xargs rg -l "\"Explain\".*node|add_node.*Explain" | head -10
rg -n "def.*[Ee]xplain|\"Explain\"" --type py -C 2 | head -50
# Search for the Explain node in the feature graphs
rg -n "Explain" src/agents/workflows/subgraphs/step4_features/ --type py -C 2
```
The newline-emit logic in the v1 path does not check the skip_stream tag, which is inconsistent with the v2 path.

Verification shows that both nodes in STREAMING_TRAILING_NEWLINE_NODES — Solve_Writer (solve/graph.py:428) and Explain (explain/graph.py:100) — are currently configured without the skip_stream tag, so no actual problem occurs today.

However, the v2 path (the on_chat_model_end handler, lines 1078-1082) explicitly checks `if "skip_stream" in tag_list: continue`, while the v1 path (lines 443-453) has no such check. If skip_stream is ever added to these nodes, only the v1 path would emit an unnecessary newline, so it is worth adding the same skip_stream check to the v1 path as well.
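A standalone sketch of how the v1 block could mirror the v2 check. This is not the service code itself: `tag_list` is assumed to be available at this point in the loop, and the function wrapper exists only to make the snippet runnable in isolation.

```python
import json

STREAMING_TRAILING_NEWLINE_NODES = {"Solve_Writer", "Explain"}

def emit_trailing_newline(node_name, tag_list, stream_tokens, streaming_newline_emitted):
    # Skip nodes tagged "skip_stream", mirroring the v2 on_chat_model_end check
    if "skip_stream" in tag_list:
        return
    # Emit the trailing newline at most once per node per request
    if (
        stream_tokens
        and node_name in STREAMING_TRAILING_NEWLINE_NODES
        and node_name not in streaming_newline_emitted
    ):
        streaming_newline_emitted.add(node_name)
        payload = json.dumps({"type": "token", "content": "\n\n"}, ensure_ascii=False)
        yield f"data: {payload}\n\n"

emitted: set[str] = set()
events = list(emit_trailing_newline("Solve_Writer", [], True, emitted))
events += list(emit_trailing_newline("Solve_Writer", [], True, emitted))   # deduplicated
events += list(emit_trailing_newline("Explain", ["skip_stream"], True, emitted))  # skipped
print(len(events))  # 1
```

With this shape, the skip_stream check sits in front of both the membership test and the dedup set, exactly as in the v2 handler.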
```python
# Add a newline after a streaming feature response (e.g. the Writer node) completes
if active_llm_node in STREAMING_TRAILING_NEWLINE_NODES:
    yield emitter.emit(
        "llm.token.delta",
        {
            "message_id": active_llm_message_id,
            "node": active_llm_node,
            "delta": "\n\n",
            "index": active_llm_token_index,
        },
    )
    active_llm_token_index += 1
```
The v2 path lacks the duplicate-prevention logic that v1 implements with streaming_newline_emitted.

The v1 path uses the streaming_newline_emitted set to emit the newline exactly once per node per request. The v2 path, by contrast, emits a newline on every on_chat_model_end whenever active_llm_node is in STREAMING_TRAILING_NEWLINE_NODES, so if the same node makes N sequential LLM calls, N newlines reach the client.

Resetting active_llm_message_id = None only prevents duplicate emits within the same LLM call; once a second on_chat_model_start → on_chat_model_end pair arrives, emission is allowed again under the new active_llm_message_id.
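The duplicate emission described above can be simulated in isolation. This is a sketch, not the actual handler: the variable names mirror the service code, but the loop stands in for the stream of on_chat_model_end events.

```python
STREAMING_TRAILING_NEWLINE_NODES = {"Solve_Writer", "Explain"}

def trailing_newlines(end_events, dedup):
    # Simulate sequential on_chat_model_end events reaching the v2 handler
    emitted = []
    streaming_newline_emitted: set[str] = set()
    for active_llm_node in end_events:
        if active_llm_node in STREAMING_TRAILING_NEWLINE_NODES:
            if dedup and active_llm_node in streaming_newline_emitted:
                continue  # per-request set caps emission at one per node
            streaming_newline_emitted.add(active_llm_node)
            emitted.append("\n\n")
    return emitted

calls = ["Solve_Writer"] * 3  # three sequential LLM calls inside one node
print(len(trailing_newlines(calls, dedup=False)))  # 3 — three blank-line breaks reach the client
print(len(trailing_newlines(calls, dedup=True)))   # 1
```

The `dedup=True` branch is the behavior the suggested fix below restores parity with.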
🐛 Suggested fix: add a deduplication set to v2

Add the set to the variable-initialization block at the top of message_generator_v2:

```diff
 emitted_artifacts: set[str] = set()
 last_credit_signature: tuple[Any, Any, Any] | None = None
+streaming_newline_emitted: set[str] = set()
```

Then add a duplicate-prevention check to the newline-emit condition inside the on_chat_model_end handler:

```diff
-if active_llm_node in STREAMING_TRAILING_NEWLINE_NODES:
+if (
+    active_llm_node in STREAMING_TRAILING_NEWLINE_NODES
+    and active_llm_node not in streaming_newline_emitted
+):
+    streaming_newline_emitted.add(active_llm_node)
     yield emitter.emit(
         "llm.token.delta",
```
🏷️ PR type
📝 Changes
✅ Checklist
📎 Additional notes
Summary by CodeRabbit
Style improvements