[Fix] Newline on writer node completion #106

Open
Ncs89 wants to merge 1 commit into dev from fix/106-writer-enter

Conversation

@Ncs89
Collaborator

@Ncs89 Ncs89 commented Feb 18, 2026

🏷️ PR Type

  • ✨ Feature
  • 🐛 Bug Fix
  • ♻️ Refactoring
  • 📝 Documentation
  • 🎨 Style
  • ✅ Test

📝 Changes

  • Updated the v1/v2 streaming generators in service.py so that the Solve_Writer and Explain nodes actually emit a \n\n newline token to the client when token streaming completes.
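As a rough illustration of the change described above, a minimal SSE-style generator could append the trailing token like this. This is a hedged sketch, not the actual service.py code: `stream_node` and its parameters are hypothetical names introduced here for illustration; only `STREAMING_TRAILING_NEWLINE_NODES` and the `"\n\n"` token payload come from the PR.

```python
import json

# Hypothetical sketch of the described behavior: after a streaming node
# finishes, emit one extra "\n\n" token so the client renders a blank
# line between responses. Nodes listed here come from the PR; the
# function and parameter names are illustrative.
STREAMING_TRAILING_NEWLINE_NODES = {"Solve_Writer", "Explain"}


def stream_node(node_name, tokens):
    """Yield SSE 'data:' lines for each token, then a trailing newline token
    for nodes that opt in to the trailing-newline behavior."""
    for token in tokens:
        payload = json.dumps({"type": "token", "content": token}, ensure_ascii=False)
        yield f"data: {payload}\n\n"
    # After streaming completes, emit the separator token once.
    if node_name in STREAMING_TRAILING_NEWLINE_NODES:
        payload = json.dumps({"type": "token", "content": "\n\n"}, ensure_ascii=False)
        yield f"data: {payload}\n\n"
```

The separator is sent as a regular token event, so existing clients render it without any protocol change.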

✅ Checklist

  • Code is ready for review
  • Tests are written and all pass
  • Documentation is updated (if needed)
  • Code style guide is followed
  • Self-review is complete

📎 Additional Notes

Summary by CodeRabbit

Style Improvements

  • Added a newline after certain streaming responses complete, improving the visual separation between responses

@Ncs89 Ncs89 self-assigned this Feb 18, 2026
@Ncs89 Ncs89 added bug Something isn't working enhancement New feature or request labels Feb 18, 2026
@coderabbitai

coderabbitai Bot commented Feb 18, 2026

📝 Walkthrough

Introduced logic that emits an extra newline after streaming completes for specific nodes (Solve_Writer, Explain). Added a new constant, trailing-newline handling across both message generation paths (v1, v2), and emitted-node tracking to prevent duplicates.

Changes

Cohort / File(s) Summary
Streaming Trailing Newline Logic
src/service/service.py
Added the STREAMING_TRAILING_NEWLINE_NODES constant and implemented emission of a "\n\n" token after streaming completes in message_generator, _pump_stream, and the v2 message flow, with an emitted-tracking mechanism to prevent duplicates.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 Two new lines behind the flowing stream,
Solve and Explain say their goodbyes,
A smooth separation, a clean finish,
Duplicates prevented, gracefully! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 66.67% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately reflects the PR's main change: "newline on writer node completion" clearly describes adding a newline token when the Solve_Writer and Explain nodes complete.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/service/service.py`:
- Around line 1084-1095: Add v1-style deduplication to the v2 path by
introducing a per-request set (e.g., streaming_newline_emitted) in
message_generator_v2's top-level state and update the on_chat_model_end handler
to only emit the trailing newline for a given (active_llm_message_id,
active_llm_node) once: check (active_llm_message_id, active_llm_node) against
streaming_newline_emitted before emitting to STREAMING_TRAILING_NEWLINE_NODES,
add the tuple to the set after emitting, and ensure the set is reset
appropriately when a new request lifecycle starts so duplicates across
sequential on_chat_model_start/on_chat_model_end pairs are prevented.
- Around line 443-453: The v1 trailing-newline emit block (the branch using
STREAMING_TRAILING_NEWLINE_NODES, streaming_newline_emitted,
user_input.stream_tokens and node_name) lacks the same "skip_stream" tag check
present in the v2 on_chat_model_end path; update this block to first inspect the
node's tag list (e.g., the node's tags or tag_list used elsewhere) and skip
emitting the newline if "skip_stream" is present, so the logic matches the v2
handler and avoids emitting trailing newlines for nodes marked with skip_stream.

Comment thread src/service/service.py
Comment on lines +443 to +453
# Emit a newline token after a streaming node completes
if (
    user_input.stream_tokens
    and node_name in STREAMING_TRAILING_NEWLINE_NODES
    and node_name not in streaming_newline_emitted
):
    streaming_newline_emitted.add(node_name)
    newline_payload = json.dumps(
        {"type": "token", "content": "\n\n"}, ensure_ascii=False
    )
    yield f"data: {newline_payload}\n\n"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -n "skip_stream" --type py -C 3

Repository: Team-Proovy/Proovy-ai

Length of output: 10802


🏁 Script executed:

rg -n "STREAMING_TRAILING_NEWLINE_NODES" --type py -C 3

Repository: Team-Proovy/Proovy-ai

Length of output: 1515


🏁 Script executed:

rg -n "Solve_Writer\|Explain" --type py -C 2

Repository: Team-Proovy/Proovy-ai

Length of output: 47


🏁 Script executed:

# Check the context around line 548 for stream_mode == "messages" handler
sed -n '540,560p' src/service/service.py

Repository: Team-Proovy/Proovy-ai

Length of output: 1151


🏁 Script executed:

# Check lines 443-453 in detail along with surrounding context
sed -n '430,465p' src/service/service.py

Repository: Team-Proovy/Proovy-ai

Length of output: 1613


🏁 Script executed:

# Search for on_chat_model_end handler mentioned in the comment
rg -n "on_chat_model_end" --type py -C 5

Repository: Team-Proovy/Proovy-ai

Length of output: 740


🏁 Script executed:

fd -t f "\.py$" | xargs rg -l "\"Explain\".*node|add_node.*Explain" | head -10

Repository: Team-Proovy/Proovy-ai

Length of output: 127


🏁 Script executed:

rg -n "def.*[Ee]xplain|\"Explain\"" --type py -C 2 | head -50

Repository: Team-Proovy/Proovy-ai

Length of output: 2702


🏁 Script executed:

# Search for Explain node in feature graphs
rg -n "Explain" src/agents/workflows/subgraphs/step4_features/ --type py -C 2

Repository: Team-Proovy/Proovy-ai

Length of output: 2647


The v1 path's newline emit logic does not check the skip_stream tag, making it inconsistent with the v2 path.

Verification shows that both nodes in STREAMING_TRAILING_NEWLINE_NODES, Solve_Writer (solve/graph.py:428) and Explain (explain/graph.py:100), are configured without the skip_stream tag, so no actual problem occurs at present.

However, the v2 path (the on_chat_model_end handler, lines 1078-1082) explicitly checks if "skip_stream" in tag_list: continue, while the v1 path (lines 443-453) does not. If skip_stream is later added to these nodes, only the v1 path would emit an unnecessary newline, so the same skip_stream check should be added to the v1 path as well.
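As a hedged sketch of the combined v1 condition with the suggested skip_stream check, the logic might read as a predicate like the following. The function name `should_emit_trailing_newline` and the `node_tags` parameter are hypothetical, introduced only for illustration; the real v1 block inlines this logic.

```python
# Illustrative predicate combining the existing v1 conditions with the
# reviewer's suggested "skip_stream" tag check. All names here are
# stand-ins for the inline logic in service.py's v1 path.
def should_emit_trailing_newline(node_name, node_tags, already_emitted,
                                 stream_tokens, newline_nodes):
    """Return True if the v1 path should emit the trailing "\\n\\n" token."""
    return (
        stream_tokens                           # client asked for token streaming
        and "skip_stream" not in node_tags      # suggested check, mirrors v2
        and node_name in newline_nodes          # node opted in
        and node_name not in already_emitted    # once per node per request
    )
```

With this shape, v1 and v2 apply the same gating, so adding skip_stream to a node later suppresses the newline on both paths.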


Comment thread src/service/service.py
Comment on lines +1084 to +1095
# Add a newline after streaming feature responses (e.g. the Writer node) complete
if active_llm_node in STREAMING_TRAILING_NEWLINE_NODES:
    yield emitter.emit(
        "llm.token.delta",
        {
            "message_id": active_llm_message_id,
            "node": active_llm_node,
            "delta": "\n\n",
            "index": active_llm_token_index,
        },
    )
    active_llm_token_index += 1

⚠️ Potential issue | 🟠 Major

The v2 path lacks the deduplication logic corresponding to v1's streaming_newline_emitted.

The v1 path uses the streaming_newline_emitted set to emit the newline exactly once per node per request. The v2 path, by contrast, emits a newline on every on_chat_model_end whenever active_llm_node in STREAMING_TRAILING_NEWLINE_NODES, so if a node makes N sequential LLM calls, N newlines are delivered to the client.

The active_llm_message_id = None reset only prevents duplicate emits within the same LLM call; once a second on_chat_model_start/on_chat_model_end pair arrives, a new active_llm_message_id allows emitting again.

🐛 Suggested fix: add a deduplication set to the v2 path

Add the set to the variable-initialization block at the top of message_generator_v2:

 emitted_artifacts: set[str] = set()
 last_credit_signature: tuple[Any, Any, Any] | None = None
+streaming_newline_emitted: set[str] = set()

Then add the dedup check to the newline emit condition in the on_chat_model_end handler:

-                    if active_llm_node in STREAMING_TRAILING_NEWLINE_NODES:
+                    if (
+                        active_llm_node in STREAMING_TRAILING_NEWLINE_NODES
+                        and active_llm_node not in streaming_newline_emitted
+                    ):
+                        streaming_newline_emitted.add(active_llm_node)
                         yield emitter.emit(
                             "llm.token.delta",

