
[Fix] Fix abnormal LLM streaming issue #113

Open
gaeunee2 wants to merge 1 commit into dev from fix/112-llm-streaming-issue

Conversation

@gaeunee2
Collaborator

@gaeunee2 gaeunee2 commented Feb 20, 2026

📌 Related Issue

🏷️ PR Type

  • ✨ Feature
  • 🐛 Bug Fix
  • ♻️ Refactoring
  • 📝 Documentation
  • 🎨 Style
  • ✅ Test

📝 Changes

  • Added the skip_stream setting to model calls in the Review node and problem_splitter.py
  • Fixed abnormal streaming behavior (prevents unnecessary internal output from being exposed to users)
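As a minimal sketch of the mechanism described above (TaggedModel is a hypothetical stand-in for the actual LangChain-style model wrapper, not the repo's code), chaining .with_config(tags=["skip_stream"]) simply attaches a tag that the streaming layer can inspect later:

```python
# Hypothetical sketch: TaggedModel stands in for a LangChain-style Runnable;
# with_config only records extra tags on a copy of the wrapper.

class TaggedModel:
    def __init__(self, name, tags=None):
        self.name = name
        self.tags = list(tags or [])

    def with_config(self, tags):
        # Return a new wrapper carrying the extra tags, like Runnable.with_config.
        return TaggedModel(self.name, self.tags + tags)


# Internal nodes (Review, problem splitter) get the tag; user-facing ones do not.
internal_model = TaggedModel("gpt-5-mini").with_config(tags=["skip_stream"])
user_facing_model = TaggedModel("gpt-5-mini")

print("skip_stream" in internal_model.tags)     # True
print("skip_stream" in user_facing_model.tags)  # False
```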

📸 Screenshots

스크린샷 2026-02-20 094150

This resolves the issue.

✅ Checklist

  • The code is ready for review
  • Tests were written and all pass
  • Documentation was updated (if needed)
  • The code style guide was followed
  • Self-review completed

📎 Additional Notes

  • Internal nodes such as Review/Suggestion now have their output hidden, and only user-facing nodes such as Solve_Writer are streamed.
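The note above can be sketched as a small filter, assuming (hypothetically) that each streamed event carries the tags of the run that produced it; the real service-side detection logic may differ:

```python
# Hypothetical sketch of the service layer dropping token events produced by
# nodes tagged "skip_stream"; the event shape here is illustrative only.

def filter_stream(events):
    """Yield only token chunks whose producing run is not tagged skip_stream."""
    for event in events:
        if "skip_stream" in event.get("tags", []):
            continue  # internal node (Review, Suggestion, ...): hide from user
        yield event["token"]


events = [
    {"tags": ["skip_stream"], "token": "internal critique..."},  # Review node
    {"tags": [], "token": "Step 1: "},                           # Solve_Writer
    {"tags": [], "token": "factor the equation"},
]
print("".join(filter_stream(events)))  # Step 1: factor the equation
```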

Summary by CodeRabbit

Release Notes

  • Bug Fixes
    • Improved token streaming configuration for more stable model responses
    • Fixed data initialization so stale data from previous runs is not carried over

@gaeunee2 gaeunee2 requested a review from haein45 February 20, 2026 00:48
@gaeunee2 gaeunee2 self-assigned this Feb 20, 2026
@gaeunee2 gaeunee2 added the bug Something isn't working label Feb 20, 2026
@coderabbitai

coderabbitai Bot commented Feb 20, 2026

📝 Walkthrough

This PR modifies LLM streaming behavior in three files. Two internal nodes (Problem Splitter, Review) gain a skip_stream tag that disables streaming, and two fields (final_output, partial_responses) are initialized on the service input to clear state left over from previous runs.

Changes

Cohort / File(s) — Summary

Disable LLM streaming
src/agents/workflows/problem_splitter.py, src/agents/workflows/review_logic.py
Chains .with_config(tags=["skip_stream"]) onto the internal nodes' LLM model initialization to disable token streaming

Initialize input payload
src/service/service.py
Initializes final_output (empty dict) and partial_responses (empty list) on every call in _handle_input to clear residual data from previous runs
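A hedged sketch of the _handle_input change summarized above (the function name comes from the summary; the payload shape is a simplifying assumption): the fields are always overwritten, never merely defaulted, so stale values cannot survive a new run.

```python
# Hypothetical sketch of resetting per-run fields so state from a previous
# invocation cannot leak into the next one.

def handle_input(payload: dict) -> dict:
    # Always overwrite (not setdefault): stale values must not survive.
    payload["final_output"] = {}
    payload["partial_responses"] = []
    return payload


state = {"final_output": {"answer": "old"}, "partial_responses": ["stale"]}
print(handle_input(state))  # {'final_output': {}, 'partial_responses': []}
```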

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • PR #58: The same code-level change, adding with_config(tags=["skip_stream"]) to LLM model initialization to disable streaming
  • PR #65: Directly related to the service-level logic that detects the skip_stream tag and skips token streaming
  • PR #34: Directly overlapping changes introducing the skip_stream tag and the partial_responses/final_output fields

Suggested reviewers

  • haein45
  • lucy7noh

Poem

🐰 Hushing the voices of the inner nodes,
Blocking the stream to keep the user's screen clear.
Now Review and problem analysis work in secret,
Passing along only the data that's needed,
Tidying leftover data away as well,
The system runs a beat cleaner! ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 66.67% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly summarizes the PR's main change, fixing the LLM streaming issue.
Linked Issues check ✅ Passed The code changes satisfy all requirements of linked issue #112: skip_stream was added to the Review and problem_splitter nodes, and the state fields are initialized in service.py.
Out of Scope Changes check ✅ Passed All changes are within the scope of issue #112; only the changes needed to fix the internal nodes' streaming issue are included.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch fix/112-llm-streaming-issue

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/agents/workflows/review_logic.py (1)

228-230: Replace the debug print statements with logger calls.

The raw print() calls at lines 228–230 (run_review) and 270–272 (run_suggestion) bypass log-level control and structured logging. The typo in the comment ("로그 찍어보기기", with a duplicated "기") also marks this as temporary development code.

♻️ Suggested change
-        # 모델 호출 로그 찍어보기기
-        print(
-            f"---REVIEW: MODEL={MODEL_NAME} openrouter_key_set={bool(settings.OPENROUTER_API_KEY)}---"
-        )
+        logger.debug(
+            "REVIEW: MODEL=%s openrouter_key_set=%s", MODEL_NAME, bool(settings.OPENROUTER_API_KEY)
+        )

The same pattern at the top of run_suggestion can be replaced in the same way.
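As a self-contained sketch of the suggested replacement (all names are placeholders for the module's actual constants), the log output is captured into a buffer here to make it visible; in the real module this would just be logging.getLogger(__name__):

```python
import io
import logging

# Demo logger writing into a StringIO buffer so the output is observable here.
stream = io.StringIO()
logger = logging.getLogger("review_logic_demo")
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.DEBUG)

MODEL_NAME = "gpt-5-mini"  # placeholder for the module constant
key_set = True             # placeholder for bool(settings.OPENROUTER_API_KEY)

# Lazy %-formatting: the message string is only built if DEBUG is enabled,
# and logging configuration (levels, handlers) controls whether it is emitted.
logger.debug("REVIEW: MODEL=%s openrouter_key_set=%s", MODEL_NAME, key_set)
print(stream.getvalue().strip())  # REVIEW: MODEL=gpt-5-mini openrouter_key_set=True
```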

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agents/workflows/review_logic.py` around lines 228 - 230, Replace the raw
print statements in run_review and run_suggestion with structured logger calls:
instead of print(f"---REVIEW: MODEL={MODEL_NAME}
openrouter_key_set={bool(settings.OPENROUTER_API_KEY)}---") use the module/class
logger (e.g., logger.info or logger.debug) to emit the same message and remove
the "로그 찍어보기기" typo; reference MODEL_NAME and settings.OPENROUTER_API_KEY in the
log message and ensure message level is appropriate for debug output so logging
configuration can control it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/agents/workflows/problem_splitter.py`:
- Line 120: Update the model instantiation inside the simple_response function
to include the skip_stream tag: locate the
get_model(OpenRouterModelName.GPT_5_MINI) call in simple_response (in
maingraph.py) and chain .with_config(tags=["skip_stream"]) so the internal
routing node disables streaming the same way as problem_splitter's model
variable.

---

Nitpick comments:
In `@src/agents/workflows/review_logic.py`:
- Around line 228-230: Replace the raw print statements in run_review and
run_suggestion with structured logger calls: instead of print(f"---REVIEW:
MODEL={MODEL_NAME} openrouter_key_set={bool(settings.OPENROUTER_API_KEY)}---")
use the module/class logger (e.g., logger.info or logger.debug) to emit the same
message and remove the "로그 찍어보기기" typo; reference MODEL_NAME and
settings.OPENROUTER_API_KEY in the log message and ensure message level is
appropriate for debug output so logging configuration can control it.


try:
-    model = get_model(OpenRouterModelName.GPT_5_MINI)
+    model = get_model(OpenRouterModelName.GPT_5_MINI).with_config(tags=["skip_stream"])

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the file structure and the specific line mentioned
wc -l src/agents/workflows/problem_splitter.py

Repository: Team-Proovy/Proovy-ai

Length of output: 109


🏁 Script executed:

# View the context around line 120
sed -n '110,130p' src/agents/workflows/problem_splitter.py

Repository: Team-Proovy/Proovy-ai

Length of output: 729


🏁 Script executed:

# Run the verification script to find all get_model calls and their skip_stream status
rg -n "get_model\b" --type py -A 3

Repository: Team-Proovy/Proovy-ai

Length of output: 5848


🏁 Script executed:

# View context around maingraph.py:244 to understand the function and determine if skip_stream is needed
sed -n '235,260p' src/agents/workflows/maingraph.py

Repository: Team-Proovy/Proovy-ai

Length of output: 1093


🏁 Script executed:

# Also check the function name and purpose to confirm if this is an internal node
sed -n '220,250p' src/agents/workflows/maingraph.py

Repository: Team-Proovy/Proovy-ai

Length of output: 1219


🏁 Script executed:

# Find the function definition that contains the get_model call at line 244
sed -n '200,245p' src/agents/workflows/maingraph.py | grep -n "def "

Repository: Team-Proovy/Proovy-ai

Length of output: 47


🏁 Script executed:

# Get a clearer view of the function definition
sed -n '200,220p' src/agents/workflows/maingraph.py

Repository: Team-Proovy/Proovy-ai

Length of output: 736


🏁 Script executed:

# Find the function that contains line 244
sed -n '180,245p' src/agents/workflows/maingraph.py | head -40

Repository: Team-Proovy/Proovy-ai

Length of output: 1352


🏁 Script executed:

# Let's also look for all function definitions in maingraph.py to identify the function at line 244
rg -n "^def " src/agents/workflows/maingraph.py

Repository: Team-Proovy/Proovy-ai

Length of output: 687


🏁 Script executed:

# Check the final_response function to understand the node type better
rg -n "def final_response" src/agents/workflows/maingraph.py -A 20

Repository: Team-Proovy/Proovy-ai

Length of output: 47


🏁 Script executed:

# Also check what nodes are marked as public-facing by examining the graph structure
rg -n "add_node\|add_edge" src/agents/workflows/maingraph.py | head -20

Repository: Team-Proovy/Proovy-ai

Length of output: 47


🏁 Script executed:

# Let's look at the entire maingraph.py file structure to find graph definition and node additions
rg -n "graph\." src/agents/workflows/maingraph.py | head -30

Repository: Team-Proovy/Proovy-ai

Length of output: 111


🏁 Script executed:

# Also check the imports to see where final_response is imported from
head -50 src/agents/workflows/maingraph.py

Repository: Team-Proovy/Proovy-ai

Length of output: 1507


🏁 Script executed:

# Check if simple_response is added to the graph and what nodes are in the graph
rg -n "simple_response\|add_node.*simple" src/agents/workflows/maingraph.py

Repository: Team-Proovy/Proovy-ai

Length of output: 47


🏁 Script executed:

# Look at the end of maingraph.py to see the graph structure and which nodes are public-facing
tail -100 src/agents/workflows/maingraph.py

Repository: Team-Proovy/Proovy-ai

Length of output: 2662


🏁 Script executed:

# Confirm simple_response is an internal workflow node by checking complete function
sed -n '198,261p' src/agents/workflows/maingraph.py

Repository: Team-Proovy/Proovy-ai

Length of output: 2253


The skip_stream tag was correctly added in problem_splitter, but it is missing from the simple_response function in maingraph.

Per issue #112, every internal node missing skip_stream should be updated. This PR's problem_splitter change is correct, but the following omission was found:

  • src/agents/workflows/maingraph.py, line 244: the get_model(OpenRouterModelName.GPT_5_MINI) call in the simple_response function has no skip_stream tag. This function is an internal routing node, so streaming should be disabled. Change it to:

model = get_model(OpenRouterModelName.GPT_5_MINI).with_config(tags=["skip_stream"])

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/agents/workflows/problem_splitter.py` at line 120, Update the model
instantiation inside the simple_response function to include the skip_stream
tag: locate the get_model(OpenRouterModelName.GPT_5_MINI) call in
simple_response (in maingraph.py) and chain .with_config(tags=["skip_stream"])
so the internal routing node disables streaming the same way as
problem_splitter's model variable.


Labels

bug Something isn't working

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Fix] Fix abnormal LLM streaming issue

2 participants