
fix: incomplete response handling #239

Merged
mikehostetler merged 4 commits into agentjido:main from pkgodara:fix-incomplete-resp-bug on Mar 30, 2026

fix: incomplete response handling#239
mikehostetler merged 4 commits intoagentjido:mainfrom
pkgodara:fix-incomplete-resp-bug

Conversation

@pkgodara (Contributor) commented Mar 29, 2026

The runner classified every non-tool-call LLM turn as :final_answer, regardless of the response's finish_reason. When a provider returns a streaming response with finish_reason: :incomplete/:error/:cancelled and no text content, the runner accepted it as a successful completion, emitted :request_completed with result: "", and ask_sync/3 returned {:ok, ""}. Instead, it should return an error tuple.
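A minimal sketch of the corrected classification logic. The helper name `classify_response/1`, the error-tuple shape, and the response struct fields are assumptions for illustration; only the finish_reason atoms come from the description above, and the actual runner internals may differ:

```elixir
# Hypothetical helper; responses with tool calls keep their existing path.
defp classify_response(%{tool_calls: [_ | _]} = response),
  do: {:tool_calls, response}

# Non-success finish reasons become errors instead of :final_answer.
defp classify_response(%{finish_reason: reason})
     when reason in [:incomplete, :error, :cancelled],
  do: {:error, {:unsuccessful_finish, reason}}

# Only a successfully finished turn is treated as the final answer.
defp classify_response(%{content: content}),
  do: {:final_answer, content}
```

The key change is that the finish_reason guard clause runs before the catch-all final-answer clause, so an empty-content error response can no longer fall through as a success.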

I found this issue while building the Greeter from the docs.

This is what I defined:

defmodule GreeterAi do
  use Jido.AI.Agent,
    name: "greeter",
    description: "Generates a friendly greeting",
    tools: [],
    model: "openai:gpt-5-mini",
    system_prompt: """
    You are a friendly greeter.
    Generate a short, warm welcome message.
    One or two sentences maximum.
    """

  def test do
    {:ok, pid} = Jido.AgentServer.start_link(agent: GreeterAi)
    GreeterAi.ask_sync(pid, "Say hello to me")
  end
end

On running:

> GreeterAi.test()
{:ok, ""}

I did not have OpenAI credits left, so this should have been an error.
I confirmed with a direct sync call:

> Jido.AI.ask("Say Hi", model: "openai:gpt-5-mini")

It correctly returned an error tuple, which led me to this bug.
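With the fix, a caller can pattern-match on the failure instead of silently receiving an empty string. A hypothetical usage sketch (the exact error term is an assumption, not the library's documented shape):

```elixir
# Handle both outcomes of ask_sync/2 explicitly; `reason` is whatever
# error term the runner surfaces (e.g. an unsuccessful finish_reason).
case GreeterAi.ask_sync(pid, "Say hello to me") do
  {:ok, text} -> IO.puts(text)
  {:error, reason} -> IO.puts("LLM request failed: #{inspect(reason)}")
end
```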

@mikehostetler mikehostetler merged commit 5601aa5 into agentjido:main Mar 30, 2026
6 checks passed
