
feat(discord): include reply/quote context in agent prompt (#339) #527

Open

ChunHao-dev wants to merge 2 commits into openabdev:main from ChunHao-dev:feat/discord-reply-context

Conversation

@ChunHao-dev
Contributor

Summary

When a user replies to (quotes) a message in a Discord thread, the bot only sends the new message text to the agent. The quoted/referenced message content is lost — the agent has no idea what "this" refers to.

This PR reads msg.referenced_message and prepends the quoted content to the prompt:

[Quoted message from @username]:
<content of the quoted message>

summarize this

Implementation

  • resolve_referenced_message() — prefers gateway-provided referenced_message (zero cost); falls back to HTTP API call via message_reference if the gateway didn't include the full message
  • format_quote_context() — pure function that formats the quote block
  • Injected after resolve_mentions(), before the prompt is sent to the router
  • Non-reply messages are completely unaffected (resolve_referenced_message returns None)
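The formatting half of this flow can be pictured as a small pure function. The sketch below is illustrative only — the signature is assumed, not taken from the PR diff — but the quote-block layout matches the example shown in the summary above, and the empty-content guard matches the behavior the tests describe:

```rust
// Hedged sketch of format_quote_context(); the real function in the PR may
// differ. Layout follows the "[Quoted message from @username]:" example above.
fn format_quote_context(author_name: &str, quoted_content: &str, prompt: &str) -> String {
    // Guard: with no quoted content, return the user's prompt unchanged.
    if quoted_content.is_empty() {
        return prompt.to_string();
    }
    format!("[Quoted message from @{author_name}]:\n{quoted_content}\n\n{prompt}")
}

fn main() {
    let out = format_quote_context("username", "original message text", "summarize this");
    println!("{out}");
}
```

Because the function is pure (no Discord types, no I/O), the edge cases listed under Testing — normal, empty content, empty prompt, multiline — are each a one-line assertion.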

Testing

  • 4 unit tests for format_quote_context() (normal, empty content, empty prompt, multiline)
  • Manual testing on OrbStack K8s deployment
  • All 109 existing tests pass

Closes #339

Discord Discussion URL: https://discord.com/channels/1491295327620169908/1496538680142069800

@ChunHao-dev ChunHao-dev requested a review from thepagent as a code owner April 22, 2026 15:52
@github-actions github-actions Bot added the pending-screening PR awaiting automated screening label Apr 22, 2026
@shaun-agent
Contributor

OpenAB PR Screening

This is auto-generated by the OpenAB project-screening flow for context collection and reviewer handoff.
Click 👍 if you find this useful. Human review will be done within 24 hours. We appreciate your support and contribution 🙏

Screening report

Intent

This PR fixes a Discord conversation-context gap: when a user replies to or quotes an earlier message in a thread, the agent currently only sees the new message text and loses the referenced message content. That makes prompts like “summarize this” or “answer that” ambiguous and degrades response quality for Discord users.

Feat

This is a feature-sized fix to Discord prompt assembly. It adds reply/quote context to the agent prompt by resolving the referenced Discord message, formatting it into a quote block, and prepending it to the user’s new message before routing to the agent. Non-reply messages remain unchanged.

Who It Serves

The primary beneficiary is Discord end users interacting with OpenAB agents in threads and reply chains. Secondarily, it helps maintainers and reviewers by making agent behavior more predictable and aligned with how users naturally converse in Discord.

Rewritten Prompt

Update the Discord adapter so reply messages include the referenced message content in the prompt sent to the agent.

Requirements:

  • Detect when an incoming Discord message references another message.
  • Resolve the referenced message by preferring gateway-provided referenced_message data and falling back to a Discord API fetch only when necessary.
  • Format the referenced content as a clearly labeled quote block including the original author handle.
  • Prepend that quote block to the current user prompt after mention resolution and before router dispatch.
  • Preserve current behavior for non-reply messages.
  • Add unit tests for formatting and resolution edge cases, including empty quoted content, multiline content, and missing referenced payloads.

Merge Pitch

This is worth moving forward because it fixes a real prompt-quality issue in a core user interaction path without changing the broader routing model. The risk profile is low to moderate: behavior is isolated to Discord reply handling, but reviewers will likely want to confirm prompt formatting is stable, API fallback is safe, and quoted context does not introduce noisy or misleading prompt injection in edge cases.

Best-Practice Comparison

OpenClaw principles:

  • Relevant: explicit delivery context is relevant here. Passing quoted content into the prompt is a lightweight form of better delivery routing because the agent receives the conversational state needed to interpret the message correctly.
  • Somewhat relevant: retry/backoff matters only for the fallback API fetch path if referenced message resolution depends on an HTTP call.
  • Not especially relevant: gateway-owned scheduling, durable job persistence, isolated executions, and run logs are broader execution-system concerns and do not materially affect this PR’s narrow prompt-enrichment scope.

Hermes Agent principles:

  • Relevant: the principle of self-contained prompts for scheduled tasks maps well to this PR's core idea. A prompt should contain enough local context to stand on its own, and reply context improves that.
  • Somewhat relevant: fresh-session thinking is indirectly aligned, because when memory is thin or absent, embedding the quoted message in the prompt makes each turn more self-sufficient.
  • Not relevant: gateway tick model, file locking, and atomic persisted writes do not fit this feature because no scheduler or persisted state is being introduced.

Overall:

  • The strongest best-practice alignment is with “self-contained inputs” and “explicit context delivery.”
  • This PR does not need scheduler-grade durability or persistence patterns unless reply resolution is later expanded into a more complex message-state subsystem.

Implementation Options

  1. Conservative option: inline quoted text only
  • Use referenced_message when present.
  • If missing, skip quote context instead of performing a fallback fetch.
  • Keep the implementation fully gateway-bound and low-risk.
  2. Balanced option: gateway-first with HTTP fallback
  • Prefer referenced_message.
  • Fallback to message_reference API fetch when the gateway payload is incomplete.
  • Format the quote consistently and inject it before routing.
  • Add focused tests around formatting and fallback behavior.
  3. Ambitious option: structured conversation-context builder
  • Build a reusable prompt-context layer for Discord that can include quoted messages, attachments, embeds, author metadata, and possibly limited parent-thread context.
  • Centralize prompt assembly rules instead of adding a single reply-specific branch.
  • Prepare the adapter for future multi-message context enrichment.
Comparison Table

| Option | Speed to ship | Complexity | Reliability | Maintainability | User impact | Fit for OpenAB right now |
| --- | --- | --- | --- | --- | --- | --- |
| Conservative: inline quoted text only | Fast | Low | Medium | High | Medium | Good |
| Balanced: gateway-first with HTTP fallback | Medium-fast | Medium | High | High | High | Best |
| Ambitious: structured context builder | Slow | High | Medium-high | Medium | Very high | Premature |

Recommendation

The balanced option is the right path for merge discussion. It solves the actual user problem reliably, keeps the change scoped to the Discord adapter, and avoids overbuilding a general context system before the project has validated the need for broader prompt-enrichment rules.

If this moves forward, the likely follow-up split is:

  1. Merge reply/quote context support now.
  2. Later evaluate whether attachments, embeds, or thread-parent context deserve the same treatment through a shared context-builder abstraction.

@CHC-Agent

Fix: resolve mentions in quoted message content

The quoted message content from referenced_message was being injected into the prompt without running resolve_mentions(). This meant raw Discord mention markup (<@BOT_ID>, <@&ROLE_ID>) could leak into the LLM prompt — and if the LLM echoed them back, Discord would actually ping those roles/users.

Change: Added resolve_mentions(&quoted.content, bot_id) before passing quoted content to format_quote_context(), consistent with how the user's own message is already processed.

Some(quoted) => {
    let quoted_content = resolve_mentions(&quoted.content, bot_id);
    format_quote_context(&quoted.author.name, &quoted_content, &prompt)
}

Minimal change — 4 lines added, 1 removed.
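
The project's actual resolve_mentions() is not shown in this thread, so as a hedged illustration of the leak this fix prevents, a simplified stripper for raw Discord mention markup might look like the following. The function name and the strip-rather-than-resolve behavior are assumptions for illustration; the real helper likely substitutes names for IDs instead of deleting them:

```rust
// Illustrative only: removes raw mention tokens (<@ID>, <@!ID>, <@&ID>) so
// they cannot leak into an LLM prompt. Not the project's resolve_mentions().
fn strip_mention_markup(content: &str) -> String {
    let mut out = String::new();
    let mut rest = content;
    while let Some(start) = rest.find("<@") {
        if let Some(end) = rest[start..].find('>') {
            let inner = &rest[start + 2..start + end];
            // Valid mention bodies are digits, optionally prefixed by '!' or '&'.
            let body = inner
                .strip_prefix('!')
                .or_else(|| inner.strip_prefix('&'))
                .unwrap_or(inner);
            if !body.is_empty() && body.chars().all(|c| c.is_ascii_digit()) {
                out.push_str(&rest[..start]);   // keep text before the token
                rest = &rest[start + end + 1..]; // skip the token itself
                continue;
            }
        }
        // Not a well-formed mention: keep the literal "<@" and keep scanning.
        out.push_str(&rest[..start + 2]);
        rest = &rest[start + 2..];
    }
    out.push_str(rest);
    out
}

fn main() {
    println!("{}", strip_mention_markup("summarize <@123456789> please"));
}
```

The point of the fix is simply that quoted content must pass through the same sanitization path as the user's own message before prompt assembly.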

ChunHao-dev and others added 2 commits April 30, 2026 06:40
Quoted message content was injected into the prompt without running
resolve_mentions(), leaking raw Discord mention markup (<@BOT_ID>,
<@&ROLE_ID>) into the LLM prompt. Apply the same resolve_mentions()
pass used for the user's own message content.
Contributor

@masami-agent masami-agent left a comment


PR Review: #527

Summary

  • Problem: When a user replies to (quotes) a message in a Discord thread, the quoted content is lost — the agent only sees the new message text with no context about what "this" refers to.
  • Approach: Read msg.referenced_message (gateway-provided, zero cost) with HTTP API fallback, prepend formatted quote block to the prompt before sending to the ACP agent.
  • Risk level: Low

Core Assessment

  1. Problem clearly stated: ✅ — well-documented in both issue #339 and PR description
  2. Approach appropriate: ✅ — two-tier resolution (gateway cache → HTTP fallback) is the correct pattern for serenity 0.12
  3. Alternatives considered: ✅ — the gateway-first + HTTP-fallback design is explicitly documented
  4. Best approach for now: ✅ — minimal, focused, non-breaking

Findings

Code correctness:

  • resolve_referenced_message() correctly handles serenity 0.12 types: message_reference.channel_id is ChannelId (non-optional), message_id is Option<MessageId> — the ? operator usage is correct.
  • *referenced.clone() dereferences the Box<Message> — correct pattern for Option<Box<Message>>.
  • resolve_mentions() is applied to the quoted content, which correctly strips bot mentions from the quoted text too. Good attention to detail.
  • Insertion point is correct: after resolve_mentions() on the user's own message, before the empty-check gate. This means a reply with empty user text but non-empty quoted content will still be processed — which is the right behavior.
  • The tracing::warn! on HTTP fetch failure with structured fields follows the project's existing logging pattern.

format_quote_context() design:

  • Pure function, easy to test — good separation.
  • Empty quoted_content returns prompt unchanged — correct guard.
  • The format [Quoted message from @{author_name}]:\n{content}\n\n{prompt} is clean and gives the agent clear context about who said what.

Review Summary

🔧 Suggested Changes

  • Consider adding a length cap on quoted content. If someone quotes a very long message (e.g., a full code dump from the bot), the entire thing gets prepended to the prompt. A reasonable truncation (e.g., first 2000 chars with a [truncated] marker) would prevent unexpectedly large prompts. Not blocking — this can be a follow-up.
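
The suggested cap could be a one-function follow-up. This is a sketch of the reviewer's idea, not code from the PR; the 2000-char budget and the "[truncated]" marker are the reviewer's example values:

```rust
// Hedged sketch of the suggested follow-up: cap quoted content before it is
// prepended to the prompt. Counts chars (not bytes) to avoid splitting UTF-8.
fn truncate_quoted(content: &str, max_chars: usize) -> String {
    if content.chars().count() <= max_chars {
        return content.to_string();
    }
    let cut: String = content.chars().take(max_chars).collect();
    format!("{cut}\n[truncated]")
}

fn main() {
    println!("{}", truncate_quoted("a very long quoted message", 10));
}
```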

ℹ️ Info

  • This only handles single-level quoting (the direct referenced_message). Nested quotes (quoting a message that itself was a reply) won't include the deeper context. This is fine for now — Discord's own UI only shows one level of reply context.
  • The HTTP fallback path (channel_id.message(http, message_id)) counts against Discord's rate limit. In practice this should be rare since the gateway almost always includes referenced_message, but worth noting for awareness.

⚪ Nits

  • None — code is clean and well-structured.

Verdict

APPROVE — Clean, focused implementation. Single file changed, 4 unit tests, all 109 existing tests pass, CI green across all 7 smoke-test variants. The code correctly handles serenity 0.12 types, follows existing project patterns, and the insertion point is well-chosen. Ready for maintainer review.

Collaborator

@obrutjack obrutjack left a comment


Reviewed. Clean, focused implementation — gateway-first with HTTP fallback, resolve_mentions applied to quoted content, good test coverage. LGTM.


Labels

pending-screening PR awaiting automated screening

Projects

None yet

Development

Successfully merging this pull request may close these issues.

feat: include Discord reply/quote context in agent prompt

5 participants