Reimplement wsh ai, fix text file attaching format #2435
Conversation
…g so AI recognizes it as a file and doesn't try to re-read it. implement RPC calls to interact with Wave AI.
Walkthrough

Reworks the CLI ai command to accept positional file arguments (including "-" for stdin), an optional message (-m), and flags to submit (-s) or start a new chat (-n). Adds MIME detection, per-file size limits (text/PDF/image/other), binary-data checks, and Base64-encoded AIAttachedFile payloads. Replaces prior block messaging with a route-based WaveAIAddContext flow using a new RPC command Command_WaveAIAddContext/CommandWaveAIAddContextData. Frontend: WaveAIModel gains chat transport hooks (registerUseChatData, useChatSendMessage/useChatSetMessages, getAndClearMessage, handleSubmit, uiLoadChat); TabClient handles waveaiaddcontext to inject files/text and optionally submit or clear/start a chat. Adds data-URL decoding, util helpers, new types, docs updates, and increases the WebSocket max message size to 10 MB.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches: ❌ Failed checks (2 warnings), ✅ Passed checks (1 passed)

Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0): Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
…. also allow submitting when we have an error status
Actionable comments posted: 5
🧹 Nitpick comments (5)
cmd/wsh/cmd/wshcmd-ai.go (1)

Lines 175-183: Consider batching small files instead of erroring when exceeding 7MB. Failing the whole run on cumulative small files > 7MB hurts UX. Flush the current batch, then continue accumulating.

```diff
- smallFilesSize += file.Size
- if smallFilesSize > maxBatchSize {
-     return fmt.Errorf("small files total size exceeds maximum batch size of 7MB")
- }
- smallFiles = append(smallFiles, file)
+ // If adding this file would exceed the batch, flush current batch first
+ if smallFilesSize+file.Size > maxBatchSize {
+     if len(smallFiles) > 0 {
+         batch := wshrpc.CommandWaveAIAddContextData{Files: smallFiles}
+         if err := wshclient.WaveAIAddContextCommand(RpcClient, batch, &wshrpc.RpcOpts{Route: route, Timeout: rpcTimeout}); err != nil {
+             return fmt.Errorf("adding small files batch: %w", err)
+         }
+     }
+     smallFiles = nil
+     smallFilesSize = 0
+ }
+ smallFiles = append(smallFiles, file)
+ smallFilesSize += file.Size
```

This keeps memory bounds while improving reliability for larger inputs.
frontend/app/aipanel/aipanel.tsx (2)

Lines 208-223: Potential race: relying on global model.realMessage between sendMessage and request prep. If users trigger multiple submits rapidly, realMessage can be overwritten before prepareSendMessagesRequest runs, mis-associating the payload. You partially mitigate with status !== "ready" checks, but there's still a brief window.

- Prefer passing a per-request payload through sendMessage options and reading it from opts in prepareSendMessagesRequest, or queue messages in the model and shift() here to preserve order.
- At minimum, add a lightweight "isSubmitting" flag in the model to guard double-submits until sendMessage resolves.
Lines 208-233: Minor: remove unused error from useChat destructuring. error isn't used. Drop it to appease linters and avoid confusion.

```diff
- const { messages, sendMessage, status, setMessages, error } = useChat({
+ const { messages, sendMessage, status, setMessages } = useChat({
```

frontend/app/aipanel/waveai-model.tsx (2)
Lines 137-150: Clear input when starting a new chat. Users expect the input box to reset on "New Chat." Add clearing of inputAtom in clearChat().

```diff
 clearChat() {
     this.clearFiles();
     this.isChatEmpty = true;
     const newChatId = crypto.randomUUID();
     globalStore.set(this.chatId, newChatId);
     const tabId = globalStore.get(atoms.staticTabId);
     RpcApi.SetMetaCommand(TabRpcClient, {
         oref: WOS.makeORef("tab", tabId),
         meta: { "waveai:chatid": newChatId },
     });
     this.useChatSetMessages?.([]);
+    globalStore.set(this.inputAtom, "");
 }
```
Lines 127-133: Only revoke object URLs; data URLs don't need revocation. createImagePreview returns a data URL. Calling URL.revokeObjectURL on data URLs is a no-op. Guard to only revoke blob/object URLs.

```diff
- currentFiles.forEach((file) => {
-     if (file.previewUrl) {
-         URL.revokeObjectURL(file.previewUrl);
-     }
- });
+ currentFiles.forEach((file) => {
+     if (file.previewUrl && file.previewUrl.startsWith("blob:")) {
+         URL.revokeObjectURL(file.previewUrl);
+     }
+ });
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)

- package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (17)

- cmd/wsh/cmd/wshcmd-ai.go (1 hunks)
- docs/docs/wsh-reference.mdx (1 hunks)
- docs/docs/wsh.mdx (1 hunks)
- frontend/app/aipanel/aipanel.tsx (8 hunks)
- frontend/app/aipanel/aitypes.ts (2 hunks)
- frontend/app/aipanel/waveai-model.tsx (7 hunks)
- frontend/app/store/tabrpcclient.ts (2 hunks)
- frontend/app/store/wshclientapi.ts (1 hunks)
- frontend/app/workspace/workspace-layout-model.ts (2 hunks)
- frontend/types/gotypes.d.ts (2 hunks)
- frontend/util/util.ts (3 hunks)
- pkg/aiusechat/openai/openai-convertmessage.go (3 hunks)
- pkg/aiusechat/usechat.go (1 hunks)
- pkg/util/utilfn/marshal.go (2 hunks)
- pkg/web/ws.go (1 hunks)
- pkg/wshrpc/wshclient/wshclient.go (1 hunks)
- pkg/wshrpc/wshrpctypes.go (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (9)
frontend/types/gotypes.d.ts (1)
- pkg/wshrpc/wshrpctypes.go (2): AIAttachedFile (739-744), CommandWaveAIAddContextData (746-751)

cmd/wsh/cmd/wshcmd-ai.go (4)
- cmd/wsh/cmd/wshcmd-root.go (2): OutputHelpMessage (78-82), RpcClient (32-32)
- pkg/wshrpc/wshrpctypes.go (3): AIAttachedFile (739-744), CommandWaveAIAddContextData (746-751), RpcOpts (299-305)
- pkg/wshutil/wshrouter.go (1): MakeTabRouteId (72-74)
- pkg/wshrpc/wshclient/wshclient.go (1): WaveAIAddContextCommand (585-588)

pkg/wshrpc/wshclient/wshclient.go (3)
- frontend/app/store/wshclientapi.ts (1): WaveAIAddContextCommand (491-493)
- pkg/wshutil/wshrpc.go (1): WshRpc (46-60)
- pkg/wshrpc/wshrpctypes.go (2): CommandWaveAIAddContextData (746-751), RpcOpts (299-305)

frontend/app/store/wshclientapi.ts (1)
- pkg/wshrpc/wshrpctypes.go (2): CommandWaveAIAddContextData (746-751), RpcOpts (299-305)

frontend/app/store/tabrpcclient.ts (3)
- pkg/wshrpc/wshrpctypes.go (1): CommandWaveAIAddContextData (746-751)
- frontend/app/aipanel/waveai-model.tsx (1): WaveAIModel (26-344)
- frontend/util/util.ts (1): base64ToArray (409-409)

frontend/app/aipanel/aipanel.tsx (2)
- frontend/app/aipanel/waveai-model.tsx (1): loadChat (226-249)
- frontend/app/aipanel/aipanelheader.tsx (1): AIPanelHeader (15-85)

pkg/wshrpc/wshrpctypes.go (2)
- frontend/app/store/wshclientapi.ts (1): WaveAIAddContextCommand (491-493)
- pkg/wshrpc/wshclient/wshclient.go (1): WaveAIAddContextCommand (585-588)

frontend/app/aipanel/waveai-model.tsx (3)
- frontend/app/aipanel/aitypes.ts (3): UseChatSendMessageType (30-54), UseChatSetMessagesType (26-28), WaveUIMessagePart (24-24)
- pkg/aiusechat/uctypes/usechat-types.go (1): AIMessagePart (268-284)
- frontend/app/aipanel/ai-utils.ts (2): normalizeMimeType (98-113), createDataUrl (131-138)

pkg/aiusechat/openai/openai-convertmessage.go (2)
- pkg/util/utilfn/marshal.go (1): DecodeDataURL (169-203)
- pkg/aiusechat/uctypes/usechat-types.go (2): UIMessagePart (35-68), UIMessageDataUserFile (70-75)
🪛 Biome (2.1.2)
frontend/app/aipanel/waveai-model.tsx
[error] 307-307: This hook is being called conditionally, but all hooks must be called in the exact same order in every component render.
Hooks should not be called after an early return.
For React to preserve state between calls, hooks needs to be called unconditionally and always in the same order.
See https://reactjs.org/docs/hooks-rules.html#only-call-hooks-at-the-top-level
(lint/correctness/useHookAtTopLevel)
🪛 LanguageTool
docs/docs/wsh.mdx
[grammar] ~145-~145: There might be a mistake here.
Context: ...or user) - -n, --new
- Clear chat and start fresh conversation File Limits: - ...
(QB_NEW_EN)
[grammar] ~147-~147: There might be a mistake here.
Context: ...start fresh conversation File Limits: - Text files: 200KB max - PDFs: 5MB max - ...
(QB_NEW_EN)
[grammar] ~148-~148: There might be a mistake here.
Context: ...File Limits: - Text files: 200KB max - PDFs: 5MB max - Images: 7MB max - Maximu...
(QB_NEW_EN)
[grammar] ~149-~149: There might be a mistake here.
Context: ...mits:** - Text files: 200KB max - PDFs: 5MB max - Images: 7MB max - Maximum 15 file...
(QB_NEW_EN)
[grammar] ~149-~149: There might be a mistake here.
Context: ... - Text files: 200KB max - PDFs: 5MB max - Images: 7MB max - Maximum 15 files per c...
(QB_NEW_EN)
[grammar] ~150-~150: There might be a mistake here.
Context: ...es: 200KB max - PDFs: 5MB max - Images: 7MB max - Maximum 15 files per command ## ...
(QB_NEW_EN)
[grammar] ~150-~150: There might be a mistake here.
Context: ...KB max - PDFs: 5MB max - Images: 7MB max - Maximum 15 files per command ## Tips & ...
(QB_NEW_EN)
docs/docs/wsh-reference.mdx
[grammar] ~141-~141: There might be a mistake here.
Context: ... system design" ``` File Size Limits: - Text files: 200KB maximum - PDF files: 5...
(QB_NEW_EN)
[grammar] ~142-~142: There might be a mistake here.
Context: ...ze Limits:** - Text files: 200KB maximum - PDF files: 5MB maximum - Image files: 7M...
(QB_NEW_EN)
[grammar] ~143-~143: There might be a mistake here.
Context: ... Text files: 200KB maximum - PDF files: 5MB maximum - Image files: 7MB maximum (acc...
(QB_NEW_EN)
[grammar] ~143-~143: There might be a mistake here.
Context: ...: 200KB maximum - PDF files: 5MB maximum - Image files: 7MB maximum (accounts for b...
(QB_NEW_EN)
[grammar] ~144-~144: There might be a mistake here.
Context: ...- PDF files: 5MB maximum - Image files: 7MB maximum (accounts for base64 encoding o...
(QB_NEW_EN)
[grammar] ~144-~144: There might be a mistake here.
Context: ... (accounts for base64 encoding overhead) - Maximum 15 files per command Flags:...
(QB_NEW_EN)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Analyze (go)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Build Docsite
- GitHub Check: Build for TestDriver.ai
- GitHub Check: merge-gatekeeper
🔇 Additional comments (12)
pkg/web/ws.go (1)
Line 30: LGTM! Appropriate increase for file attachments. The increase from 8 MB to 10 MB accommodates the new Wave AI file attachment feature with base64 encoding overhead. Given the documented file limits (200KB text, 5MB PDF, 7MB images) and ~33% base64 overhead, this limit provides adequate headroom.
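The ~33% figure follows from base64's 3-byte-to-4-character expansion. As a quick sanity check on the headroom claim (a sketch using only the size limits quoted above; the helper name is ours, not from the codebase):

```go
package main

import "fmt"

// base64Len gives the standard (padded) base64-encoded length of n raw
// bytes: every 3-byte group expands to 4 output characters.
func base64Len(n int) int {
	return ((n + 2) / 3) * 4
}

func main() {
	const mb = 1024 * 1024
	limits := []struct {
		kind string
		raw  int
	}{
		{"text", 200 * 1024},
		{"pdf", 5 * mb},
		{"image", 7 * mb},
	}
	for _, l := range limits {
		enc := base64Len(l.raw)
		overhead := 100 * float64(enc-l.raw) / float64(l.raw)
		fmt.Printf("%-5s %8d raw -> %8d encoded (+%.1f%%)\n", l.kind, l.raw, enc, overhead)
	}
	// The largest single attachment (7 MB image) stays under the 10 MB cap.
	fmt.Println(base64Len(7*mb) < 10*mb) // prints true
}
```

A 7 MB image encodes to roughly 9.3 MB, so the 10 MB cap leaves modest but real headroom for message framing.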
frontend/util/util.ts (1)
Line 31: LGTM! More explicit return type. The return type Uint8Array<ArrayBuffer> is more explicit than the previous Uint8Array and correctly reflects the underlying buffer type. This supports the Wave AI file attachment flow where base64 data is decoded into binary buffers for File/Blob creation.

docs/docs/wsh.mdx (1)
Lines 119-152: Excellent documentation of the new AI workflow. The updated documentation clearly describes the attach-first, review-then-submit workflow. The examples progress from simple to complex use cases, flags are well-explained, and file size limits are prominently displayed. The emphasis on reviewing attachments before submission provides a good user experience.
pkg/wshrpc/wshrpctypes.go (1)
Line 146: Well-designed RPC types for Wave AI context addition. The new Command_WaveAIAddContext command and associated types are well-structured:

- AIAttachedFile captures essential file metadata (name, type, size) plus base64 data
- CommandWaveAIAddContextData provides flexibility with optional files, text, submit, and newchat flags
- RPC method signature follows existing patterns
- Types align with frontend declarations and documented workflow

Also applies to: 277, 739-751
docs/docs/wsh-reference.mdx (1)
Lines 113-151: Comprehensive reference documentation for the AI command. The reference documentation is thorough and well-organized:
- Clear description of the attach-without-autosubmit default behavior
- Multiple examples demonstrating various use cases
- File size limits clearly stated with specific per-type values
- Flag descriptions are concise and informative
- Consistent with the overview documentation in wsh.mdx
frontend/types/gotypes.d.ts (1)
Lines 8-14: Auto-generated types correctly reflect Go definitions. The TypeScript type declarations for AIAttachedFile and CommandWaveAIAddContextData accurately mirror their Go counterparts. Field naming conventions (camelCase), optional field markers, and type mappings are all correct.

Also applies to: 331-337

pkg/util/utilfn/marshal.go (1)
pkg/util/utilfn/marshal.go (1)
Lines 168-203: Well-implemented data URL decoder. The DecodeDataURL function is robust and correct:
- Validates the data URL format before processing
- Correctly extracts MIME type with appropriate default ("text/plain" per spec)
- Handles both base64 and percent-encoded data paths
- Provides clear error messages for each failure mode
- Proper separation of concerns (parsing vs decoding)
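The behaviors listed above can be illustrated with a compact reimplementation. This is a sketch, not the actual DecodeDataURL from pkg/util/utilfn/marshal.go; the function name and error strings here are ours:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/url"
	"strings"
)

// decodeDataURL is a minimal sketch of RFC 2397 data-URL decoding: it
// validates the "data:" prefix, splits metadata from the payload,
// defaults the MIME type to "text/plain", and handles both base64 and
// percent-encoded bodies.
func decodeDataURL(dataURL string) (mimeType string, data []byte, err error) {
	rest, ok := strings.CutPrefix(dataURL, "data:")
	if !ok {
		return "", nil, fmt.Errorf("not a data URL")
	}
	meta, payload, ok := strings.Cut(rest, ",")
	if !ok {
		return "", nil, fmt.Errorf("missing comma separator")
	}
	isBase64 := strings.HasSuffix(meta, ";base64")
	meta = strings.TrimSuffix(meta, ";base64")
	mimeType = "text/plain" // default per the data-URL spec
	if meta != "" {
		mimeType = strings.SplitN(meta, ";", 2)[0]
	}
	if isBase64 {
		data, err = base64.StdEncoding.DecodeString(payload)
	} else {
		var s string
		s, err = url.PathUnescape(payload)
		data = []byte(s)
	}
	if err != nil {
		return "", nil, fmt.Errorf("decoding payload: %w", err)
	}
	return mimeType, data, nil
}

func main() {
	mt, b, _ := decodeDataURL("data:text/plain;base64,aGVsbG8=")
	fmt.Println(mt, string(b)) // prints: text/plain hello
}
```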
pkg/aiusechat/usechat.go (1)
Lines 68-71: Clear instructions for handling inline attached files. The system prompt additions effectively guide the AI to:
- Recognize inline attached text files with their file_name metadata
- Treat each attachment as a separate source
- Use inline content directly without redundant tool calls
This aligns well with the new Wave AI context flow where file content is embedded inline rather than referenced by path.
pkg/wshrpc/wshclient/wshclient.go (1)
Lines 584-588: LGTM: RPC wrapper matches existing patterns. Name, payload type, and call helper align with the server method and TS client.
frontend/app/store/wshclientapi.ts (1)
Lines 490-494: LGTM: API surface consistent with existing commands. Method name and signature align with backend type CommandWaveAIAddContextData.
frontend/app/workspace/workspace-layout-model.ts (1)
Lines 221-259: LGTM: Optional nofocus prevents unwanted autofocus. Good use of clearing timeouts and guarding focus. Backward compatible signature.
frontend/app/aipanel/aitypes.ts (1)
Line 4: Imports are correct. The web search confirms that both ChatRequestOptions and FileUIPart are exported from the 'ai' package, matching the import statement in your code. No issues found.
```typescript
async handle_waveaiaddcontext(rh: RpcResponseHelper, data: CommandWaveAIAddContextData): Promise<void> {
    const workspaceLayoutModel = WorkspaceLayoutModel.getInstance();
    if (!workspaceLayoutModel.getAIPanelVisible()) {
        workspaceLayoutModel.setAIPanelVisible(true, { nofocus: true });
    }

    const model = WaveAIModel.getInstance();

    if (data.newchat) {
        model.clearChat();
    }

    if (data.files && data.files.length > 0) {
        for (const fileData of data.files) {
            const decodedData = base64ToArray(fileData.data64);
            const blob = new Blob([decodedData], { type: fileData.type });
            const file = new File([blob], fileData.name, { type: fileData.type });
            await model.addFile(file);
        }
    }

    if (data.text) {
        model.appendText(data.text);
    }

    if (data.submit) {
        await model.handleSubmit(false);
    }
}
```
Prevent lost auto-submit: wait for chat transport readiness before calling handleSubmit.
If the AI panel isn’t fully mounted, useChatSendMessage may be null and submit will no-op. Add a short bounded wait for readiness before submitting.
Apply this diff inside the handler before calling handleSubmit:
```diff
@@
- if (data.submit) {
-     await model.handleSubmit(false);
- }
+ if (data.submit) {
+     // Ensure the chat transport is mounted to avoid dropping the send
+     const waitFor = async (cond: () => boolean, timeoutMs = 2000, stepMs = 50) => {
+         const start = Date.now();
+         while (Date.now() - start < timeoutMs) {
+             try {
+                 if (cond()) return true;
+             } catch {
+                 // ignore
+             }
+             await new Promise((r) => setTimeout(r, stepMs));
+         }
+         return false;
+     };
+     // Wait for useChatSendMessage registration
+     await waitFor(() => typeof (model as any).useChatSendMessage === "function");
+     await model.handleSubmit(false);
+ }
```
Optionally, if you prefer a first-class API, expose a WaveAIModel.isReady()/waitUntilReady() instead of peeking at the field.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
async handle_waveaiaddcontext(rh: RpcResponseHelper, data: CommandWaveAIAddContextData): Promise<void> {
    const workspaceLayoutModel = WorkspaceLayoutModel.getInstance();
    if (!workspaceLayoutModel.getAIPanelVisible()) {
        workspaceLayoutModel.setAIPanelVisible(true, { nofocus: true });
    }

    const model = WaveAIModel.getInstance();

    if (data.newchat) {
        model.clearChat();
    }

    if (data.files && data.files.length > 0) {
        for (const fileData of data.files) {
            const decodedData = base64ToArray(fileData.data64);
            const blob = new Blob([decodedData], { type: fileData.type });
            const file = new File([blob], fileData.name, { type: fileData.type });
            await model.addFile(file);
        }
    }

    if (data.text) {
        model.appendText(data.text);
    }

    if (data.submit) {
        // Ensure the chat transport is mounted to avoid dropping the send
        const waitFor = async (cond: () => boolean, timeoutMs = 2000, stepMs = 50) => {
            const start = Date.now();
            while (Date.now() - start < timeoutMs) {
                try {
                    if (cond()) return true;
                } catch {
                    // ignore
                }
                await new Promise((r) => setTimeout(r, stepMs));
            }
            return false;
        };
        // Wait for useChatSendMessage registration
        await waitFor(() => typeof (model as any).useChatSendMessage === "function");
        await model.handleSubmit(false);
    }
}
```
🤖 Prompt for AI Agents
In frontend/app/store/tabrpcclient.ts around lines 63 to 91, the handler calls
model.handleSubmit(false) immediately which can no-op if the AI panel or chat
transport isn't ready; add a short bounded wait for the model's send-ready
indicator before calling handleSubmit. Implement a small loop with a timeout
(e.g., poll every 50–100ms up to a total ~1–2s) that checks the model readiness
flag (or model._chatSendMessage if no API exists) and returns early if it
becomes ready; only call await model.handleSubmit(false) after readiness or when
the timeout elapses. Optionally replace the direct flag access by adding and
using a WaveAIModel.isReady()/waitUntilReady() method and await that instead
before submitting.
```go
encodedFileName := strings.ReplaceAll(fileName, `"`, "&quot;")
quotedFileName := strconv.Quote(encodedFileName)

randomSuffix := uuid.New().String()[0:8]
formattedText := fmt.Sprintf("<AttachedTextFile_%s file_name=%s>\n%s\n</AttachedTextFile_%s>", randomSuffix, quotedFileName, textContent, randomSuffix)
```
Encode angle brackets in file_name to avoid tag parsing ambiguities.

Only quotes are encoded. If the filename contains '<' or '>', ConvertToUIMessage's naive tag slicing can break. Encode them to &lt; / &gt;.

```diff
- encodedFileName := strings.ReplaceAll(fileName, `"`, "&quot;")
+ encodedFileName := strings.ReplaceAll(fileName, `"`, "&quot;")
+ encodedFileName = strings.ReplaceAll(encodedFileName, "<", "&lt;")
+ encodedFileName = strings.ReplaceAll(encodedFileName, ">", "&gt;")
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
encodedFileName := strings.ReplaceAll(fileName, `"`, "&quot;")
encodedFileName = strings.ReplaceAll(encodedFileName, "<", "&lt;")
encodedFileName = strings.ReplaceAll(encodedFileName, ">", "&gt;")
quotedFileName := strconv.Quote(encodedFileName)

randomSuffix := uuid.New().String()[0:8]
formattedText := fmt.Sprintf("<AttachedTextFile_%s file_name=%s>\n%s\n</AttachedTextFile_%s>", randomSuffix, quotedFileName, textContent, randomSuffix)
```
🤖 Prompt for AI Agents

In pkg/aiusechat/openai/openai-convertmessage.go around lines 352 to 356, the filename handling only replaces quotes, which leaves '<' and '>' unescaped and can break ConvertToUIMessage's tag parsing; replace the naive strings.ReplaceAll call with a proper HTML-escape of the fileName (e.g., use html.EscapeString(fileName), or first replace '&' -> '&amp;', then '<' -> '&lt;', '>' -> '&gt;', and '"' -> '&quot;') before calling strconv.Quote, so all angle brackets and ampersands are encoded and tag parsing ambiguities are avoided.
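For reference, Go's standard library already covers the escaping this prompt describes. A small sketch (the helper name and sample filename are invented for illustration):

```go
package main

import (
	"fmt"
	"html"
	"strconv"
)

// safeAttr escapes a filename for embedding as a quoted attribute value:
// html.EscapeString entity-encodes &, <, >, ' and " in one call, so the
// result can no longer terminate the surrounding tag early; strconv.Quote
// then wraps it in quotes.
func safeAttr(fileName string) string {
	return strconv.Quote(html.EscapeString(fileName))
}

func main() {
	// Note the angle brackets and quote that would otherwise confuse
	// naive ">"-based tag slicing.
	fmt.Println(safeAttr(`notes<v2>"final".txt`))
}
```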
```go
if strings.HasPrefix(block.Text, "<AttachedTextFile_") {
	openTagEnd := strings.Index(block.Text, ">")
	if openTagEnd == -1 {
		continue
	}

	openTag := block.Text[:openTagEnd]
	fileName, ok := extractXmlAttribute(openTag, "file_name")
	if !ok {
		continue
	}

	parts = append(parts, uctypes.UIMessagePart{
		Type: "data-userfile",
		Data: uctypes.UIMessageDataUserFile{
			FileName: fileName,
			MimeType: "text/plain",
		},
	})
} else {
	parts = append(parts, uctypes.UIMessagePart{
		Type: "text",
		Text: block.Text,
	})
}
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Make open tag parsing quote-aware to handle '>' inside attributes.
Using the first '>' as the tag end fails if a quoted attribute contains '>'. Scan for '>' outside quotes.
```diff
- if strings.HasPrefix(block.Text, "<AttachedTextFile_") {
-     openTagEnd := strings.Index(block.Text, ">")
-     if openTagEnd == -1 {
-         continue
-     }
-
-     openTag := block.Text[:openTagEnd]
+ if strings.HasPrefix(block.Text, "<AttachedTextFile_") {
+     // Find end of opening tag while respecting quoted segments
+     inQuotes := false
+     openTagEnd := -1
+     for i, r := range block.Text {
+         if r == '"' {
+             inQuotes = !inQuotes
+         } else if r == '>' && !inQuotes {
+             openTagEnd = i
+             break
+         }
+     }
+     if openTagEnd == -1 {
+         continue
+     }
+     openTag := block.Text[:openTagEnd]
```
Also consider unit tests for filenames containing '>', '<', and quotes to ensure robust parsing.
I'll need to verify the code and its context to validate this review comment. Let me investigate:
🏁 Script executed:

```shell
#!/bin/bash
# Find the file and verify it exists
echo "=== Checking for the file ==="
fd -type f "openai-convertmessage.go"

# Get context around lines 482-506
echo -e "\n=== Code context at lines 475-515 ==="
if [ -f "pkg/aiusechat/openai/openai-convertmessage.go" ]; then
  sed -n '475,515p' "pkg/aiusechat/openai/openai-convertmessage.go" | cat -n
fi

# Search for related test files
echo -e "\n=== Looking for test files ==="
fd -type f ".*convertmessage.*test"

# Search for AttachedTextFile_ usage/examples
echo -e "\n=== Searching for AttachedTextFile_ patterns ==="
rg -n "AttachedTextFile_" --type go -C 3
```

Length of output: 3944
🏁 Script executed:

```shell
#!/bin/bash
echo "=== encodedFileName context ==="
rg -n "encodedFileName" -C3 pkg/aiusechat/openai/openai-convertmessage.go

echo -e "\n=== Test files under pkg/aiusechat/openai ==="
if [ -d "pkg/aiusechat/openai" ]; then
  ls pkg/aiusechat/openai | grep "_test.go$" || echo "No tests"
fi

echo -e "\n=== AttachedTextFile_ in tests ==="
rg -n "AttachedTextFile_" -g "*_test.go" -C2
```

Length of output: 503
Make open-tag parsing quote-aware

In pkg/aiusechat/openai/openai-convertmessage.go, replace the naive openTagEnd := strings.Index(block.Text, ">") with a loop that skips > inside "-delimited attributes. Add unit tests covering filenames containing >, <, and " to verify correct parsing.
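A quote-aware scan of the kind requested here could look like the following sketch (findTagEnd is an invented name; the real fix would live inside the ConvertToUIMessage parsing loop):

```go
package main

import "fmt"

// findTagEnd returns the index of the first '>' that falls outside a
// double-quoted attribute value, or -1 if no such '>' exists.
func findTagEnd(s string) int {
	inQuotes := false
	for i, r := range s {
		switch {
		case r == '"':
			inQuotes = !inQuotes
		case r == '>' && !inQuotes:
			return i
		}
	}
	return -1
}

func main() {
	// The '>' inside file_name="a>b.txt" is skipped; only the real tag
	// terminator is reported.
	fmt.Println(findTagEnd(`<AttachedTextFile_ab12cd34 file_name="a>b.txt">x`)) // prints 46
}
```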
…e input validation
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/aiusechat/usechat.go (1)
Line 52: Fix typo in system prompt. "informationa" should be "information". Apply this diff:

```diff
- `Widget context is provided as informationa only.`,
+ `Widget context is provided as information only.`,
```
🧹 Nitpick comments (1)
frontend/app/aipanel/waveai-model.tsx (1)
Line 327: Remove unsafe type cast. Casting messages to any bypasses type safety. If UIMessage[] and WaveUIMessage[] are compatible, remove the cast; otherwise, add proper type conversion or validation. If the types are compatible, apply this diff:

```diff
- this.useChatSetMessages?.(messages as any);
+ this.useChatSetMessages?.(messages);
```

If conversion is needed, implement a proper type adapter instead of using any.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- frontend/app/aipanel/aitypes.ts (2 hunks)
- frontend/app/aipanel/waveai-model.tsx (7 hunks)
- pkg/aiusechat/usechat.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/app/aipanel/aitypes.ts
🧰 Additional context used
🧬 Code graph analysis (1)
frontend/app/aipanel/waveai-model.tsx (4)
- frontend/app/aipanel/aitypes.ts (3): UseChatSendMessageType (31-55), UseChatSetMessagesType (27-29), WaveUIMessagePart (25-25)
- frontend/app/aipanel/aimessage.tsx (1): AIMessage (462-505)
- pkg/aiusechat/uctypes/usechat-types.go (1): AIMessagePart (269-285)
- frontend/app/aipanel/ai-utils.ts (2): normalizeMimeType (98-113), createDataUrl (131-138)
🪛 Biome (2.1.2)
frontend/app/aipanel/waveai-model.tsx
[error] 307-307: This hook is being called conditionally, but all hooks must be called in the exact same order in every component render.
Hooks should not be called after an early return.
For React to preserve state between calls, hooks needs to be called unconditionally and always in the same order.
See https://reactjs.org/docs/hooks-rules.html#only-call-hooks-at-the-top-level
(lint/correctness/useHookAtTopLevel)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Analyze (go)
- GitHub Check: Build for TestDriver.ai
🔇 Additional comments (5)
pkg/aiusechat/usechat.go (1)
Lines 68-71: Align file attachment instructions for the Anthropic prompt. In pkg/aiusechat/usechat.go, SystemPromptText_OpenAI includes user-attached text file guidance (lines 68-71), but the Anthropic prompt (SystemPromptText) does not. If Anthropic supports inline attachments, duplicate those lines into SystemPromptText; otherwise add a comment explaining why it's omitted.
frontend/app/aipanel/waveai-model.tsx (4)
Lines 251-322: Past review comment addressed correctly. The submission logic now allows sending files without text (line 256) and only includes text parts when input is non-empty (lines 274-277). This correctly implements the file-attachment flow described in the PR objectives.
Lines 168-172, 187-191, 198-208: LGTM! New utility methods are well-implemented.

- registerUseChatData: clean registration pattern for external chat handlers
- getAndClearMessage: proper single-use message injection pattern
- appendText: correctly handles spacing by checking for a trailing space/newline
Line 149: LGTM! Chat reset integration is correct. Calling useChatSetMessages?.([]) properly clears the UI messages when resetting the chat, integrating correctly with the new chat API.
Lines 255-261: Confirm that the "error" status permits retry submissions. The guard now allows useChatStatus === "error"; ensure this matches expected behavior, otherwise restrict to only "ready".
```go
// Attached text files
`User-attached text files may appear inline as <AttachedTextFile_xxxxxx file_name="...">content</AttachedTextFile_xxxxxx>.`,
`If multiple attached files exist, treat each as a separate source file with its own file_name.`,
`When the user refers to these files, use their inline content directly; do NOT call any read_text_file or file-access tools to re-read them unless asked.`,
```
🧩 Analysis chain
Verify tag format and consider content escaping.

The XML-like tag format <AttachedTextFile_xxxxxx file_name="...">content</AttachedTextFile_xxxxxx> could break if the file content itself contains the closing tag string. Additionally, the placeholder format "xxxxxx" is not clearly documented (UUID? hash? random string?).

Run the following script to verify the tag format implementation and identify any escaping mechanisms:
Recommendation (optional refactor): Consider clarifying the placeholder format in the prompt. For example:

```diff
- `User-attached text files may appear inline as <AttachedTextFile_xxxxxx file_name="...">content</AttachedTextFile_xxxxxx>.`,
+ `User-attached text files may appear inline as <AttachedTextFile_[unique_id] file_name="...">content</AttachedTextFile_[unique_id]>, where [unique_id] is a unique identifier for each file.`,
```
🏁 Script executed:

```shell
#!/bin/bash
# Description: Find where AttachedTextFile tags are generated and verify escaping

# Search for AttachedTextFile tag generation
rg -n -C5 'AttachedTextFile' --type=go --type=ts --type=tsx

# Search for data URL encoding/decoding related to file attachments
rg -n -C3 'data:.*text' --type=go --type=ts --type=tsx

# Look for XML/HTML escaping in file attachment code
rg -n -C3 '(EscapeString|html\.Escape|xml\.Escape)' --type=go
```

Length of output: 55
---
🏁 Script executed:

```shell
#!/bin/bash
# Description: Find where AttachedTextFile tags are generated with corrected flags

# Search for AttachedTextFile tag generation in Go and TypeScript files
rg -n -C5 'AttachedTextFile' --type=go --type=ts

# Broader search for attached file handling
rg -n -C3 'Attached.*[Ff]ile' --type=go --type=ts

# Search for XML/HTML escaping in file attachment code
rg -n -C3 '(EscapeString|Escape)' --type=go | head -50

# Look for file content embedding or tag generation
rg -n -C3 'file_name' --type=go --type=ts | head -50
```

Length of output: 12183
🏁 Script executed:

```shell
#!/bin/bash
# Print parsing logic for AttachedTextFile tags
sed -n '470,550p' pkg/aiusechat/openai/openai-convertmessage.go
```

Length of output: 2239
**File content escaping is missing; this causes parsing failure if content contains the closing tag pattern.**

The implementation in `pkg/aiusechat/openai/openai-convertmessage.go:356` generates tags like `<AttachedTextFile_XXXXXXXX file_name="...">content</AttachedTextFile_XXXXXXXX>`, where `XXXXXXXX` is a UUID prefix (the first 8 characters, not the literal "xxxxxx"). The critical issue: file content is not escaped (line 356 uses a plain `%s`). If the file content contains `</AttachedTextFile_`, the parser at lines 482-488 will incorrectly identify the closing tag, breaking content extraction.

Additionally, the prompt placeholder should reference the actual implementation: `<AttachedTextFile_[uuid_prefix]` rather than `xxxxxx`.
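One way to close this gap is entity-escaping the content before embedding it in the tag and unescaping after extraction. A minimal sketch of that idea (hypothetical helper names, not the PR's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeAttachedContent encodes the three characters that could collide
// with the <AttachedTextFile_...> tag syntax. '&' is replaced first so
// that unescaping (which reverses the order) is lossless.
func escapeAttachedContent(s string) string {
	s = strings.ReplaceAll(s, "&", "&amp;")
	s = strings.ReplaceAll(s, "<", "&lt;")
	s = strings.ReplaceAll(s, ">", "&gt;")
	return s
}

// unescapeAttachedContent reverses escapeAttachedContent ('&' last).
func unescapeAttachedContent(s string) string {
	s = strings.ReplaceAll(s, "&lt;", "<")
	s = strings.ReplaceAll(s, "&gt;", ">")
	s = strings.ReplaceAll(s, "&amp;", "&")
	return s
}

func main() {
	content := "x < 1 && y > 2\n</AttachedTextFile_deadbeef>"
	escaped := escapeAttachedContent(content)
	fmt.Println(strings.Contains(escaped, "</AttachedTextFile_")) // false: the closing-tag pattern cannot survive escaping
	fmt.Println(unescapeAttachedContent(escaped) == content)      // true: round trip is lossless
}
```

With this in place, the parser can search for the literal closing tag safely, since no escaped body can contain a `<` character.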
Actionable comments posted: 0
🧹 Nitpick comments (2)
frontend/app/aipanel/waveai-model.tsx (1)

`263-267`: **Consider guarding command keywords when files are attached.**

If a user types "/clear" or "/new" with files attached, the current logic clears the chat instead of sending the files with the command text. This is likely unexpected behavior.

Consider this refinement:

```diff
+ const droppedFiles = globalStore.get(this.droppedFiles);
+
- if (input.trim() === "/clear" || input.trim() === "/new") {
+ if ((input.trim() === "/clear" || input.trim() === "/new") && droppedFiles.length === 0) {
      this.clearChat();
      globalStore.set(this.inputAtom, "");
      return;
  }
```

frontend/app/aipanel/aipanel.tsx (1)

`235-235`: **Consider wrapping in useEffect to avoid re-registering on every render.**

While not causing bugs, calling `registerUseChatData` at the top level means it re-runs on every render. This could be optimized:

```diff
- model.registerUseChatData(sendMessage, setMessages, status, stop);
+ useEffect(() => {
+     model.registerUseChatData(sendMessage, setMessages, status, stop);
+ }, [model, sendMessage, setMessages, status, stop]);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- frontend/app/aipanel/aipanel.tsx (7 hunks)
- frontend/app/aipanel/waveai-model.tsx (8 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/app/aipanel/waveai-model.tsx (4)
- frontend/app/aipanel/aitypes.ts (3): UseChatSendMessageType (31-55), UseChatSetMessagesType (27-29), WaveUIMessagePart (25-25)
- frontend/app/aipanel/aimessage.tsx (1): AIMessage (462-505)
- pkg/aiusechat/uctypes/usechat-types.go (1): AIMessagePart (269-285)
- frontend/app/aipanel/ai-utils.ts (2): normalizeMimeType (98-113), createDataUrl (131-138)

frontend/app/aipanel/aipanel.tsx (2)
- frontend/app/aipanel/waveai-model.tsx (2): WaveAIModel (26-350), loadChat (234-257)
- frontend/app/aipanel/aipanelheader.tsx (1): AIPanelHeader (15-85)
🪛 Biome (2.1.2)
frontend/app/aipanel/waveai-model.tsx
[error] 319-319: This hook is being called conditionally, but all hooks must be called in the exact same order in every component render.
Hooks should not be called after an early return.
For React to preserve state between calls, hooks needs to be called unconditionally and always in the same order.
See https://reactjs.org/docs/hooks-rules.html#only-call-hooks-at-the-top-level
(lint/correctness/useHookAtTopLevel)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
- GitHub Check: Build Docsite
- GitHub Check: Build for TestDriver.ai
- GitHub Check: merge-gatekeeper
🔇 Additional comments (7)
frontend/app/aipanel/waveai-model.tsx (4)

`170-180`: **LGTM! Clean integration point for chat transport.**

The method provides a clear API for wiring external chat hooks into the model's internal state.

`195-216`: **LGTM! Clean accessor and utility methods.**

Both methods are well-implemented:
- `getAndClearMessage()` provides a clean getter/clear pattern for message retrieval.
- `appendText()` correctly handles spacing by checking for existing whitespace.
`319-319`: **Static analysis false positive - this is not a React hook.**

Biome is incorrectly flagging this line as a conditional hook call. `this.useChatSendMessage` is a stored function reference, not a React hook. The naming convention (starting with "use") is misleading the linter, but the code is correct.

`326-334`: **LGTM! Well-orchestrated loading flow.**

The method correctly manages loading state, delegates to `loadChat()`, updates the UI via the chat transport, and schedules a scroll after DOM updates.

frontend/app/aipanel/aipanel.tsx (3)
`208-233`: **LGTM! Clean chat transport integration.**

The `useChat` configuration correctly:
- Integrates with the model via `getAndClearMessage()` for retrieving the pending message
- Handles errors by setting model error state and removing the failed user message from history

`260-265`: **LGTM! Better loading pattern than previous implementation.**

The `initialLoadDone` flag ensures the welcome message is only shown after confirming the chat is actually empty, not just still loading. This prevents flickering and improves UX.

`290-296`: **LGTM! Clean delegation to model.**

Delegating submit handling to `model.handleSubmit()` properly separates concerns between UI event handling and business logic.
…text files get scanned for binary chars, so we can upload dotfiles more effectively instead of failing because we can't figure out the MIME type.
Actionable comments posted: 0
🧹 Nitpick comments (1)
cmd/wsh/cmd/wshcmd-ai.go (1)

`167-189`: **Large and small files may be processed out of order.**

Large files are sent immediately in the loop, while small files are batched and sent at the end, so the AI may receive them in a different order than specified. If preserving the original file order is required, batch all files together or document this behavior.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- cmd/wsh/cmd/wshcmd-ai.go (1 hunks)
- pkg/util/utilfn/marshal.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
cmd/wsh/cmd/wshcmd-ai.go (5)
- cmd/wsh/cmd/wshcmd-root.go (2): OutputHelpMessage (78-82), RpcClient (32-32)
- pkg/wshrpc/wshrpctypes.go (3): AIAttachedFile (739-744), CommandWaveAIAddContextData (746-751), RpcOpts (299-305)
- pkg/util/utilfn/marshal.go (1): ContainsBinaryData (207-217)
- pkg/wshutil/wshrouter.go (1): MakeTabRouteId (72-74)
- pkg/wshrpc/wshclient/wshclient.go (1): WaveAIAddContextCommand (585-588)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Build Docsite
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
- GitHub Check: Build for TestDriver.ai
- GitHub Check: merge-gatekeeper
🔇 Additional comments (15)

pkg/util/utilfn/marshal.go (3)

`8-8`: **LGTM! Appropriate imports for data URL handling.**

The added standard library imports (`encoding/base64` and `net/url`) are necessary and appropriate for the new data URL decoding functionality.

Also applies to: 11-11
`168-203`: **LGTM! Solid data URL parsing implementation.**

The implementation correctly follows RFC 2397 for data URL parsing:
- Proper validation of format and error handling
- Correct MIME type extraction with appropriate default
- Handles both base64 and percent-encoded data paths

The function returns raw bytes and a MIME type string, appropriately leaving charset interpretation to the caller.
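For readers unfamiliar with RFC 2397, a standalone sketch of this kind of parser looks roughly like the following (hypothetical helper name; this is not the actual `DecodeDataURL` implementation, which may differ in detail):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/url"
	"strings"
)

// decodeDataURL parses a "data:[<mediatype>][;base64],<data>" URL and
// returns the raw bytes plus the MIME type, defaulting to text/plain.
func decodeDataURL(dataURL string) ([]byte, string, error) {
	rest, ok := strings.CutPrefix(dataURL, "data:")
	if !ok {
		return nil, "", fmt.Errorf("not a data URL")
	}
	comma := strings.Index(rest, ",")
	if comma < 0 {
		return nil, "", fmt.Errorf("malformed data URL: no comma separator")
	}
	meta, payload := rest[:comma], rest[comma+1:]
	isBase64 := strings.HasSuffix(meta, ";base64")
	meta = strings.TrimSuffix(meta, ";base64")
	mimeType := "text/plain" // RFC 2397 default when no media type is given
	if meta != "" {
		mimeType = strings.Split(meta, ";")[0] // drop parameters such as charset
	}
	if isBase64 {
		data, err := base64.StdEncoding.DecodeString(payload)
		return data, mimeType, err
	}
	// Non-base64 payloads are percent-encoded.
	decoded, err := url.PathUnescape(payload)
	return []byte(decoded), mimeType, err
}

func main() {
	data, mimeType, err := decodeDataURL("data:text/plain;base64,aGVsbG8=")
	fmt.Printf("%s %s %v\n", data, mimeType, err) // hello text/plain <nil>
}
```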
`206-217`: **LGTM! Reasonable heuristic for binary detection.**

The function uses a practical heuristic for detecting binary content by checking for null bytes and control characters (excluding common text control chars: tab, LF, CR).

While it may occasionally flag rare text control characters (form feed, vertical tab) as binary, this approach is appropriate for the AI file attachment use case and provides a good balance between simplicity and accuracy.
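A minimal sketch of such a heuristic (hypothetical name; the real `ContainsBinaryData` lives in `pkg/util/utilfn/marshal.go` and may differ in detail):

```go
package main

import "fmt"

// containsBinaryData mirrors the heuristic the review describes: data is
// treated as binary if it contains a NUL byte or any control character
// other than tab (0x09), LF (0x0A), or CR (0x0D).
func containsBinaryData(data []byte) bool {
	for _, b := range data {
		if b == 0 || (b < 0x20 && b != '\t' && b != '\n' && b != '\r') {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(containsBinaryData([]byte("plain text\nwith lines\n"))) // false
	fmt.Println(containsBinaryData([]byte{0x89, 'P', 'N', 'G', 0x00}))  // true: NUL byte in a PNG-like header
}
```

Note that this check operates on bytes, so UTF-8 text passes cleanly (all multi-byte sequences are ≥ 0x80), which is why it works well for dotfiles with no reliable MIME signature.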
cmd/wsh/cmd/wshcmd-ai.go (12)

`6-20`: **LGTM!**

The new imports support the file attachment functionality: `base64` for encoding file data, `net/http` for MIME type detection, `path/filepath` for file path handling, and `utilfn` for binary data detection.
`22-39`: **LGTM!**

The command documentation clearly describes the new file attachment workflow with helpful examples covering common use cases (piping, multiple files, auto-submit, new chat).

`41-50`: **LGTM!**

The flag definitions are clear and well-documented, with appropriate short options (-m, -s, -n) that match common CLI conventions.
`52-55`: **LGTM!**

The MIME type detection correctly uses `http.DetectContentType` and strips charset parameters, which is appropriate for subsequent type checks.
`57-65`: **LGTM!**

The size limits are appropriate for different file types, and the previous issue with slicing `mimeType[:6]` has been correctly fixed by using `strings.HasPrefix(mimeType, "image/")` on line 61.
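A sketch of the sniff-and-limit approach the two comments above describe (hypothetical helper names; the size constants are quoted from this review, not independently verified against the source):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// detectMimeType sniffs content with http.DetectContentType and strips
// any parameters (e.g. "; charset=utf-8") so callers can compare the
// bare media type.
func detectMimeType(data []byte) string {
	mimeType := http.DetectContentType(data)
	if idx := strings.Index(mimeType, ";"); idx >= 0 {
		mimeType = mimeType[:idx]
	}
	return strings.TrimSpace(mimeType)
}

// getMaxFileSize applies per-type limits (5MB PDF, 7MB image, 200KB
// text/other). strings.HasPrefix avoids panicking on short MIME strings,
// unlike slicing mimeType[:6].
func getMaxFileSize(mimeType string) int {
	switch {
	case mimeType == "application/pdf":
		return 5 * 1024 * 1024
	case strings.HasPrefix(mimeType, "image/"):
		return 7 * 1024 * 1024
	default:
		return 200 * 1024
	}
}

func main() {
	fmt.Println(detectMimeType([]byte("hello world"))) // text/plain
	fmt.Println(getMaxFileSize("image/png"))           // 7340032
}
```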
`72-75`: **LGTM!**

The validation ensures meaningful input is provided before processing, with helpful error messaging and usage display.

`77-81`: **LGTM!**

The constants define reasonable limits for file operations, with a 1MB threshold for distinguishing large files that need individual handling from small files that can be batched.

`85-87`: **LGTM!**

The early validation prevents unnecessary file I/O when too many files are specified, failing fast with a clear error message.
`89-145`: **LGTM!**

The file processing logic is well-structured:
- Properly distinguishes stdin from regular files
- Detects MIME types for regular files (stdin defaults to text/plain)
- Enforces file type restrictions (PDF, image, or text only)
- Validates against binary data in text files using `ContainsBinaryData`
- Applies appropriate size limits per MIME type
- Base64-encodes data for transport

The stdin MIME type assumption of "text/plain" is reasonable since the binary data check on line 129 will catch non-text content piped through stdin.
`147-152`: **LGTM!**

The tab ID retrieval from environment is properly validated, and the route creation follows the expected pattern for targeting specific tabs in the RPC calls.

`154-165`: **LGTM!**

The new chat creation is correctly placed before file additions, ensuring files are added to the new chat context rather than an existing one.

`191-203`: **LGTM!**

The final context submission correctly bundles small files with the optional message and submit flag, using appropriate timeout and routing parameters.
…issues with the filename containing ">" chars...
Actionable comments posted: 0
♻️ Duplicate comments (2)
pkg/aiusechat/usechat.go (1)

`68-71`: **Past review concerns remain unaddressed.**

The past review identified critical issues with the attached text file implementation:

1. Placeholder mismatch: the documentation uses `xxxxxx`, but the implementation uses the first 8 characters of a UUID (see `openai-convertmessage.go:355`).
2. Missing content escaping: the implementation at `openai-convertmessage.go:356` does not escape file content. If the content contains the closing tag pattern `</AttachedTextFile_XXXXXXXX>`, the parser will fail to extract the content correctly.

These issues should be addressed before this documentation is finalized.
pkg/aiusechat/openai/openai-convertmessage.go (1)

`352-356`: **Past review concerns remain unaddressed.**

The past review identified critical issues that are still present in the current code:

1. Line 352: only quotes are encoded (as `&quot;`). Angle brackets (`<`, `>`) in filenames should also be encoded (as `&lt;` and `&gt;`) to avoid tag parsing ambiguities.
2. Line 356: file content is not escaped. If the content contains the closing tag pattern `</AttachedTextFile_XXXXXXXX>`, the parser at lines 482-506 will incorrectly identify the closing tag mid-content, breaking content extraction.

The past review suggested encoding all angle brackets and ampersands in the filename, and properly escaping the content (e.g., by replacing `<` with `&lt;`, `>` with `&gt;`, and `&` with `&amp;`).
🧹 Nitpick comments (1)
pkg/aiusechat/openai/openai-convertmessage.go (1)

`28-58`: **Consider more robust attribute parsing for future extensibility.**

The `extractXmlAttribute` function works for the current controlled tag generation but has limitations:

1. Naive quote handling: it assumes the first quote after `attr=` is the opening quote and the next quote is the closing quote. This breaks if there are nested quotes or multiple attributes with quotes in values.
2. Entity decoding order: the function decodes `&quot;` after `strconv.Unquote`, but if other entities (like `&lt;`, `&gt;`, `&amp;`) are used, they should be decoded as well.

Since you control the tag generation (lines 353-356), the current implementation is sufficient. However, consider using a proper XML/HTML parser or a more robust attribute extraction if the format becomes more complex.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- pkg/aiusechat/openai/openai-convertmessage.go (3 hunks)
- pkg/aiusechat/usechat.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
pkg/aiusechat/openai/openai-convertmessage.go (2)
- pkg/util/utilfn/marshal.go (1): DecodeDataURL (169-203)
- pkg/aiusechat/uctypes/usechat-types.go (2): UIMessagePart (35-68), UIMessageDataUserFile (70-75)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Build for TestDriver.ai
- GitHub Check: Build Docsite
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
- GitHub Check: merge-gatekeeper
🔇 Additional comments (2)

pkg/aiusechat/openai/openai-convertmessage.go (2)

`334-342`: **LGTM! Data URL handling is correct.**

The data URL decoding logic properly handles both embedded data and data URLs for text/plain files. Error messages are clear and actionable.

`482-506`: **Parsing logic is adequate but depends on content escaping fix.**

The current parsing approach searches for `\n` and checks whether the previous character is `>`, which is safer than directly searching for `>` (as mentioned in past reviews). This works correctly for the tag format generated at line 356: `<AttachedTextFile_XXXXXXXX file_name="...">\ncontent\n</AttachedTextFile_XXXXXXXX>`.

However, the parsing logic will fail if the content contains the closing tag pattern, which is the critical issue already flagged at lines 352-356. Once content escaping is implemented (e.g., encoding `<` as `&lt;` and `>` as `&gt;`), this parsing logic will work correctly.

Note: the current approach of finding `\n` is brittle and assumes the tag format doesn't change. Consider adding validation to ensure the opening tag format matches expectations (e.g., verify it ends with `>`).
Actionable comments posted: 0
🧹 Nitpick comments (2)
cmd/wsh/cmd/wshcmd-ai.go (2)
`93-120`: **Consider MIME detection for stdin or document text-only limitation.**

Stdin is hardcoded to `text/plain` (line 104) without MIME detection, while regular files use `detectMimeType` (line 119). This prevents piping PDFs or images through stdin, which users might reasonably attempt (e.g., `cat document.pdf | wsh ai -`). The binary check at line 127 will catch this, but the error message won't clarify the stdin limitation.

Consider one of the following:
- Apply MIME detection to stdin data for consistency with file handling
- Update the help text to document that stdin only accepts text
- Provide a stdin-specific error message when binary data is detected

Option 1: add MIME detection for stdin:

```diff
  data, err = io.ReadAll(os.Stdin)
  if err != nil {
      return fmt.Errorf("reading from stdin: %w", err)
  }
  fileName = "stdin"
- mimeType = "text/plain"
+ mimeType = detectMimeType(data)
```

Option 2: document the text-only limitation in the help text (lines 28-35):

```diff
  Arguments:
-   files...    Files to attach (use '-' for stdin)
+   files...    Files to attach (use '-' for stdin, text only)
```
`165-176`: **Consider batch-sending files in a single RPC call.**

Files are currently sent one at a time, but `CommandWaveAIAddContextData` accepts `[]AIAttachedFile` (an array). Sending individually has two drawbacks:
- Performance: multiple RPC round-trips for what could be a single call
- Atomicity: if file N fails, files 1 through N-1 have already been added to the chat, leaving it in a partially-completed state with no rollback

Consider sending all files in a single call:

```diff
- for _, file := range allFiles {
-     contextData := wshrpc.CommandWaveAIAddContextData{
-         Files: []wshrpc.AIAttachedFile{file},
-     }
-     err := wshclient.WaveAIAddContextCommand(RpcClient, contextData, &wshrpc.RpcOpts{
-         Route:   route,
-         Timeout: rpcTimeout,
-     })
-     if err != nil {
-         return fmt.Errorf("adding file %s: %w", file.Name, err)
-     }
+ if len(allFiles) > 0 {
+     contextData := wshrpc.CommandWaveAIAddContextData{
+         Files: allFiles,
+     }
+     err := wshclient.WaveAIAddContextCommand(RpcClient, contextData, &wshrpc.RpcOpts{
+         Route:   route,
+         Timeout: rpcTimeout,
+     })
+     if err != nil {
+         return fmt.Errorf("adding files: %w", err)
+     }
  }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- cmd/wsh/cmd/wshcmd-ai.go (1 hunks)
- pkg/aiusechat/usechat.go (1 hunks)

🚧 Files skipped from review as they are similar to previous changes (1)

- pkg/aiusechat/usechat.go
🧰 Additional context used
🧬 Code graph analysis (1)
cmd/wsh/cmd/wshcmd-ai.go (5)
- cmd/wsh/cmd/wshcmd-root.go (2): OutputHelpMessage (78-82), RpcClient (32-32)
- pkg/wshrpc/wshrpctypes.go (3): AIAttachedFile (739-744), CommandWaveAIAddContextData (746-751), RpcOpts (299-305)
- pkg/util/utilfn/marshal.go (1): ContainsBinaryData (207-217)
- pkg/wshutil/wshrouter.go (1): MakeTabRouteId (72-74)
- pkg/wshrpc/wshclient/wshclient.go (1): WaveAIAddContextCommand (585-588)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Build for TestDriver.ai
- GitHub Check: Analyze (go)
🔇 Additional comments (6)
cmd/wsh/cmd/wshcmd-ai.go (6)

`6-39`: **LGTM!**

The imports are appropriate for the new functionality, and the command definition with updated help text and examples provides clear guidance to users.

`52-65`: **LGTM!**

The helper functions are well-implemented. `detectMimeType` properly extracts the base MIME type, and `getMaxFileSize` provides appropriate size limits per file type. The use of `strings.HasPrefix` correctly addresses the previous review's concern about potential panics.

`178-190`: **Verify intended behavior: submit flag without message.**

The condition `if aiMessageFlag != "" || aiSubmitFlag` means that using `--submit` without `--message` will send a request with an empty `Text` field and `Submit: true`. This appears to submit the existing prompt without adding new text.

Please confirm this is the intended behavior. If so, the logic is correct. If not, consider requiring a message when submit is used.

`122-135`: **LGTM!**

The MIME type validation and size limit enforcement are well-implemented. The logic correctly:
- Preserves PDF and image MIME types without binary checks
- Converts other types to text/plain and validates they don't contain binary data
- Enforces appropriate size limits per file type (5MB for PDFs, 7MB for images, 200KB for text)

`145-150`: **LGTM!**

Proper validation of the required environment variable and correct construction of the route identifier.

`152-163`: **LGTM!**

The new chat creation logic is correct with proper error handling.
Merges 59 commits from wavetermdev/waveterm v0.12.0 release into a5af fork. Resolves 49 merge conflicts across frontend, backend, and configuration files.

## Major Upstream Features Integrated

### AI Enhancements (v0.12.0)
- AI Response Feedback + Copy Buttons (wavetermdev#2457)
- Reasoning Deltas Display (wavetermdev#2443)
- Google AI File Summarization (wavetermdev#2455)
- `wsh ai` Command Reimplementation (wavetermdev#2435)
- Terminal Context Improvements (wavetermdev#2444)
- Batch Tool Approval System
- Enhanced AI Panel with welcome message
- Context menu support for AI messages

### Infrastructure Updates
- Mobile User Agent Emulation for web widgets (wavetermdev#2454)
- OSC 7 Support for Fish & PowerShell shells (wavetermdev#2456)
- Log Rotation System (wavetermdev#2432)
- Onboarding improvements
- React 19 compatibility updates
- Tailwind v4 migration progress
- Dependency updates (50+ commits)

## Fork Features Preserved

✅ **Horizontal Widget Bar** (tabbar.tsx)
- Widgets remain in horizontal tab bar (not reverted to sidebar)
- Fork-specific layout maintained

✅ **Optional Pane Title Labels** (blockframe.tsx)
- Auto-generated pane titles preserved
- Custom block rendering logic intact

✅ **Layout Model Modifications** (layoutModel.ts)
- Fork's widget positioning logic maintained
- Horizontal layout integration preserved

## Conflict Resolution Summary

**Configuration (8 files):**
- Accepted upstream: .golangci.yml, Taskfile.yml, package.json, go.mod, etc.
- All dependencies updated to v0.12.0 levels

**Backend AI (13 files):**
- Accepted upstream: All pkg/aiusechat/ files
- New AI tools: read_dir, screenshot, terminal context
- Enhanced OpenAI backend with reasoning support

**Frontend AI Panel (12 files):**
- Accepted upstream: All frontend/app/aipanel/ files
- New features: reasoning display, feedback buttons, welcome message
- Enhanced message handling and UX

**Backend Infrastructure (7 files):**
- Accepted upstream: emain, pkg/telemetry, pkg/wcore, pkg/wshrpc
- Updated RPC types and telemetry data structures

**Frontend Fork Features (8 files):**
- Preserved fork: blockframe.tsx, tabbar.tsx, layoutModel.ts
- Accepted upstream: keymodel.ts, wshclientapi.ts, termwrap.ts, etc.

**Deleted Files (1 file):**
- Removed: frontend/app/modals/tos.tsx (deleted upstream)

## Files Changed

- Configuration: 8 files
- Backend: 20+ files
- Frontend: 25+ files
- Total staged: 135 files

## Testing Required

1. Verify AI panel functionality (reasoning, feedback, tools)
2. Test horizontal widget bar (fork feature)
3. Test pane title labels (fork feature)
4. Build verification: `npm install && npm run build:dev`
5. Backend build: `go build ./...`
6. Full test suite: `npm test`

## Known Issues

⚠️ Widget bar integration may need review - upstream removed widget sidebar
⚠️ Layout changes may conflict with horizontal widget positioning
⚠️ React 19 compatibility should be tested thoroughly

## Rollback

If issues arise, rollback available at:
- Tag: fork-v0.11.6-pre-v0.12-merge
- Branch: backup-pre-v0.12-merge

## Next Steps

1. Test all functionality thoroughly
2. Fix any runtime errors from merge
3. Review fork feature integration points
4. Update documentation if needed
5. Create PR to main after validation

---

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
No description provided.