
MCP socket proxy: request ID collision causes tool calls to hang when multiple sessions share a proxy #324

@ofhtech

Bug Report

  • Agent Deck version: v0.3.1
  • OS: Ubuntu 24.04 (WSL2) / Linux 6.6.87
  • tmux version: 3.4

Description

When multiple Claude Code sessions share a pooled MCP server via the socket proxy, tool calls intermittently hang. The permission prompt never appears and the call blocks indefinitely; the user has to cancel with Ctrl+C. The cause is a request ID collision in the socket proxy's multiplexing logic.

Steps to Reproduce

  1. Start agent-deck with an MCP pool (pool: true) containing any MCP server
  2. Open two or more Claude Code sessions that share the pooled server
  3. Issue tool calls from both sessions concurrently
  4. Observe that some calls hang — the response never arrives and the permission prompt never appears

Expected Behavior

All tool calls should complete regardless of how many sessions share the proxy.

Actual Behavior

When two clients happen to send JSON-RPC requests with the same id value (which is common since Claude Code uses small sequential integers starting from 1), one client's response is routed to the wrong session and the other client hangs forever.

Root Cause

In internal/mcppool/socket_proxy.go, the handleClient method stores a mapping from request ID → session ID:

// socket_proxy.go ~line 290
requestMap[req.ID] = sessionID

When two clients send requests with the same JSON-RPC id (e.g., both send id: 1), the second requestMap write overwrites the first. Then in broadcastResponses / routeToClient, the response for id: 1 is routed only to the second session — the first session never receives its response and hangs.
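The overwrite can be reproduced in isolation. The sketch below (the `route` helper, map shape, and session names are illustrative, not the actual proxy internals) shows why keying the routing table by the raw JSON-RPC id alone loses the first session:

```go
package main

import "fmt"

// route records which session is waiting on a given raw JSON-RPC id.
// Keying by the raw id alone is the bug: a second session reusing the
// same id silently clobbers the first session's entry.
func route(requestMap map[int64]string, id int64, sessionID string) {
	requestMap[id] = sessionID
}

func main() {
	requestMap := map[int64]string{}

	route(requestMap, 1, "session-A") // session A sends {"id": 1, ...}
	route(requestMap, 1, "session-B") // session B also sends {"id": 1, ...}

	// When the server's response for id 1 arrives, it is routed to
	// session B only; session A waits forever.
	fmt.Println(requestMap[1]) // session-B
}
```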

Additionally, both raw requests (with the same id) are forwarded verbatim to the MCP server's stdin. The JSON-RPC spec doesn't define behavior when a server receives two concurrent requests with the same ID — the server may only respond to one, or responses may be mismatched.
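For concreteness, the server's stdin can end up with two frames like the following (the method and params shown here are illustrative, not captured from a real session):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "search"}}
{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "fetch"}}
```

From the server's perspective these are indistinguishable duplicates, so even a fully spec-compliant server cannot produce two responses that the proxy could tell apart.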

Suggested Fix

Rewrite request IDs at the proxy layer using an atomic counter, and maintain a reverse mapping to restore original IDs before forwarding responses back to clients:

type SocketProxy struct {
    // ...existing fields...
    nextID     atomic.Int64
    idMap      sync.Map // proxyID → {sessionID, originalID}
}

type idMapping struct {
    sessionID  string
    originalID interface{}
}

func (sp *SocketProxy) handleClient(conn net.Conn, sessionID string) {
    // ...
    proxyID := sp.nextID.Add(1)
    sp.idMap.Store(proxyID, idMapping{sessionID: sessionID, originalID: req.ID})
    req.ID = proxyID
    // forward rewritten request to MCP stdin
}

func (sp *SocketProxy) routeToClient(resp jsonrpcMessage) {
    // Note: if resp.ID was decoded via encoding/json into an
    // interface{}, it will be a float64 (or json.Number), not an
    // int64 — normalize it before using it as the map key.
    if mapping, ok := sp.idMap.LoadAndDelete(resp.ID); ok {
        m := mapping.(idMapping)
        resp.ID = m.originalID           // restore original ID for the client
        sendToSession(m.sessionID, resp) // route to correct session
    }
}

This ensures every in-flight request has a globally unique ID from the MCP server's perspective, and responses are always routed to the correct session with the original ID restored.
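The rewrite/restore round-trip can be demonstrated standalone. In this sketch the `proxy` type and method names are illustrative stand-ins for the agent-deck types, but the mechanism (atomic counter plus `sync.Map`) is the one proposed above:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// idMapping remembers how to undo an ID rewrite: which session sent
// the request, and which id that session originally used.
type idMapping struct {
	sessionID  string
	originalID interface{}
}

// proxy is a minimal stand-in for SocketProxy.
type proxy struct {
	nextID atomic.Int64
	idMap  sync.Map // proxyID (int64) → idMapping
}

// rewrite replaces a client's request id with a proxy-unique one and
// records the reverse mapping.
func (p *proxy) rewrite(sessionID string, originalID interface{}) int64 {
	proxyID := p.nextID.Add(1)
	p.idMap.Store(proxyID, idMapping{sessionID: sessionID, originalID: originalID})
	return proxyID
}

// restore consumes a response's proxy id, returning the session to
// route to and the client's original id.
func (p *proxy) restore(proxyID int64) (sessionID string, originalID interface{}, ok bool) {
	v, ok := p.idMap.LoadAndDelete(proxyID)
	if !ok {
		return "", nil, false
	}
	m := v.(idMapping)
	return m.sessionID, m.originalID, true
}

func main() {
	p := &proxy{}

	// Both sessions send id 1; the proxy hands out distinct ids.
	a := p.rewrite("session-A", 1)
	b := p.rewrite("session-B", 1)
	fmt.Println(a, b) // 1 2

	// Responses come back keyed by proxy id and route unambiguously,
	// with the original id restored for each client.
	sess, orig, _ := p.restore(a)
	fmt.Println(sess, orig) // session-A 1
	sess, orig, _ = p.restore(b)
	fmt.Println(sess, orig) // session-B 1
}
```

Using `LoadAndDelete` also bounds the map's size: each mapping lives only as long as its request is in flight.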

Affected Files

  • internal/mcppool/socket_proxy.go — SocketProxy struct, handleClient, routeToClient, broadcastResponses

Workaround

Disable pooling (pool: false) so each session gets its own MCP server instance. This avoids the shared proxy entirely but increases resource usage.
