RPC Proxy

The Control Plane acts as an RPC proxy between the frontend/API and connected agents. RPCs are forwarded over the agent’s WebSocket connection.

Frontend → CP REST endpoint → WebSocket → Agent → Execute → Response → CP → Frontend
  1. Frontend sends HTTP request to a CP endpoint (e.g., POST /api/workspaces/{id}/rpc/pods)
  2. CP looks up the agent WebSocket for this workspace
  3. CP sends RPC message over WebSocket with a unique rpc_id
  4. Agent executes the command and sends the response back
  5. CP resolves the pending RPC future and returns HTTP response
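The steps above can be sketched as follows. This is an illustrative asyncio model, not the actual CP implementation; the class and method names (`RpcProxy`, `call`, `on_agent_message`) are assumptions:

```python
import asyncio
import uuid

class RpcProxy:
    """Minimal sketch of the CP's RPC forwarding (names are illustrative)."""

    def __init__(self):
        # rpc_id -> Future awaiting the agent's response (step 3)
        self.pending: dict[str, asyncio.Future] = {}

    async def call(self, send_to_agent, method: str, params: dict,
                   timeout: float = 30.0):
        rpc_id = str(uuid.uuid4())
        fut = asyncio.get_running_loop().create_future()
        self.pending[rpc_id] = fut
        try:
            # Step 3: forward the RPC over the agent's WebSocket
            await send_to_agent({"rpc_id": rpc_id, "method": method,
                                 "params": params})
            # Step 5: wait for the agent's response (timeout -> 504)
            return await asyncio.wait_for(fut, timeout)
        finally:
            # Clean up the pending entry on success, timeout, or disconnect
            self.pending.pop(rpc_id, None)

    def on_agent_message(self, msg: dict):
        # Step 4: an agent response resolves the matching pending future
        fut = self.pending.get(msg["rpc_id"])
        if fut and not fut.done():
            fut.set_result(msg.get("result"))
```

Keying the pending map by `rpc_id` is what lets responses arrive out of order over a single WebSocket: each reply carries the id of the request it answers.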

RPCs have per-method timeouts defined in the Control Plane:

  • Default: 30 seconds
  • Long-running commands (capture, analysis): up to 120 seconds
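A per-method timeout table like the one described could look like this; the specific method names (`capture`, `analysis`) and lookup helper are assumptions, with values taken from the bullets above:

```python
# Illustrative per-method timeout table; method names are assumed
RPC_TIMEOUTS: dict[str, float] = {
    "capture": 120.0,   # long-running
    "analysis": 120.0,  # long-running
}
DEFAULT_TIMEOUT = 30.0

def rpc_timeout(method: str) -> float:
    """Return the timeout for a method, falling back to the default."""
    return RPC_TIMEOUTS.get(method, DEFAULT_TIMEOUT)
```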

If the agent doesn’t respond within the timeout, the CP returns 504 Gateway Timeout and cleans up the pending RPC entry.

The CP limits concurrent RPCs per agent to prevent overwhelming the agent. When slots are full, new RPCs are queued with a grace period before rejecting with 429 Too Many Requests.
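The queue-then-reject behavior can be modeled with a semaphore: wait for a free slot up to the grace period, then give up. The slot count and grace-period value below are placeholders, not documented limits:

```python
import asyncio

class AgentBackpressure:
    """Sketch of per-agent RPC concurrency limiting (limits are assumed)."""

    def __init__(self, max_concurrent: int = 8, grace_period: float = 5.0):
        self.slots = asyncio.Semaphore(max_concurrent)
        self.grace_period = grace_period

    async def acquire(self) -> bool:
        # Queue briefly for a free slot; give up after the grace period,
        # at which point the caller returns 429 Too Many Requests
        try:
            await asyncio.wait_for(self.slots.acquire(), self.grace_period)
            return True
        except asyncio.TimeoutError:
            return False

    def release(self):
        self.slots.release()
```

Releasing the slot in a `finally` block around the RPC call keeps a timed-out or failed RPC from permanently consuming capacity.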

| Scenario | HTTP Status | Behavior |
| --- | --- | --- |
| Agent not connected | 502 | Immediate failure |
| Agent disconnects during RPC | 504 | Pending RPC cleaned up |
| RPC timeout | 504 | Future cancelled, slot released |
| Agent returns error | Varies | Error forwarded to caller |
| Backpressure full | 429 | Grace period, then reject |
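One way to express this mapping in code is a table from failure type to status; the exception class names here are hypothetical, used only to illustrate the statuses in the table above:

```python
# Hypothetical failure types for the proxy's error paths
class AgentNotConnected(Exception): pass
class AgentDisconnected(Exception): pass
class RpcTimeout(Exception): pass
class BackpressureFull(Exception): pass

# Statuses per the failure-mode table; unknown errors fall back to 500
STATUS_FOR = {
    AgentNotConnected: 502,
    AgentDisconnected: 504,
    RpcTimeout: 504,
    BackpressureFull: 429,
}

def http_status(exc: Exception) -> int:
    return STATUS_FOR.get(type(exc), 500)
```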

Use the remote_debug MCP tool to test RPC connectivity:

MCP: remote_debug
workspace_id: "<workspace_id>"
command: "ping"

If this times out, the issue is at the WebSocket/agent level, not the RPC logic.