gopls-mcp-proxy
A lightweight proxy that lets multiple AI agents share a single gopls MCP server process instead of each spawning their own.
Problem
When gopls is configured as an MCP server over stdio transport, each agent spawns its own gopls process — each one loading the full module graph, running file watchers, and consuming roughly 200–500 MB of RAM. With 3–4 concurrent agents, that is easily 1–2 GB spent on gopls alone.
This is a known upstream issue: golang/go#75270. This proxy is a workaround until gopls mcp -remote=auto lands upstream.
How It Works
gopls-mcp-proxy presents itself as a standard MCP stdio server to each agent. Internally, it starts a gopls mcp -listen daemon and bridges each agent's stdin/stdout to it over SSE.
Agent 1 (stdio) ──┐
Agent 2 (stdio) ──┼──► gopls-mcp-proxy ──► gopls mcp -listen (one process per project)
Agent 3 (stdio) ──┘
Before: N agents → N gopls processes
After: N agents → 1 gopls process + N near-zero-cost bridges
Multi-workspace support
The proxy creates one daemon per project. The daemon port is derived from both the gopls binary path and the module root (the directory containing go.mod), so agents in different projects get separate daemons, each rooted at the correct module. Agents within the same module share one daemon regardless of which subdirectory they start in.
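A deterministic port derivation is what lets independent proxy processes find the same daemon without any coordination. The proxy's exact scheme is internal; the sketch below is one plausible approach (an assumption, not the actual source): hash the (gopls binary path, module root) pair and map it into the dynamic port range, so every agent in the same module computes the same address.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// derivePort maps a (gopls binary path, module root) pair to a
// deterministic port in the dynamic range 49152-65535. Illustrative
// sketch only; the proxy's real derivation may differ.
func derivePort(goplsPath, moduleRoot string) int {
	h := fnv.New32a()
	h.Write([]byte(goplsPath))
	h.Write([]byte{0}) // separator so ("ab","c") != ("a","bc")
	h.Write([]byte(moduleRoot))
	const lo, hi = 49152, 65536
	return lo + int(h.Sum32()%uint32(hi-lo))
}

func main() {
	p := derivePort("/usr/bin/gopls", "/home/me/project")
	fmt.Println(p >= 49152 && p <= 65535) // always in the dynamic range
	// Same inputs always yield the same port, so separate proxy
	// processes agree on the daemon address without coordination:
	fmt.Println(p == derivePort("/usr/bin/gopls", "/home/me/project"))
}
```

Because the module root feeds the hash, two projects on one machine get distinct ports (barring a hash collision), while agents anywhere inside one module agree on a single daemon.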
The proxy exits immediately (without starting gopls) when invoked outside a Go module. It walks up from the working directory looking for go.mod; if none is found, it exits cleanly. This makes it safe to add as a global MCP server — it only activates in Go projects.
Installation
go install github.com/Afterous/gopls-mcp-proxy@latest
Requires gopls to be installed and on your PATH.
Usage
Replace gopls with gopls-mcp-proxy in your MCP config. No other changes needed.
Claude Code (~/.claude/settings.json or .mcp.json):
{
"mcpServers": {
"gopls": {
"command": "gopls-mcp-proxy",
"type": "stdio"
}
}
}
Cursor (~/.cursor/mcp.json or .cursor/mcp.json):
{
"mcpServers": {
"gopls": {
"command": "gopls-mcp-proxy",
"type": "stdio"
}
}
}
Codex (~/.codex/config.json):
{
"mcpServers": {
"gopls": {
"command": "gopls-mcp-proxy",
"type": "stdio"
}
}
}
Options
-gopls string   path to gopls binary (default: from PATH)
-port int       override daemon port (default: derived from gopls binary path and module root)
-v              verbose logging to stderr
Verifying it works
Run with -v to confirm sharing is happening:
gopls-mcp-proxy: daemon already running at http://localhost:60337
That line means the proxy connected to an existing daemon rather than spawning a new one. You can also check with ps aux | grep "gopls mcp" — you should see exactly one gopls mcp -listen process per project regardless of how many agents are open.
Notes
Daemon lifetime: The gopls daemon is intentionally long-lived. It starts on first use and keeps running after all agents disconnect, so the next agent gets a warm start. To stop it, kill the gopls mcp -listen process manually (for example, pkill -f "gopls mcp -listen").
Windows: Process detachment is not implemented on Windows, so the daemon may exit when the first proxy process closes. This is a known limitation.
Upstream Status
This proxy is a stepping stone in a three-phase roadmap:
- Now (this proxy) — per-project daemon as a community workaround. Each
(gopls binary, module root) pair gets its own daemon; agents within the same module share it.
- golang/go#75270 — per-project daemon built into gopls via
gopls mcp -remote=auto. Once this lands you can switch back to plain gopls with "args": ["mcp", "-remote=auto"] and retire this proxy.
- golang/go#78668 — single machine-wide daemon that dynamically loads workspace views as new clients connect from different projects, sharing type-checking caches across modules for maximum memory savings.
License
MIT