Show HN: MCP Mesh – one endpoint for all your MCP servers (OSS self-hosted)
Show HN (score: 6)
MCP is quickly becoming the standard for agentic systems, but once you go past a couple of servers, every team hits the same problems:
- M×N config sprawl (every client wired to every server, each with its own JSON + ports + retries)
- Token + tool bloat (dumping tool definitions into every prompt doesn’t scale)
- Credentials + blast radius (tokens scattered across clients, hard to audit, hard to revoke)
- No single place to debug (latency, errors, “what tool did it call, with what params?”)
MCP Mesh sits between MCP clients and MCP servers and collapses that mess into one production endpoint you can actually operate.
What it does:
- One endpoint for Cursor / Claude / VS Code / custom agents → all MCP traffic routes through the mesh
- RBAC + policies + audit trails at the control plane (multi-tenant org/workspace/project scoping)
- Full observability with OpenTelemetry (traces, errors, latency, cost attribution)
- Runtime strategies as “gateways” to deal with tool bloat: Full-context (small toolsets), Smart selection (narrow the toolset before execution), Code execution (load tools on demand / run code in a sandbox)
- Token vault + OAuth support, proxying remote servers without spraying secrets into every client
- MCP Apps + Bindings so apps can target capability contracts and you can swap MCP providers without rewriting everything
A small but surprisingly useful thing: the UI shows every call, input/output, who ran it, and lets you replay calls. This ended up being our “Wireshark for MCP” during real workflows.
It’s open-source + self-hosted (run locally with SQLite; Postgres or Supabase for prod).
You can start with `npx @decocms/mesh` or clone + run with Bun.
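To make “one endpoint” concrete: instead of wiring each client to N servers, the client config collapses to a single entry pointing at the mesh. This is a hedged sketch only — the key names follow the common `mcpServers` convention used by MCP clients, but the mesh’s actual URL, port, and setup are assumptions; check the repo for the real values.

```json
{
  "mcpServers": {
    "mesh": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```

Every other server (and its credentials) then lives behind the mesh rather than in each client’s config.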
We’d love your feedback!
Links below:
Repo: https://github.com/decocms/mesh
Landing: https://www.decocms.com/mcp-mesh
Blog post: https://www.decocms.com/blog/post/mcp-mesh
More from Show HN
Show HN: Agent MCP Studio – build multi-agent MCP systems in a browser tab
I built a browser-only studio for designing and orchestrating MCP agent systems for development and experimental purposes. The whole stack — tool authoring, multi-agent orchestration, RAG, code execution — runs from a single static HTML file via WebAssembly. No backend.

The bet: WASM is a hard sandbox for free. When you generate tools with an LLM (or write them by hand), the studio AST-validates the source, registers it lazily, and JIT-compiles it into Pyodide on first call. SQL tools run in DuckDB-WASM in a Web Worker. The built-in RAG uses Xenova/all-MiniLM-L6-v2 via Transformers.js for on-device embeddings. Nothing leaves the browser; close the tab and the stack is gone. The WASM boundary is what makes it safe to execute LLM-generated code locally — no Docker, no per-tenant container, no server.

Above the tool layer sits an agentic system with 10 orchestration strategies:

- Supervisor (router → 1 expert)
- Mixture of Experts (parallel + synthesizer)
- Sequential Pipeline
- Plan & Execute (planner decomposes, workers execute)
- Swarm (peer handoffs)
- Debate (contestants + judge)
- Reflection (actor + critic loop)
- Hierarchical (manager delegates via ask_<persona> tools)
- Round-Robin (panel + moderator)
- Map-Reduce (splitter → parallel → aggregator)

You build a team visually: drag tool chips onto persona nodes on a service graph, pick a strategy, and the topology reshapes to match. Each persona auto-registers as an MCP tool (ask_<name>), plus an agent_chat(query, strategy?) meta tool. A bundled Node bridge speaks stdio to Claude Desktop and WebSocket to your tab — your browser becomes an MCP server.

When you're done, Export gives you a real Python MCP server: server.py, agentic.py, tools/*.py, Dockerfile, requirements.txt, .env.example. The exported agentic.py is a faithful Python port of the same orchestration logic running in the browser, so the deployable artifact behaves identically to the prototype.

Also shipped: Project Packs. Export the whole project as a single .agentpack.json. It auto-detects required external services (OpenAI, GitHub, Stripe, Anthropic, Slack, Notion, Linear, etc.) by scanning tool source for os.environ.get(...) and cross-referencing against the network allowlist. Recipients get an import wizard that prompts for credentials. Manifests are reviewable, sharable, and never carry secrets.

Some things I'm honestly uncertain about:

- 10 strategies might be too many. My guess is most users only need Supervisor, Mixture of Experts, and Debate. Open to data on which ones actually pull weight.
- Browser cold-starts (Pyodide warm-up on first load) are a real UX hit despite aggressive caching.
- bridge.js is the only non-browser piece. A hosted variant is the obvious next step.

Built with Pyodide, DuckDB-WASM, Transformers.js, and OpenAI Chat Completions (or a local Qwen 1.5 0.5B running in-browser via Transformers.js for fully offline mode). ~5K lines of HTML/CSS/JS in one file.

https://www.agentmcp.studio

Genuinely curious whether running this much LLM-generated code in a browser tab feels reasonable to you, or quietly terrifying.
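Of the orchestration strategies listed above, Supervisor (router → 1 expert) is the simplest to sketch. Here is a minimal, runnable Python illustration — not the studio's actual code. The expert functions and the keyword router are hypothetical stand-ins for what would be LLM calls in practice.

```python
# Supervisor strategy sketch: a router picks exactly one expert persona
# and forwards the query to it. A keyword stub replaces the LLM router
# so the control flow is runnable end to end.
def sql_expert(query: str) -> str:
    return f"[sql] answering: {query}"

def rag_expert(query: str) -> str:
    return f"[rag] answering: {query}"

EXPERTS = {"sql": sql_expert, "rag": rag_expert}

def route(query: str) -> str:
    # Stub: keyword match in place of an LLM classification call.
    return "sql" if "table" in query.lower() else "rag"

def supervisor(query: str) -> str:
    # Router selects one expert; only that expert runs.
    return EXPERTS[route(query)](query)

print(supervisor("list rows in the users table"))
```

The other strategies vary this skeleton: Mixture of Experts runs all experts in parallel and adds a synthesizer step; Debate adds a judge over competing answers.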