Show HN: VPS marketplace for 5 Clouds in Yer Terminal (score: 6)
Show HN: Agent MCP Studio – build multi-agent MCP systems in a browser tab
I built a browser-only studio for designing and orchestrating MCP agent systems for development and experimentation. The whole stack — tool authoring, multi-agent orchestration, RAG, code execution — runs from a single static HTML file via WebAssembly. No backend.

The bet: WASM is a hard sandbox for free. When you generate tools with an LLM (or write them by hand), the studio AST-validates the source, registers it lazily, and JIT-compiles it into Pyodide on first call. SQL tools run in DuckDB-WASM in a Web Worker. The built-in RAG uses Xenova/all-MiniLM-L6-v2 via Transformers.js for on-device embeddings. Nothing leaves the browser; close the tab and the stack is gone. The WASM boundary is what makes it safe to execute LLM-generated code locally — no Docker, no per-tenant container, no server.

Above the tool layer sits an agentic system with 10 orchestration strategies:

- Supervisor (router → 1 expert)
- Mixture of Experts (parallel + synthesizer)
- Sequential Pipeline
- Plan & Execute (planner decomposes, workers execute)
- Swarm (peer handoffs)
- Debate (contestants + judge)
- Reflection (actor + critic loop)
- Hierarchical (manager delegates via ask_<persona> tools)
- Round-Robin (panel + moderator)
- Map-Reduce (splitter → parallel → aggregator)

You build a team visually: drag tool chips onto persona nodes on a service graph, pick a strategy, and the topology reshapes to match. Each persona auto-registers as an MCP tool (ask_<name>), plus an agent_chat(query, strategy?) meta tool. A bundled Node bridge speaks stdio to Claude Desktop and WebSocket to your tab — your browser becomes an MCP server.

When you're done, Export gives you a real Python MCP server: server.py, agentic.py, tools/*.py, Dockerfile, requirements.txt, .env.example.
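The validate-then-lazily-compile pipeline described above can be sketched in plain Python. This is an illustrative toy, not the studio's actual code: the class and method names are invented, the denylist is a stand-in for real AST validation, and in the studio the compilation step happens inside Pyodide's WASM sandbox rather than the host interpreter.

```python
import ast

class ToolRegistry:
    """Sketch of AST-validate, register lazily, compile on first call."""

    FORBIDDEN = {"eval", "exec", "__import__"}  # example denylist only

    def __init__(self):
        self._sources = {}   # name -> validated source, not yet compiled
        self._compiled = {}  # name -> callable, populated on first call

    def register(self, name: str, source: str):
        tree = ast.parse(source)  # raises SyntaxError on invalid code
        for node in ast.walk(tree):
            if isinstance(node, ast.Name) and node.id in self.FORBIDDEN:
                raise ValueError(f"tool {name!r} uses forbidden name {node.id!r}")
        self._sources[name] = source  # registration is cheap; no compile yet

    def call(self, name: str, *args):
        if name not in self._compiled:
            # JIT step: compile and execute the module body on first use.
            ns = {}
            exec(compile(self._sources[name], f"<tool:{name}>", "exec"), ns)
            self._compiled[name] = ns[name]  # tool must define def <name>(...)
        return self._compiled[name](*args)

reg = ToolRegistry()
reg.register("add", "def add(a, b):\n    return a + b")
print(reg.call("add", 2, 3))  # 5
```

The deferral matters because an LLM may generate many tools per session while the user only ever invokes a few; validation is up-front and cheap, compilation is paid per tool actually used.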
The exported agentic.py is a faithful Python port of the same orchestration logic running in the browser, so the deployable artifact behaves identically to the prototype.

Also shipped: Project Packs. Export the whole project as a single .agentpack.json. It auto-detects required external services (OpenAI, GitHub, Stripe, Anthropic, Slack, Notion, Linear, etc.) by scanning tool source for os.environ.get(...) and cross-referencing against the network allowlist. Recipients get an import wizard that prompts for credentials. Manifests are reviewable, shareable, and never carry secrets.

Some things I'm honestly uncertain about:

- 10 strategies might be too many. My guess is most users only need Supervisor, Mixture of Experts, and Debate. Open to data on which ones actually pull weight.
- Browser cold starts (Pyodide warm-up on first load) are a real UX hit despite aggressive caching.
- bridge.js is the only non-browser piece. A hosted variant is the obvious next step.

Built with Pyodide, DuckDB-WASM, Transformers.js, and OpenAI Chat Completions (or a local Qwen 1.5 0.5B running in-browser via Transformers.js for a fully offline mode). ~5K lines of HTML/CSS/JS in one file.

https://www.agentmcp.studio

Genuinely curious whether running this much LLM-generated code in a browser tab feels reasonable to you, or quietly terrifying.
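The credential-detection scan described above amounts to walking the tool's AST for `os.environ.get(...)` calls with a literal first argument. A minimal sketch, assuming the studio does something along these lines (the function name here is invented):

```python
import ast

def find_required_env_vars(tool_source: str) -> set:
    """Collect string names passed to os.environ.get(...) in tool source."""
    required = set()
    for node in ast.walk(ast.parse(tool_source)):
        # Match exactly os.environ.get("NAME"[, default]) with a literal name.
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "get"
            and isinstance(node.func.value, ast.Attribute)
            and node.func.value.attr == "environ"
            and isinstance(node.func.value.value, ast.Name)
            and node.func.value.value.id == "os"
            and node.args
            and isinstance(node.args[0], ast.Constant)
            and isinstance(node.args[0].value, str)
        ):
            required.add(node.args[0].value)
    return required

src = """
import os
key = os.environ.get("OPENAI_API_KEY")
token = os.environ.get("GITHUB_TOKEN", "")
"""
print(find_required_env_vars(src))  # names found: OPENAI_API_KEY, GITHUB_TOKEN
```

Mapping each variable name to a known service (OPENAI_API_KEY → OpenAI, GITHUB_TOKEN → GitHub) is then a lookup table, which is presumably how the pack manifest lists services without ever embedding the secrets themselves.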
Show HN: VT Code – Rust TUI coding agent with multi-provider support
Hi HN, I built VT Code, a semantic coding agent. It supports SOTA and open-source models: Anthropic, OpenAI, Gemini, and Codex. It is Agent Skills, Model Context Protocol (MCP), and Agent Client Protocol (ACP) ready. All open-source models are supported, with local inference via LM Studio and Ollama (experimental). Semantic context understanding is powered by ast-grep for structured code search and ripgrep for fast text search.

I built VT Code in Rust on Ratatui. The architecture and agent loop are documented in the README and on DeepWiki.

Repo: https://github.com/vinhnx/VTCode

DeepWiki: https://deepwiki.com/vinhnx/VTCode

Happy to answer questions! I believe coding harnesses should be open, and everyone should have a choice of their preferred way of working in this agentic engineering era.
Show HN: RoboAPI – A unified REST API for robots, like Stripe but for hardware
Every robot manufacturer ships a different SDK and a different protocol. A Boston Dynamics Spot speaks nothing like a Universal Robots arm, so every team building on top of robots rewrites the same integration layer from scratch. This is a massive tax on the industry.

RoboAPI is a unified API layer that abstracts all of that into one clean developer experience. One SDK, one API key, any robot — simulated or real hardware.

You can connect a simulated robot and read live telemetry in under 5 minutes:

```shell
pip install fastapi uvicorn roslibpy
uvicorn api.main:app --reload
curl -X POST localhost:8000/v1/robots/connect -d '{"robot_id":"bot-01","brand":"simulated"}'
curl localhost:8000/v1/robots/bot-01/sense
```

It also connects to real ROS2 robots via rosbridge — I tested it today controlling a turtlesim robot drawing circles through the API.

The architecture is pluggable — each robot brand is a separate adapter implementing a common interface (like a payment gateway in Stripe). Adding a new brand means adding one file.

Currently supported: simulated robots and any ROS2 robot. Boston Dynamics and Universal Robots adapters are next.

I'd love feedback from anyone working in robotics — especially on the API design and what's missing for real-world use.
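The adapter pattern described above can be sketched as an abstract base class plus a brand registry. This is a hypothetical shape, not RoboAPI's actual contract; the class, method, and telemetry-field names here are invented for illustration:

```python
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Hypothetical common interface every brand adapter implements."""

    @abstractmethod
    def connect(self, robot_id: str) -> None: ...

    @abstractmethod
    def sense(self) -> dict: ...

class SimulatedAdapter(RobotAdapter):
    def connect(self, robot_id: str) -> None:
        self.robot_id = robot_id

    def sense(self) -> dict:
        # Fake telemetry; a ROS2 adapter would read topics via rosbridge instead.
        return {"robot_id": self.robot_id, "battery": 0.97, "pose": [0.0, 0.0, 0.0]}

# "Adding a new brand means adding one file" then reduces to one new entry here,
# which the /v1/robots/connect endpoint can dispatch on via the "brand" field.
ADAPTERS = {"simulated": SimulatedAdapter}

adapter = ADAPTERS["simulated"]()
adapter.connect("bot-01")
print(adapter.sense()["robot_id"])  # bot-01
```

This mirrors the payment-gateway analogy: the REST surface stays fixed while each adapter translates the common verbs (connect, sense, act) into a vendor's own SDK or protocol.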