All DevTools
Showing 1–20 of 4340 tools
Last Updated
April 26, 2026 at 08:00 AM
Show HN: Browse GitHub repos in Emacs without cloning
Show HN (score: 9)[Other] Show HN: Browse GitHub repos in Emacs without cloning

Wouldn't it be nice to press C-x C-f (find-file) and, instead of a file path, give it a GitHub URL, then just browse the repo in Dired?
[Other] Show HN: AI Visibility Monitor - Track if your site gets cited by GPT/Claude
CJackHwang/ds2api
GitHub Trending[API/SDK] Deepseek to API: A lightweight, high-performance full-stack middleware converting client protocols to universal APIs. Supports multi-account rotation, compiled binaries, Vercel Serverless, and Docker. Compatible with Google, Claude, and OpenAI API formats.
Mine, an IDE for Coalton and Common Lisp
Hacker News (score: 53)[IDE/Editor] Mine, an IDE for Coalton and Common Lisp
Using coding assistance tools to revive projects you never were going to finish
Hacker News (score: 146)[Other] Using coding assistance tools to revive projects you never were going to finish
ComposioHQ/awesome-codex-skills
GitHub Trending[Other] A curated list of practical Codex skills for automating workflows across the Codex CLI and API.
RooCodeInc/Roo-Code
GitHub Trending[IDE/Editor] Roo Code gives you a whole dev team of AI agents in your code editor.
A web-based RDP client built with Go WebAssembly and grdp
Hacker News (score: 51)[Other] A web-based RDP client built with Go WebAssembly and grdp
Show HN: A Karpathy-style LLM wiki your agents maintain (Markdown and Git)
Hacker News (score: 98)[Other] Show HN: A Karpathy-style LLM wiki your agents maintain (Markdown and Git)

I shipped a wiki layer for AI agents that uses markdown + git as the source of truth, with a bleve (BM25) + SQLite index on top. No vector or graph db yet.

It runs locally in ~/.wuphf/wiki/ and you can git clone it out if you want to take your knowledge with you.

The shape is the one Karpathy has been circling for a while: an LLM-native knowledge substrate that agents both read from and write into, so context compounds across sessions rather than getting re-pasted every morning. Most implementations of that idea land on Postgres, pgvector, Neo4j, Kafka, and a dashboard.

I wanted to go back to the basics and see how far markdown + git could go before I added anything heavier.

What it does:

- Each agent gets a private notebook at agents/{slug}/notebook/.md, plus access to a shared team wiki at team/.
- Draft-to-wiki promotion flow. Notebook entries are reviewed (agent or human) and promoted to the canonical wiki with a back-link. A small state machine drives expiry and auto-archive.
- Per-entity fact log: append-only JSONL at team/entities/{kind}-{slug}.facts.jsonl. A synthesis worker rebuilds the entity brief every N facts. Commits land under a distinct "Pam the Archivist" git identity so provenance is visible in git log.
- [[Wikilinks]] with broken-link detection rendered in red.
- Daily lint cron for contradictions, stale entries, and broken wikilinks.
- /lookup slash command plus an MCP tool for cited retrieval. A heuristic classifier routes short lookups to BM25 and narrative queries to a cited-answer loop.

Substrate choices: Markdown for durability. The wiki outlives the runtime, and a user can walk away with every byte. Bleve for BM25. SQLite for structured metadata (facts, entities, edges, redirects, and supersedes). No vectors yet. The current benchmark (500 artifacts, 50 queries) clears 85% recall@20 on BM25 alone, which is the internal ship gate. sqlite-vec is the pre-committed fallback if a query class drops below that.

Canonical IDs are first-class. Fact IDs are deterministic and include sentence offset. Canonical slugs are assigned once, merged via redirect stubs, and never renamed. A rebuild is logically identical, not byte-identical.

Known limits:

- Recall tuning is ongoing. 85% on the benchmark is not a universal guarantee.
- Synthesis quality is bounded by agent observation quality. Garbage facts in, garbage briefs out. The lint pass helps. It is not a judgment engine.
- Single-office scope today. No cross-office federation.

Demo. A 5-minute terminal walkthrough that records five facts, fires synthesis, shells out to the user's LLM CLI, and commits the result under Pam's identity: https://asciinema.org/a/vUvjJsB5vtUQQ4Eb

Script lives at ./scripts/demo-entity-synthesis.sh.

Context. The wiki ships as part of WUPHF, an open source collaborative office for AI agents like Claude Code, Codex, OpenClaw, and local LLMs via OpenCode. MIT, self-hosted, bring-your-own keys. You do not have to use the full office to use the wiki layer. If you already have an agent setup, point WUPHF at it and the wiki attaches.

Source: https://github.com/nex-crm/wuphf

Install: npx wuphf@latest

Happy to go deep on the substrate tradeoffs, the promotion-flow state machine, the BM25-first retrieval bet, or the canonical-ID stability rules. Also happy to take "why not an Obsidian vault with a plugin" as a fair question.
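The append-only fact log with deterministic IDs could be sketched roughly as below. This is a minimal illustration, not WUPHF's actual format: the ID scheme (SHA-256 over entity slug, sentence offset, and sentence text) and the record field names are my assumptions; the source only states that fact IDs are deterministic and include sentence offset.

```python
import hashlib
import json
from pathlib import Path

def fact_id(entity_slug: str, sentence: str, sentence_offset: int) -> str:
    """Derive a deterministic fact ID so a rebuild assigns the same IDs.
    Hypothetical scheme: hash of (entity slug, sentence offset, sentence)."""
    payload = f"{entity_slug}\n{sentence_offset}\n{sentence}".encode("utf-8")
    digest = hashlib.sha256(payload).hexdigest()
    return f"{entity_slug}:{sentence_offset}:{digest[:12]}"

def append_fact(log_dir: Path, kind: str, slug: str,
                sentence: str, offset: int) -> dict:
    """Append one fact record to the per-entity JSONL log.
    The log is append-only: records are never rewritten in place."""
    record = {
        "id": fact_id(slug, sentence, offset),
        "entity": f"{kind}-{slug}",
        "sentence": sentence,
        "offset": offset,
    }
    path = log_dir / f"{kind}-{slug}.facts.jsonl"
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because the ID depends only on content, a synthesis worker replaying the log reproduces the same IDs, which is what makes a rebuild "logically identical, not byte-identical".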
Show HN: Agent MCP Studio - build multi-agent MCP systems in a browser tab
Show HN (score: 6)[Other] Show HN: Agent MCP Studio - build multi-agent MCP systems in a browser tab

I built a browser-only studio for designing and orchestrating MCP agent systems for development and experimental purposes. The whole stack (tool authoring, multi-agent orchestration, RAG, code execution) runs from a single static HTML file via WebAssembly. No backend.

The bet: WASM is a hard sandbox for free. When you generate tools with an LLM (or write them by hand), the studio AST-validates the source, registers it lazily, and JIT-compiles it into Pyodide on first call. SQL tools run in DuckDB-WASM in a Web Worker. The built-in RAG uses Xenova/all-MiniLM-L6-v2 via Transformers.js for on-device embeddings. Nothing leaves the browser; close the tab and the stack is gone. The WASM boundary is what makes it safe to execute LLM-generated code locally: no Docker, no per-tenant container, no server.

Above the tool layer sits an agentic system with 10 orchestration strategies:

- Supervisor (router → 1 expert)
- Mixture of Experts (parallel + synthesizer)
- Sequential Pipeline
- Plan & Execute (planner decomposes, workers execute)
- Swarm (peer handoffs)
- Debate (contestants + judge)
- Reflection (actor + critic loop)
- Hierarchical (manager delegates via ask_<persona> tools)
- Round-Robin (panel + moderator)
- Map-Reduce (splitter → parallel → aggregator)

You build a team visually: drag tool chips onto persona nodes on a service graph, pick a strategy, and the topology reshapes to match. Each persona auto-registers as an MCP tool (ask_<name>), plus an agent_chat(query, strategy?) meta tool. A bundled Node bridge speaks stdio to Claude Desktop and WebSocket to your tab, so your browser becomes an MCP server.

When you're done, Export gives you a real Python MCP server: server.py, agentic.py, tools/*.py, Dockerfile, requirements.txt, .env.example. The exported agentic.py is a faithful Python port of the same orchestration logic running in the browser, so the deployable artifact behaves identically to the prototype.

Also shipped: Project Packs. Export the whole project as a single .agentpack.json. It auto-detects required external services (OpenAI, GitHub, Stripe, Anthropic, Slack, Notion, Linear, etc.) by scanning tool source for os.environ.get(...) and cross-referencing against the network allowlist. Recipients get an import wizard that prompts for credentials. Manifests are reviewable, sharable, and never carry secrets.

Some things I'm honestly uncertain about:

- 10 strategies might be too many. My guess is most users only need Supervisor, Mixture of Experts, and Debate. Open to data on which ones actually pull weight.
- Browser cold-starts (Pyodide warm-up on first load) are a real UX hit despite aggressive caching.
- bridge.js is the only non-browser piece. A hosted variant is the obvious next step.

Built with Pyodide, DuckDB-WASM, Transformers.js, and OpenAI Chat Completions (or a local Qwen 1.5 0.5B running in-browser via Transformers.js for fully offline mode). ~5K lines of HTML/CSS/JS in one file.

https://www.agentmcp.studio

Genuinely curious whether running this much LLM-generated code in a browser tab feels reasonable to you, or quietly terrifying.
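The service auto-detection the post describes (scanning tool source for os.environ.get(...)) can be sketched with Python's ast module, which fits since the tools run in Pyodide. The env-var-to-service map and function name below are illustrative assumptions, not the studio's actual tables:

```python
import ast

# Hypothetical mapping from env var to external service; the real studio's
# table and allowlist cross-reference are not published in the post.
SERVICE_ENV_VARS = {
    "OPENAI_API_KEY": "OpenAI",
    "GITHUB_TOKEN": "GitHub",
    "STRIPE_API_KEY": "Stripe",
    "ANTHROPIC_API_KEY": "Anthropic",
    "SLACK_BOT_TOKEN": "Slack",
}

def required_services(tool_source: str) -> set[str]:
    """Walk the tool's AST and collect services whose credentials are read
    via os.environ.get("SOME_VAR") with a literal string argument."""
    tree = ast.parse(tool_source)
    found: set[str] = set()
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "get"
            and isinstance(node.func.value, ast.Attribute)
            and node.func.value.attr == "environ"
            and node.args
            and isinstance(node.args[0], ast.Constant)
            and isinstance(node.args[0].value, str)
        ):
            service = SERVICE_ENV_VARS.get(node.args[0].value)
            if service:
                found.add(service)
    return found
```

An AST walk is more robust than a regex here: it ignores comments and strings that merely mention the call, and only matches actual `os.environ.get(...)` expressions.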
Show HN: Bunny Agent - Build Coding Agent SaaS via Native AI SDK UI
Show HN (score: 8)[Other] Show HN: Bunny Agent - Build Coding Agent SaaS via Native AI SDK UI
Show HN: VT Code - Rust TUI coding agent with multi-provider support
Show HN (score: 6)[Other] Show HN: VT Code - Rust TUI coding agent with multi-provider support

Hi HN, I built VT Code, a semantic coding agent. It supports SOTA and open-source models across Anthropic, OpenAI, Gemini, and Codex, and is Agent Skills, Model Context Protocol, and Agent Client Protocol (ACP) ready. Open-source models are also supported via local inference with LM Studio and Ollama (experimental). Semantic context understanding is powered by ast-grep for structured code search and ripgrep for fast text search.

I built VT Code in Rust on Ratatui. Architecture and the agent loop are documented in the README and DeepWiki.

Repo: https://github.com/vinhnx/VTCode

DeepWiki: https://deepwiki.com/vinhnx/VTCode

Happy to answer questions!

I believe coding harnesses should be open, and everyone should have a choice of their preferred way to work in this agentic engineering era.
[API/SDK] Show HN: RoboAPI - A unified REST API for robots, like Stripe but for hardware

Every robot manufacturer ships a different SDK and a different protocol. A Boston Dynamics Spot speaks nothing like a Universal Robots arm. Every team building on top of robots rewrites the same integration layer from scratch. This is a massive tax on the industry.

RoboAPI is a unified API layer that abstracts all of that into one clean developer experience. One SDK, one API key, any robot, simulated or real hardware.

You can connect a simulated robot and read live telemetry in under 5 minutes:

    pip install fastapi uvicorn roslibpy
    uvicorn api.main:app --reload
    curl -X POST localhost:8000/v1/robots/connect -d '{"robot_id":"bot-01","brand":"simulated"}'
    curl localhost:8000/v1/robots/bot-01/sense

It also connects to real ROS2 robots via rosbridge. I tested it today controlling a turtlesim robot drawing circles through the API.

The architecture is pluggable: each robot brand is a separate adapter implementing a common interface (like a payment gateway in Stripe). Adding a new brand means one file.

Currently supports: simulated robots and any ROS2 robot. Boston Dynamics and Universal Robots adapters are next.

Would love feedback from anyone working in robotics, especially on the API design and what's missing for real-world use.
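The pluggable adapter pattern the post describes could look roughly like this. The class and method names are illustrative assumptions, not RoboAPI's actual interface; the post only says each brand is one file implementing a common interface:

```python
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """One adapter per robot brand; the API layer talks only to this
    interface, so adding a brand means adding one adapter class."""

    @abstractmethod
    def connect(self, robot_id: str) -> None: ...

    @abstractmethod
    def sense(self, robot_id: str) -> dict: ...

class SimulatedAdapter(RobotAdapter):
    """In-memory stand-in for real hardware, useful for tests and demos."""

    def __init__(self) -> None:
        self.connected: set[str] = set()

    def connect(self, robot_id: str) -> None:
        self.connected.add(robot_id)

    def sense(self, robot_id: str) -> dict:
        if robot_id not in self.connected:
            raise ValueError(f"{robot_id} is not connected")
        # Fixed telemetry; a real adapter would poll the robot or rosbridge.
        return {"robot_id": robot_id, "battery": 1.0, "pose": [0.0, 0.0, 0.0]}

# Registry keyed by brand, mirroring the "brand" field in the connect call.
ADAPTERS: dict[str, RobotAdapter] = {"simulated": SimulatedAdapter()}
```

The REST layer would then dispatch `/v1/robots/connect` and `/sense` through `ADAPTERS[brand]`, the same way a payments library routes through gateway plugins.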
FusionCore: ROS 2 sensor fusion (IMU and GPS and encoders)
Hacker News (score: 14)[Other] FusionCore: ROS 2 sensor fusion (IMU and GPS and encoders)
Show HN: I've built a nice home server OS
Hacker News (score: 37)[DevOps] Show HN: I've built a nice home server OS

ohai!

I've released Lightwhale 3, which is possibly the easiest way to self-host Docker containers.

It's a free, immutable Linux system purpose-built to live-boot straight into a working Docker Engine, thereby shortcutting the need for installation, configuration, and maintenance. Its simple design makes it easy to learn, and its low memory footprint should make it especially attractive during these times of RAMageddon.

If this has piqued your interest, do check it out, along with its easy-to-follow Getting Started guide.

In any event, have a nice day! =)
Show HN: Codex context bloat? 87% avg reduction on SWE-bench Verified traces
Show HN (score: 6)[Other] Show HN: Codex context bloat? 87% avg reduction on SWE-bench Verified traces

If you had to build a context window manager in 24h, would you stick to the existing model or come up with something better?

Here's what I did:

1. Built a proxy that intercepts Codex's calls to OpenAI and rewrites them on the fly.
2. Replayed 3,807 rounds of SWE-bench Verified traces through it: avg prompt 44k → 6k tokens (-87%).
3. Posted it to HN to get the next reduction applied to my confidence interval, starting with the inevitable "How about accuracy?"

npx -y pando-proxy · github.com/human-software-us/pando-proxy
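The post doesn't say how the rewrite works. One common approach a context window manager might take is dropping the oldest non-system messages to fit a token budget; the sketch below assumes that strategy and a naive 4-characters-per-token estimate, neither of which is confirmed to be what pando-proxy does:

```python
def estimate_tokens(text: str) -> int:
    """Naive estimate: roughly 4 characters per token (a common rule of
    thumb, not the tokenizer pando-proxy actually uses)."""
    return max(1, len(text) // 4)

def trim_context(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the most recent messages that fit the
    token budget, dropping the oldest conversational turns first."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(estimate_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    for m in reversed(rest):  # walk newest-first
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```

A real proxy would sit between the client and api.openai.com and apply a transform like this to each request body before forwarding it; accuracy impact is exactly the open question the post anticipates.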
Show HN: I built a CLI that turns your codebase into clean LLM input
Show HN (score: 6)[CLI Tool] Show HN: I built a CLI that turns your codebase into clean LLM input
Show HN: I Reverse Engineered Codex Background Computer Use
Show HN (score: 6)[Other] Show HN: I Reverse Engineered Codex Background Computer Use
Show HN: Obscura - V8-powered headless browser for scraping and AI agents
Show HN (score: 6)[Other] Show HN: Obscura - V8-powered headless browser for scraping and AI agents
Show HN: Claude Code Manager
Show HN (score: 10)[Other] Show HN: Claude Code Manager

I built this for myself but I figured why not share.

The aim of CCM is to be able to fully manage all Claude Code configuration files, both globally and those in your project.

Some neat features:

- Manages your CLAUDE.md, rules, hooks, agents, memories and so on
- Elevate memories to rules
- Copy/Move any asset from one scope to another, or elevate it to global scope
- Install marketplaces and plugins

The full app is embedded right on the site as a demo so you can try it out.

I'm happy to receive feedback, I know it's not perfect. Thanks for taking a look.