🛠️ All DevTools
Showing 1–20 of 4243 tools
Last Updated
April 21, 2026 at 08:00 AM
How to make a fast dynamic language interpreter
Hacker News (score: 148)[Other] How to make a fast dynamic language interpreter
Show HN: MCPfinder – An MCP server that finds and installs other MCP servers
Show HN (score: 5)[Other] Show HN: MCPfinder – An MCP server that finds and installs other MCP servers
I've been building and using agents heavily lately. The Model Context Protocol ecosystem is growing insanely fast, but discovering and configuring new tools is still highly manual. Every time I needed to connect an agent to a new service, I had to browse registries, figure out the transport type, identify required env vars, and manually update "mcp.json" files.
So I built MCPfinder. It aggregates servers from the official MCP registry, Glama, and Smithery (around 25,000 combined entries) into a deduplicated, ranked catalog.
But the real twist is the DX: MCPfinder is itself an MCP server :D
You only install it once as your "base capability" via standard stdio: npx -y @mcpfinder/server
From then on, when you tell your AI, "I need to query my PostgreSQL database," the magic happens autonomously.
It's completely free, AGPL-3.0 licensed, and built purely to optimize AI-tool surface discovery.
I'd love to hear your thoughts, feedback, or edge cases where JSON generation for specific platforms is acting up.
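For reference, a minimal `mcp.json` entry wiring the stdio command above into an MCP client might look like the following. This is a sketch using the common `mcpServers` client-config shape; the `"mcpfinder"` key name is an assumption, not something specified in the post:

```json
{
  "mcpServers": {
    "mcpfinder": {
      "command": "npx",
      "args": ["-y", "@mcpfinder/server"]
    }
  }
}
```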
Show HN: Holos – QEMU/KVM with a compose-style YAML, GPUs and health checks
Hacker News (score: 20)[DevOps] Show HN: Holos – QEMU/KVM with a compose-style YAML, GPUs and health checks
I got tired of libvirt XML and Vagrant's Ruby/reload dance for single-host VM stacks, so I built a compose-style runtime directly on QEMU/KVM.
What's there: GPU passthrough as a first-class primitive (VFIO, OVMF, per-instance EFI vars), health checks that gate depends_on over SSH, socket-multicast L2 between VMs with no root and no bridge config, cloud-init wired through the YAML, and Dockerfile support for provisioning.
What it's not: Kubernetes. No clustering, no live migration, no control plane. Single host. It's a prototype, but I'm running it on real hardware. Curious what breaks for people.
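To make the compose-style idea concrete, a definition combining the features listed above might look roughly like this. The actual Holos schema isn't shown in the post, so every key name here is a hypothetical sketch, not the real format:

```yaml
# Hypothetical compose-style VM stack: all field names are assumptions.
vms:
  gpu-box:
    image: ubuntu-24.04.qcow2
    cpus: 8
    memory: 16G
    gpu: "0000:01:00.0"            # VFIO passthrough, OVMF firmware
    cloud_init:
      users: [dev]
    healthcheck:                   # checked over SSH
      cmd: "systemctl is-active my-service"
  worker:
    image: ubuntu-24.04.qcow2
    depends_on:
      gpu-box: healthy             # startup gated on the health check
```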
deepseek-ai/DeepGEMM
GitHub Trending[Other] DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
Show HN: Git Push No-Mistakes
Show HN (score: 7)[DevOps] Show HN: Git Push No-Mistakes
no-mistakes is how I kill AI slop. It puts a local git proxy in front of my real remote. I push to no-mistakes instead of origin, and it spins up a disposable worktree, runs my coding agent as a validation pipeline, forwards upstream only after every check passes, opens a clean PR automatically, and babysits the CI pipeline for me.
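The core mechanic is just pushing to a second remote that sits between you and origin. The sketch below models that flow with a local bare repository standing in for the proxy endpoint; the remote name comes from the post, but the paths and the in-process demo are assumptions, and the real tool would run its validation pipeline before forwarding upstream:

```python
# Sketch of the push-to-proxy workflow. A local bare repo stands in for
# the no-mistakes proxy; in the real tool, checks run before forwarding.
import os
import subprocess
import tempfile

def run(*cmd):
    subprocess.run(cmd, check=True, capture_output=True)

base = tempfile.mkdtemp()
work = os.path.join(base, "demo")        # your working clone
proxy = os.path.join(base, "proxy.git")  # stand-in for the proxy remote

run("git", "init", "-q", "-b", "main", work)
run("git", "init", "-q", "--bare", proxy)
run("git", "-C", work, "-c", "user.email=a@b", "-c", "user.name=a",
    "commit", "-q", "--allow-empty", "-m", "init")

# Instead of `git push origin main`, push to the proxy remote. This is
# where no-mistakes would spin up a worktree and run its checks.
run("git", "-C", work, "remote", "add", "no-mistakes", proxy)
run("git", "-C", work, "push", "-q", "no-mistakes", "main")
```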
Kimi vendor verifier – verify accuracy of inference providers
Hacker News (score: 142)[Other] Kimi vendor verifier – verify accuracy of inference providers
Kimi K2.6: Advancing Open-Source Coding
Hacker News (score: 36)[Other] Kimi K2.6: Advancing Open-Source Coding
I prompted ChatGPT, Claude, Perplexity, and Gemini and watched my Nginx logs
Hacker News (score: 36)[Other] I prompted ChatGPT, Claude, Perplexity, and Gemini and watched my Nginx logs
Users unable to load ChatGPT, Codex and API Platform
Hacker News (score: 16)[Other] Users unable to load ChatGPT, Codex and API Platform
ChatGPT and Codex Down
Hacker News (score: 23)[Other] ChatGPT and Codex Down
Show HN: Libredesk – self-hosted, single binary Intercom/Zendesk alternative
Show HN (score: 5)[Other] Show HN: Libredesk – self-hosted, single binary Intercom/Zendesk alternative
Libredesk is a 100% free and open-source helpdesk, a Zendesk/Intercom alternative. The backend is in Go, the frontend in Vue + shadcn/ui. Unlike many "open-core" alternatives that lock essential features behind enterprise plans, Libredesk is fully open-source and will always stay that way.
Last year I posted v0.1.0 here: https://news.ycombinator.com/item?id=43158166
A year later, it's omni-channel. Alongside email, you can drop a live chat widget (beta) onto your website and handle both channels in the same agent UI. The chat widget, CSAT pages, and email templates are all customizable, and self-hosters can swap out the bundled HTML/JS/CSS assets for full white-labeling.
Genuinely, if you're paying per-agent SaaS pricing for a helpdesk today, I really think Libredesk can replace it. It covers most of what mainstream helpdesks do, and more lands with each release. I'd love to hear what would stop you from switching.
I originally built Libredesk for what we needed at work; we were on osTicket and wanted something cleaner. These days I work on Libredesk in evenings and weekends alongside a full-time job, so response times on issues aren't instant, but I read every one. The docs are a bit behind the code too, but catching up.
Agent dashboard demo: https://demo.libredesk.io/
Live chat widget demo: https://libredesk.io/ (bottom-right corner)
GitHub: https://github.com/abhinavxd/libredesk
Show HN: CyberWriter – a .md editor built on Apple's (barely-used) on-device AI
Show HN (score: 13)[IDE/Editor] Show HN: CyberWriter – a .md editor built on Apple's (barely-used) on-device AI
Apple has quietly shipped a pretty complete on-device AI stack into macOS, with these features first getting API access in macOS 26. There are multiple components in the foundation model, and the skills it ships with actually make this ~3B-parameter model useful. The API to hit the model is super easy, and no one is really wiring these pieces together yet.
- Foundation Models (macOS 26): a ~3B-parameter LLM with an API. Streaming, structured output, tool use. No API key, no cloud call, no per-token cost.
- NLContextualEmbedding (Natural Language framework, macOS 14+): a BERT-style 512-dim text embedder. Exactly what OpenAI and Cohere sell, sitting in Apple's SDKs since iOS 17.
- SFSpeechRecognizer / SpeechAnalyzer: on-device speech-to-text including live dictation. Solid accuracy on Apple Silicon.
I built CyberWriter, a Markdown editor, on top of all three, mostly as a test and showcase to see what the stack can do. I actually integrated local and cloud AI first; then Apple shipped the foundation model, it stacked on super easily, and now users with no local or API AI knowledge can use it with just a click or two. The real reason, though, is that most Markdown editors need plugins that run with full system access, and I work on health data and can't have that.
Vault chat / semantic search. The app indexes your Markdown folder via NLContextualEmbedding (around 50 seconds for 1,000 chunks on an M1). The search bar gets a "Related Ideas" section that matches by meaning: typing "orbital mechanics" surfaces notes about rockets and launch windows even when those exact words never appear. Ask the AI a question and it retrieves the top 5 chunks as context. Plain RAG, but the embedder, retrieval, chat model, and search all run locally.
AI Workspace. Command+Shift+A opens a chat panel; Command+J triggers inline quick actions (rewrite, summarize, change tone, fix grammar, continue). Apple Intelligence is the default; Claude, OpenAI, Ollama, and LM Studio all work if you prefer. The same context layer (document selection, attached files, retrieved vault chunks) feeds every provider through the same system-message path. Because the vault context is file- and filename-aware, it can create backlinks to the referenced file if it writes or edits a doc for you.
Voice notes and dictation. Record a voice note directly into your doc, transcribe it with SpeechAnalyzer, or just dictate into the editor while you think. Audio never leaves the Mac.
The privacy story is straightforward because the primitives are already private. Vectors live in a `.vault.embeddings.json` file next to your vault and are never sent anywhere. If you use Apple Intelligence, even the retrieved text stays on-device. For cloud models there is a clear toggle and an inline warning before any filenames or snippets leave the machine.
Honest limitations:
- 512-dim embeddings are solid mid-tier. A GPT-4-class embedder catches subtler relationships this will miss.
- 256-token chunks can split long paragraphs mid-argument.
- Foundation Models caps its context window around 6K characters, so vault context is budgeted to 3K with truncation markers on the rest.
- Multilingual support is English-only right now. NLContextualEmbedding has Latin, Cyrillic, and CJK model variants; wiring the language detector across chunks is Phase 2.
The developer experience for these APIs is genuinely good. Foundation Models streams cleanly, and NLContextualEmbedding downloads assets on demand and gives you mean-poolable token vectors in a handful of lines.
Curious what others here are building on this stack; it feels like low-hanging fruit that has been sitting there for a while.
Screenshots: https://imgur.com/a/HyhHLv2
The Apple AI embedding feature is going live today. I'm honestly surprised it even works out of the box.
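The "plain RAG" loop the vault chat describes (embed chunks, rank by similarity, pass the top hits to the model as context) can be sketched in a few lines. This is a toy illustration, not CyberWriter's code: the bag-of-words embedder stands in for NLContextualEmbedding, which would return dense 512-dim vectors instead:

```python
# Toy RAG retrieval: embed, rank by cosine similarity, take the top k.
# The real app uses NLContextualEmbedding vectors; Counter is a stand-in.
import math
from collections import Counter

def embed(text):
    # Bag-of-words "embedding" for illustration only.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)        # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query, chunks, k=5):
    # Rank every chunk against the query; the top k become model context.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "rocket launch windows depend on orbital mechanics",
    "grocery list: milk, eggs",
    "delta-v budgets for Mars transfers",
]
print(top_chunks("orbital mechanics", chunks, k=1))
```

With a real semantic embedder, the third chunk would also rank highly for this query even though no words overlap, which is exactly the behavior the post describes.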
Amazon's AI boom is creating a mess of duplicate tools and data inside the company
Hacker News (score: 27)[Other] Amazon's AI boom is creating a mess of duplicate tools and data inside the company
Stripe's Payment APIs: the first 10 years (2020)
Hacker News (score: 86)[Other] Stripe's Payment APIs: the first 10 years (2020)
Show HN: Modular – drop AI features into your app with two function calls
Show HN (score: 5)[API/SDK] Show HN: Modular – drop AI features into your app with two function calls
I kept hitting the same wall at work every time we needed to ship an AI feature. What looked like a week of work turned into picking a model, setting up a vector DB, managing embeddings, wiring up chat history, and handling retries; none of it was the actual feature. So I built Modular. You register a function that returns your app's data, then call ai.run() for one-shot features or ai.chat() for stateful conversation. Everything else (context management, embeddings, session history, model routing, retries) is handled. It's MCP-native from day one and works with Claude, GPT-4o, and Gemini. Still early: I'm collecting feedback before building the full SDK. I'd love to hear if others have hit this same wall, or if you think I'm solving the wrong problem.
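To illustrate the shape of the pattern (register a data function, then make one-shot or stateful calls), here is a self-contained stub. Only the names ai.run() and ai.chat() come from the post; the Modular SDK's real signatures aren't shown there, so this class is entirely hypothetical and just models the flow:

```python
# Hypothetical stand-in for the described pattern; NOT the Modular SDK.
class AI:
    def __init__(self):
        self._providers = []   # registered functions that return app data
        self._history = []     # conversation state for chat()

    def register(self, fn):
        # Register a function that returns your app's data for context.
        self._providers.append(fn)
        return fn

    def _context(self):
        return [fn() for fn in self._providers]

    def run(self, prompt):
        # One-shot call: prompt plus freshly gathered context.
        return f"answer({prompt!r}, context={self._context()})"

    def chat(self, prompt):
        # Stateful call: history is carried across invocations.
        self._history.append(prompt)
        return f"answer({self._history}, context={self._context()})"

ai = AI()
ai.register(lambda: {"open_orders": 3})
print(ai.run("How many open orders?"))
```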
Show HN: A lightweight way to make agents talk without paying for API usage
Hacker News (score: 31)[Other] Show HN: A lightweight way to make agents talk without paying for API usage
Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed
Hacker News (score: 84)[Other] Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed
I ported Microsoft's TRELLIS.2 (a 4B-parameter image-to-3D model) to run on Apple Silicon via PyTorch MPS. The original requires CUDA with flash_attn, nvdiffrast, and custom sparse convolution kernels, none of which work on a Mac.
I replaced the CUDA-specific ops with pure-PyTorch alternatives: a gather-scatter sparse 3D convolution, SDPA attention for the sparse transformers, and Python-based mesh extraction replacing the CUDA hashmap operations. Total changes are a few hundred lines across 9 files.
It generates ~400K-vertex meshes from single photos in about 3.5 minutes on an M4 Pro (24GB). Not as fast as an H100 (where it takes seconds), but it works offline with no cloud dependency.
https://github.com/shivampkumar/trellis-mac
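The gather-scatter sparse convolution idea is worth unpacking: only active voxels are stored, and for each kernel offset, each active voxel gathers the feature at coord+offset (if that voxel is active) and accumulates the weighted result. Below is a minimal NumPy sketch of that technique under assumed shapes; the port's actual implementation is in PyTorch and differs:

```python
# Sketch of a gather-scatter sparse 3D convolution over active voxels.
import numpy as np

def sparse_conv3d(coords, feats, weights, offsets):
    """coords: (N,3) int voxel coords of active sites; feats: (N,Cin);
    weights: (K,Cin,Cout), one matrix per kernel offset; offsets: (K,3).
    Returns (N,Cout) features at the same active coords."""
    index = {tuple(c): i for i, c in enumerate(coords)}  # coord -> row
    out = np.zeros((len(coords), weights.shape[2]))
    for k, off in enumerate(offsets):
        for i, c in enumerate(coords):
            j = index.get(tuple(c + off))      # gather: neighbor at c+offset
            if j is not None:
                out[i] += feats[j] @ weights[k]  # accumulate weighted feature
    return out

# Two active voxels, a 2-tap kernel (center + one x-neighbor).
coords = np.array([[0, 0, 0], [1, 0, 0]])
feats = np.ones((2, 4))
offsets = np.array([[0, 0, 0], [1, 0, 0]])
weights = np.ones((2, 4, 8))
print(sparse_conv3d(coords, feats, weights, offsets).shape)  # (2, 8)
```

A dense conv would touch every cell of the 3D grid; here the cost scales with active voxels times kernel taps, which is why the pure-Python fallback stays tractable.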
Show HN: Clone, a small Rust VMM, forks VMs in under 20ms via CoW
Show HN (score: 10)[DevOps] Show HN: Clone, a small Rust VMM, forks VMs in under 20ms via CoW
We needed a secure, multi-tenant way to offer shell accounts to users, but most VMMs use too much memory and containers are unsafe. With Clone, VMs are now more memory-efficient than containers in most cases.
Since many other projects on HN looked like they were doing this too, open-sourcing it was the right thing to do.
Feel free to use it in whole or in part as you see fit!
Show HN: Faceoff – A terminal UI for following NHL games
Hacker News (score: 89)[CLI Tool] Show HN: Faceoff – A terminal UI for following NHL games
Faceoff is a TUI app written in Python for following live NHL games and browsing standings and stats. I got the inspiration from Playball, a similar TUI app for MLB games that was featured on HN.
The app was mostly vibe-coded with Claude Code, but not one-shot; I added features and fixed bugs by using it, since I've spent way too much time in the terminal over the last few months.
Try it out with `uvx faceoff` (requires uv).
Critical flaw in Protobuf library enables JavaScript code execution
Hacker News (score: 21)[Other] Critical flaw in Protobuf library enables JavaScript code execution