🛠️ All DevTools
Showing 301–320 of 4255 tools
Last Updated
April 22, 2026 at 12:00 AM
OpenCiv1 – open-source rewrite of Civ1
Hacker News (score: 112)[Other] OpenCiv1 – open-source rewrite of Civ1
Show HN: We built a multi-agent research hub. The waitlist is a reverse-CAPTCHA
Show HN (score: 28)[Other] Show HN: We built a multi-agent research hub. The waitlist is a reverse-CAPTCHA Hey HN,

Automated research is the next big step in AI, with companies like OpenAI aiming to debut a fully automated researcher by 2028 (https://www.technologyreview.com/2026/03/20/1134438/openai-is-throwing-everything-into-building-a-fully-automated-researcher/). However, there is a very real possibility that much of this corporate research will remain closed to the general public.

To counter this, we spent the last month building Enlidea, a machine-to-machine ecosystem for open research.

It's a decentralized research hub where autonomous agents propose hypotheses, stake bounties, execute code, and perform automated peer reviews on each other's work to build consensus.

The MVP is almost done, but before launching, we wanted to filter the waitlist for developers who actually know how to orchestrate agents.

Because of this, there is no real UI on the landing page. It's an API handshake. Point your LLM agent at the site and see if it can figure out the payload to whitelist your email.
Spanish legislation as a Git repo
Hacker News (score: 636)[Other] Spanish legislation as a Git repo
Show HN: Open Source 'Conductor + Ghostty'
Show HN (score: 11)[Other] Show HN: Open Source 'Conductor + Ghostty' Our team works with Claude Code, Codex, and Gemini all day. We love Ghostty, but wanted something where we could work in multiple worktrees at once and have multiple agents running.

We decided to open source the internal tool we use. Hope you might find it useful. Feel free to contribute or fork.

    * Cross-platform (Mac, Linux, Windows), all tested
    * MIT License

Features:

* Notifications, plus manual 'mark-as-unread' for worktrees (like Gmail stars)
* Status indicators work for all terminals inside a worktree
* GH integrations (show PR status) and link GH issues
* Can add comments to worktrees (stay organized)
* File viewer, search, diff viewer (can make edits and save)

Note: Yeah, there are "similar" programs out there, but this one is ours. I'm happy if our software works for you too!
Show HN: Kagento – LeetCode for AI Agents
Show HN (score: 9)[Other] Show HN: Kagento – LeetCode for AI Agents I built a platform where you solve tasks together with AI agents (Claude Code, Codex, Cursor — any agent via SSH).

Isolated sandbox environments, automated test scoring, and a global leaderboard. Tasks range from easy (an AI one-shots it) to hard (requires human help).

Some tasks use optimization scoring: your score recalibrates when someone beats the best result.

Built it in 6 days as a solo founder. 100% of the code was written with Claude Code and Codex. Stack: Go, Next.js, K8s, Supabase, Stripe.
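One plausible reading of the "optimization scoring" mechanism above, sketched as a minimal function: every submission is normalized against the current best result, so when a new best lands, everyone's score recalibrates. The function name and normalization scheme are assumptions, not Kagento's actual scoring code.

```python
def recalibrated_scores(results, lower_is_better=True):
    """Hypothetical sketch: normalize each user's raw result against the
    current best. A new best result shifts everyone's normalized score,
    which is the "recalibrates when someone beats the best" behavior."""
    values = results.values()
    best = min(values) if lower_is_better else max(values)
    if lower_is_better:
        return {user: best / r for user, r in results.items()}
    return {user: r / best for user, r in results.items()}
```

With runtimes of 100 and 200 units, the leader scores 1.0 and the other entrant 0.5; if someone later posts a 50-unit run, both existing scores drop without any resubmission.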
Namespace: We've raised $23M to build the compute layer for code
Hacker News (score: 17)[Other] Namespace: We've raised $23M to build the compute layer for code
Show HN: Anvil – Desktop App for Spec Driven Development
Show HN (score: 5)[Other] Show HN: Anvil – Desktop App for Spec Driven Development Very excited to share Anvil. I built Anvil to take back control when working with parallel coding agents. It comes with one-click worktree isolation and first-class spec support.

Claude Code and similar coding TUIs are very eager to get into writing code, even before their human babysitter fully understands the implications of what they are about to build.

The core insight behind Anvil is that it is much easier to write high-quality code that matches the author's intent after iterating on an external plan with your agent.

Align on the architecture, implementation, and verification strategy in a markdown file; then execution is pretty straightforward.

This is not a new concept, but the user experience within TUI apps for this workflow is pretty shit. Claude creates non-semantic plan names like "aquamarine-owl" that are trapped within a single agent context. Spinning up multiple agents to check on different aspects of a plan is annoying and slow, and managing terminal tabs is pure hell.

So I built Anvil, a fully open source (MIT license) project.
Show HN: Open-Source Animal Crossing–Style UI for Claude Code Agents
Hacker News (score: 11)[Other] Show HN: Open-Source Animal Crossing–Style UI for Claude Code Agents We posted here on Monday and got some great feedback. We've implemented a few of the most requested updates:

- iMessage channel support (agents can text people and you can text agents); other channels are simple to extend
- A built-in browser (agents can navigate and interact with websites)
- Scheduling (run tasks on a timer / cron / in the future)
- Built-in tunneling so that the agents can share local stuff with you over the internet
- More robust MCP and Skills support so anyone can extend it
- Auto-approval for agent requests

If you didn't see the original:

Outworked is a desktop app where Claude Code agents work as a small "team." You give it a goal, and an orchestrator breaks it into tasks and assigns them across agents.

Agents can run in parallel, talk to each other, write code, and now also browse the web and send messages.

It runs locally and plugs into your existing Claude Code setup.

Would love to hear what we should build next. Thanks again!
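The orchestrator pattern described here (a goal broken into tasks, tasks dealt out across a pool of agents) can be sketched in a few lines. The round-robin assignment policy is an illustrative assumption; Outworked's actual scheduler may weigh agent load or capabilities differently.

```python
from itertools import cycle

def assign_tasks(tasks, agents):
    """Deal a list of decomposed tasks out across agents, round-robin.
    A stand-in for the orchestrator's assignment step, not its real logic."""
    assignment = {agent: [] for agent in agents}
    for task, agent in zip(tasks, cycle(agents)):
        assignment[agent].append(task)
    return assignment
```

For example, three tasks over two agents gives the first agent tasks 1 and 3 and the second agent task 2; the agents can then run their queues in parallel.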
Yeachan-Heo/oh-my-claudecode
GitHub Trending[DevOps] Teams-first Multi-agent orchestration for Claude Code
Show HN: LLM-Gateway – Zero-Trust LLM Gateway
Show HN (score: 6)[DevOps] Show HN: LLM-Gateway – Zero-Trust LLM Gateway I built an OpenAI-compatible LLM gateway that routes requests to OpenAI, Anthropic, Ollama, vLLM, llama-server, SGLang... anything that speaks /v1/chat/completions. Single Go binary, one YAML config file, no infrastructure.

It does the things you'd expect from this kind of gateway: semantic routing via a three-layer cascade (keyword heuristics, embedding similarity, LLM classifier) that picks the best model when clients omit the model field, and weighted round-robin load balancing across local inference servers with health checks and failover.

The part I think is most interesting is the network layer. The gateway and backends communicate over zrok/OpenZiti overlay networks: reach a GPU box behind NAT, expose the gateway to clients, put components anywhere with internet connectivity behind firewalls. No port forwarding, no VPN. Zero-trust in both directions. Most LLM proxies solve the API translation problem. This one also solves the network problem.

Apache 2.0. https://github.com/openziti/llm-gateway

I work for NetFoundry, which sponsors the OpenZiti project this is built on.
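The three-layer cascade can be sketched as a fall-through: cheap keyword heuristics answer most requests, an embedding router handles the rest when it is confident enough, and a small LLM classifier is the last resort. All model names, thresholds, and callables below are illustrative assumptions, not the gateway's actual Go implementation or config schema.

```python
# Illustrative keyword layer: substring -> model. Model names are made up.
KEYWORD_ROUTES = {
    "sql": "coder-model",
    "code": "coder-model",
    "translate": "general-model",
}

def route(prompt, embed_router=None, llm_classifier=None):
    """Pick a model when the client omits the `model` field.

    embed_router:   callable(prompt) -> (model, confidence), or None
    llm_classifier: callable(prompt) -> model, or None
    """
    # Layer 1: keyword heuristics (cheapest, checked first).
    lowered = prompt.lower()
    for keyword, model in KEYWORD_ROUTES.items():
        if keyword in lowered:
            return model
    # Layer 2: embedding similarity, taken only above a confidence bar.
    if embed_router is not None:
        model, confidence = embed_router(prompt)
        if confidence >= 0.8:
            return model
    # Layer 3: ask a small LLM to classify the request.
    if llm_classifier is not None:
        return llm_classifier(prompt)
    return "default-model"
```

The design point is latency layering: most traffic never pays for an embedding lookup, and almost none pays for an extra LLM call.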
Show HN: Forkrun – NUMA-aware shell parallelizer (50×–400× faster than parallel)
Hacker News (score: 57)[Other] Show HN: Forkrun – NUMA-aware shell parallelizer (50×–400× faster than parallel) forkrun is the culmination of a 10-year-long journey focused on "how to make shell parallelization fast". What started as a standard "fork jobs in a loop" has turned into a lock-free, CAS-retry-loop-free, SIMD-accelerated, self-tuning, NUMA-aware shell-based stream parallelization engine that is (mostly) a drop-in replacement for xargs -P and GNU parallel.

On my 14-core/28-thread i9-7940x, forkrun achieves:

* 200,000+ batch dispatches/sec (vs ~500 for GNU Parallel)

* ~95–99% CPU utilization across all 28 logical cores, even when the workload is non-existent (bash no-ops / `:`) (vs ~6% for GNU Parallel). These benchmarks are intentionally worst-case (near-zero work per task) because they measure the capability of the parallelization framework itself, not how much work an external tool can do.

* Typically 50×–400× faster on real high-frequency, low-latency workloads (vs GNU Parallel)

A few of the techniques that make this possible:

* Born-local NUMA: stdin is splice()'d into a shared memfd, then pages are placed on the target NUMA node via set_mempolicy(MPOL_BIND) before any worker touches them, making the memfd NUMA-spliced. Each NUMA node only claims work that is *already* born-local on its node. Stealing from other nodes is permitted under some conditions when no local work exists.

* SIMD scanning: per-node indexers/scanners use AVX2/NEON to find line boundaries (delimiters) at speeds approaching memory bandwidth, and publish byte offsets and line counts into per-node lock-free rings.

* Lock-free claiming: workers claim batches with a single atomic_fetch_add — no locks, no CAS retry loops; contention is reduced to a single atomic on one cache line.

* Memory management: a background thread uses fallocate(PUNCH_HOLE) to reclaim space without breaking the logical offset system.

…and that's just the surface.

The implementation uses many additional systems-level techniques (phase-aware tail handling, adaptive batching, early-flush detection, etc.) to eliminate overhead, increase throughput, and reduce latency at every stage.

In its fastest (-b) mode (fixed-size batches, minimal processing), it can exceed 1B lines/sec.

forkrun ships as a single bash file with an embedded, self-extracting C extension: no Perl, no Python, no install, and full native support for parallelizing arbitrary shell functions. The binary is built in public GitHub Actions so you can trace it back to CI (see the GitHub "Blame" on the line containing the base64 embeddings). Trying it is literally two commands:

    . frun.bash
    frun shell_func_or_cmd < inputs

For benchmarking scripts and results, see the BENCHMARKS dir in the GitHub repo.

For an architecture deep-dive, see the DOCS dir in the GitHub repo.

Happy to answer questions.
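The lock-free claiming idea (each worker grabs the next batch index with one atomic fetch-and-add, no locks and no CAS retry loops) can be illustrated with a toy model. Python has no atomic_fetch_add, so this sketch leans on itertools.count, whose __next__ is effectively atomic under CPython's GIL; forkrun does the real thing in C on a single cache line, so this only models the claiming pattern, not its performance.

```python
import itertools
import threading

def run_batches(lines, batch_size=4, workers=4):
    """Toy model of single-counter batch claiming: each worker repeatedly
    performs one fetch-and-add to claim the next unprocessed batch."""
    batches = [lines[i:i + batch_size] for i in range(0, len(lines), batch_size)]
    next_batch = itertools.count()      # stands in for the atomic claim counter
    results = [None] * len(batches)     # each slot written by exactly one worker

    def worker():
        while True:
            i = next(next_batch)        # the single "atomic_fetch_add"
            if i >= len(batches):
                return                  # counter ran past the end: no work left
            results[i] = [line.upper() for line in batches[i]]  # stand-in work

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [item for batch in results for item in batch]
```

Because every batch index is handed out exactly once, output order is preserved without any per-batch locking; contention collapses onto the one shared counter.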
FreeCAD/FreeCAD
GitHub Trending[Other] Official source code of FreeCAD, a free and opensource multiplatform 3D parametric modeler.
Show HN: Grafana TUI – Browse Grafana dashboards in the terminal
Show HN (score: 11)[Monitoring/Observability] Show HN: Grafana TUI – Browse Grafana dashboards in the terminal I built a terminal UI for browsing Grafana dashboards. It connects to any Grafana instance and lets you explore dashboards without leaving the terminal.

It renders the most common panel types (time series, bar charts, gauges, heatmaps, etc.). You can change the time range, set dashboard variables, and filter series.

I built this because I spend most of my day in the terminal and wanted a quick way to glance at dashboards without switching to the browser. It's not perfect by any means, but it's a nifty and useful tool.

Built with Go, Bubble Tea, ntcharts, and Claude (of course). You can install it via Homebrew:

    brew install lovromazgon/tap/grafana-tui

... and try it out against Grafana's public playground:

    grafana-tui --url https://play.grafana.org
Ninja is a small build system with a focus on speed
Hacker News (score: 100)[Build/Deploy] Ninja is a small build system with a focus on speed
Telnyx package compromised on PyPI
Hacker News (score: 12)[Other] Telnyx package compromised on PyPI

https://github.com/team-telnyx/telnyx-python/issues/235

https://www.aikido.dev/blog/telnyx-pypi-compromised-teampcp-canisterworm
Show HN: I put an AI agent on a $7/month VPS with IRC as its transport layer
Show HN (score: 335)[Other] Show HN: I put an AI agent on a $7/month VPS with IRC as its transport layer The stack: two agents on separate boxes. The public one (nullclaw) is a 678 KB Zig binary using ~1 MB RAM, connected to an Ergo IRC server. Visitors talk to it via a gamja web client embedded in my site. The private one (ironclaw) handles email and scheduling, reachable only over Tailscale via Google's A2A protocol.

Tiered inference: Haiku 4.5 for conversation (sub-second, cheap), Sonnet 4.6 for tool use (only when needed). Hard cap at $2/day.

A2A passthrough: the private-side agent borrows the gateway's own inference pipeline, so there's one API key and one billing relationship regardless of who initiated the request.

You can talk to nully at https://georgelarson.me/chat/ or connect with any IRC client to irc.georgelarson.me:6697 (TLS), channel #lobby.
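The tiered-inference setup (a cheap fast model for plain conversation, a stronger model only for tool use, and a hard daily spend cap) can be sketched as a small router. The two model tiers and the $2/day cap come from the post; the class shape, model identifier strings, and the needs_tools flag are illustrative assumptions about how such a router might be wired.

```python
DAILY_CAP_USD = 2.00  # hard cap from the post

class TieredRouter:
    """Toy sketch: pick a model tier per request and enforce a daily budget."""

    def __init__(self):
        self.spent_today = 0.0  # assume something external resets this at midnight

    def pick_model(self, message, needs_tools):
        # Refuse outright once the daily budget is gone.
        if self.spent_today >= DAILY_CAP_USD:
            raise RuntimeError("daily budget exhausted; refusing request")
        # Escalate to the stronger tier only when tool use is needed.
        return "sonnet-4.6" if needs_tools else "haiku-4.5"

    def record_cost(self, usd):
        self.spent_today += usd
```

The interesting property is that cost control lives outside the model choice: conversation stays on the cheap tier by default, and the cap bounds worst-case spend no matter which tier a request lands on.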
Show HN: Fio: 3D World editor/game engine – inspired by Radiant and Hammer
Hacker News (score: 45)[Other] Show HN: Fio: 3D World editor/game engine – inspired by Radiant and Hammer A liminal brush-based CSG editor and game engine with a unified (forward) renderer, inspired by Radiant and Worldcraft/Hammer.

Compact and lightweight (target: Snapdragon 8cx, OpenGL 3.3).

Real-time lighting with stencil shadows, without the need for pre-baked compilation.
Show HN: Layerleak – Like Trufflehog, but for Docker Hub
Show HN (score: 5)[Other] Show HN: Layerleak – Like Trufflehog, but for Docker Hub
Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3
Hacker News (score: 45)[Database] Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3 I built a SQLite VFS in Rust that serves cold queries directly from S3 with sub-second performance, and often much faster.<p>It’s called turbolite. It is experimental, buggy, and may corrupt data. I would not trust it with anything important yet.<p>I wanted to explore whether object storage has gotten fast enough to support embedded databases over cloud storage. Filesystems reward tiny random reads and in-place mutation. S3 rewards fewer requests, bigger transfers, immutable objects, and aggressively parallel operations where bandwidth is often the real constraint. This was explicitly inspired by turbopuffer’s ground-up S3-native design. <a href="https://turbopuffer.com/blog/turbopuffer" rel="nofollow">https://turbopuffer.com/blog/turbopuffer</a><p>The use case I had in mind is lots of mostly-cold SQLite databases (database-per-tenant, database-per-session, or database-per-user architectures) where keeping a separate attached volume for inactive database feels wasteful. turbolite assumes a single write source and is aimed much more at “many databases with bursty cold reads” than “one hot database.”<p>Instead of doing naive page-at-a-time reads from a raw SQLite file, turbolite introspects SQLite B-trees, stores related pages together in compressed page groups, and keeps a manifest that is the source of truth for where every page lives. Cache misses use seekable zstd frames and S3 range GETs for search queries, so fetching one needed page does not require downloading an entire object.<p>At query time, turbolite can also pass storage operations from the query plan down to the VFS to frontrun downloads for indexes and large scans in the order they will be accessed.<p>You can tune how aggressively turbolite prefetches. For point queries and small joins, it can stay conservative and avoid prefetching whole tables. 
For scans, it can get much more aggressive.<p>It also groups pages by page type in S3. Interior B-tree pages are bundled separately and loaded eagerly. Index pages prefetch aggressively. Data pages are stored by table. The goal is to make cold point queries and joins decent, while making scans less awful than naive remote paging would.<p>On a 1M-row / 1.5GB benchmark on EC2 + S3 Express, I’m seeing results like sub-100ms cold point lookups, sub-200ms cold 5-join profile queries, and sub-600ms scans from an empty cache with a 1.5GB database. It’s somewhat slower on normal S3/Tigris.<p>Current limitations are pretty straightforward: it’s single-writer only, and it is still very much a systems experiment rather than production infrastructure.<p>I’d love feedback from people who’ve worked on SQLite-over-network, storage engines, VFSes, or object-storage-backed databases. I’m especially interested in whether the B-tree-aware grouping / manifest / seekable-range-GET direction feels like the right one to keep pushing.
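The manifest idea above (every page number maps to a location inside some compressed page group, so a cache miss becomes one range GET rather than a whole-object download) can be sketched abstractly. The manifest layout, the range_get callable, and the page-level granularity are assumptions for illustration, not turbolite's actual on-disk format.

```python
# manifest: page number -> (object key, byte offset, length) inside a page group
# range_get: callable(key, offset, length) -> bytes, e.g. an S3 GET with a
#            "Range: bytes=offset-(offset+length-1)" header under the hood

def read_page(page_no, manifest, range_get, cache):
    """Serve a page from the local cache, or fetch exactly its byte range."""
    if page_no in cache:
        return cache[page_no]
    key, offset, length = manifest[page_no]
    data = range_get(key, offset, length)  # one ranged request per miss
    cache[page_no] = data
    return data
```

Keeping the manifest as the single source of truth for page locations is what lets the pages themselves be regrouped, compressed, and relocated (e.g. interior B-tree pages bundled separately from data pages) without the reader ever scanning an object to find them.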
Taming LLMs: Using Executable Oracles to Prevent Bad Code
Hacker News (score: 31)[Other] Taming LLMs: Using Executable Oracles to Prevent Bad Code