🛠️ Hacker News Tools

Showing 261–280 of 2458 tools from Hacker News

Last Updated
April 20, 2026 at 08:00 PM

[Other] Show HN: Home Maker: Declare Your Dev Tools in a Makefile

A developer's machine accumulates tools fast. A Rust CLI you compiled last year, a Python formatter installed via `uv`, a language server pulled from npm, a terminal emulator from a curl script, a Go binary built from source. Each came from a different package manager, each with its own install incantation you half-remember.

I wanted a way to declare what I need without adopting a complex system like Nix or Ansible just for a single laptop. The result was a plain old Makefile.

I wrote a short post on using Make (along with a tiny bash script and fzf) to create a searchable, single-command registry for all your local dev tools. It's not a new framework or a heavy tool, just a simple way to organize the package managers we already use.

If you're tired of losing track of your local environment, you might find it useful.
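The Make-plus-fzf pattern described above can be sketched roughly like this. Everything here is an illustrative assumption, not taken from the post: the tool names, the install commands, the `## description` annotation convention, and the `Makefile.tools` filename.

```shell
# One phony target per tool, each annotated with a "## description" comment,
# plus a "menu" target that pipes the annotated targets through fzf.
cat > Makefile.tools <<'EOF'
.PHONY: ripgrep ruff menu

ripgrep: ## fast grep, installed via cargo
	cargo install ripgrep

ruff: ## Python formatter/linter, installed via uv
	uv tool install ruff

menu: ## fuzzy-pick a tool and install it
	@grep -E '^[a-z-]+: ##' Makefile.tools | fzf | cut -d: -f1 | xargs -r make -f Makefile.tools
EOF

# The same grep that powers the menu doubles as a searchable registry listing:
grep -E '^[a-z-]+: ##' Makefile.tools | cut -d: -f1
```

The nice property of this shape is that the Makefile itself is the registry: adding a tool means adding one annotated target, and both `make menu` and the plain grep listing pick it up automatically.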

Found: March 29, 2026 ID: 4003

[Other] TreeTrek – A raw Git repository viewer web app

Found: March 28, 2026 ID: 3948

[Other] Show HN: QuickBEAM – run JavaScript as supervised Erlang/OTP processes

QuickBEAM is a JavaScript runtime embedded inside the Erlang/OTP VM.

If you're building a full-stack app, JavaScript tends to leak in anyway: frontend, SSR, or third-party code. QuickBEAM runs that JavaScript inside OTP supervision trees.

Each runtime is a process with a `Beam` global that can:

- call Elixir code
- send/receive messages
- spawn and monitor processes
- inspect runtime/system state

It also provides browser-style APIs backed by OTP/native primitives (fetch, WebSocket, Worker, BroadcastChannel, localStorage, native DOM, etc.).

This makes it usable for:

- SSR
- sandboxed user code
- per-connection state
- backend JS with direct OTP interop

Notable bits:

- JS runtimes are supervised and restartable
- sandboxing with memory/reduction limits and API control
- native DOM that Erlang can read directly (no string rendering step)
- no JSON boundary between JS and Erlang
- built-in TypeScript, npm support, and native addons

QuickBEAM is part of Elixir Volt, a full-stack frontend toolchain built on Erlang/OTP with no Node.js.

Still early, feedback welcome.

Found: March 28, 2026 ID: 3958

[Other] Show HN: Git bayesect – Bayesian Git bisection for non-deterministic bugs

Found: March 28, 2026 ID: 3984

[Other] Show HN: I built an OS that is pure AI

I've been building Pneuma, a desktop computing environment where software doesn't need to exist before you need it. There are no pre-installed applications. You boot to a blank screen with a prompt. You describe what you want (a CPU monitor, a game, a notes app, a data visualizer) and a working program materializes in seconds. Once generated, agents persist. You can reuse them, they can communicate with each other through IPC, and you can share them through a community agent store. The idea isn't that everything is disposable. It's that creation is instant and the barrier to having exactly the tool you need is just describing it.

Under the hood: your input goes to an LLM, which generates a self-contained Rust module. That gets compiled to WebAssembly in under a second, then JIT-compiled and executed in a sandboxed Wasmtime instance. Everything is GPU-rendered via wgpu (Vulkan/Metal/DX12). If compilation fails, the error is automatically fed back for correction. ~90% first-attempt success rate.

The architecture is a microkernel: agents run in isolated WASM sandboxes with a typed ABI for drawing, input, storage, and networking. An agent crash can't bring down the system. Agents can run side by side, persist to a local store, and be shared or downloaded from the community store.

Currently it runs as a desktop app on Linux, macOS, and Windows. The longer-term goal is to run on bare metal and support existing ARM64 binaries alongside generated agents: a full computing environment where AI-generated software and traditional applications coexist.

Built entirely in Rust.

I built this because I think the traditional software model of find an app, install it, learn it, configure it is unnecessary friction. If a computer can generate exactly the tool you need in the moment you need it, and then keep it around when it's useful, why maintain a library of pre-built software at all?

Free tier available (no credit card). There's a video on the landing page showing it in action.

Interested in feedback on the concept, the UX, and whether this is something you'd actually use.

Found: March 28, 2026 ID: 3950

[Other] OpenCiv1 – open-source rewrite of Civ1

Found: March 28, 2026 ID: 3947

[Other] Show HN: We built a multi-agent research hub. The waitlist is a reverse-CAPTCHA

Hey HN,

Automated research is the next big step in AI, with companies like OpenAI aiming to debut a fully automated researcher by 2028 (https://www.technologyreview.com/2026/03/20/1134438/openai-is-throwing-everything-into-building-a-fully-automated-researcher/). However, there is a very real possibility that much of this corporate research will remain closed to the general public.

To counter this, we spent the last month building Enlidea, a machine-to-machine ecosystem for open research.

It's a decentralized research hub where autonomous agents propose hypotheses, stake bounties, execute code, and perform automated peer reviews on each other's work to build consensus.

The MVP is almost done, but before launching, we wanted to filter the waitlist for developers who actually know how to orchestrate agents.

Because of this, there is no real UI on the landing page. It's an API handshake. Point your LLM agent at the site and see if it can figure out the payload to whitelist your email.

Found: March 28, 2026 ID: 3959

[Other] Spanish legislation as a Git repo (Hacker News score: 636)

Found: March 28, 2026 ID: 3943

[Other] Show HN: Open Source 'Conductor + Ghostty'

Our team works with Claude Code, Codex, and Gemini all day. We love Ghostty, but wanted something where we could work in multiple worktrees at once and have multiple agents running.

We decided to open source the internal tool we use. Hope you might find it useful. Feel free to contribute or fork.

    * Cross-platform (Mac, Linux, Windows), all tested
    * MIT License

Features:

- Notifications, plus manual 'mark-as-unread' for worktrees (like Gmail stars)
- Status indicators work for all terminals inside a worktree
- GH integrations (show PR status) and link GH issues
- Can add comments to worktrees (stay organized)
- File viewer, search, diff viewer (can make edits and save)

Note: Yeah, there are "similar" programs out there, but this one is ours. And I'm happy if our software works for you too!

Found: March 27, 2026 ID: 3938

[Other] Show HN: Kagento – LeetCode for AI Agents

I built a platform where you solve tasks together with AI agents (Claude Code, Codex, Cursor, or any agent via SSH).

Isolated sandbox environments, automated test scoring, global leaderboard. Tasks range from easy (AI one-shots it) to hard (requires human help).

Some tasks use optimization scoring: your score recalibrates when someone beats the best result.

Built it in 6 days as a solo founder. 100% of the code was written with Claude Code and Codex. Stack: Go, Next.js, K8s, Supabase, Stripe.

Found: March 27, 2026 ID: 3942

[Other] Namespace: We've raised $23M to build the compute layer for code

Found: March 27, 2026 ID: 3937

[Other] Show HN: Anvil – Desktop App for Spec-Driven Development

Very excited to share Anvil. I built Anvil to take back control when working with parallel coding agents. It comes with one-click worktree isolation and first-class spec support.

Claude Code and similar coding TUIs are very eager to get into writing code, even before their human babysitter fully understands the implications of what they are about to build.

The core insight behind Anvil is that it is much easier to write high-quality code which matches the author's intent after iterating on an external plan with your agent. Align on the architecture, implementation, and verification strategy in a markdown file; then execution is pretty straightforward.

This is not a new concept, but the user experience within TUI apps for this workflow is pretty shit. Claude creates non-semantic plan names like "aquamarine-owl" that are trapped within a single agent context. Spinning up multiple agents to check on different aspects of a plan is annoying and slow, and managing terminal tabs is pure hell.

So I built Anvil, a fully open-source (MIT license) project.

Found: March 27, 2026 ID: 3941

[Other] Show HN: Open-Source Animal Crossing–Style UI for Claude Code Agents

We posted here on Monday and got some great feedback. We've implemented a few of the most requested updates:

- iMessage channel support (agents can text people and you can text agents); other channels are simple to extend
- A built-in browser (agents can navigate and interact with websites)
- Scheduling (run tasks on a timer / cron / in the future)
- Built-in tunneling so that agents can share local stuff with you over the internet
- More robust MCP and Skills support so anyone can extend it
- Auto-approval for agent requests

If you didn't see the original: Outworked is a desktop app where Claude Code agents work as a small "team." You give it a goal, and an orchestrator breaks it into tasks and assigns them across agents.

Agents can run in parallel, talk to each other, write code, and now also browse the web and send messages. It runs locally and plugs into your existing Claude Code setup.

Would love to hear what we should build next. Thanks again!

Found: March 27, 2026 ID: 3935

[DevOps] Show HN: LLM-Gateway – Zero-Trust LLM Gateway

I built an OpenAI-compatible LLM gateway that routes requests to OpenAI, Anthropic, Ollama, vLLM, llama-server, SGLang... anything that speaks /v1/chat/completions. Single Go binary, one YAML config file, no infrastructure.

It does the things you'd expect from this kind of gateway: semantic routing via a three-layer cascade (keyword heuristics, embedding similarity, LLM classifier) that picks the best model when clients omit the model field, and weighted round-robin load balancing across local inference servers with health checks and failover.

The part I think is most interesting is the network layer. The gateway and backends communicate over zrok/OpenZiti overlay networks: reach a GPU box behind NAT, expose the gateway to clients, put components anywhere with internet connectivity behind firewalls. No port forwarding, no VPN. Zero-trust in both directions. Most LLM proxies solve the API translation problem; this one also solves the network problem.

Apache 2.0. https://github.com/openziti/llm-gateway

I work for NetFoundry, which sponsors the OpenZiti project this is built on.
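Because the gateway is OpenAI-compatible, a request that exercises the semantic router is just an ordinary /v1/chat/completions call with the `model` field left out. The localhost address and port below are assumptions for illustration, not from the post:

```shell
# Request body with "model" omitted on purpose, so the gateway's three-layer
# cascade (keywords -> embeddings -> LLM classifier) picks a backend itself.
BODY='{"messages":[{"role":"user","content":"Summarize this stack trace"}]}'

# Sanity-check that the payload is valid JSON before sending it:
echo "$BODY" | python3 -m json.tool >/dev/null && echo "payload ok"

# Sending it (assumed gateway address; requires a running gateway):
# curl -s http://localhost:8080/v1/chat/completions \
#   -H 'Content-Type: application/json' -d "$BODY"
```

Any existing OpenAI SDK client should work the same way by pointing its base URL at the gateway instead of api.openai.com.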

Found: March 27, 2026 ID: 3936

[Other] Show HN: Forkrun – NUMA-aware shell parallelizer (50×–400× faster than parallel)

forkrun is the culmination of a 10-year journey focused on "how to make shell parallelization fast". What started as a standard "fork jobs in a loop" has turned into a lock-free, CAS-retry-loop-free, SIMD-accelerated, self-tuning, NUMA-aware, shell-based stream parallelization engine that is (mostly) a drop-in replacement for xargs -P and GNU Parallel.

On my 14-core/28-thread i9-7940x, forkrun achieves:

* 200,000+ batch dispatches/sec (vs ~500 for GNU Parallel)
* ~95–99% CPU utilization across all 28 logical cores, even when the workload is non-existent (bash no-ops / `:`), vs ~6% for GNU Parallel. These benchmarks are intentionally worst-case (near-zero work per task) because they measure the capability of the parallelization framework itself, not how much work an external tool can do.
* Typically 50×–400× faster on real high-frequency, low-latency workloads (vs GNU Parallel)

A few of the techniques that make this possible:

* Born-local NUMA: stdin is splice()'d into a shared memfd, then pages are placed on the target NUMA node via set_mempolicy(MPOL_BIND) before any worker touches them, making the memfd NUMA-spliced. Each NUMA node only claims work that is *already* born-local on its node. Stealing from other nodes is permitted under some conditions when no local work exists.
* SIMD scanning: per-node indexers/scanners use AVX2/NEON to find line boundaries (delimiters) at speeds approaching memory bandwidth, and publish byte offsets and line counts into per-node lock-free rings.
* Lock-free claiming: workers claim batches with a single atomic_fetch_add. No locks, no CAS retry loops; contention is reduced to a single atomic on one cache line.
* Memory management: a background thread uses fallocate(PUNCH_HOLE) to reclaim space without breaking the logical offset system.

...and that's just the surface. The implementation uses many additional systems-level techniques (phase-aware tail handling, adaptive batching, early-flush detection, etc.) to eliminate overhead, increase throughput, and reduce latency at every stage.

In its fastest (-b) mode (fixed-size batches, minimal processing), it can exceed 1B lines/sec.

forkrun ships as a single bash file with an embedded, self-extracting C extension: no Perl, no Python, no install, full native support for parallelizing arbitrary shell functions. The binary is built in public GitHub Actions so you can trace it back to CI (see the GitHub "Blame" on the line containing the base64 embeddings). Trying it is literally two commands:

    . frun.bash
    frun shell_func_or_cmd < inputs

For benchmarking scripts and results, see the BENCHMARKS dir in the GitHub repo. For an architecture deep-dive, see the DOCS dir in the GitHub repo.

Happy to answer questions.
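The worst-case microbenchmark style described above (near-zero work per input line) is easy to reproduce with the xargs -P baseline that forkrun compares itself against. The line count and batch size here are arbitrary choices, not the post's benchmark parameters:

```shell
# Dispatch 100k no-op lines through xargs -P: each batch of 512 args is handed
# to `true`, so nearly all of the elapsed time is the parallelizer's own
# dispatch overhead rather than useful work.
seq 1 100000 | xargs -P "$(nproc)" -n 512 true
echo "baseline done"
```

Per the post's usage, the forkrun side of the comparison would be `. frun.bash` followed by `seq 1 100000 | frun true`.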

Found: March 27, 2026 ID: 3982

[Monitoring/Observability] Show HN: Grafana TUI – Browse Grafana dashboards in the terminal

I built a terminal UI for browsing Grafana dashboards. It connects to any Grafana instance and lets you explore dashboards without leaving the terminal.

It renders the most common panel types (time series, bar charts, gauges, heatmaps, etc.). You can change the time range, set dashboard variables, and filter series.

I built this because I spend most of my day in the terminal and wanted a quick way to glance at dashboards without switching to the browser. It's not perfect by any means, but it's a nifty and useful tool.

Built with Go, Bubble Tea, ntcharts, and Claude (of course). You can install it via Homebrew:

    brew install lovromazgon/tap/grafana-tui

...and try it out against Grafana's public playground:

    grafana-tui --url https://play.grafana.org

Found: March 27, 2026 ID: 3939

[Build/Deploy] Ninja is a small build system with a focus on speed

Found: March 27, 2026 ID: 3969

[Other] Telnyx package compromised on PyPI (Hacker News score: 12)

https://github.com/team-telnyx/telnyx-python/issues/235

https://www.aikido.dev/blog/telnyx-pypi-compromised-teampcp-canisterworm

Found: March 27, 2026 ID: 3934

[Other] Show HN: I put an AI agent on a $7/month VPS with IRC as its transport layer

The stack: two agents on separate boxes. The public one (nullclaw) is a 678 KB Zig binary using ~1 MB RAM, connected to an Ergo IRC server. Visitors talk to it via a gamja web client embedded in my site. The private one (ironclaw) handles email and scheduling, reachable only over Tailscale via Google's A2A protocol.

Tiered inference: Haiku 4.5 for conversation (sub-second, cheap), Sonnet 4.6 for tool use (only when needed). Hard cap at $2/day.

A2A passthrough: the private-side agent borrows the gateway's own inference pipeline, so there's one API key and one billing relationship regardless of who initiated the request.

You can talk to nully at https://georgelarson.me/chat/ or connect with any IRC client to irc.georgelarson.me:6697 (TLS), channel #lobby.
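On the IRC side, the registration traffic any client sends is just a few CRLF-terminated lines. This sketch builds that handshake for the server details given in the post; the `irc_handshake` helper name is ours, and a real IRC client is the sensible way to actually connect:

```shell
# Emit the minimal IRC registration + channel join, CRLF-terminated,
# ready to pipe into a TLS connection.
irc_handshake() {
  nick="$1"; channel="$2"
  printf 'NICK %s\r\nUSER %s 0 * :%s\r\nJOIN %s\r\n' "$nick" "$nick" "$nick" "$channel"
}

irc_handshake visitor '#lobby'

# To actually reach the server from the post (network required):
# irc_handshake visitor '#lobby' | openssl s_client -quiet -connect irc.georgelarson.me:6697
```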

Found: March 26, 2026 ID: 3960

[Other] Show HN: Fio: 3D World editor/game engine – inspired by Radiant and Hammer

A liminal brush-based CSG editor and game engine with a unified (forward) renderer, inspired by Radiant and Worldcraft/Hammer.

Compact and lightweight (target: Snapdragon 8cx, OpenGL 3.3).

Real-time lighting with stencil shadows, without the need for pre-baked compilation.

Found: March 26, 2026 ID: 3931
Page 14 of 123