🛠️ All DevTools

Showing 1–20 of 3808 tools

Last Updated
March 17, 2026 at 04:02 PM

[Database] Show HN: Antfly: Distributed, Multimodal Search and Memory and Graphs in Go

Hey HN, I'm excited to share Antfly: a distributed document database and search engine written in Go that combines full-text, vector, and graph search. Use it for distributed multimodal search and memory, or for local dev and small deployments.

I built this to give developers a single-binary deployment with native ML inference (via a built-in service called Termite), meaning you don't need external API calls for vector search unless you want to use them.

Some things that might interest this crowd:

Capabilities: multimodal indexing (images, audio, video), MongoDB-style in-place updates, and streaming RAG.

Distributed systems: multi-Raft setup built on etcd's library, backed by Pebble (CockroachDB's storage engine). Metadata and data shards get their own Raft groups.

Single binary: `antfly swarm` gives you a single-process deployment with everything running. Good for local dev and small deployments; scale out by adding nodes when you need to.

Ecosystem: ships with a Kubernetes operator and an MCP server for LLM tool use.

Native ML inference: Antfly ships with Termite. Think of it as a built-in Ollama that also covers non-generative models (embeddings, reranking, chunking, text generation). No external API calls needed, but external providers (OpenAI, Ollama, Bedrock, Gemini, etc.) are supported too.

License: I went with Elastic License v2, not an OSI-approved license. I know that's a topic with strong feelings here. The practical upshot: you can use it, modify it, self-host it, and build products on top of it; you just can't offer Antfly itself as a managed service. It felt like the right tradeoff for sustainability while still making the source available.

Happy to answer questions about the architecture, the Raft implementation, or anything else. Feedback welcome!

Found: March 17, 2026 ID: 3808

jarrodwatts/claude-hud

GitHub Trending

[Other] A Claude Code plugin that shows what's happening - context usage, active tools, running agents, and todo progress

Found: March 17, 2026 ID: 3805

Building a Shell

Hacker News (score: 110)

[Other] Building a Shell

Found: March 17, 2026 ID: 3807

[Other] Leanstral: Open-source agent for trustworthy coding and formal proof engineering

Lean 4 paper (2021): https://dl.acm.org/doi/10.1007/978-3-030-79876-5_37

Found: March 16, 2026 ID: 3801

[Other] Show HN: Most GPU Upgrades Aren't Worth It, I Built a Calculator to Prove It

I run a small project called best-gpu.com, a site that ranks GPUs by price-to-performance.

While browsing PC building forums and Reddit, I kept seeing the same question: "What should I upgrade to from my current GPU?" Most answers are just lists of cards without showing the actual performance gain, so people often end up paying for upgrades that barely improve performance.

So I built a small tool: a GPU Upgrade Calculator. You enter your current GPU and it shows:

- estimated performance gain
- a value score based on price vs. performance
- a filtered list of upgrade options (brand, price, VRAM, etc.)

The goal is simply to help people avoid spending money on upgrades that aren't really worth it.

Curious to hear feedback from HN on the approach, data sources, or features that would make something like this more useful.

https://best-gpu.com/upgrade.php
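The "value score based on price vs. performance" could be computed along these lines. This is a hypothetical sketch of the technique the post describes, not code from best-gpu.com; the benchmark scores, prices, and GPU names are made up for illustration.

```python
# Hypothetical value score: percent performance gain per dollar spent,
# relative to the user's current GPU. Non-upgrades score zero.

def upgrade_value(current_score: float, candidate_score: float, price: float) -> float:
    """Percent performance gain per dollar; 0 if the candidate is not faster."""
    gain_pct = (candidate_score - current_score) / current_score * 100
    return max(gain_pct, 0.0) / price

current = 100.0  # relative benchmark score of the user's current GPU (illustrative)
candidates = {  # name -> (benchmark score, price in dollars), all illustrative
    "GPU A": (130.0, 300.0),
    "GPU B": (115.0, 250.0),
}

# Rank upgrade options by value, best first.
ranked = sorted(candidates.items(),
                key=lambda kv: upgrade_value(current, *kv[1]),
                reverse=True)
```

Under these made-up numbers, GPU A (30% faster for $300) edges out GPU B (15% faster for $250), which is exactly the kind of comparison raw "list of cards" answers hide.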

Found: March 16, 2026 ID: 3804

[API/SDK] Show HN: Open-source, extract any brand's logos, colors, and assets from a URL

Hi everyone, I just open-sourced OpenBrand: extract any brand's logos, colors, and assets from just a URL. It's MIT-licensed, open source, and completely free. Try it out at openbrand.sh. It also comes with a free API and MCP server for you to use in your code or agents.

Why we built this: while building another product, we needed to pull in customers' brand images as custom backgrounds. It felt like a simple enough problem with no open-source solution, so we built one.
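The core technique here, pulling brand hints out of a page's `<head>`, can be sketched with the standard library alone. This is a minimal illustration of the general approach, not OpenBrand's actual code; the example HTML and the `BrandHintParser` name are made up.

```python
# Minimal sketch: scan <meta> tags for a theme color and an og:image
# (a common logo/brand-image candidate) using only stdlib html.parser.
from html.parser import HTMLParser

class BrandHintParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hints = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        if a.get("name") == "theme-color":
            self.hints["theme_color"] = a.get("content")
        if a.get("property") == "og:image":
            self.hints["logo_candidate"] = a.get("content")

html = ('<head><meta name="theme-color" content="#1a73e8">'
        '<meta property="og:image" content="https://example.com/logo.png"></head>')
parser = BrandHintParser()
parser.feed(html)
```

A real extractor would also chase favicons, `<link rel="icon">` entries, and dominant colors from downloaded images, but the meta-tag pass above is where most brand tooling starts.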

Found: March 16, 2026 ID: 3800

[Other] Meta's renewed commitment to jemalloc

https://github.com/jemalloc/jemalloc

Found: March 16, 2026 ID: 3798

[DevOps] Launch HN: Chamber (YC W26) – An AI Teammate for GPU Infrastructure

Hey HN, we're Jie Shen, Charles, Andreas, and Shaocheng. We built Chamber (https://usechamber.io), an AI agent that manages GPU infrastructure for you. You talk to it wherever your team already works and it handles things like provisioning clusters, diagnosing failed jobs, and managing workloads. Demo: https://www.youtube.com/watch?v=xdqh2C_hif4

We all worked on GPU infrastructure at Amazon. Between us we've spent years on this problem: monitoring GPU fleets, debugging failures at scale, building the tooling around it. After leaving we talked to a bunch of AI teams and kept hearing the same stuff. Platform engineers spend half their time just keeping things running: building dashboards, writing scheduling configs, answering "when will my job start?" all day. Researchers lose hours when a training run fails because figuring out why means digging through Kubernetes events, node logs, and GPU metrics in totally separate tools. Pretty much everyone had stitched together Prometheus, Grafana, Kubernetes scheduling policies, and a bunch of homegrown scripts, and they were spending as much time maintaining all of it as actually using it.

The thing we kept noticing is that most of this work follows patterns. Triage the failure, correlate a few signals, figure out what to do about it. If you had a platform with structured access to the full state of a GPU environment, you could have an agent do that work for you.

So that's what we built. Chamber is a control plane that keeps a live model of your GPU fleet: nodes, workloads, team structure, cluster health. Every operation it supports is exposed as a tool the agent can call: inspecting node health, reading cluster topology, managing workload lifecycle, adjusting resource configs, provisioning infrastructure. These are structured operations with validation and rollback, not just raw shell commands. When we add new capabilities to the platform, they automatically become things the agent can do too.

We spent a lot of time on safety because we've seen what happens when infrastructure automation goes wrong. A wrong call can kill a multi-day training run or cascade across a cluster. So the agent has graduated autonomy. Routine stuff it handles on its own: diagnosing a failed job, resubmitting with corrected resources, cordoning a bad node. But anything that touches other teams' workloads or production jobs needs human approval first. Every action gets logged with what the agent saw, why it acted, and what it changed.

The platform underneath is really what makes the diagnosis work. When the agent investigates a failure, it queries GPU state, workload history, node health timelines, and cluster topology. That's the difference between "your job OOMed" and "your job OOMed because the batch size exceeded available VRAM on this node; here's a corrected config." Different root causes get different fixes.

One thing that surprised us, even coming from Amazon where we'd seen large GPU fleets: most teams we talk to can't even tell you how many GPUs are in use right now. The monitoring just doesn't exist. They're flying blind on their most expensive hardware.

We've launched with a few early customers and are onboarding new teams. We're still refining pricing and are currently evaluating models like per-GPU-under-management and tiered plans. We plan to publish transparent pricing once we've validated what works best for customers. In the meantime, we know "contact us" isn't ideal.

Would love to hear from anyone running GPU clusters. What's the most tedious part of your setup? What would you actually trust an agent to do? What's off limits? Looking forward to feedback!
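The "graduated autonomy" policy described above can be sketched as a simple dispatch gate: routine actions execute directly, anything risky or cross-team queues for human approval, and every decision is audit-logged. This is a hedged illustration of the pattern, not Chamber's implementation; the action names, team labels, and `dispatch` function are hypothetical.

```python
# Hypothetical graduated-autonomy gate. Routine, same-team actions run;
# destructive or cross-team actions wait for a human; unknown tools never run.

AUTONOMOUS = {"diagnose_job", "resubmit_job", "cordon_node"}   # routine ops
NEEDS_APPROVAL = {"kill_workload", "resize_cluster"}           # high-blast-radius ops

audit_log = []  # every decision recorded: what was requested and what happened

def dispatch(action: str, owner_team: str, caller_team: str) -> str:
    """Decide whether the agent may run an action on a workload."""
    if action in NEEDS_APPROVAL or owner_team != caller_team:
        decision = "pending_approval"   # touches production or another team
    elif action in AUTONOMOUS:
        decision = "execute"
    else:
        decision = "rejected"           # unknown tools are never run blindly
    audit_log.append({"action": action, "owner": owner_team,
                      "caller": caller_team, "decision": decision})
    return decision
```

Note the ordering: the approval check runs before the autonomy check, so even a normally routine action like cordoning a node gets escalated when it belongs to another team.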

Found: March 16, 2026 ID: 3794

[Other] Show HN: Claude Code skills that build complete Godot games

I've been working on this for about a year through four major rewrites. Godogen is a pipeline that takes a text prompt, designs the architecture, generates 2D/3D assets, writes the GDScript, and tests it visually. The output is a complete, playable Godot 4 project.

Getting LLMs to reliably generate functional games required solving three specific engineering bottlenecks:

1. Training data scarcity: LLMs barely know GDScript. It has ~850 classes and a Python-like syntax that will happily let a model hallucinate Python idioms that fail to compile. To fix this, I built a custom reference system: a hand-written language spec, full API docs converted from Godot's XML source, and a quirks database for engine behaviors you can't learn from docs alone. Because 850 classes blow up the context window, the agent lazy-loads only the specific APIs it needs at runtime.

2. Build-time vs. runtime state: Scenes are generated by headless scripts that build the node graph in memory and serialize it to .tscn files. This avoids the fragility of hand-editing Godot's serialization format. But it means certain engine features (like `@onready` or signal connections) aren't available at build time; they only exist when the game actually runs. Teaching the model which APIs are available at which phase, and that every node needs its owner set correctly or it silently vanishes on save, took careful prompting but paid off.

3. The evaluation loop: A coding agent is inherently biased toward its own output. To stop it from cheating, a separate Gemini Flash agent acts as visual QA. It sees only the rendered screenshots from the running engine, no code, and compares them against a generated reference image. It catches the visual bugs text analysis misses: z-fighting, floating objects, physics explosions, and grid-like placements that should be organic.

Architecturally, it runs as two Claude Code skills: an orchestrator that plans the pipeline, and a task executor that implements each piece in a `context: fork` window so mistakes and state don't accumulate.

Everything is open source: https://github.com/htdt/godogen

Demo video (real games, not cherry-picked screenshots): https://youtu.be/eUz19GROIpY

Blog post with the full story (all the wrong turns) coming soon. Happy to answer questions.
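The lazy-loading trick from the first bottleneck, pulling only the referenced API docs into context instead of all ~850 classes, can be sketched in a few lines. This is an illustrative reconstruction, not Godogen's actual code; the `API_DOCS` table and the class-name regex are simplifying assumptions.

```python
# Sketch of doc lazy-loading: scan generated GDScript for engine class names
# (Godot classes are PascalCase) and return only those docs for the context.
import re

API_DOCS = {  # tiny stand-in for the full doc set converted from Godot's XML
    "CharacterBody2D": "docs for CharacterBody2D ...",
    "AnimatedSprite2D": "docs for AnimatedSprite2D ...",
    "Node3D": "docs for Node3D ...",
}

def docs_for_script(gdscript: str) -> dict:
    """Return only the API docs for classes the script actually references."""
    referenced = set(re.findall(r"\b[A-Z][A-Za-z0-9]+\b", gdscript))
    return {cls: doc for cls, doc in API_DOCS.items() if cls in referenced}

script = "extends CharacterBody2D\nvar sprite: AnimatedSprite2D"
context = docs_for_script(script)  # two entries, not the whole API surface
```

A naive PascalCase regex over-matches in real code (constants, enum names), so a production version would resolve identifiers against the known class list, which this sketch approximates by intersecting with `API_DOCS`.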

Found: March 16, 2026 ID: 3796

[CLI Tool] Apideck CLI – An AI-agent interface with much lower context consumption than MCP

Found: March 16, 2026 ID: 3793

YishenTu/claudian

GitHub Trending

[Other] An Obsidian plugin that embeds Claude Code as an AI collaborator in your vault

Found: March 16, 2026 ID: 3791

[Other] An agent harness built with LangChain and LangGraph. Equipped with a planning tool, a filesystem backend, and the ability to spawn subagents, it is well suited to complex agentic tasks.

Found: March 16, 2026 ID: 3790

volcengine/OpenViking

GitHub Trending

[Database] OpenViking is an open-source context database designed specifically for AI agents (such as openclaw). OpenViking unifies the management of the context (memory, resources, and skills) that agents need through a file-system paradigm, enabling hierarchical context delivery and self-evolution.

Found: March 16, 2026 ID: 3789

thedotmack/claude-mem

GitHub Trending

[Other] A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.

Found: March 16, 2026 ID: 3788

[CLI Tool] Lazycut: A simple terminal video trimmer using FFmpeg

Found: March 16, 2026 ID: 3795

[API/SDK] Show HN: Signbee – An API that lets AI agents send documents for signature

Hi HN, I built Signbee while working on AI agents that handle contracting workflows. The agents could draft agreements, negotiate terms, and manage deals, but the moment a signature was needed, the workflow broke. It always ended with "please upload this to DocuSign," which meant human intervention, account setup, and manual uploads.

So I built a simple API. You POST markdown and Signbee generates the PDF, or you pass a URL to your own PDF if you already have one designed the way you want it. No templates, no editor. Either way, it verifies both parties via email OTP and produces a signed document.

curl -X POST https://signb.ee/api/v1/send \
  -H "Content-Type: application/json" \
  -d '{
    "markdown": "# NDA\n\nTerms...",
    "sender_name": "You",
    "sender_email": "you@company.com",
    "recipient_name": "Client",
    "recipient_email": "client@co.com"
  }'

Under the hood:

- Markdown → PDF generation, or bring your own PDF via URL
- Both parties verified via email OTP
- Timestamps and IP addresses recorded
- Final document hashed with SHA-256
- Certificate page appended with full audit trail

One interesting challenge: the certificate page itself is part of the document that gets hashed, so any modification, even to the certificate, invalidates the integrity check.

I also built an MCP server (npx -y signbee-mcp) so tools like Claude or Cursor can call it directly.

Curious to hear from people who've dealt with document signing systems or automated agent workflows: what would you want to automate?

https://signb.ee
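The integrity property described, hashing the document together with its appended certificate page so that tampering with either invalidates the check, is easy to demonstrate. This is a conceptual sketch using stdlib `hashlib`, not Signbee's code; the `seal` helper and the sample bytes are hypothetical.

```python
# Sketch: SHA-256 over document + certificate page. Editing either part
# changes the digest, so the recorded hash no longer verifies.
import hashlib

def seal(document: bytes, certificate: bytes) -> str:
    """Hash the document with its certificate page appended."""
    return hashlib.sha256(document + certificate).hexdigest()

doc = b"# NDA\n\nTerms..."
cert = b"Certificate: signed by both parties via email OTP"
recorded = seal(doc, cert)  # digest stored at signing time

assert seal(doc, cert) == recorded                 # untouched: verifies
assert seal(doc, cert + b" (edited)") != recorded  # certificate tampered: fails
```

One subtlety the post hints at: the certificate page can describe the audit trail but cannot embed its own final hash, since including the digest would change the bytes being hashed; the recorded digest has to live outside the hashed region.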

Found: March 16, 2026 ID: 3792

[Other] An experiment to use GitHub Actions as a control plane for a PaaS

Found: March 16, 2026 ID: 3784

[Other] Show HN: Lockstep – A data-oriented programming language

https://github.com/seanwevans/lockstep

I want to share my work-in-progress systems language with a v0.1.0 release of Lockstep. It is a data-oriented systems programming language designed for high-throughput, deterministic compute pipelines.

I built Lockstep to bridge the gap between the productivity of C and the execution efficiency of GPU compute shaders. Instead of traditional control flow, Lockstep enforces straight-line SIMD execution. You will not find any if, for, or while statements inside compute kernels; branching is entirely replaced by hardware-native masking and stream-splitting.

Memory is handled via a static arena provided by the Host. There is no malloc, no hidden threads, and no garbage collection, which guarantees predictable performance and eliminates race conditions by construction.

Under the hood, Lockstep targets LLVM IR directly to leverage industrial-grade optimization passes. It also generates a C-compatible header for easy integration with host applications written in C, C++, Rust, or Zig.

v0.1.0 includes a compiler with LLVM IR and C header emission, a CLI simulator for validating pipeline wiring and cardinality on small datasets, and an opt-in LSP server for real-time editor diagnostics, hover type info, and autocompletion.

You can check out the repository to see the syntax, and the roadmap outlines where the project is heading next, including parameterized SIMD widths and multi-stage pipeline composition.

I would love to hear feedback on the language semantics, the type system, and the overall architecture!
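The "branching replaced by masking" idea is worth unpacking: instead of an `if`, every lane computes both sides of the branch and a 0/1 mask selects the result, so all lanes run the same straight-line code. The sketch below is a plain-Python illustration of that concept, not Lockstep syntax; the kernel name and example values are made up.

```python
# Branchless select via masking. The scalar version would be:
#   y = x * 2 if x > 0 else 0
# The masked version computes both arms and blends them, as SIMD
# hardware does with predication/select instructions.

def masked_relu_double(xs: list[float]) -> list[float]:
    out = []
    for x in xs:                        # conceptually, one SIMD lane per element
        mask = 1.0 if x > 0 else 0.0    # hardware produces this mask natively
        out.append(mask * (x * 2) + (1.0 - mask) * 0.0)  # blend both arms
    return out
```

The cost model is the point: both arms always execute, so masking trades wasted arithmetic for the guarantee that no lane ever diverges, which is what makes the deterministic, straight-line execution the post describes possible.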

Found: March 16, 2026 ID: 3787

[Other] Show HN: Open-source playground to red-team AI agents with exploits published

We build runtime security for AI agents. The playground started as an internal tool that we used to test our own guardrails. But we kept finding the same types of vulnerabilities because we think about attacks a certain way. At some point you need people who don't think like you.

So we open-sourced it. Each challenge is a live agent with real tools and a published system prompt. Whenever a challenge is over, the full winning conversation transcript and guardrail logs get documented publicly.

Building the general-purpose agent itself was probably the most fun part. Getting it to reliably use tools, stay in character, and follow instructions while still being useful is harder than it sounds. That alone reminded us how early we all are in understanding and deploying these systems at scale.

The first challenge was to get an agent to call a tool it's been told to never call. Someone got through in around 60 seconds without ever asking for the secret directly (which taught us a lot).

The next challenge is focused on data exfiltration with harder defences: https://playground.fabraix.com

Found: March 15, 2026 ID: 3786

[Other] Show HN: Free OpenAI API Access with ChatGPT Account

Found: March 15, 2026 ID: 3785