🛠️ Hacker News Tools

Showing 401–420 of 2464 tools from Hacker News

Last Updated
April 21, 2026 at 08:00 AM

[Other] Building a Shell (Hacker News score: 110)

Found: March 17, 2026 ID: 3807

[CLI Tool] Show HN: Pgit – A Git-like CLI backed by PostgreSQL

Found: March 17, 2026 ID: 3821

[Other] Flash-KMeans: Fast and Memory-Efficient Exact K-Means

Found: March 17, 2026 ID: 3842

[CLI Tool] Show HN: Crust – A CLI framework for TypeScript and Bun

We've been building Crust (https://crustjs.com/), a TypeScript-first, Bun-native CLI framework with zero dependencies. It's been powering our core product internally for a while, and we're now open-sourcing it.

The problem we kept running into: existing CLI frameworks in the JS ecosystem are either minimal arg parsers where you wire everything yourself, or heavyweight frameworks with large dependency trees and Node-era assumptions. We wanted something in between.

What Crust does differently:

- Full type inference from definitions — args and flags are inferred automatically. No manual type annotations, no generics to wrangle. You define a flag as type: "string" and it flows through to your handler.
- Compile-time validation — catches flag alias collisions and variadic arg mistakes before your code runs, not at runtime.
- Zero runtime dependencies — @crustjs/core is ~3.6 kB gzipped (21 kB install). For comparison: yargs is 509 kB, oclif is 411 kB.
- Composable modules — core, plugins, prompts, styling, validation, and build tooling are all separate packages. Install only what you need.
- Plugin system — middleware-based with lifecycle hooks (preRun/postRun). Official plugins for help, version, and shell autocompletion.
- Built for Bun — no Node compatibility layers, no legacy baggage.

Quick example:

```ts
import { Crust } from "@crustjs/core";
import { helpPlugin, versionPlugin } from "@crustjs/plugins";

const main = new Crust("greet")
  .args([{ name: "name", type: "string", default: "world" }])
  .flags({ shout: { type: "boolean", short: "s" } })
  .use(helpPlugin())
  .use(versionPlugin("1.0.0"))
  .run(({ args, flags }) => {
    const msg = `Hello, ${args.name}!`;
    console.log(flags.shout ? msg.toUpperCase() : msg);
  });

await main.execute();
```

Scaffold a new project:

```sh
bun create crust my-cli
```

Site: https://crustjs.com
GitHub: https://github.com/chenxin-yan/crust

Happy to answer any questions about the design decisions or internals.

Found: March 17, 2026 ID: 3811

[Other] Video Encoding and Decoding with Vulkan Compute Shaders in FFmpeg

Found: March 17, 2026 ID: 3845

[Other] Leanstral: Open-source agent for trustworthy coding and formal proof engineering

Lean 4 paper (2021): https://dl.acm.org/doi/10.1007/978-3-030-79876-5_37

Found: March 16, 2026 ID: 3801

[Other] Show HN: Most GPU Upgrades Aren't Worth It, I Built a Calculator to Prove It

I run a small project called best-gpu.com, a site that ranks GPUs by price-to-performance.

While browsing PC building forums and Reddit, I kept seeing the same question: "What should I upgrade to from my current GPU?" Most answers are just lists of cards without showing the actual performance gain, so people often end up paying for upgrades that barely improve performance.

So I built a small tool: a GPU Upgrade Calculator. You enter your current GPU and it shows:

- estimated performance gain
- a value score based on price vs. performance
- a filtered list of upgrade options (brand, price, VRAM, etc.)

The goal is simply to help people avoid spending money on upgrades that aren't really worth it.

Curious to hear feedback from HN on the approach, data sources, or features that would make something like this more useful.

https://best-gpu.com/upgrade.php
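
The post doesn't specify the calculator's formulas, but a "value score based on price vs. performance" can be sketched as percentage gain per dollar. A minimal Python illustration; the benchmark scores, prices, and the scoring function itself are made-up assumptions, not the site's actual method:

```python
# Hypothetical sketch of an upgrade "value score": percentage performance
# gain per dollar spent. All numbers below are illustrative, not real data.
def upgrade_value(current_score: float, candidate_score: float, price: float):
    """Return (gain in percent, gain-per-dollar value score)."""
    gain_pct = (candidate_score - current_score) / current_score * 100.0
    value = gain_pct / price if price > 0 else 0.0
    return gain_pct, value

current = 100.0  # relative benchmark score of the GPU you already own
candidates = {"GPU A": (115.0, 400.0), "GPU B": (180.0, 550.0)}  # score, price

for name, (score, price) in candidates.items():
    gain, value = upgrade_value(current, score, price)
    print(f"{name}: +{gain:.0f}% for ${price:.0f} -> value {value:.3f} %/$")
```

Under this scoring, a card offering a small gain at a high price scores near zero, which is exactly the "not worth it" case the tool is meant to flag.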

Found: March 16, 2026 ID: 3804

[API/SDK] Show HN: Open-source, extract any brand's logos, colors, and assets from a URL

Hi everyone, I just open-sourced OpenBrand – extract any brand's logos, colors, and assets from just a URL. It's MIT-licensed, open source, and completely free. Try it out at openbrand.sh. It also comes with a free API and MCP server for you to use in your code or agents.

Why we built this: while building another product, we needed to pull in customers' brand images as custom backgrounds. It felt like a simple enough problem with no open-source solution, so we built one.

Found: March 16, 2026 ID: 3800

[Other] Meta's renewed commitment to jemalloc

https://github.com/jemalloc/jemalloc

Found: March 16, 2026 ID: 3798

[DevOps] Launch HN: Chamber (YC W26) – An AI Teammate for GPU Infrastructure

Hey HN, we're Jie Shen, Charles, Andreas, and Shaocheng. We built Chamber (https://usechamber.io), an AI agent that manages GPU infrastructure for you. You talk to it wherever your team already works and it handles things like provisioning clusters, diagnosing failed jobs, and managing workloads. Demo: https://www.youtube.com/watch?v=xdqh2C_hif4

We all worked on GPU infrastructure at Amazon. Between us we've spent years on this problem — monitoring GPU fleets, debugging failures at scale, building the tooling around it. After leaving we talked to a bunch of AI teams and kept hearing the same stuff. Platform engineers spend half their time just keeping things running: building dashboards, writing scheduling configs, answering "when will my job start?" all day. Researchers lose hours when a training run fails, because figuring out why means digging through Kubernetes events, node logs, and GPU metrics in totally separate tools. Pretty much everyone had stitched together Prometheus, Grafana, Kubernetes scheduling policies, and a bunch of homegrown scripts, and they were spending as much time maintaining all of it as actually using it.

The thing we kept noticing is that most of this work follows patterns. Triage the failure, correlate a few signals, figure out what to do about it. If you had a platform with structured access to the full state of a GPU environment, you could have an agent do that work for you.

So that's what we built. Chamber is a control plane that keeps a live model of your GPU fleet: nodes, workloads, team structure, cluster health. Every operation it supports is exposed as a tool the agent can call: inspecting node health, reading cluster topology, managing workload lifecycle, adjusting resource configs, provisioning infrastructure. These are structured operations with validation and rollback, not just raw shell commands. When we add new capabilities to the platform, they automatically become things the agent can do too.

We spent a lot of time on safety because we've seen what happens when infrastructure automation goes wrong. A wrong call can kill a multi-day training run or cascade across a cluster. So the agent has graduated autonomy. Routine stuff it handles on its own: diagnosing a failed job, resubmitting with corrected resources, cordoning a bad node. But anything that touches other teams' workloads or production jobs needs human approval first. Every action gets logged with what the agent saw, why it acted, and what it changed.

The platform underneath is really what makes the diagnosis work. When the agent investigates a failure, it queries GPU state, workload history, node health timelines, and cluster topology. That's the difference between "your job OOMed" and "your job OOMed because the batch size exceeded available VRAM on this node; here's a corrected config." Different root causes get different fixes.

One thing that surprised us, even coming from Amazon where we'd seen large GPU fleets: most teams we talk to can't even tell you how many GPUs are in use right now. The monitoring just doesn't exist. They're flying blind on their most expensive hardware.

We've launched with a few early customers and are onboarding new teams. We're still refining pricing and are currently evaluating models like per-GPU-under-management and tiered plans. We plan to publish transparent pricing once we've validated what works best for customers. In the meantime, we know "contact us" isn't ideal.

Would love to hear from anyone running GPU clusters. What's the most tedious part of your setup? What would you actually trust an agent to do? What's off limits? Looking forward to feedback!
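
Chamber's internals aren't public, so purely as an illustration of the "graduated autonomy" pattern the post describes (routine tools executed directly, risky ones queued for human approval, every action logged), here is a minimal Python sketch. All names and tool behaviors are hypothetical, not Chamber's API:

```python
# Hypothetical sketch of graduated autonomy: each agent tool carries an
# approval flag, and flagged actions are queued for a human instead of
# executing. Every call is appended to an audit log. Illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[..., str]
    requires_approval: bool = False  # e.g. touches other teams' workloads

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

class Agent:
    def __init__(self, tools: dict[str, Tool]):
        self.tools = tools
        self.pending: list[tuple[str, dict]] = []  # awaiting human approval
        self.log = AuditLog()

    def act(self, tool_name: str, **kwargs) -> str:
        tool = self.tools[tool_name]
        if tool.requires_approval:
            self.pending.append((tool_name, kwargs))
            self.log.entries.append((tool_name, kwargs, "queued for approval"))
            return "pending approval"
        result = tool.run(**kwargs)
        self.log.entries.append((tool_name, kwargs, result))
        return result

tools = {
    "cordon_node": Tool("cordon_node", lambda node: f"cordoned {node}"),
    "kill_job": Tool("kill_job", lambda job: f"killed {job}",
                     requires_approval=True),
}
agent = Agent(tools)
agent.act("cordon_node", node="gpu-17")  # routine: runs immediately
agent.act("kill_job", job="train-42")    # risky: queued, needs a human
```

The design point is that the risk tier lives on the tool definition, not in the agent's judgment, so a prompt-level mistake cannot skip the approval gate.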

Found: March 16, 2026 ID: 3794

[Other] Show HN: Claude Code skills that build complete Godot games

I've been working on this for about a year through four major rewrites. Godogen is a pipeline that takes a text prompt, designs the architecture, generates 2D/3D assets, writes the GDScript, and tests it visually. The output is a complete, playable Godot 4 project.

Getting LLMs to reliably generate functional games required solving three specific engineering bottlenecks:

1. Training data scarcity: LLMs barely know GDScript. It has ~850 classes and a Python-like syntax that will happily let a model hallucinate Python idioms that fail to compile. To fix this, I built a custom reference system: a hand-written language spec, full API docs converted from Godot's XML source, and a quirks database for engine behaviors you can't learn from docs alone. Because 850 classes blow up the context window, the agent lazy-loads only the specific APIs it needs at runtime.

2. Build-time vs. runtime state: Scenes are generated by headless scripts that build the node graph in memory and serialize it to .tscn files. This avoids the fragility of hand-editing Godot's serialization format. But it means certain engine features (like `@onready` or signal connections) aren't available at build time — they only exist when the game actually runs. Teaching the model which APIs are available at which phase — and that every node needs its owner set correctly or it silently vanishes on save — took careful prompting but paid off.

3. The evaluation loop: A coding agent is inherently biased toward its own output. To stop it from cheating, a separate Gemini Flash agent acts as visual QA. It sees only the rendered screenshots from the running engine — no code — and compares them against a generated reference image. It catches the visual bugs text analysis misses: z-fighting, floating objects, physics explosions, and grid-like placements that should be organic.

Architecturally, it runs as two Claude Code skills: an orchestrator that plans the pipeline, and a task executor that implements each piece in a `context: fork` window so mistakes and state don't accumulate.

Everything is open source: https://github.com/htdt/godogen

Demo video (real games, not cherry-picked screenshots): https://youtu.be/eUz19GROIpY

Blog post with the full story (all the wrong turns) coming soon. Happy to answer questions.
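
The post doesn't show how the lazy-loading of API docs works, so here is a minimal Python sketch of that idea under assumed conditions: one reference file per Godot class, loaded from disk and cached only when a generated script actually uses that class. The `docs/classes` layout and function names are my invention, not Godogen's:

```python
# Hypothetical sketch of lazy-loading per-class API docs, so the full
# ~850-class Godot reference never enters the context window at once.
# File layout and names are illustrative, not Godogen's actual scheme.
from pathlib import Path

DOCS_DIR = Path("docs/classes")  # assumed: one file per class, e.g. Node2D.md
_cache: dict[str, str] = {}

def load_reference(class_name: str) -> str:
    """Return the API doc for one class, reading from disk only once."""
    if class_name not in _cache:
        path = DOCS_DIR / f"{class_name}.md"
        _cache[class_name] = path.read_text() if path.exists() else ""
    return _cache[class_name]

def build_context(classes_used: set[str]) -> str:
    """Concatenate docs for only the classes the generated script touches."""
    return "\n\n".join(load_reference(c) for c in sorted(classes_used))
```

A script touching three classes then pulls in three small files instead of the whole reference, which is the context-window saving the post describes.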

Found: March 16, 2026 ID: 3796

[CLI Tool] Apideck CLI – An AI-agent interface with much lower context consumption than MCP

Found: March 16, 2026 ID: 3793

[Other] Gluon: Explicit Performance (Hacker News score: 17)

Found: March 16, 2026 ID: 3837

[CLI Tool] Lazycut: A simple terminal video trimmer using FFmpeg

Found: March 16, 2026 ID: 3795

[Other] Toward automated verification of unreviewed AI-generated code

Found: March 16, 2026 ID: 3809

[API/SDK] Show HN: Signbee – An API that lets AI agents send documents for signature

Hi HN, I built Signbee while working on AI agents that handle contracting workflows. The agents could draft agreements, negotiate terms, manage deals — but the moment a signature was needed, the workflow broke. It always ended with "please upload this to DocuSign" — which meant human intervention, account setup, and manual uploads.

So I built a simple API. You POST markdown and Signbee generates the PDF, or you pass a URL to your own PDF if you already have one designed the way you want it. No templates, no editor. Either way, it verifies both parties via email OTP and produces a signed document.

```sh
curl -X POST https://signb.ee/api/v1/send \
  -H "Content-Type: application/json" \
  -d '{
    "markdown": "# NDA\n\nTerms...",
    "sender_name": "You",
    "sender_email": "you@company.com",
    "recipient_name": "Client",
    "recipient_email": "client@co.com"
  }'
```

Under the hood:

- Markdown → PDF generation, or bring your own PDF via URL
- Both parties verified via email OTP
- Timestamps and IP addresses recorded
- Final document hashed with SHA-256
- Certificate page appended with full audit trail

One interesting challenge: the certificate page itself is part of the document that gets hashed, so any modification — even to the certificate — invalidates the integrity check.

I also built an MCP server (npx -y signbee-mcp) so tools like Claude or Cursor can call it directly.

Curious to hear from people who've dealt with document signing systems or automated agent workflows — what would you want to automate? https://signb.ee

Found: March 16, 2026 ID: 3792

[Other] An experiment to use GitHub Actions as a control plane for a PaaS

Found: March 16, 2026 ID: 3784

[Other] Show HN: Lockstep – A data-oriented programming language

https://github.com/seanwevans/lockstep

I want to share my work-in-progress systems language with a v0.1.0 release of Lockstep. It is a data-oriented systems programming language designed for high-throughput, deterministic compute pipelines.

I built Lockstep to bridge the gap between the productivity of C and the execution efficiency of GPU compute shaders. Instead of traditional control flow, Lockstep enforces straight-line SIMD execution. You will not find any if, for, or while statements inside compute kernels; branching is entirely replaced by hardware-native masking and stream-splitting.

Memory is handled via a static arena provided by the Host. There is no malloc, no hidden threads, and no garbage collection, which guarantees predictable performance and eliminates race conditions by construction.

Under the hood, Lockstep targets LLVM IR directly to leverage industrial-grade optimization passes. It also generates a C-compatible header for easy integration with host applications written in C, C++, Rust, or Zig.

v0.1.0 includes a compiler with LLVM IR and C header emission, a CLI simulator for validating pipeline wiring and cardinality on small datasets, and an opt-in LSP server for real-time editor diagnostics, hover type info, and autocompletion.

You can check out the repository to see the syntax, and the roadmap outlines where the project is heading next, including parameterized SIMD widths and multi-stage pipeline composition.

I would love to hear feedback on the language semantics, the type system, and the overall architecture!
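
Lockstep's own syntax isn't shown in the post, so as an illustration of the masking style it enforces, here is a plain-Python sketch of branch-free selection: compute a 0/1 mask once, then blend results arithmetically instead of taking an if/else per element. Function names are mine, not Lockstep's:

```python
# Sketch of SIMD-style masking: no per-element if/else. A comparison
# produces a 0/1 mask, and results are blended by multiplication, the way
# a hardware select instruction would. Plain-Python stand-in, illustrative.
def simd_relu(xs: list[float]) -> list[float]:
    mask = [int(x > 0.0) for x in xs]          # lane-wise compare -> 0/1 mask
    return [x * m for x, m in zip(xs, mask)]   # multiply-blend, no branch

def simd_select(mask: list[int], a: list[float], b: list[float]) -> list[float]:
    # Both "branches" a and b are fully computed; the mask picks per lane.
    return [m * x + (1 - m) * y for m, x, y in zip(mask, a, b)]
```

Note that both sides of the "branch" are always evaluated; the trade is wasted arithmetic for the deterministic, divergence-free execution the language is built around.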

Found: March 16, 2026 ID: 3787

[Other] Show HN: Open-source playground to red-team AI agents with exploits published

We build runtime security for AI agents. The playground started as an internal tool that we used to test our own guardrails. But we kept finding the same types of vulnerabilities because we think about attacks a certain way. At some point you need people who don't think like you.

So we open-sourced it. Each challenge is a live agent with real tools and a published system prompt. Whenever a challenge is over, the full winning conversation transcript and guardrail logs get documented publicly.

Building the general-purpose agent itself was probably the most fun part. Getting it to reliably use tools, stay in character, and follow instructions while still being useful is harder than it sounds. That alone reminded us how early we all are in understanding and deploying these systems at scale.

The first challenge was to get an agent to call a tool it's been told to never call. Someone got through in around 60 seconds without ever asking for the secret directly (which taught us a lot).

The next challenge is focused on data exfiltration with harder defences: https://playground.fabraix.com

Found: March 15, 2026 ID: 3786

[Other] Show HN: Free OpenAI API Access with ChatGPT Account

Found: March 15, 2026 ID: 3785
Page 21 of 124