🛠️ Hacker News Tools

Showing 141–160 of 1891 tools from Hacker News

Last Updated
March 05, 2026 at 08:11 PM

RCade: Building a Community Arcade Cabinet

Found: February 26, 2026 ID: 3546

[Other] Show HN: Rev-dep – 20x faster knip.dev alternative built in Go

Found: February 26, 2026 ID: 3445

[Other] Launch HN: Cardboard (YC W26) – Agentic video editor

Hey HN - we're Saksham and Ishan, and we're building Cardboard (https://www.usecardboard.com). It lets you go from raw footage to an edited video by describing what you want in natural language.

Try it out at https://demo.usecardboard.com (no login required). There's also a demo video at https://www.usecardboard.com/share/fUN2i9ft8B46.

People sit on mountains of raw assets - product walkthroughs, customer interviews, travel videos, screen recordings, changelogs, etc. - that could become testimonials, ads, vlogs, or launch videos. Instead they sit in cloud storage and on hard drives, because getting to a first cut takes hours: scrubbing through the raw footage manually, arranging clips in the correct sequence, syncing music, exporting, uploading to cloud storage to share, collecting feedback over WhatsApp/iMessage/Slack, and then redoing it all until everyone is happy.

We grew up together and have been friends for 15 years. Saksham creates content on socials with ~250K views/month and kept hitting the wall where editing took longer than creating. Ishan produced launch videos for HackerRank's all-hands demo days and spent most of his time on cuts and sequencing rather than storytelling. We both felt that while tools like Premiere Pro and DaVinci are powerful, they have a steep learning curve and involve a lot of manual labor.

So we built Cardboard. You tell it to "make a 60s recap from this raw footage", "cut this into a 20s ad", or "beat-sync this to the music I just added", and it proposes a first draft on the timeline that you can refine further.

We built a custom hardware-accelerated renderer on WebCodecs/WebGL2: there's no server-side rendering and no plugins - everything runs client-side in your browser. Video understanding tasks go through a series of cloud VLMs plus traditional ML models, and we use third-party foundation models for agent orchestration (with a model dropdown exposed to the end user).

We've shipped 13 releases since November (https://www.usecardboard.com/changelog). The editor handles multi-track timelines with keyframe animations, shot detection, beat sync via percussion detection, voiceover generation, voice cloning, background removal, multilingual captions that are spatially aware of subjects in frame, and Premiere Pro/DaVinci/FCP XML exports so you can move projects into your existing tools if you want.

Where we're headed next: real-time collaboration ("video git") to avoid inefficient feedback loops, and eventually a prediction engine that learns your editing patterns and suggests the next low-entropy actions - similar to how Cursor's tab completion works, but for timeline actions. We believe video creation tools today are stuck where developer tools were in the early 2000s: local-first, zero collaboration, really slow feedback loops.

Here are some videos we made with Cardboard:
- https://www.usecardboard.com/share/YYsstWeWE9KI
- https://www.usecardboard.com/share/nyT9oj93sm1e
- https://www.usecardboard.com/share/xK9mP2vR7nQ4

We'd love to hear your thoughts/feedback. We'll be in the comments all day :)

Found: February 26, 2026 ID: 3444

Smallest transformer that can add two 10-digit numbers

Found: February 26, 2026 ID: 3492

[IDE/Editor] Show HN: Browser-based .NET IDE with visual designer, NuGet packages, code share

Hi HN, I'm Giovanni, founder of Userware. We built XAML.io, a free browser-based IDE for C# and XAML that compiles and runs .NET projects entirely client-side via WebAssembly. No server-side build step.

The link above opens a sample project using Newtonsoft.Json. Click Run to compile and execute it in your browser. You can edit the code, add NuGet packages, and share your project via a URL.

What's new in v0.6:
- NuGet package support (any library compatible with Blazor WebAssembly)
- Code sharing via URL with GitHub-like forking and attribution
- XAML autocompletion, AI error fixing, split editor views

The visual designer is the differentiator: 100+ drag-and-drop controls for building UIs. But the NuGet and sharing features work even if you ignore the designer entirely and just write C# code.

XAML.io is currently in tech preview. It's built on OpenSilver (https://opensilver.net), a from-scratch reimplementation of a subset of the WPF API using modern .NET, WebAssembly, and the browser DOM. It's open source and has been in development for over 12 years (started as CSHTML5 in 2013, rebranded to OpenSilver in 2020).

Limitations: one project per solution, no C# IntelliSense yet (coming soon), no debugger yet, WPF compatibility improvements underway, desktop browsers recommended.

Full details and screenshots: https://blog.xaml.io/post/xaml-io-v0-6

Happy to answer questions about the architecture, the WebAssembly compilation pipeline, or anything else.

Found: February 26, 2026 ID: 3452

[CLI Tool] Show HN: Deff – side-by-side Git diff review in your terminal

deff is an interactive Rust TUI for reviewing git diffs side by side with syntax highlighting and added/deleted line tinting. It supports keyboard/mouse navigation, vim-style motions, in-diff search (/, n, N), per-file reviewed toggles, and both upstream-based and explicit --base/--head comparisons. It can also include uncommitted and untracked files (--include-uncommitted) so you can review your working tree before committing.

Would love to get some feedback.

Found: February 26, 2026 ID: 3446

Interview with Øyvind Kolås, GIMP developer (2017)

Found: February 26, 2026 ID: 3506

[Other] AirSnitch: Demystifying and breaking client isolation in Wi-Fi networks [pdf]

Found: February 26, 2026 ID: 3449

Use the Mikado Method to make safe changes in a complex codebase

Found: February 26, 2026 ID: 3535

[Build/Deploy] BuildKit: Docker's Hidden Gem That Can Build Almost Anything

Found: February 26, 2026 ID: 3447

[Other] Show HN: Mission Control – Open-source task management for AI agents

I've been delegating work to Claude Code for the past few months, and it's been genuinely transformative - but managing multiple agents doing different things became chaos. No tool existed for this workflow, so I built one.

*The Problem*

When you're working with AI agents (Claude Code, Cursor, Windsurf), you end up in a weird situation:
- You have tasks scattered across your head, Slack, email, and the CLI
- Agents need clear work items, context, and role-specific instructions
- You have no visibility into what agents are actually doing
- Failed tasks just... disappear. No retry, no notification
- Each agent context-switches constantly because you're hand-feeding them work

I was manually shepherding agents, copying task descriptions, restarting failed sessions, and losing track of what needed doing next. It felt like hiring expensive contractors but managing them like a disorganized chaos experiment.

*The Solution*

Mission Control is a task management app purpose-built for delegating work to AI agents. It has the expected stuff (Eisenhower matrix, kanban board, goal hierarchy) but is built from the assumption that your collaborators are Claude, not humans.

The *killer feature is the autonomous daemon*. It runs in the background, polls your task queue, spawns Claude Code sessions automatically, handles retries, manages concurrency, and respects your cron-scheduled work. One click and your entire work queue activates.

*The Architecture*

- *Local-first*: Everything lives in JSON files. No database, no cloud dependency, no vendor lock-in.
- *Token-optimized API*: The task/decision payloads are ~50 tokens vs ~5,400 unfiltered. This matters when you're spawning agents repeatedly.
- *Rock-solid concurrency*: Zod validation + async-mutex locking prevents corruption under concurrent writes.
- *193 automated tests*: This thing has to be reliable - it's doing unattended work.

The app is Next.js 15 with 5 built-in agent roles (researcher, developer, marketer, business-analyst, plus you). You define reusable skills as markdown that gets injected into agent prompts. Agents report back through an inbox + decisions queue.

*Why Release This?*

A few people have asked for access, and I think it's genuinely useful for anyone delegating to AI. It's MIT licensed, open source, and actively maintained.

*What's Next*

- Human collaboration (sharing tasks with real team members)
- Integrations with GitHub issues and email inboxes
- Better observability dashboard for daemon execution
- Custom agent templates (currently hardcoded roles)

If you're doing something similar - delegating serious work to AI - check it out and let me know what's broken.

GitHub: https://github.com/MeisnerDan/mission-control
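The "local-first JSON + mutex" architecture is easy to picture in code. Mission Control itself is TypeScript (Zod validation, async-mutex); the following Python sketch is purely illustrative of the pattern, with `threading.Lock` standing in for async-mutex, a hand-rolled key check standing in for Zod, and all names (`TaskStore`, etc.) hypothetical:

```python
import json
import os
import tempfile
import threading

# Illustrative sketch of a local-first, lock-guarded JSON task store.
# Not Mission Control's actual code; it demonstrates the pattern only.

class TaskStore:
    def __init__(self, path: str):
        self.path = path
        self.lock = threading.Lock()  # serializes writers, preventing corruption
        if not os.path.exists(path):
            self._write([])

    def _read(self) -> list:
        with open(self.path) as f:
            return json.load(f)

    def _write(self, tasks: list) -> None:
        # Write to a temp file and atomically replace, so a crash
        # mid-write never leaves a half-written JSON file behind.
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(tasks, f, indent=2)
        os.replace(tmp, self.path)

    def add(self, task: dict) -> None:
        # Minimal schema check, standing in for Zod validation.
        if not {"id", "title", "status"} <= task.keys():
            raise ValueError("task needs id, title, status")
        with self.lock:
            tasks = self._read()
            tasks.append(task)
            self._write(tasks)

    def pending(self) -> list:
        with self.lock:
            return [t for t in self._read() if t["status"] == "pending"]

store = TaskStore(os.path.join(tempfile.mkdtemp(), "tasks.json"))
store.add({"id": 1, "title": "write release notes", "status": "pending"})
store.add({"id": 2, "title": "triage inbox", "status": "done"})
```

The atomic-replace step is what makes "no database" survivable: readers only ever see a complete JSON document, even if the daemon dies mid-write.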

Found: February 26, 2026 ID: 3448

[Other] Show HN: Better Hub – A better GitHub experience

Hey HN,

I'm Bereket, founder of Better Auth. Our team spends a huge amount of time on GitHub every day. Like anyone who's spent enough time there, I've always wished for a much better GitHub experience.

I've asked a lot of people to do something about it, but it seems like no one is really tackling GitHub directly. A couple of weeks ago, I saw a tweet from Mitchell (HashiCorp) complaining about the repo main page. That became the trigger: I decided to start hacking on a prototype to see how far I could push an alternative interface using GitHub's APIs. Within a week, I genuinely started using it as my default, and so did the rest of our team. After fixing a few rough edges, I decided to put it out there.

A few things we're trying to achieve:

- UI/UX rethink: a redesigned repo home, PR review flow, and overview pages focused on signal over noise. Faster navigation and clearer structure.
- Keyboard-first workflow: ⌘K-driven command center, ⌘/ for global search, ⌘I opens "Ghost," an AI assistant, and more.
- Better AI integration: context-aware AI that understands the repo, the PR you're viewing, and the diff you're looking at.
- New concepts: prompt requests, self-healing CI, auto-merge with automatic conflict resolution, etc.

It's a simple Next.js server talking to the GitHub API, with heavy caching and local state management. We're considering optional git hosting (in collaboration with teams building alternative backends), but for now the experiment is: how much can we improve without replacing GitHub?

This is ambitious and very early. The goal is to explore what a more modern code collaboration experience could look like, and to make it something we can all collaborate on. I'd love your feedback on what you think should be improved about GitHub overall.

Found: February 26, 2026 ID: 3442

[Other] From Noise to Image – interactive guide to diffusion

Found: February 26, 2026 ID: 3479

[DevOps] Show HN: OpenSwarm – Multi-Agent Claude CLI Orchestrator for Linear/GitHub

I built OpenSwarm because I wanted an autonomous "AI dev team" that can actually plug into my real workflow instead of running toy tasks. OpenSwarm orchestrates multiple Claude Code CLI instances as agents to work on real Linear issues. It:

- pulls issues from Linear and runs a Worker/Reviewer/Test/Documenter pipeline
- uses LanceDB + multilingual-e5 embeddings for long-term memory and context reuse
- builds a simple code knowledge graph for impact analysis
- exposes everything through a Discord bot (status, dispatch, scheduling, logs)
- can auto-iterate on existing PRs and monitor long-running jobs

Right now it's powering my own solo dev workflow (trading infra, LLM tools, other projects). It's still early, so there are rough edges and a lot of TODOs around safety, scaling, and better task decomposition.

I'd love feedback on:

- what feels missing for this to be useful to other teams
- failure modes you'd be worried about in autonomous code agents
- ideas for better memory/knowledge-graph use in real-world repos

Repo: https://github.com/Intrect-io/OpenSwarm

Happy to answer questions and hear brutal feedback.

Found: February 26, 2026 ID: 3439

[Other] Show HN: ZSE – Open-source LLM inference engine with 3.9s cold starts

I've been building ZSE (Z Server Engine) for the past few weeks - an open-source LLM inference engine focused on two things nobody has fully solved together: memory efficiency and fast cold starts.

The problem I was trying to solve: running a 32B model normally requires ~64 GB VRAM. Most developers don't have that. And even when quantization helps with memory, cold starts with bitsandbytes NF4 take 2+ minutes on first load and 45–120 seconds on warm restarts - which kills serverless and autoscaling use cases.

What ZSE does differently:

- Fits 32B in 19.3 GB VRAM (70% reduction vs FP16) - runs on a single A100-40GB
- Fits 7B in 5.2 GB VRAM (63% reduction) - runs on consumer GPUs
- Native .zse pre-quantized format with memory-mapped weights: 3.9s cold start for 7B, 21.4s for 32B - vs 45s and 120s with bitsandbytes, ~30s for vLLM
- All benchmarks verified on Modal A100-80GB (Feb 2026)

It ships with:

- OpenAI-compatible API server (drop-in replacement)
- Interactive CLI (zse serve, zse chat, zse convert, zse hardware)
- Web dashboard with real-time GPU monitoring
- Continuous batching (3.45× throughput)
- GGUF support via llama.cpp
- CPU fallback - works without a GPU
- Rate limiting, audit logging, API key auth

Install:

    pip install zllm-zse
    zse serve Qwen/Qwen2.5-7B-Instruct

For fast cold starts (one-time conversion):

    zse convert Qwen/Qwen2.5-Coder-7B-Instruct -o qwen-7b.zse
    zse serve qwen-7b.zse  # 3.9s every time

The cold start improvement comes from the .zse format storing pre-quantized weights as memory-mapped safetensors - no quantization step at load time, no weight conversion, just mmap + GPU transfer. On NVMe SSDs this gets under 4 seconds for 7B. On spinning HDDs it'll be slower.

All code is real - no mock implementations. Built at Zyora Labs. Apache 2.0.

Happy to answer questions about the quantization approach, the .zse format design, or the memory efficiency techniques.
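The mmap trick behind those cold-start numbers is easy to demonstrate in isolation. The sketch below is illustrative only - it uses `numpy.memmap` on a plain float16 blob, not the actual .zse/safetensors machinery - but it shows the difference in loading strategy: an eager load copies every byte before returning, while a memory-mapped load returns almost immediately and lets the OS page data in on first access.

```python
import os
import tempfile
import numpy as np

# Illustrative only: plain float16 blob standing in for pre-quantized
# weights. The real .zse format (layout, metadata) is not shown here.

def save_weights(path: str, n: int) -> None:
    """Write a flat float16 weight blob to disk."""
    rng = np.random.default_rng(0)
    rng.standard_normal(n).astype(np.float16).tofile(path)

def load_eager(path: str) -> np.ndarray:
    """Eager load: reads and copies the whole file before returning."""
    return np.fromfile(path, dtype=np.float16)

def load_mmap(path: str) -> np.ndarray:
    """Memory-mapped load: no upfront copy; pages fault in on access."""
    return np.memmap(path, dtype=np.float16, mode="r")

path = os.path.join(tempfile.mkdtemp(), "weights.bin")
save_weights(path, 1_000_000)

eager = load_eager(path)
lazy = load_mmap(path)

# Both views expose identical weights; only the loading strategy differs.
assert eager.shape == lazy.shape
assert np.array_equal(eager[:1000], np.asarray(lazy[:1000]))
```

This is also why the post notes disk speed matters: with mmap the cost moves from load time to first-touch time, so the pages still have to come off NVMe or spinning rust eventually.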

Found: February 26, 2026 ID: 3436

[CLI Tool] Making MCP cheaper via CLI

Hacker News (score: 88)

Found: February 25, 2026 ID: 3435

[Other] PA Bench: Evaluating web agents on real-world personal assistant workflows

We're the team at Vibrant Labs (W24). We've been building environments for browser agents and quickly realized that existing benchmarks in this space didn't capture the primary failure modes we were seeing in production (which scaled up as the number of applications and the horizon length increased).

We built PA Bench (Personal Assistant Benchmark) to evaluate frontier computer/web-use models on their ability to handle multi-step workflows across simulated clones of Gmail and Calendar.

*What's next:*

We're currently scaling the dataset to 3+ tabs and building more high-fidelity simulations for common enterprise workflows. We'd love to hear feedback on the benchmark and notes about what was/wasn't surprising about the results.

Blog post: https://vibrantlabs.com/blog/pa-bench

Found: February 25, 2026 ID: 3437

[Other] Show HN: I ported Tree-sitter to Go

This started as a hard requirement for my TUI-based editor application; it ended up going in a few different directions.

A suite of tools that help with semantic code entities: https://github.com/odvcencio/gts-suite

A next-gen version control system called Got: https://github.com/odvcencio/got

I think this has some pretty big potential! There are many classes of application (particularly legacy architecture) that can benefit from this kind of analysis tooling. My next post will be about composing all of these together in an exciting project I call GotHub. Thanks!

Found: February 25, 2026 ID: 3433

[Other] Show HN: I ported Manim to TypeScript (run 3b1b math animations in the browser)

Hi HN, I'm Narek. I built Manim-Web, a TypeScript/JavaScript port of 3Blue1Brown's popular Manim math animation engine.

The problem: like many here, I love Manim's visual style. But setting it up locally is notoriously painful - it requires Python, FFmpeg, Cairo, and a full LaTeX distribution. That creates a massive barrier to entry, especially for students or people who just want to quickly visualize a concept.

The solution: I wanted to make it zero-setup, so I ported the engine to TypeScript. Manim-Web runs entirely client-side in the browser. No Python, no servers, no install. It runs animations in real time at 60fps.

How it works underneath:
- Rendering: uses the Canvas API / WebGL (via Three.js for 3D scenes).
- LaTeX: rendered and animated via MathJax/KaTeX (no LaTeX install needed!).
- API: I kept the API almost identical to the Python version (e.g., scene.play(new Transform(square, circle))), meaning existing Manim knowledge transfers over directly.
- Reactivity: updaters and ValueTrackers follow the exact same reactive pattern as the Python original.

Because it's web-native, the animations are now inherently interactive (objects can be draggable/clickable) and can be embedded directly into React/Vue apps, interactive textbooks, or blogs. I also included a py2ts converter to help migrate existing scripts.

Live demo: https://maloyan.github.io/manim-web/examples
GitHub: https://github.com/maloyan/manim-web

It's open source (MIT). I'm still actively building out feature parity with the Python version, but core animations, geometry, plotting, and 3D orbiting are working great. I would love to hear your feedback, and I'll be hanging around to answer any technical questions about rendering math in the browser!

Found: February 25, 2026 ID: 3464

[Other] Time-Travel Debugging: Replaying Production Bugs Locally

Found: February 25, 2026 ID: 3466
Page 8 of 95