🛠️ All DevTools

Showing 1–20 of 3448 tools

Last Updated
February 26, 2026 at 12:01 PM

[Other] Show HN: Rev-dep – 20x faster knip.dev alternative built in Go

Found: February 26, 2026 ID: 3445

[Other] Launch HN: Cardboard (YC W26) – Agentic video editor

Hey HN - we're Saksham and Ishan, and we're building Cardboard (https://www.usecardboard.com). It lets you go from raw footage to an edited video by describing what you want in natural language.

Try it out at https://demo.usecardboard.com (no login required). There's also a demo video at https://www.usecardboard.com/share/fUN2i9ft8B46.

People sit on mountains of raw assets - product walkthroughs, customer interviews, travel videos, screen recordings, changelogs, etc. - that could become testimonials, ads, vlogs, or launch videos. Instead, they sit in cloud storage and on hard drives, because getting to a first cut takes hours: scrubbing through raw footage manually, arranging clips in the correct sequence, syncing music, exporting, uploading to cloud storage to share, collecting feedback on WhatsApp/iMessage/Slack, and then redoing it all until everyone is happy.

We grew up together and have been friends for 15 years. Saksham creates content on socials with ~250K views/month and kept hitting the wall where editing took longer than creating. Ishan produced launch videos for HackerRank's all-hands demo days and spent most of his time on cuts and sequencing rather than storytelling. We both felt that while tools like Premiere Pro and DaVinci are powerful, they have a steep learning curve and involve a lot of manual labor.

So we built Cardboard. You tell it to "make a 60s recap from this raw footage", "cut this into a 20s ad", or "beat-sync this to the music I just added", and it proposes a first draft on the timeline that you can refine further.

We built a custom hardware-accelerated renderer on WebCodecs/WebGL2. There's no server-side rendering and no plugins; everything runs client-side in your browser. Video understanding tasks go through a series of cloud VLMs plus traditional ML models, and we use third-party foundation models for agent orchestration (with a model dropdown exposed to the end user).

We've shipped 13 releases since November (https://www.usecardboard.com/changelog). The editor handles multi-track timelines with keyframe animations, shot detection, beat sync via percussion detection, voiceover generation, voice cloning, background removal, multilingual captions that are spatially aware of subjects in frame, and Premiere Pro/DaVinci/FCP XML exports so you can move projects into your existing tools if you want.

Where we're headed next: real-time collaboration ("git for video") to avoid inefficient feedback loops, and eventually a prediction engine that learns your editing patterns and suggests the next low-entropy actions - similar to how Cursor's tab completion works, but for timeline actions. We believe video creation tools today are stuck where developer tools were in the early 2000s: local-first, zero collaboration, really slow feedback loops.

Here are some videos we made with Cardboard:
- https://www.usecardboard.com/share/YYsstWeWE9KI
- https://www.usecardboard.com/share/nyT9oj93sm1e
- https://www.usecardboard.com/share/xK9mP2vR7nQ4

We would love to hear your thoughts and feedback. We'll be in the comments all day :)
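The post describes beat sync only as "percussion detection". As a rough illustration of the general idea (not Cardboard's implementation, which isn't described), a naive energy-based onset detector can be sketched in a few lines: frame the signal, compute per-frame energy, and flag frames whose energy spikes above a threshold.

```python
import numpy as np

# Toy energy-based onset detection. Everything here (sample rate, synthetic
# "drum hits", threshold rule) is invented for illustration only.
sr = 1000                              # toy sample rate (Hz)
t = np.arange(0, 2.0, 1 / sr)
signal = 0.05 * np.random.default_rng(1).normal(size=t.size)
for beat in (0.25, 0.75, 1.25, 1.75):  # inject four percussive hits
    idx = int(beat * sr)
    signal[idx:idx + 30] += np.hanning(30)

frame = 50                             # 50 ms analysis frames
energy = np.array([np.sum(signal[i:i + frame] ** 2)
                   for i in range(0, signal.size - frame, frame)])
threshold = energy.mean() + 2 * energy.std()
onsets = np.where(energy > threshold)[0] * frame / sr
print(onsets)                          # recovers the injected beat times
```

A real percussion detector would work on spectral flux or a learned model rather than raw frame energy, but the output is the same kind of thing: a list of timestamps that clip cuts can be snapped to.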

Found: February 26, 2026 ID: 3444

[CLI Tool] Show HN: Deff – side-by-side Git diff review in your terminal

deff is an interactive Rust TUI for reviewing git diffs side-by-side with syntax highlighting and added/deleted line tinting. It supports keyboard and mouse navigation, vim-style motions, in-diff search (/, n, N), per-file reviewed toggles, and both upstream-based and explicit --base/--head comparisons. It can also include uncommitted and untracked files (--include-uncommitted), so you can review your working tree before committing.

Would love to get some feedback.

Found: February 26, 2026 ID: 3446

[Build/Deploy] BuildKit: Docker's Hidden Gem That Can Build Almost Anything

Found: February 26, 2026 ID: 3447

[Other] Show HN: Mission Control – Open-source task management for AI agents

I've been delegating work to Claude Code for the past few months, and it's been genuinely transformative - but managing multiple agents doing different things became chaos. No tool existed for this workflow, so I built one.

The Problem

When you're working with AI agents (Claude Code, Cursor, Windsurf), you end up in a weird situation:
- You have tasks scattered across your head, Slack, email, and the CLI
- Agents need clear work items, context, and role-specific instructions
- You have no visibility into what agents are actually doing
- Failed tasks just... disappear. No retry, no notification
- Each agent context-switches constantly because you're hand-feeding them work

I was manually shepherding agents, copying task descriptions, restarting failed sessions, and losing track of what needed doing next. It felt like hiring expensive contractors but managing them like a disorganized chaos experiment.

The Solution

Mission Control is a task management app purpose-built for delegating work to AI agents. It has the expected pieces (Eisenhower matrix, kanban board, goal hierarchy), but built on the assumption that your collaborators are Claude, not humans.

The killer feature is the autonomous daemon. It runs in the background, polls your task queue, spawns Claude Code sessions automatically, handles retries, manages concurrency, and respects your cron-scheduled work. One click and your entire work queue activates.

The Architecture

- Local-first: everything lives in JSON files. No database, no cloud dependency, no vendor lock-in.
- Token-optimized API: task/decision payloads are ~50 tokens vs ~5,400 unfiltered. That matters when you're spawning agents repeatedly.
- Rock-solid concurrency: Zod validation plus async-mutex locking prevents corruption under concurrent writes.
- 193 automated tests: this thing has to be reliable - it's doing unattended work.

The app is Next.js 15 with 5 built-in agent roles (researcher, developer, marketer, business analyst, plus you). You define reusable skills as markdown that gets injected into agent prompts. Agents report back through an inbox and a decisions queue.

Why Release This?

A few people have asked for access, and I think it's genuinely useful for anyone delegating to AI. It's MIT licensed, open source, and actively maintained.

What's Next

- Human collaboration (sharing tasks with real team members)
- Integrations with GitHub issues and email inboxes
- Better observability dashboard for daemon execution
- Custom agent templates (currently hardcoded roles)

If you're doing something similar - delegating serious work to AI - check it out and let me know what's broken.

GitHub: https://github.com/MeisnerDan/mission-control
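The local-first storage design above (plain JSON files, schema validation, mutex-guarded writes) is easy to sketch outside TypeScript too. Here is a minimal Python analogue with a hypothetical `tasks.json` layout - the real app uses Zod and async-mutex, so treat this as the idea, not the implementation:

```python
import json
import os
import tempfile
import threading

# Python analogue of a local-first JSON task store: a mutex serializes
# writers, and an atomic rename prevents torn files on crash. The task
# schema below is invented for illustration.
_lock = threading.Lock()

def _validate(task: dict) -> dict:
    # Stand-in for schema validation (Zod's role in the real app).
    assert isinstance(task.get("id"), str) and task["id"]
    assert task.get("status") in {"queued", "running", "done", "failed"}
    return task

def add_task(path: str, task: dict) -> None:
    with _lock:                           # one writer at a time
        tasks = []
        if os.path.exists(path):
            with open(path) as f:
                tasks = json.load(f)
        tasks.append(_validate(task))
        # Write to a temp file, then atomically replace the original, so
        # a crash mid-write never leaves a half-written store behind.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(tasks, f, indent=2)
        os.replace(tmp, path)

add_task("tasks.json", {"id": "t1", "status": "queued"})
```

The combination of a process-level lock and `os.replace` covers concurrent writers within one process; coordinating multiple independent processes would additionally need file locking.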

Found: February 26, 2026 ID: 3448

ruvnet/claude-flow

GitHub Trending

[DevOps] 🌊 The leading agent orchestration platform for Claude. Deploy intelligent multi-agent swarms, coordinate autonomous workflows, and build conversational AI systems. Features enterprise-grade architecture, distributed swarm intelligence, RAG integration, and native Claude Code / Codex integration.

Found: February 26, 2026 ID: 3441

farion1231/cc-switch

GitHub Trending

[Other] A cross-platform desktop All-in-One assistant tool for Claude Code, Codex, OpenCode & Gemini CLI.

Found: February 26, 2026 ID: 3440

[Other] Show HN: Better Hub – A better GitHub experience

Hey HN,

I'm Bereket, founder of Better Auth. Our team spends a huge amount of time on GitHub every day. Like anyone who's spent enough time there, I've always wished for a much better GitHub experience. I've asked a lot of people to do something about it, but it seems like no one is really tackling GitHub directly.

A couple of weeks ago, I saw a tweet from Mitchell (HashiCorp) complaining about the repo main page. That became the trigger. I decided to start hacking on a prototype to see how far I could push an alternative interface using GitHub's APIs. Within a week, I genuinely started using it as my default, same with the rest of our team. After fixing a few rough edges, I decided to put it out there.

A few things we're trying to achieve:

- UI/UX rethink: a redesigned repo home, PR review flow, and overview pages focused on signal over noise. Faster navigation and clearer structure.
- Keyboard-first workflow: ⌘K-driven command center, ⌘/ for global search, ⌘I opens "Ghost," an AI assistant, and more.
- Better AI integration: context-aware AI that understands the repo, the PR you're viewing, and the diff you're looking at.
- New concepts: prompt requests, self-healing CI, auto-merge with automatic conflict resolution, etc.

It's a simple Next.js server talking to the GitHub API, with heavy caching and local state management. We're considering optional git hosting (in collaboration with teams building alternative backends), but for now the experiment is: how much can we improve without replacing GitHub?

This is ambitious and very early. The goal is to explore what a more modern code collaboration experience could look like, and to make it something we can all collaborate on. I'd love your feedback on what you think should be improved about GitHub overall.

Found: February 26, 2026 ID: 3442

[DevOps] Show HN: OpenSwarm – Multi-Agent Claude CLI Orchestrator for Linear/GitHub

I built OpenSwarm because I wanted an autonomous "AI dev team" that can actually plug into my real workflow instead of running toy tasks. OpenSwarm orchestrates multiple Claude Code CLI instances as agents to work on real Linear issues. It:

- pulls issues from Linear and runs a Worker/Reviewer/Test/Documenter pipeline
- uses LanceDB + multilingual-e5 embeddings for long-term memory and context reuse
- builds a simple code knowledge graph for impact analysis
- exposes everything through a Discord bot (status, dispatch, scheduling, logs)
- can auto-iterate on existing PRs and monitor long-running jobs

Right now it's powering my own solo dev workflow (trading infra, LLM tools, other projects). It's still early, so there are rough edges and a lot of TODOs around safety, scaling, and better task decomposition.

I'd love feedback on:

- what feels missing for this to be useful to other teams
- failure modes you'd be worried about in autonomous code agents
- ideas for better memory/knowledge graph use in real-world repos

Repo: https://github.com/Intrect-io/OpenSwarm

Happy to answer questions and hear brutal feedback.

Found: February 26, 2026 ID: 3439

[Other] Show HN: ZSE – Open-source LLM inference engine with 3.9s cold starts

I've been building ZSE (Z Server Engine) for the past few weeks - an open-source LLM inference engine focused on two things nobody has fully solved together: memory efficiency and fast cold starts.

The problem I was trying to solve: running a 32B model normally requires ~64 GB VRAM. Most developers don't have that. And even when quantization helps with memory, cold starts with bitsandbytes NF4 take 2+ minutes on first load and 45–120 seconds on warm restarts, which kills serverless and autoscaling use cases.

What ZSE does differently:

- Fits 32B in 19.3 GB VRAM (70% reduction vs FP16) - runs on a single A100-40GB
- Fits 7B in 5.2 GB VRAM (63% reduction) - runs on consumer GPUs
- Native .zse pre-quantized format with memory-mapped weights: 3.9s cold start for 7B, 21.4s for 32B - vs 45s and 120s with bitsandbytes, ~30s for vLLM
- All benchmarks verified on Modal A100-80GB (Feb 2026)

It ships with:

- OpenAI-compatible API server (drop-in replacement)
- Interactive CLI (zse serve, zse chat, zse convert, zse hardware)
- Web dashboard with real-time GPU monitoring
- Continuous batching (3.45× throughput)
- GGUF support via llama.cpp
- CPU fallback - works without a GPU
- Rate limiting, audit logging, API key auth

Install:

    pip install zllm-zse
    zse serve Qwen/Qwen2.5-7B-Instruct

For fast cold starts (one-time conversion):

    zse convert Qwen/Qwen2.5-Coder-7B-Instruct -o qwen-7b.zse
    zse serve qwen-7b.zse   # 3.9s every time

The cold start improvement comes from the .zse format storing pre-quantized weights as memory-mapped safetensors: no quantization step at load time, no weight conversion, just mmap + GPU transfer. On NVMe SSDs this gets under 4 seconds for 7B. On spinning HDDs it'll be slower.

All code is real - no mock implementations. Built at Zyora Labs. Apache 2.0.

Happy to answer questions about the quantization approach, the .zse format design, or the memory efficiency techniques.
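The mmap trick is easy to demonstrate in miniature. The sketch below is not the .zse format (its layout isn't documented in the post); it just shows the general technique: store weights on disk in their final serving layout, so "loading" is a page mapping rather than a read-and-convert pass.

```python
import numpy as np

# Illustration only: file name, dtype, and shape are invented. A
# hypothetical pre-quantized uint8 tensor is written once in its
# final in-memory layout...
shape, dtype = (4096, 4096), np.uint8
weights = np.random.randint(0, 256, size=shape, dtype=dtype)
weights.tofile("layer0.bin")

# ...and at "cold start" it is memory-mapped instead of parsed.
# Pages are faulted in lazily, so startup cost stays near zero until
# a tensor is actually touched (e.g. by a host-to-GPU copy).
mapped = np.memmap("layer0.bin", dtype=dtype, mode="r", shape=shape)
assert mapped.shape == shape
print(mapped[0, :4])   # bytes served straight from the page cache
```

The same property is why pre-quantizing matters: if the on-disk format still needed a dequantize or repack step, mmap alone wouldn't help, because every byte would have to be transformed before use.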

Found: February 26, 2026 ID: 3436

Making MCP cheaper via CLI

Hacker News (score: 88)

[CLI Tool] Making MCP cheaper via CLI

Found: February 25, 2026 ID: 3435

[Other] PA Bench: Evaluating web agents on real-world personal assistant workflows

We're the team at Vibrant Labs (W24). We've been building envs for browser agents and quickly realized that existing benchmarks in this space didn't capture the primary failure modes we were seeing in production (which grew worse as the number of applications and the horizon length increased).

We built PA Bench (Personal Assistant Benchmark) to evaluate frontier computer/web-use models on their ability to handle multi-step workflows across simulated clones of Gmail and Calendar.

What's next: we're currently scaling the dataset to 3+ tabs and building more high-fidelity simulations for common enterprise workflows. We'd love to hear feedback on the benchmark and notes about what was/wasn't surprising about the results.

Blog post: https://vibrantlabs.com/blog/pa-bench

Found: February 25, 2026 ID: 3437

[Other] Show HN: I ported Tree-sitter to Go

This started as a hard requirement for my TUI-based editor application; it ended up going in a few different directions.

A suite of tools that help with semantic code entities: https://github.com/odvcencio/gts-suite

A next-gen version control system called Got: https://github.com/odvcencio/got

I think this has some pretty big potential! There are many classes of application (particularly legacy architecture) that can benefit from these kinds of analysis tooling. My next post will be about composing all of these together in an exciting project I call GotHub. Thanks!

Found: February 25, 2026 ID: 3433

[Other] Show HN: Sgai – Goal-driven multi-agent software dev (GOAL.md → working code)

Hey HN,

We built Sgai to experiment with a different model of AI-assisted development. Instead of prompting step-by-step, you define an outcome in GOAL.md (what should be built, not how), and Sgai runs a coordinated set of AI agents to execute it.

- It decomposes the goal into a DAG of roles (developer → reviewer → safety analyst, etc.)
- It asks clarifying questions when needed
- It writes code, runs tests, and iterates
- Completion gates (e.g. make test) determine when it's actually done

Everything runs locally in your repo. There's a web dashboard showing real-time execution of the agent graph. Nothing auto-pushes to GitHub.

We've used it internally for prototyping small apps and internal tooling. It's still early and rough in places, but functional enough to share.

Demo (4 min): https://youtu.be/NYmjhwLUg8Q
GitHub: https://github.com/sandgardenhq/sgai

Open source (Go). Works with Anthropic, OpenAI, or local models via opencode.

Curious what people think about DAG-based multi-agent workflows for coding. Has anyone here experimented with similar approaches?
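For readers unfamiliar with running roles as a DAG: the scheduling idea fits in a few lines of Python's stdlib `graphlib`. The edges below are an invented example wiring of the role names from the post, not Sgai's actual graph (and Sgai itself is written in Go):

```python
from graphlib import TopologicalSorter

# Hypothetical role DAG: each key's value is the set of roles that must
# finish before it may run. "done" stands in for a completion gate.
graph = {
    "reviewer": {"developer"},
    "safety-analyst": {"developer"},
    "done": {"reviewer", "safety-analyst"},
}

# static_order yields roles in dependency order; a real orchestrator
# could instead use get_ready()/done() to run independent roles in
# parallel (reviewer and safety-analyst here).
order = list(TopologicalSorter(graph).static_order())
print(order)   # "developer" runs first, "done" last
```

The appeal of the DAG formulation is that parallelism and completion gates fall out for free: any two roles with no path between them can run concurrently, and a gate is just a sink node.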

Found: February 25, 2026 ID: 3434

[Other] Show HN: Django Control Room – All Your Tools Inside the Django Admin

Over the past year I've been building a set of operational panels for Django:

- Redis inspection
- cache visibility
- Celery task introspection
- URL discovery and testing

All of these tools have been built inside the Django admin. Instead of jumping between tools like Flower, redis-cli, Swagger, or external services, I wanted something that sits where I'm already working. I've grouped these under a single umbrella: Django Control Room.

The idea is pretty simple: the Django admin already gives you authentication, permissions, and a familiar interface, so it can also act as an operational layer for your app. Each panel is just a small Django app with a simple interface, so it's easy to build your own and plug it in.

I'm working on more panels (signals, errors, etc.) and also thinking about how far this pattern can go. Curious how others think about this: does it make sense to consolidate this kind of tooling inside the admin, or do you prefer keeping it separate?

Found: February 25, 2026 ID: 3431

[Other] Launch HN: TeamOut (YC W22) – AI agent for planning company retreats

Hi HN, I'm Vincent, CTO of TeamOut (https://www.teamout.com/). We build an AI agent that plans company events from start to finish entirely through conversation. Similar to how Lovable helps build websites through chat, we apply that approach to event planning. Our system handles venue sourcing, vendor coordination, flight cost estimation, itinerary building, and overall project management.

Here's a demo: https://www.youtube.com/watch?v=QVyc-x-isjI. The product is live at https://app.teamout.com/ai and does not require signup.

We went through YC in 2022 but did not launch on HN at the time. Back then, the product was more traditional, closer to an Airbnb-style search marketplace. Over the past two years, after helping organize more than 1,200 events, we rebuilt the core system around an agent architecture that directly manages the planning process. With this new version live, it felt like the right moment to share it here, since it represents a fundamentally different approach to planning events.

The problem: planning a company retreat usually means choosing between three imperfect options: (1) hire an event planner and pay significant fees and venue markups; (2) do it yourself and spend dozens of hours on research, emails, and negotiation; or (3) use tools like Airbnb that are not designed for group logistics or meeting space.

The difficulty is not just finding a venue. Even for 30 to 50 people, planning turns into weeks of back-and-forth emails for quotes, comparing inconsistent pricing across PDFs, and tracking budgets in spreadsheets. It becomes an ongoing coordination problem with evolving constraints and slow, asynchronous vendor responses. Most existing software is form-driven, but the real workflow is conversational and stateful.

Offsites are expensive and high stakes. A single event can represent a significant chunk of a team's annual budget, and mistakes show up directly as cost overruns or poor experiences. Founders and operators often end up spending time on event logistics instead of their actual work.

I ran into this while organizing retreats at a previous company. Before TeamOut, I worked as an AI researcher at IBM on NLP and machine learning systems. Sitting inside long email threads and cost spreadsheets, it did not look like a marketplace gap to me; it looked like a reasoning and state management problem. As large language models improved at multi-step reasoning and tool use, it became realistic to automate the coordination layer itself.

Our solution: the core agent relies on a combination of models such as Gemini, Claude, and GPT. A central LLM-based agent maintains planning context across turns and decides which specialized tool to call next. Each tool has a specific responsibility:

- Venue search and filtering
- Cost estimations (accommodation + flights)
- Budget comparisons
- Quote and outreach flows
- Communication tool with our team

For venue recommendations across more than 10,000 venues, we do not rely purely on the language model. We embed both user requirements and venues into vector representations and retrieve candidates using similarity search. Hard constraints such as capacity and dates are applied first, and results are ranked before being presented.

On the interface side, we use a split layout: conversation on the left and structured results on the right. As you refine the plan in chat, the event updates in real time, allowing an iterative workflow rather than a static search experience.

What is different is that we treat event planning as a stateful coordination problem rather than a one-shot search query. The agent orchestrates tools, manages evolving constraints, and surfaces trade-offs explicitly. It does not invent venues or fabricate pricing, and it is not designed to replace human planners for very large or highly customized events.

We make money from commissions on venue bookings. It is free for teams to explore options and plan.

If you've organized an offsite or large meetup before, I'd genuinely value your perspective. Where would you expect this to fail? What edge cases are we underestimating? Where wouldn't you trust an agent to handle the details? My engineering team and I will be here all day to answer questions, happy to go deep on architecture, tradeoffs, and lessons learned. We'd really appreciate your candid feedback.
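The filter-then-rank retrieval described above (hard constraints first, similarity ranking second) can be sketched concisely. The venue fields and toy embeddings below are invented for illustration; TeamOut's actual feature set and embedding model are not described in the post.

```python
import numpy as np

# Toy venue catalogue: each venue carries hard-constraint fields
# (capacity) and a dense embedding for semantic matching.
rng = np.random.default_rng(0)
venues = [
    {"name": "A", "capacity": 80, "emb": rng.normal(size=8)},
    {"name": "B", "capacity": 20, "emb": rng.normal(size=8)},
    {"name": "C", "capacity": 60, "emb": rng.normal(size=8)},
]
requirements_emb = rng.normal(size=8)   # embedded user requirements
needed_capacity = 40

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1) Hard constraints (capacity, dates, ...) prune the candidate set,
#    so no amount of semantic similarity can surface an unusable venue.
candidates = [v for v in venues if v["capacity"] >= needed_capacity]

# 2) Similarity search ranks whatever survives the filter.
ranked = sorted(candidates,
                key=lambda v: cosine(requirements_emb, v["emb"]),
                reverse=True)
print([v["name"] for v in ranked])   # "B" (capacity 20) can never appear
```

At 10,000+ venues the ranking step would typically go through a vector index rather than a full scan, but the ordering of operations (prune, then rank) is the part that prevents hallucinated or infeasible recommendations.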

Found: February 25, 2026 ID: 3438

[Other] Red Hat takes on Docker Desktop with its enterprise Podman Desktop build

Found: February 25, 2026 ID: 3432

katanemo/plano

GitHub Trending

[Other] Delivery infrastructure for agentic apps - Plano is an AI-native proxy and data plane that offloads plumbing work, so you stay focused on your agent's core logic (via any AI framework).

Found: February 25, 2026 ID: 3425

bytedance/deer-flow

GitHub Trending

[Other] An open-source SuperAgent harness that researches, codes, and creates. With the help of sandboxes, memories, tools, skills, and subagents, it handles tasks at different levels of difficulty that can take anywhere from minutes to hours.

Found: February 25, 2026 ID: 3424

[Other] A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems. Use when building, optimizing, or debugging agent systems that require effective context management.

Found: February 25, 2026 ID: 3423
Page 1 of 173