🛠️ All DevTools
Showing 1–20 of 4105 tools
Last Updated
April 10, 2026 at 04:01 PM
Show HN: A tool to manage a swarm of coding agents on Linux
Show HN (score: 5)[Other] Show HN: A tool to manage a swarm of coding agents on Linux
We've raised $17M to build what comes after Git
Hacker News (score: 172)[Other] We've raised $17M to build what comes after Git
Show HN: SmolVM – open-source sandbox for coding and computer-use agents
Show HN (score: 5)[Other] Show HN: SmolVM – open-source sandbox for coding and computer-use agents

SmolVM is an open-source local sandbox for AI agents on macOS and Linux.

I started building it because agent workflows need more than isolated code execution. They need a reusable environment: write files in one step, come back later, snapshot state, pause/resume, and increasingly interact with browsers or full desktop environments.

Right now SmolVM is a Python SDK and CLI focused on local developer experience.

Current features include:

- local sandbox environments
- macOS and Linux support
- snapshotting
- pause/resume
- persistent environments across turns

Install:

```
curl -sSL https://celesto.ai/install.sh | bash
smolvm
```

I'd love feedback from people building coding agents or computer-use agents. I'm interested in what feels missing, what feels clunky, and what you'd expect from a sandbox like this.
Show HN: Linear RNN/Reservoir hybrid generative model, one C file (no deps.)
Show HN (score: 6)[Other] Show HN: Linear RNN/Reservoir hybrid generative model, one C file (no deps.)

I just noticed it takes literally ~5 minutes to train millions of parameters on a slow CPU... but before you call Yudkowsky to declare "it's over", an important note: the main bottleneck is the corpus size; params are just 'cleverness', and given limited info the model is powerless.

Anyway, here is the project:

https://github.com/bggb7781-collab/lrnnsmdds/tree/main

A couple of notes:

1. Single C file, no dependencies. Below are literally all the "dependencies", not even a custom header (copy-pasted from the top of the single C file):

#define _POSIX_C_SOURCE 200809L

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <time.h>
#include <stdint.h>
#include <stdbool.h>
#include <float.h>
#include <getopt.h>
#include <errno.h>

4136 lines of code in one file at the moment; that's all.

2. Easiest way to compile on Windows: download Cygwin (https://www.cygwin.com/), then navigate to the directory containing your lrnnsmdds.c file and run gcc on it with some optimizations, such as:

gcc -std=c17 -O3 -march=native -ffast-math -o lrnn lrnnsmdds.c -lm

On Linux just run gcc; if for whatever reason you don't have gcc, run sudo apt-get install -y gcc, or the equivalent for your distro.

On Apple: I've no idea; maybe just use VMware, install Ubuntu, and run it there.

Of course you can 'git clone' and go to the dir, but again: it's one file! Copy it...

The repo includes a tiny toy corpus where I've borrowed (hopefully it's not plagiarism!) the name "John Gordon" from one of my favorite books, "Star Kings" by E. Hamilton. Just the first and last name are copied; the content is unique (well, several poorly written sentences by myself...).

Obviously it will overfit and produce copy-paste output on such a small corpus; the sole goal is to check that everything runs, not whether it's the A-G-I. You'd need your own 100kb+ corpus if you want to generate unique, meaningful text.

3. Why/what/when/how?

The GitHub repo is self-explanatory about features, uses, and goals, I believe, but to summarize:

My main motivation was to create a fast alternative to transformers which works on CPU only, hence the bizarre/not-easy choice of doing this in C rather than Python, and the lack of dependencies. In addition, I was hoping it would also be a clever alternative, hence all those features, stacked higher than a 90s BMW 850. The 'reservoir' is the most novel feature, though; it offers quick exact recall, arguably different from RWKV 8 or the latest Mamba. In fact, the name of the architecture, SMDDS, comes from the first letters of the implemented features:

* S. SwiGLU in Channel Mixing (more coherence)
* M. Multi-Scale Token Shift (larger context)
* D. Data-Dependent Decay with Low-Rank (speed in large context)
* D. Dynamic State Checkpointing (faster/linear generation)
* S. Slot-memory reservoir (perfect recall, transformer style)

If you face an issue, just email me (easiest).

The good, the bad, the ugly:

It is a more or less working, novel text-to-text alternative architecture; it's not trying to imitate transformers, LSTM, Mamba, or RWKV, though it shares many features with them. The bad is that it's not blazing fast; if you're armed with a Ryzen/i7 with 16 cores and some patience, you can try training it on several small books via the word tokenizer to low perplexity (under 1.2...) and see if it looks smarter/faster. Since this is open source, the hope is obviously for it to be improved: make it CUDA-friendly, improve the features, port it to Python, etc.

Depending on many factors I may try to push for v2 in July, August, or September. My focus at the moment will be to test and scale, since the features are many. It compiles with zero warnings on the 2 laptops I've tested (Windows/Cygwin and Ubuntu), and the speed is comparable to transformers. 10x!
Hegel, a universal property-based testing protocol and family of PBT libraries
Hacker News (score: 42)[Testing] Hegel, a universal property-based testing protocol and family of PBT libraries
Instant 1.0, a backend for AI-coded apps
Hacker News (score: 54)[Other] Instant 1.0, a backend for AI-coded apps
Show HN: Control your X/Twitter feed using a small on-device LLM
Show HN (score: 13)[Other] Show HN: Control your X/Twitter feed using a small on-device LLM

We built a Chrome extension and iOS app that filters Twitter's feed using Qwen3.5-4B for contextual matching. You describe what you don't want in plain language, and it removes posts that match semantically, not by keyword.

What surprised us was that because Twitter's ranking algorithm adapts based on what you engage with, consistent filtering starts reshaping the recommendations over time. You're implicitly signaling preferences to the algorithm. For some of us it "healed" our feed.

We're currently running inference from our own servers with an experimental on-device option, and we're working on fully on-device execution to remove that dependency. Latency is acceptable on most hardware but not great on older machines. No data collection; everything except the model call runs locally.

It doesn't work perfectly (figurative language trips it up), but it's meaningfully better than muting keywords, and we use it ourselves every day.

It's also promising how local/open models can now start giving us more control over the algorithmic agents in our lives, because capability density is improving.
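A minimal sketch of the semantic-filtering idea described above (not the extension's actual code). `build_filter_prompt`, `call_local_model`, and `should_hide` are hypothetical names; the model call is stubbed out with a keyword check so the example is self-contained, where a real implementation would run on-device inference with a model like the Qwen one mentioned:

```python
def build_filter_prompt(criteria: str, post_text: str) -> str:
    """Ask the model whether a post matches the user's mute criteria."""
    return (
        "You are a feed filter. The user does not want to see posts about:\n"
        f"{criteria}\n\n"
        f"Post:\n{post_text}\n\n"
        "Does this post match the unwanted topics? Answer YES or NO."
    )

def call_local_model(prompt: str) -> str:
    # Stub standing in for on-device LLM inference: crudely inspect only the
    # post portion of the prompt so the example runs without a model.
    post = prompt.split("Post:\n", 1)[1]
    return "YES" if "crypto" in post.lower() else "NO"

def should_hide(criteria: str, post_text: str) -> bool:
    answer = call_local_model(build_filter_prompt(criteria, post_text))
    return answer.strip().upper().startswith("YES")

posts = ["New crypto token launches today", "Pictures of my garden"]
visible = [p for p in posts if not should_hide("cryptocurrency hype", p)]
# visible == ["Pictures of my garden"]
```

The point of the prompt-based shape is that the criteria are free text rather than a keyword list, so the model (unlike the stub) can match paraphrases and context.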
Research-Driven Agents: When an agent reads before it codes
Hacker News (score: 171)[Other] Research-Driven Agents: When an agent reads before it codes
Show HN: Mdpdf a 2k line C CLI to convert Markdown to tiny PDFs
Show HN (score: 5)[CLI Tool] Show HN: Mdpdf a 2k line C CLI to convert Markdown to tiny PDFs

I have always searched for a simple tool that can convert an MD document to a well-styled, small PDF, just like a GitHub README.

This was inspired by TinyPDF: https://news.ycombinator.com/item?id=46316968

I also used this project to get familiar with agentic coding, which I had dreaded before.

mdpdf supports:

- using the included PDF fonts to generate tiny valid PDFs
- output as A4 or Letter depending on your locale
- plenty of common MD syntax: code blocks, inline code, lists, tables, and JPG and PNG images

That's it. It covers probably most of the use cases and can help to simply convert a Markdown write-up to a PDF to share.

GitHub: https://github.com/schicho/mdpdf

A simple make call should build it for you.
Show HN: I built a Cargo-like build tool for C/C++
Hacker News (score: 80)[Build/Deploy] Show HN: I built a Cargo-like build tool for C/C++

I love C and C++, but setting up projects can sometimes be a pain.

Every time I wanted to start something new I'd spend the first hour writing CMakeLists.txt, figuring out find_package, copying boilerplate from my last project, and googling why my library isn't linking. By the time the project was actually set up I'd lost all momentum.

So I built Craft, a lightweight build and workflow tool for C and C++. Instead of writing CMake, your project configuration goes in a simple craft.toml:

```
[project]
name = "my_app"
version = "0.1.0"
language = "c"
c_standard = 99

[build]
type = "executable"
```

Run craft build and Craft generates the CMakeLists.txt automatically and builds your project. Want to add dependencies? That's just a simple command:

```
craft add --git https://github.com/raysan5/raylib --links raylib
craft add --path ../my_library
craft add sfml
```

Craft will clone the dependency, regenerate the CMake, and rebuild your project for you.

Other Craft features:

- craft init - adopt an existing C/C++ project into Craft or initialize an empty directory.
- craft template - save any project structure as a template to be initialized later.
- craft gen - generate header and source files with starter boilerplate code.
- craft upgrade - keeps itself up to date.
- CMakeLists.extra.cmake for anything that Craft does not yet handle.
- Cross-platform: macOS, Linux, Windows.

It is still early (I just got it to v1.0.0), but I am excited to share it and keep improving it.

I'd love feedback. Please also feel free to make pull requests if you want to help with development!
The Vercel plugin on Claude Code wants to read your prompts
Hacker News (score: 252)[Other] The Vercel plugin on Claude Code wants to read your prompts
Show HN: Zoneless – Open-source Stripe Connect clone with $0.002 fees using USDC
Show HN (score: 8)[API/SDK] Show HN: Zoneless – Open-source Stripe Connect clone with $0.002 fees using USDC

Hi HN,

I'm Ben / Tiny Projects (I once posted here about buying 300 emoji domains from Kazakhstan…).

For the past 3 years I've been solo bootstrapping PromptBase, an AI marketplace with 450k+ users. At the peak, I was burning $9,400/month in opaque Stripe Connect fees for seller payouts, so I built Zoneless to replace it:

- GitHub: https://github.com/zonelessdev/zoneless
- Website: https://zoneless.com

Zoneless is a free, open-source (Apache 2.0) drop-in replacement for the payout part of Stripe Connect. It allows you to pay marketplace sellers globally with stablecoins (USDC), using an identical API to Stripe, at near-zero fees.

I've been dogfooding Zoneless on PromptBase for 3 months with some good results:

- 2,200+ sellers onboarded
- 1,400+ payouts completed
- Monthly payout fees reduced to just a few dollars
- 73% of sellers, when given the choice at onboarding, actively picked Zoneless over Stripe Connect

A massive part of running a marketplace is paying sellers. While Stripe Connect is a great product, it has big pain points:

- Expensive + complex fees: $2/mo per active account, 0.25% + $0.25 domestic payout fee, $1.50 international payout fee, 0.25–1.25% cross-border fee, 0.50–1% FX fee. It costs >$2 to move $1.
- Limited reach: only supports around 47 countries.
- Slow payouts: takes 2-7 days to settle.
- Platform risk: a massive single point of failure if your account gets randomly flagged.

Zoneless is designed to solve all this:

- Payouts cost ~$0.002 on Solana
- Global: 220+ countries/regions
- Instant payouts, 24/7
- Self-hostable and open-source

The API/SDK is identical to Stripe (same webhook events, same object shapes, etc.). If you know Stripe, you already know how to use Zoneless. There's also an Express Dashboard for sellers to onboard and track their earnings.

I've been able to remove annoying things on PromptBase like forcing sellers to accrue a $30 minimum balance before a payout just to keep our costs down. I can now also onboard sellers from more countries, which has helped spread the word and grow the buyer side too.

A big worry was that non-crypto users would be confused or hate getting paid in USDC, but they actually don't mind at all; they just care about being paid faster. If they want to convert to their local currency, they simply use an exchange like Coinbase.

Zoneless is self-custodial, meaning you create and own your wallet, and the code never touches funds. You can also easily plug in providers for KYC/AML.

I appreciate that anything related to crypto is like Marmite (pretty polarizing); I'm a no-coiner and have never dabbled in NFTs, but I do think stablecoins are different: they're just boring tech to move money around cheaply.

I'd love to hear your thoughts, feedback, or questions, especially if you've dealt with Stripe Connect / payouts / marketplaces before.

- GitHub: https://github.com/zonelessdev/zoneless
- Website: https://zoneless.com
- Docs: https://zoneless.com/docs
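The ">$2 to move $1" claim above can be checked back-of-envelope with the fee numbers quoted in the post. This is a simplified worst case: a single $1 domestic payout, with the monthly active-account fee attributed entirely to that one payout, and ignoring cross-border/FX fees:

```python
# Back-of-envelope check of the fee claim, using the Stripe Connect numbers
# quoted in the post (simplified worst case: one domestic payout per month).
payout = 1.00                # dollars moved to the seller
active_account_fee = 2.00    # $2/mo per active account, attributed to this payout
domestic_payout_fee = 0.0025 * payout + 0.25   # 0.25% + $0.25

total_fees = active_account_fee + domestic_payout_fee
print(f"${total_fees:.4f} in fees to move ${payout:.2f}")  # $2.2525 in fees to move $1.00
```

The monthly account fee dominates for low-activity sellers, which is also why minimum-balance thresholds like the $30 one mentioned above exist in the first place.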
Launch HN: Relvy (YC F24) – On-call runbooks, automated
Hacker News (score: 20)[DevOps] Launch HN: Relvy (YC F24) – On-call runbooks, automated

Hey HN! We are Bharath and Simranjit from Relvy AI (https://www.relvy.ai). Relvy automates on-call runbooks for software engineering teams. It is an AI agent equipped with tools that can analyze telemetry data and code at scale, helping teams debug and resolve production issues in minutes. Here's a video: https://www.youtube.com/watch?v=BXr4_XlWXc0

A lot of teams are using AI in some form to reduce their on-call burden. You may be pasting logs into Cursor, or using Claude Code with Datadog's MCP server to help debug. What we've seen is that autonomous root cause analysis is a hard problem for AI. This shows up in benchmarks: Claude Opus 4.6 is currently at 36% accuracy on the OpenRCA dataset, in contrast to its performance on coding tasks.

There are three main reasons for this: (1) telemetry data volume can drown the model in noise; (2) data interpretation and reasoning depend on enterprise context; (3) on-call is a time-constrained, high-stakes problem, with little room for the AI to explore during investigation time. Errors that send the user down the wrong path are not easily forgiven.

At Relvy, we are tackling these problems by building specialized tools for telemetry data analysis. Our tools can detect anomalies and identify problem slices from dense time-series data, do log pattern search, and reason about span trees, all without overwhelming the agent context.

Anchoring the agent around runbooks leads to less agentic exploration and more deterministic steps that reflect what an experienced engineer would do. That results in faster analysis and less cognitive load on engineers to review and understand what the AI did.

How it works: Relvy is installed on a local machine via docker-compose (or via Helm charts, or sign up on our cloud). You connect your stack (observability and code), create your first runbook, and have Relvy investigate a recent alert.

Each investigation is presented as a notebook in our web UI, with data visualizations that help engineers verify and build trust with the AI. From there on, Relvy can be configured to automatically respond to alerts from Slack.

Some example runbook steps that Relvy automates:

- Check so-and-so dashboard; see if the errors are isolated to a specific shard.
- Check if there's a throughput surge on the APM page, and if so, whether it's from a few IPs.
- Check recent commits to see if anything changed for this endpoint.

You can also configure AWS CLI commands that Relvy can run to automate mitigation actions, with human approval.

A little bit about us: we did YC back in fall 2024. We started our journey experimenting with continuous log monitoring with small language models; that was too slow. We then invested deeply in solving root cause analysis effectively, and our product today is the result of about a year of work with our early customers.

Give us a try today. We're happy to hear feedback, or about how you are tackling on-call burden at your company. We appreciate any comments or suggestions!
Show HN: CSS Studio. Design by hand, code by agent
Hacker News (score: 79)[Other] Show HN: CSS Studio. Design by hand, code by agent

Hi HN! I've just released CSS Studio, a design tool that lives on your site, runs in your browser, and sends updates to your existing AI agent, which edits any codebase. You can play around with the latest version directly on the site.

Technically, the way this works is you view your site in dev mode and start editing it. In your agent, you run /studio, which then polls (or uses Claude Channels to reach) an MCP server. Changes are streamed as JSON via the MCP, along with some viewport and URL information, and the skill has instructions on how best to implement them.

It contains a lot of the tools you'd expect from a visual editing tool, like text editing, styles, and an animation timeline editor.
Show HN: I built a local data lake for AI powered data engineering and analytics
[Other] Show HN: I built a local data lake for AI powered data engineering and analytics

I got tired of the overhead required to run even a simple data analysis (cloud setup, ETL pipelines, orchestration, cost monitoring), so I built a fully local data stack/IDE where I can write SQL/Python, run it, see results, and iterate quickly and interactively.

You get a data-lake-like catalog, zero ETL, lineage, versioning, and analytics running entirely on your machine. You can import from a database, webpage, CSV, etc., and query in natural language, or do your own work in SQL/PySpark. Connect to local models like Gemma or cloud LLMs like Claude for querying and analysis. You don't have to set up local LLMs; they come built in.

This is completely free. No cloud account required.

Download the software: https://getnile.ai/downloads

Watch a demo: https://www.youtube.com/watch?v=C6qSFLylryk

Check the code repo: https://github.com/NileData/local

This is still early, and I'd genuinely love your feedback on what's broken, what's missing, and whether you find this useful for your data and analytics work.
USB for Software Developers: An introduction to writing userspace USB drivers
Hacker News (score: 18)[Other] USB for Software Developers: An introduction to writing userspace USB drivers
Expanding Swift's IDE Support
Hacker News (score: 67)[IDE/Editor] Expanding Swift's IDE Support
Show HN: 500k+ events/sec transformations for ClickHouse ingestion
Show HN (score: 5)[Other] Show HN: 500k+ events/sec transformations for ClickHouse ingestion

Hi HN! We are Ashish and Armend, founders of GlassFlow.

Over the last year, we worked with teams running high-throughput pipelines into self-hosted ClickHouse, mostly for observability and real-time analytics.

A question that came up repeatedly was: what happens when throughput grows?

Usually things work fine at 10k events/sec, but we started seeing backpressure and errors above 100k. When the throughput per pipeline stops scaling, adding more CPU/memory doesn't help, because parts of the pipeline are often not parallelized or are bottlenecked by state handling.

At this point, engineers usually scale by adding more pipeline instances. That works, but it comes with some trade-offs:

- You have to split the workload (e.g., multiple pipelines reading from the same source)
- Transformation logic gets duplicated across pipelines
- Stateful logic becomes harder to manage and keep consistent
- Debugging and changes get more difficult because the data flow is fragmented

Another challenge arises when working with high-cardinality keys like user IDs, session IDs, or request IDs, and when you need to handle longer time windows (24h or more). The state grows quickly, and many systems rely on in-memory state, which makes it expensive and harder to recover from failures.

We wanted to solve this problem and rebuilt our approach at GlassFlow. Instead of scaling by adding more pipelines, we scale within a single pipeline by using replicas. Each replica consumes, processes, and writes independently, and the workload is distributed across them.

In the benchmarks we're sharing, this scales to 500k+ events/sec while still running stateful transformations and writing into ClickHouse.

A few things we think are interesting:

- Scaling is close to linear as you add replicas
- It works with stateful transformations (not just stateless ingestion)
- State is backed by a file-based KV store instead of relying purely on memory
- The ClickHouse sink is optimized for batching to avoid small inserts
- The product is built with Go

Full write-up + benchmarks: https://www.glassflow.dev/blog/glassflow-now-scales-to-500k-events-per-sec

Repo: https://github.com/glassflow/clickhouse-etl

Happy to answer questions about the design or trade-offs.
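The batching point above matters because ClickHouse handles a few large inserts far better than many tiny ones. A minimal sketch of the idea in Python (GlassFlow itself is written in Go, and this is not its actual code; `insert_rows` is a hypothetical stand-in for a real ClickHouse client call):

```python
# Sketch of a batching sink: buffer rows per table and flush in large
# batches instead of issuing one small insert per event.
class BatchingSink:
    def __init__(self, insert_rows, max_batch=10_000):
        self.insert_rows = insert_rows   # callable(table, rows) -> does the insert
        self.max_batch = max_batch
        self.buffers = {}                # table -> list of pending rows

    def write(self, table, row):
        buf = self.buffers.setdefault(table, [])
        buf.append(row)
        if len(buf) >= self.max_batch:   # flush only when the batch is full
            self.flush(table)

    def flush(self, table=None):
        # Flush one table, or all tables (e.g. on shutdown / timer tick).
        tables = [table] if table else list(self.buffers)
        for t in tables:
            if self.buffers.get(t):
                self.insert_rows(t, self.buffers[t])
                self.buffers[t] = []

# Record each insert's size instead of talking to a real ClickHouse server.
inserts = []
sink = BatchingSink(lambda t, rows: inserts.append((t, len(rows))), max_batch=3)
for i in range(7):
    sink.write("events", {"id": i})
sink.flush()
# inserts == [("events", 3), ("events", 3), ("events", 1)]
```

A production version would also flush on a timer and persist pending state (the post mentions a file-based KV store) so a crash doesn't lose buffered rows.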
Show HN: TUI-use: Let AI agents control interactive terminal programs
Hacker News (score: 25)[Other] Show HN: TUI-use: Let AI agents control interactive terminal programs
Show HN: BAREmail ʕ·ᴥ·ʔ – minimalist Gmail client for bad WiFi
Hacker News (score: 39)[Other] Show HN: BAREmail ʕ·ᴥ·ʔ – minimalist Gmail client for bad WiFi

I've been frustrated one too many times by terrible airplane WiFi and not being able to load Gmail or Superhuman when all I want to do is get a few simple text-only emails out the door.

These clients have become pretty bloated, with the assumption that you've always got great bandwidth.

So I vibe-coded BAREMAIL. It's open source, has no backend, and you can just set it up for yourself. It takes ~3 mins to set up API access via Google Cloud Platform (thanks for making this not super easy, Google!).

I tried to maintain a nice design and some important keyboard shortcuts without getting too overBEARing.