🛠️ All DevTools
Showing 1–20 of 3781 tools
Last Updated
March 15, 2026 at 04:00 PM
voidzero-dev/vite-plus
GitHub Trending[Build/Deploy] Vite+ is the unified toolchain and entry point for web development. It manages your runtime, package manager, and frontend toolchain in one place.
Learning Creative Coding
Hacker News (score: 69)[Other] Learning Creative Coding
Show HN: Han – A Korean programming language written in Rust
Hacker News (score: 72)[Other] Show HN: Han – A Korean programming language written in Rust

A few weeks ago I saw a post about someone converting an entire C++ codebase to Rust using AI in under two weeks. That inspired me: if AI can rewrite a whole language stack that fast, I wanted to try building a programming language from scratch with AI assistance.

I've also been noticing growing global interest in Korean language and culture, and I wondered: what would a programming language look like if every keyword was in Hangul (the Korean writing system)?

Han is the result. It's a statically typed language written in Rust with a full compiler pipeline (lexer → parser → AST → interpreter + LLVM IR codegen). It supports arrays, structs with impl blocks, closures, pattern matching, try/catch, file I/O, module imports, a REPL, and a basic LSP server.

This is a side project, not a "you should use this instead of Python" pitch. Feedback on language design, compiler architecture, or the Korean keyword choices is very welcome.

https://github.com/xodn348/han
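The pipeline stages named above (lexer → parser → AST → interpreter) can be sketched generically. This is a toy arithmetic language in Python, purely to illustrate how the stages hand off to each other; it is not Han's code and Han's actual keywords and grammar differ.

```python
import re
from dataclasses import dataclass

# Lexer: turn source text into (kind, value) tokens.
# Grammar here is just integers, +, and * for illustration.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def lex(src):
    tokens = []
    for num, op in TOKEN_RE.findall(src.strip()):
        tokens.append(("NUM", int(num)) if num else ("OP", op))
    return tokens

# AST nodes produced by the parser.
@dataclass
class Num:
    value: int

@dataclass
class BinOp:
    op: str
    left: object
    right: object

def parse(tokens):
    """Recursive-descent parser; * binds tighter than +."""
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else (None, None)
    def factor():
        nonlocal pos
        _, val = tokens[pos]
        pos += 1
        return Num(val)
    def term():
        nonlocal pos
        node = factor()
        while peek() == ("OP", "*"):
            pos += 1
            node = BinOp("*", node, factor())
        return node
    def expr():
        nonlocal pos
        node = term()
        while peek() == ("OP", "+"):
            pos += 1
            node = BinOp("+", node, term())
        return node
    return expr()

def interpret(node):
    """Tree-walking interpreter over the AST (Han also emits LLVM IR)."""
    if isinstance(node, Num):
        return node.value
    l, r = interpret(node.left), interpret(node.right)
    return l + r if node.op == "+" else l * r

print(interpret(parse(lex("2 + 3 * 4"))))  # 14
```

A real compiler adds statements, scopes, and type checking on top of this skeleton, but the stage boundaries stay the same.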
Show HN: Zap Code – AI code generator that teaches kids real HTML/CSS/JS
Show HN (score: 6)[Other] Show HN: Zap Code – AI code generator that teaches kids real HTML/CSS/JS

Zap Code generates working HTML/CSS/JS from plain English descriptions, designed for kids ages 8-16.

The core loop: a kid types "make a space shooter game", AI generates the code, and a live preview renders it immediately. Three interaction modes: visual-only tweaks, read-only code view with annotations, and full code editing with AI autocomplete.

Technical details: Next.js frontend, Node.js backend, Monaco editor simplified for younger users, sandboxed iframe for preview execution (no external API calls from generated code). A progressive complexity engine uses a skill model to decide when to surface more advanced features.

The main focus was the gap between block-based coding (Scratch, etc.) and actual programming. Block tools are great for ages 6-10, but the transition to real code is rough. Zap Code tries to smooth that curve by letting kids interact with real output first, then gradually exposing the code behind it.

Limitations: AI-generated code isn't always clean or idiomatic. Content is filtered for age-appropriateness, but it's not perfect. Collaboration features are still basic. The complexity engine needs more data to tune well.

Free tier, 3 projects. Pro at $9.99/mo.
Claudetop – htop for Claude Code sessions (see your AI spend in real-time)
Hacker News (score: 19)[Other] Claudetop – htop for Claude Code sessions (see your AI spend in real-time)
Show HN: KeyID – Free email and phone infrastructure for AI agents (MCP)
Show HN (score: 8)[Other] Show HN: KeyID – Free email and phone infrastructure for AI agents (MCP)
Show HN: Data-anim – Animate HTML with just data attributes
Show HN (score: 5)[Other] Show HN: Data-anim – Animate HTML with just data attributes

Hey HN, I built data-anim, an animation library where you never have to write JavaScript yourself. You just write:

    <div data-anim="fadeInUp">Hello</div>

That's it. Scroll-triggered fade-in animation, zero JS to write.

What it does:

- 30+ built-in animations (fade, slide, zoom, bounce, rotate, etc.)
- 4 triggers: scroll (default), load, click, hover
- 3-layer anti-FOUC protection (immediate style injection → noscript fallback → 5s timeout)
- Responsive controls: disable per device or swap animations on mobile
- TypeScript autocomplete for all attributes
- Under 3KB gzipped, zero dependencies

Why I built this:

I noticed that most animation needs on landing pages and marketing sites are simple: fade in on scroll, slide in from left, bounce on hover. But the existing options are either too heavy (Framer Motion ~30KB) or require JS boilerplate.

I also think declarative HTML attributes are the most AI-friendly animation format. When LLMs generate UI, HTML attributes are the output they hallucinate least on: no selector matching, no JS API to misremember, no script execution order to get wrong.

Docs: https://ryo-manba.github.io/data-anim/
Playground: https://ryo-manba.github.io/data-anim/playground/
npm: https://www.npmjs.com/package/data-anim

Happy to answer any questions about the implementation or design decisions.
Show HN: GitAgent – An open standard that turns any Git repo into an AI agent
Hacker News (score: 59)[API/SDK] Show HN: GitAgent – An open standard that turns any Git repo into an AI agent

We built GitAgent because we kept seeing the same problem: every agent framework defines agents differently, and switching frameworks means rewriting everything.

GitAgent is a spec that defines an AI agent as files in a git repo. Three core files (agent.yaml for config, SOUL.md for personality/instructions, and SKILL.md for capabilities) give you a portable agent definition that exports to Claude Code, OpenAI Agents SDK, CrewAI, Google ADK, LangChain, and others.

What you get for free by being git-native:

1. Version control for agent behavior (roll back a bad prompt like you'd revert a bad commit)
2. Branching for environment promotion (dev → staging → main)
3. Human-in-the-loop via PRs (agent learns a skill → opens a branch → human reviews before merge)
4. Audit trail via git blame and git diff
5. Agent forking and remixing (fork a public agent, customize it, PR improvements back)
6. CI/CD with GitAgent validate in GitHub Actions

The CLI lets you run any agent repo directly:

    npx @open-gitagent/gitagent run -r https://github.com/user/agent -a claude

The compliance layer is optional, but there if you need it: risk tiers, regulatory mappings (FINRA, SEC, SR 11-7), and audit reports via GitAgent audit.

Spec is at https://gitagent.sh, code is on GitHub.

Would love feedback on the schema design and what adapters people would want next.
Show HN: I built Wool, a lightweight distributed Python runtime
Show HN (score: 13)[DevOps] Show HN: I built Wool, a lightweight distributed Python runtime

I spent a long time working in the payments industry, specifically on a rather niche reporting/aggregation platform with spiky workloads that were not easily parallelized. To pump as much data through our pipeline as possible, we had to rely on complex locking schemes across half a dozen or so not-so-micro services; keeping a clear mental picture of how the services interacted for a given data source was a major headache. This problem always intrigued me, even after I no longer worked at the company, and led to the development of Wool.

If you've worked with frameworks like Ray or Prefect, you're probably familiar with the promise of going from script to scale in two lines of code (or something along those lines). This is essentially the solution I was looking for: a framework with limited boilerplate that facilitated arbitrary distribution schemes within a single, coherent codebase. What I was hoping for, though, was something a little more focused. I wasn't working on ML pipelines and didn't need much beyond the distribution layer. This is where Wool comes in. While its API is very similar to those of Ray and Prefect, it differentiates itself in its scope and architecture.

First, Wool is not a task orchestrator. It provides push-based, best-effort, at-most-once execution. There is no built-in coordination state, retry logic, or durable task tracking. Those concerns remain application-defined. The beauty of Wool is that it looks and feels like native async Python, allowing you to use purpose-built libraries for your needs as you would for any other Python app (with some caveats).

Second, Wool was designed with speed in mind. Because it's not bloated with features, it's actually pretty fast, even in its current nascent state. Wool routines are dispatched directly to a decentralized peer-to-peer network of gRPC workers, which can distribute nested routines amongst themselves in turn. This results in low dispatch latencies and high throughput. I won't make any performance claims until I can assemble some more robust benchmarks, but running local workers on my M4 MacBook Pro (a trivial example, I know), I can easily achieve sub-millisecond dispatch latencies.

Anyway, check it out; any and all feedback is welcome. Regarding docs: the code is the documentation for now, but I promise I'll sort that out soon. I've got plenty of ideas for next steps, but it's always more fun when people actually use what you've built, so I'm open to suggestions for impactful features.

-Conrad
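The execution model described above (push-based, best-effort, at-most-once, reading like native async Python) can be illustrated with plain asyncio. This is NOT Wool's actual API, just a sketch of the semantics using a local queue in place of the gRPC worker network.

```python
import asyncio

async def worker(queue, results):
    # Each worker pulls routines pushed onto the queue and runs them once.
    while True:
        coro_fn, arg = await queue.get()
        try:
            results.append(await coro_fn(arg))
        except Exception:
            pass  # best-effort: no retry, no durable task tracking
        finally:
            queue.task_done()

async def square(x):
    await asyncio.sleep(0)  # stand-in for real async work
    return x * x

async def main():
    queue, results = asyncio.Queue(), []
    workers = [asyncio.create_task(worker(queue, results))
               for _ in range(3)]
    for x in range(5):
        queue.put_nowait((square, x))  # push: dispatch and move on
    await queue.join()                 # wait only for in-flight work
    for w in workers:
        w.cancel()
    return sorted(results)

print(asyncio.run(main()))  # [0, 1, 4, 9, 16]
```

The point of the sketch: a failed routine is simply dropped (at-most-once), and the caller's code stays ordinary async Python.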
Claude Code's binary reveals silent A/B tests on core features
Hacker News (score: 28)[Other] Claude Code's binary reveals silent A/B tests on core features
Megadev: A Development Kit for the Sega Mega Drive and Mega CD Hardware
Hacker News (score: 10)[Other] Megadev: A Development Kit for the Sega Mega Drive and Mega CD Hardware
Show HN: Simple plugin to get Claude Code to listen to you
Hacker News (score: 14)[Other] Show HN: Simple plugin to get Claude Code to listen to you

Hey HN,

My cofounder and I have gotten tired of CC ignoring our markdown files, so we spent 4 days and built a plugin that automatically steers CC based on our previous sessions. The problem is usually post plan-mode.

What we've tried:

- Heavy use of plan mode (works great)
- CLAUDE.md, AGENTS.md, MEMORY.md
- Local context folder (upkeep is a pain)
- Cursor rules (for Cursor)
- claude-mem (OSS) -> does session continuity, not steering

We use fusion search to find your CC steering corrections:

- user prompt embeddings + bm25
- correction embeddings + bm25
- time decay
- target query embeddings
- exclusions
- metadata hard filters (such as files)

The CC plugin:

- Automatically captures memories/corrections without you having to remind CC
- Automatically injects corrections without you having to remind CC to do it

The plugin will merge, update, and distill your memories, and then inject the most relevant ones after each of your prompts.

We're not sure if we're alone in this. We're working on benchmarks to see how effective context injection actually is in steering CC, and we know we need to keep improving extraction, search, and add more integrations.

We're passionate about the real-time, personalized context layer for agents: giving agents a way to understand what you mean when you say "this" or "that", bringing the context of your world into a secure, structured, real-time layer all your agents can access.

Would appreciate feedback on how you get CC to actually follow your markdown files and understand your modus operandi, feedback on the plugin, or anything else about real-time memory and context.

- Ankur
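One plausible way to combine the signals listed above (dense similarity, bm25, time decay) is a weighted sum with exponential decay. The weights, half-life, and function names here are illustrative assumptions, not the plugin's actual scoring code.

```python
# Hypothetical fusion scoring: blend a dense-embedding similarity and a
# normalized BM25 score, then decay by age so recent corrections win.
HALF_LIFE_DAYS = 14.0

def fused_score(embed_sim, bm25, age_days, w_dense=0.6, w_sparse=0.4):
    base = w_dense * embed_sim + w_sparse * bm25
    decay = 0.5 ** (age_days / HALF_LIFE_DAYS)  # halves every 14 days
    return base * decay

corrections = [
    # (text, dense similarity in [0,1], normalized bm25, age in days)
    ("always run the linter before committing", 0.82, 0.70, 2),
    ("use tabs in Makefiles",                   0.90, 0.95, 60),
    ("prefer pathlib over os.path",             0.75, 0.40, 1),
]

ranked = sorted(corrections,
                key=lambda c: fused_score(c[1], c[2], c[3]),
                reverse=True)
print([c[0] for c in ranked])
```

Note how the 60-day-old correction drops to the bottom despite having the strongest raw match scores; that is the time-decay term doing its job. Exclusions and metadata hard filters would be applied as pre-filters before scoring.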
Show HN: Hardened OpenClaw on AWS with Terraform
Show HN (score: 7)[DevOps] Show HN: Hardened OpenClaw on AWS with Terraform

I work on AWS infrastructure (ex-Percona, Box, Dropbox, Pinterest). When OpenClaw blew up, I wanted to run it properly on AWS and was surprised by the default deployment story. The Lightsail blueprint shipped with 31 unpatched CVEs. The standard install guide uses three separate curl-pipe-sh patterns as root. Bitsight found 30,000+ exposed instances in two weeks. OpenClaw's own maintainer said "if you can't understand how to run a command line, this is far too dangerous."

So I built a Terraform module that replaces the defaults with what I'd consider production-grade:

* Cognito + ALB instead of a shared gateway token (per-user identity, MFA)
* GPG-verified APT packages instead of curl|bash
* systemd with ProtectHome=tmpfs and BindPaths sandboxing
* Secrets Manager + KMS instead of plaintext API keys
* EFS for persistence across instance replacement
* CloudWatch logging with 365-day retention

Bedrock is the default LLM provider, so it works without any API keys. One terraform apply.

Full security writeup: https://infrahouse.com/blog/2026-03-09-deploying-openclaw-on-aws-without-the-security-disasters/

I'm sure I've missed things. What would you add or do differently for running an autonomous agent with shell access on a shared server?
Show HN: An addendum to the Agile Manifesto for the AI era
Show HN (score: 7)[Other] Show HN: An addendum to the Agile Manifesto for the AI era

I'm a VP of Engineering with 20 years in the field. I've been thinking deeply about why AI is breaking every engineering practice, and it led me to the conclusion that the Agile Manifesto's values need updating.

The core argument: AI made producing software cheap, but understanding it is still expensive. The Manifesto optimizes for the former. This addendum shifts the emphasis toward the latter.

Four updated values, three refined principles, with reasoning for each. Happy to discuss and defend any of it.
The wild six weeks for NanoClaw's creator that led to a deal with Docker
Hacker News (score: 27)[Other] The wild six weeks for NanoClaw's creator that led to a deal with Docker
Mouser: An open source alternative to Logi-Plus mouse software
Hacker News (score: 281)[Other] Mouser: An open source alternative to Logi-Plus mouse software

I discovered this project because all of a sudden the Logi Options Plus software updater started taking 40-60% of my Intel MacBook Pro's CPU until I killed the process (of course it restarts). In my searches I ended up at a Reddit discussion where I found other people with the same issues.

I'm a minor contributor to this project, but it aims to reduce/eliminate the need to use Logitech's proprietary software and telemetry. We could use help if other people are interested.

Please check out the GitHub link for more detailed motivations (including eliminating telemetry): https://github.com/TomBadash/MouseControl
Show HN: AgentLog – a lightweight event bus for AI agents using JSONL logs
Show HN (score: 6)[Other] Show HN: AgentLog – a lightweight event bus for AI agents using JSONL logs

I've been experimenting with infrastructure for multi-agent systems, and built a small project called AgentLog.

The core idea is very simple: topics are just append-only JSONL files. Agents publish events over HTTP and subscribe to streams using SSE. The system is intentionally single-node and minimal for now.

Future ideas I'm exploring:

- replayable agent workflows
- tracing reasoning across agents
- visualizing event timelines
- distributed/federated agent logs

Curious if others building agent systems have run into similar needs.
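The append-only-JSONL core is simple enough to sketch in a few lines. Function names and the on-disk layout below are assumptions for illustration, not AgentLog's actual API; the real project fronts this with HTTP for publishing and SSE for subscribing.

```python
import json
import tempfile
from pathlib import Path

# One file per topic; publish = append a JSON line.
TOPIC_DIR = Path(tempfile.mkdtemp())

def publish(topic, event):
    with (TOPIC_DIR / f"{topic}.jsonl").open("a") as f:
        f.write(json.dumps(event) + "\n")

def read_from(topic, offset=0):
    """Replay events from a line offset. An SSE subscriber would stream
    new lines as they are appended; replay falls out of the format for free."""
    path = TOPIC_DIR / f"{topic}.jsonl"
    if not path.exists():
        return []
    with path.open() as f:
        return [json.loads(line) for line in f][offset:]

publish("agent-a", {"type": "task_started", "id": 1})
publish("agent-a", {"type": "task_done", "id": 1})
print(read_from("agent-a", offset=1))  # [{'type': 'task_done', 'id': 1}]
```

Because the log is append-only, replayable workflows and event-timeline visualization (the future ideas above) reduce to re-reading the same files.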
Show HN: Context Gateway – Compress agent context before it hits the LLM
Hacker News (score: 32)[Other] Show HN: Context Gateway – Compress agent context before it hits the LLM

We built an open-source proxy that sits between coding agents (Claude Code, OpenClaw, etc.) and the LLM, compressing tool outputs before they enter the context window.

Demo: https://www.youtube.com/watch?v=-vFZ6MPrwjw#t=9s

Motivation: Agents are terrible at managing context. A single file read or grep can dump thousands of tokens into the window, most of it noise. This isn't just expensive; it actively degrades quality. Long-context benchmarks consistently show steep accuracy drops as context grows (OpenAI's GPT-5.4 eval goes from 97.2% at 32k to 36.6% at 1M: https://openai.com/index/introducing-gpt-5-4/).

Our solution uses small language models (SLMs): we look at model internals and train classifiers to detect which parts of the context carry the most signal. When a tool returns output, we compress it conditioned on the intent of the tool call, so if the agent called grep looking for error handling patterns, the SLM keeps the relevant matches and strips the rest.

If the model later needs something we removed, it calls expand() to fetch the original output. We also do background compaction at 85% window capacity and lazy-load tool descriptions so the model only sees tools relevant to the current step.

The proxy also gives you spending caps, a dashboard for tracking running and past sessions, and Slack pings when an agent is sitting there waiting on you.

Repo is here: https://github.com/Compresr-ai/Context-Gateway. You can try it with:

    curl -fsSL https://compresr.ai/api/install | sh

Happy to go deep on any of it: the compression model, how the lazy tool loading works, or anything else about the gateway. Try it out and let us know how you like it!
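The compress-then-expand() flow described above can be illustrated with a toy version: keep only lines relevant to the tool call's intent, stash the full output under an id, and let the model recover it later. The real gateway uses a trained SLM classifier; this keyword filter and these function names are stand-ins that just show the data flow.

```python
import hashlib

_originals = {}  # id -> full tool output, kept so expand() can recover it

def compress(tool_output, intent_keywords):
    blob_id = hashlib.sha1(tool_output.encode()).hexdigest()[:8]
    _originals[blob_id] = tool_output
    kept = [line for line in tool_output.splitlines()
            if any(k in line.lower() for k in intent_keywords)]
    return {"id": blob_id, "kept": kept,
            "dropped": len(tool_output.splitlines()) - len(kept)}

def expand(blob_id):
    """Fetch the original output if the model needs what was stripped."""
    return _originals[blob_id]

# e.g. the agent grepped while looking for error-handling patterns:
grep_output = "\n".join([
    "src/db.py:12: conn = connect(dsn)",
    "src/db.py:40: except ConnectionError as e:",
    "src/api.py:7: import logging",
    "src/api.py:88: except TimeoutError:",
])
compressed = compress(grep_output, intent_keywords=["except", "error"])
print(compressed["kept"])    # only the error-handling lines survive
print(expand(compressed["id"]) == grep_output)  # True
```

The design point is that compression is lossy but recoverable: the model trades context-window tokens for an extra round-trip only when it actually needs the stripped material.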
Launch HN: Captain (YC W26) – Automated RAG for Files
Hacker News (score: 38)[Other] Launch HN: Captain (YC W26) – Automated RAG for Files

Hi HN, we’re Lewis and Edgar, building Captain to simplify unstructured data search (https://runcaptain.com). Captain automates the building and maintenance of file-based RAG pipelines. It indexes cloud storage like S3 and GCS, plus SaaS sources like Google Drive. There’s a quick walkthrough at https://youtu.be/EIQkwAsIPmc.

We also put up a demo site called “Ask PG’s Essays” which lets you ask/search the corpus of pg’s essays, to get a feel for how it works: https://pg.runcaptain.com. The RAG part of this took Captain about 3 minutes to set up.

Here are some sample prompts to get a feel for the experience:

“When do we do things that don't scale? When should we be more cautious?” https://pg.runcaptain.com/?q=When%20do%20we%20do%20things%20that%20don't%20scale%3F%20When%20should%20we%20be%20more%20cautious%3F

“Give me some advice, I'm fundraising” https://pg.runcaptain.com/?q=Give%20me%20some%20advice%2C%20I'm%20fundraising

“What are the biggest advantages of Lisp” https://pg.runcaptain.com/?q=what%20are%20the%20biggest%20advantages%20of%20Lisp

A good production RAG pipeline takes substantial effort to build, especially for file workloads. You have to handle ETL or text extraction, chunking, embedding, storage, search, re-ranking, inference, and often compliance and observability, all while optimizing for latency and reliability. It’s a lot to manage. grep works well in some cases, but for agents, semantic search provides significantly higher performance. Cursor uses both and reports 6.5%–23.5% accuracy gains from vector search over grep (https://cursor.com/blog/semsearch).

We’ve spent the past four years scaling RAG pipelines for companies, and Edgar’s work at Purdue’s NLP lab directly informed our chunking techniques. In conversations with dozens of engineers, we repeatedly saw DIY pipelines produce inconsistent results, even after weeks of tuning. Many teams lacked clarity on which retrieval strategies best fit their data.

We realized that a system to provision storage and embeddings, handle indexing, and continuously update pipelines to reflect the latest search techniques could remove the need for every team to rebuild RAG themselves. That idea became Captain.

In practice, one API call indexes URLs, cloud storage buckets, directories, or individual files. Under the hood, we’re converting everything to Markdown. For this, we’ve had good results with Gemini 3 Pro for images, Reducto for complex documents, and Extend for basic OCR. For embedding models, ‘gemini-embedding-001’ performed reasonably well at first, but we later switched to the Contextualized Embeddings from ‘voyage-context-3’. It produced more relevant results than even the newer Voyage 4 models because its chunk embeddings are encoded with awareness of the surrounding document context. We then applied Voyage’s ‘rerank-2.5’ as second-stage re-ranking, reducing 50 initial chunks to a final top 15 (configurable in Captain’s API). Dense embeddings are just half the picture, and full-text search with RRF completes our hybrid retrieval. In the Captain API, these techniques are exposed through a single /query endpoint. Access controls can be configured via metadata filters, and page-number citations are returned automatically.

The stack is constantly changing, but the Captain API creates a standard interface for this. You can try Captain, 1 month for free, and build your own pipelines at https://runcaptain.com. We’re looking for candid feedback, especially anything that can make it more useful, and look forward to your comments!
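Reciprocal rank fusion (RRF), named above as the glue between dense and full-text retrieval, is a standard technique worth seeing concretely. This is the textbook formula, not Captain's internal implementation: each retriever contributes 1/(k + rank) per document, and the sums decide the merged order.

```python
def rrf(rankings, k=60):
    """rankings: list of ranked doc-id lists, one per retriever.
    Score each doc by sum of 1 / (k + rank); k=60 is the common default
    that damps the gap between top ranks."""
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense_hits = ["doc3", "doc1", "doc7"]   # from embedding search
bm25_hits  = ["doc1", "doc9", "doc3"]   # from full-text search
print(rrf([dense_hits, bm25_hits]))     # ['doc1', 'doc3', 'doc9', 'doc7']
```

doc1 wins because it appears near the top of both lists, even though neither retriever ranked it first: RRF rewards agreement across retrievers using only ranks, so no score normalization between BM25 and cosine similarity is needed.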