🛠️ All DevTools
Showing 181–200 of 3610 tools
Last Updated
March 05, 2026 at 04:11 AM
Claude Code Remote Control
Hacker News (score: 64)[Other] Claude Code Remote Control
Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code
Hacker News (score: 35)[Other] Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code Every MCP tool call dumps raw data into Claude Code's 200K context window. A Playwright snapshot costs 56 KB; 20 GitHub issues cost 59 KB. After 30 minutes, 40% of your context is gone.

I built an MCP server that sits between Claude Code and these outputs. It processes them in sandboxes and only returns summaries: 315 KB becomes 5.4 KB.

It supports 10 language runtimes, SQLite FTS5 with BM25 ranking for search, and batch execution. Session time before slowdown goes from ~30 min to ~3 hours.

MIT licensed, single-command install:

/plugin marketplace add mksglu/claude-context-mode
/plugin install context-mode@claude-context-mode

Benchmarks and source: https://github.com/mksglu/claude-context-mode

Would love feedback from anyone hitting context limits in Claude Code.
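The SQLite FTS5/BM25 search the post mentions can be sketched in a few lines with Python's built-in sqlite3 module; the table name and sample rows below are illustrative, not the project's actual schema:

```python
import sqlite3

# In-memory database with an FTS5 full-text index over cached tool output.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE chunks USING fts5(content)")
db.executemany(
    "INSERT INTO chunks(content) VALUES (?)",
    [("playwright snapshot of the login page",),
     ("github issue: flaky test in CI pipeline",),
     ("github issue: login button misaligned",)],
)

# bm25() is FTS5's built-in ranking function; lower scores rank better,
# so ascending ORDER BY returns the most relevant chunks first.
rows = db.execute(
    "SELECT content FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks)",
    ("login",),
).fetchall()
print(rows)
```

Searching the index instead of replaying raw tool output is what lets an agent pull back a few relevant lines rather than the full 315 KB dump.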
Show HN: StreamHouse – S3-native Kafka alternative written in Rust
Show HN (score: 5)[DevOps] Show HN: StreamHouse – S3-native Kafka alternative written in Rust Hey HN,

I built StreamHouse, an open-source streaming platform that replaces Kafka's broker-managed storage with direct S3 writes. The goal: same semantics, a fraction of the cost.

How it works: producers batch and compress records, a stateless server manages partition routing and metadata (SQLite for dev, PostgreSQL for prod), and segments land directly in S3. Consumers read from S3 with a local segment cache. No broker disks to manage, no replication factor to tune: S3 gives you 11 nines of durability out of the box.

What's there today:
- Producer API with batching, LZ4 compression, and offset tracking (62K records/sec)
- Consumer API with consumer groups, auto-commit, and multi-partition fanout (30K+ records/sec)
- Kafka-compatible protocol (works with existing Kafka clients)
- REST API, gRPC API, CLI, and a web UI
- Docker Compose setup for trying it locally in 5 minutes

The cost model is what motivated this. Kafka's storage costs scale with replication factor × retention × volume. With S3 at $0.023/GB/month, storing a TB of events costs ~$23/month instead of hundreds on broker EBS volumes.

Written in Rust, ~50K lines across 15 crates. Apache 2.0 licensed.

GitHub: https://github.com/gbram1/streamhouse

Happy to answer questions about the architecture, tradeoffs, or what I learned building this.
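The cost model above is simple arithmetic; this sketch makes it concrete. The $0.023/GB S3 price is from the post, while the 3× replication factor and EBS per-GB price are illustrative assumptions, not figures the author gives:

```python
# Rough monthly storage cost: replicated broker disks vs. a single S3 copy.
S3_PRICE_PER_GB = 0.023      # S3 standard, $/GB/month (from the post)
EBS_PRICE_PER_GB = 0.08      # assumed gp3 EBS price, $/GB/month (illustrative)
REPLICATION_FACTOR = 3       # typical Kafka replication.factor (assumption)

def kafka_cost(retained_gb: float) -> float:
    # Brokers store every byte replication-factor times on attached volumes.
    return retained_gb * REPLICATION_FACTOR * EBS_PRICE_PER_GB

def s3_cost(retained_gb: float) -> float:
    # S3 handles durability internally; you pay for one logical copy.
    return retained_gb * S3_PRICE_PER_GB

tb = 1024  # 1 TB retained
print(f"Kafka/EBS: ${kafka_cost(tb):.2f}/mo, S3: ${s3_cost(tb):.2f}/mo")
```

Under these assumptions the replicated-broker figure lands roughly an order of magnitude above the S3 figure, which matches the post's "~$23/month instead of hundreds" framing.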
Show HN: Moonshine Open-Weights STT models – higher accuracy than WhisperLargev3
Hacker News (score: 54)[API/SDK] Show HN: Moonshine Open-Weights STT models – higher accuracy than WhisperLargev3 I wanted to share our new speech-to-text models, and the library to use them effectively. We're a small startup (six people, sub-$100k monthly GPU budget), so I'm proud of the work the team has done to create streaming STT models with lower word-error rates than OpenAI's largest Whisper model. Admittedly, Large v3 is a couple of years old, but we're near the top of the HF OpenASR leaderboard, even up against Nvidia's Parakeet family. Anyway, I'd love to get feedback on the models and software, and to hear about what people might build with it.
Pi – a minimal terminal coding harness
Hacker News (score: 97)[CLI Tool] Pi – a minimal terminal coding harness
Show HN: Recursively apply patterns for pathfinding
Hacker News (score: 14)[Other] Show HN: Recursively apply patterns for pathfinding I've been begrudgingly working on autorouters for 2 years, looking for new techniques or modern methods that might allow AI to create circuit boards.

One of the biggest problems, in my view, with training an AI to do autorouting is the traditional grid-based representation of autorouting problems, which challenges spatial understanding. But we know that vision models are very good at classifying, so I wondered if we could train a model to output a path as a classification. But then how do you represent the path? This led me down the track of building an autorouter that represents paths as a bunch of patterns.

More details: https://blog.autorouting.com/p/the-recursive-pattern-pathfinder
Show HN: MiniVim, a Minimal Neovim Configuration
Show HN (score: 5)[IDE/Editor] Show HN: MiniVim, a Minimal Neovim Configuration I built MiniVim, a small and minimal Neovim configuration focused on keeping things simple and readable.

The goal was to have a setup that:
- starts fast
- uses only essential plugins
- avoids heavy frameworks
- remains easy to understand and extend

The structure is intentionally small. It's not meant to compete with full Neovim distributions, but rather to serve as a clean base configuration that can be extended gradually.

I use it across multiple machines (laptop, WSL, and servers), so reproducibility and simplicity were priorities.

Feedback is welcome.
Show HN: Declarative open-source framework for MCPs with search and execute
Show HN (score: 9)[API/SDK] Show HN: Declarative open-source framework for MCPs with search and execute Hi HN,

I'm Samrith, creator of Hyperterse. Today I'm launching Hyperterse 2.0, a schema-first framework for building MCP servers directly on top of your existing production databases.

If you're building AI agents in production, you've probably run into agents needing access to structured, reliable data, but wiring your business logic to MCP tools is tedious. Most teams end up writing fragile glue code, or worse, giving agents unsafe, overbroad access. There isn't a clean, principled way to expose just the right data surface to agents.

Hyperterse lets you define a schema over your data and automatically exposes secure, typed MCP tools for AI agents. Think of it as: your business data → a controlled, agent-ready interface.

Key properties: a schema-first access layer, typed MCP tool generation, support for existing Postgres, MySQL, MongoDB, and Redis databases, fine-grained exposure of queries, and a design built for production agent workloads. v2.0 focuses heavily on MCP, with first-class MCP server support, cleaner schema ergonomics, better type safety, and faster tool surfaces. All of this with only two tools, search and execute, which reduces token usage drastically.

Hyperterse is useful if you are building AI agents/copilots, adding LLM features to existing SaaS, trying to safely expose internal data to agents, or are just tired of bespoke MCP glue layers.

I'd love feedback, especially from folks running agents in production.

GitHub: https://github.com/hyperterse/hyperterse
We Are Changing Our Developer Productivity Experiment Design
Hacker News (score: 25)[Other] We Are Changing Our Developer Productivity Experiment Design
D4Vinci/Scrapling
GitHub Trending[Other] 🕷️ An adaptive Web Scraping framework that handles everything from a single request to a full-scale crawl!
Show HN: Hacker Smacker – Spot great (and terrible) HN commenters at a glance
Hacker News (score: 81)[Other] Show HN: Hacker Smacker – Spot great (and terrible) HN commenters at a glance Hacker Smacker adds friend/foe functionality to Hacker News. Three little orbs appear next to every commenter's name. Click to friend or foe a commenter and you'll more easily spot them on future threads. Makes it easy to scroll and spot the commenters you love to read (and hate to read).

Main website: https://hackersmacker.org
Chrome/Edge extension: https://chromewebstore.google.com/detail/hacker-smacker/lmcglejmapenkiabndkcnahfkmbohmhd
Safari extension: https://apps.apple.com/us/app/hacker-smacker/id1480749725
Firefox extension: https://addons.mozilla.org/en-US/firefox/addon/hacker-smacker/

The interesting part is friend-of-a-friend: if you friend someone who also uses Hacker Smacker, you'll see their friends and foes highlighted too. This lets you quickly scan long comment threads and find the good stuff based on people you trust.

I built this to learn how FoaF relationships work with Redis sets, then brought the same technique to NewsBlur's social layer. The backend is CoffeeScript/Node.js/Redis, and the extension works on Chrome, Edge, Firefox, and Safari.

Technically I wrote this back in 2011 but never built a proper auth system until now, so I've been using it for 15 years and it's been great. PG once saw it on my laptop (back when he was still moderating HN, in 2012) and remarked that it was neat.

Thanks to Mihai Parparita for help with the Chrome extension sandboxing and Greg Brockman for helping design the authentication system.

Source is on GitHub: https://github.com/samuelclay/hackersmacker

Directly inspired by Slashdot's friend/foe system, which I always wished HN had. Happy to answer questions!
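The friend-of-a-friend technique the post describes maps naturally onto set operations. This sketch uses plain Python sets in place of Redis sets (in Redis it would be per-user keys queried with SMEMBERS/SUNION); the usernames and the `foaf` helper are made up for illustration:

```python
# Each user has a set of friends, the way Hacker Smacker keeps a set per user.
# Friend-of-a-friend = union of your friends' friend sets, minus the people
# you already track and yourself.
friends = {
    "alice": {"bob", "carol"},
    "bob":   {"dave", "erin"},
    "carol": {"erin", "frank"},
}

def foaf(user: str) -> set[str]:
    mine = friends.get(user, set())
    if not mine:
        return set()
    second = set().union(*(friends.get(f, set()) for f in mine))
    return second - mine - {user}

print(sorted(foaf("alice")))   # ['dave', 'erin', 'frank']
```

The nice property is that highlighting a thread is then just set membership tests, which both Redis and Python do in constant time per commenter.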
Open Letter to Google on Mandatory Developer Registration for App Distribution
Hacker News (score: 208)[Other] Open Letter to Google on Mandatory Developer Registration for App Distribution
Package Managers à la Carte: a formal model of dependency resolution
Hacker News (score: 21)[Package Manager] Package Managers à la Carte: a formal model of dependency resolution
Show HN: Out Plane – A PaaS I built solo from Istanbul in 3 months
Show HN (score: 9)[DevOps] Show HN: Out Plane – A PaaS I built solo from Istanbul in 3 months Hey HN,

I posted Out Plane here last week. Wanted to share an update because I've been shipping a lot.

I started this because deploying side projects was killing my motivation. Build something fun over a weekend, then waste two days on Dockerfiles, nginx, and SSL. So I built what I wanted: connect GitHub, push code, get a URL. Done.

Since December I've added managed PostgreSQL, managed Redis with RedisInsight built in, Dockerfile auto-detection that pre-fills your config, real-time metrics, and scale to zero (no traffic means no bill). Per-second pricing, not hourly. The same Next.js + Postgres app costs me $2.40/mo vs $12–47 on other platforms.

No CLI yet, docs need work, ~200 users. Just me, no team, no funding. But people are running real stuff on it.

$20 free credit, no credit card. I read all feedback personally; I'm the only one here.
Show HN: Git-native-issue – issues stored as commits in refs/issues/
Show HN (score: 5)[Other] Show HN: Git-native-issue – issues stored as commits in refs/issues/
Show HN: Beehive – Multi-Workspace Agent Orchestrator
Hacker News (score: 38)[DevOps] Show HN: Beehive – Multi-Workspace Agent Orchestrator hey hn,

i built beehive for myself mostly. it has gotten to the point where my work consists of supervising oc or cc labor at tasks for multiple issues in parallel. my setup used to be zellij with a couple of tabs, each tab working in a separate dir, and it was a pain to manage all that. i know i could use git worktrees, but they're kind of complicated; if you don't know how to use them it is easy to mess up, and i just prefer letting agents run in separate dirs with their own .git and not risk it. while i like zellij and use it inside beehive, i don't like the tabs and i forget where i am half the time.

beehive is a way for me to abstract that away. the heuristic is simple: hives are repos, so you basically have a bunch of hives which correspond to repos you work out of. each hive can have many combs. a comb is a dir with a copy of the repo you're working on. fully isolated, standalone, no shared .git. so for work or for personal stuff, i usually set up the hive and then have a bunch of combs that i jump between, supervising the agents doing their thing. if you have a big repo it takes a minute to clone, and you also need gh and git because i like the niceties of checking if the repo is there at all and stuff like that.

the app is open source, mit license. i went with tauri because i hate electron. also i have friends and coworkers who updated to macos 26, and i don't know if the whole mem leak thing for electron apps has been fixed. the app is like 9 megs, which is nice too. most of it is written with cc, but i guided the aesthetics and the approach. works on mac and there is a dmg, signed and notarized (i reactivated my apple dev credentials).

sharing this to get a vibe check on the idea; maybe it's useful for you too. there are many reasonable arguments you can make for worktrees vs dirs. i just know that trees are too big brain for me, and i like simple things. if you like it, pls lmk, and if you want to help (add linux support, add themes, other cool things) please make a pr / open an issue.
Show HN: Scheme-langserver – Digest incomplete code with static analysis
Show HN (score: 9)[IDE/Editor] Show HN: Scheme-langserver – Digest incomplete code with static analysis Scheme-langserver digests incomplete Scheme code to serve real-world programming needs: goto-definition, auto-completion, type inference, and many other LSP-defined language features. The project lives at https://github.com/ufo5260987423/scheme-langserver.

I built it because I was tired of Scheme/Lisp's ragged development environment, especially the lack of an IDE-like, highly customized programming experience. Though DrRacket and many REPL-based counterparts have done much, general cases like the following don't reach the same level as in other modern languages:

(let* ([ready-for-reference 1]
       [call-reference (+ ready-for-)]))

The `ready-for-` inside `call-reference` should trigger an auto-complete option whose candidates include `ready-for-reference`. Beyond that, the analyzer knows both bindings have type number and that their scope is limited by `let*`'s outer brackets. I wished for an IDE providing such features; these small wishes accumulated over the past ten years, and in the end no ready-made product satisfied me.

For further information, see my GitHub repository, which has a screen recording showing how your code gets help from this project, plus detailed documentation, so don't hesitate to use it.

A few other notes for Hacker News readers:

1. Why I don't use DrRacket: LSP follows the KISS (Keep It Simple, Stupid) principle, and I don't want to get involved with font issues like the ones I read about in its GitHub issues.

2. The current state of scheme-langserver: it has achieved a kind of self-bootstrapping, so I can continue developing it with the help of its VSCode plugin. However, I directly used Chez Scheme's tokenizer, which led to several uncaught exceptions that I promise to fix, though I'm currently occupied with developing new features. If something goes wrong with scheme-langserver, rebooting VSCode generally works.

3. Technology roadmap: I'm now developing a new macro expander so that users can customize LSP behavior by writing their own macros, without altering this project. After that, I plan to improve efficiency and fix bugs.

4. Do I need any help? Yes. And I'd like to say that simply talking about scheme-langserver with me is also a kind of help.

5. Long-term view: I suspect that in 2 or 3 years I will lose focus on this project, but according to some of my friends, I may integrate it with other work.
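The completion behavior in the `let*` example (offering `ready-for-reference` for the prefix `ready-for-`) can be modeled with a toy scope stack in Python. The binding table and the `complete` function are illustrative, not the project's actual data structures:

```python
# Toy model of scope-aware completion: each scope maps bound names to types,
# and inner scopes shadow outer ones (as let* nesting does in Scheme).
def complete(prefix: str, scopes: list[dict[str, str]]) -> list[str]:
    """Return in-scope identifiers starting with `prefix`, innermost first."""
    seen, out = set(), []
    for scope in reversed(scopes):          # innermost scope wins
        for name in scope:
            if name.startswith(prefix) and name not in seen:
                seen.add(name)
                out.append(name)
    return out

# Bindings visible while typing inside:
#   (let* ([ready-for-reference 1] [call-reference (+ ready-for-)]))
scopes = [{"display": "procedure"},             # outer/global scope
          {"ready-for-reference": "number"}]    # let* scope bound so far
print(complete("ready-for-", scopes))   # ['ready-for-reference']
```

A real language server derives the scope stack from static analysis of the (possibly incomplete) syntax tree rather than a hand-built list, but the lookup at the cursor reduces to this kind of prefix filter over visible bindings.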
Show HN: AgentBudget – Real-time dollar budgets for AI agents
Show HN (score: 5)[API/SDK] Show HN: AgentBudget – Real-time dollar budgets for AI agents Hey HN,

I built AgentBudget after an AI agent loop cost me $187 in 10 minutes: GPT-4o retrying a failed analysis over and over. Existing tools (LangSmith, Langfuse) track costs after execution but don't prevent overspend.

AgentBudget is a Python SDK that gives each agent session a hard dollar budget with real-time enforcement. Integration is two lines:

import agentbudget
agentbudget.init("$5.00")

It monkey-patches the OpenAI and Anthropic SDKs (the same pattern as Sentry/Datadog), so existing code works without changes. When the budget is hit, it raises BudgetExhausted before the next API call goes out.

How it works:

- Two-phase enforcement: estimates cost pre-call (input tokens + average completion), reconciles post-call with actual usage. Worst-case overshoot is bounded to one call.
- Loop detection: sliding window over (tool_name, argument_hash, timestamp) tuples. Catches infinite retries even if budget remains.
- Cost engine: pricing table for 50+ models across OpenAI, Anthropic, Google, Mistral, Cohere. Fuzzy matching for dated model variants.
- Unified ledger: tracks both LLM calls and external tool costs (via track() or the @track_tool decorator) in a single session.

Benchmarks: 3.5 μs median overhead per enforcement check. Zero budget overshoot across all tested scenarios. Loop detection: 0 false positives on diverse workloads; catches pathological loops at exactly N+1 calls.

No infrastructure needed: it's a library, not a platform. No Redis, no cloud services, no accounts.

I also wrote a whitepaper covering the architecture and integration with Coinbase's x402 payment protocol (where agents make autonomous stablecoin payments): https://doi.org/10.5281/zenodo.18720464

1,300+ PyPI installs in the first 4 days, all organic. Apache 2.0.

Happy to answer questions about the design.
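The two-phase enforcement described above (estimate before the call, reconcile with actual usage after, so overshoot is bounded to one call) can be sketched as follows. The exception name mirrors the post's BudgetExhausted, but the `Budget` class, token prices, and the average-completion estimator are illustrative assumptions, not AgentBudget's real API:

```python
class BudgetExhausted(Exception):
    pass

class Budget:
    """Hard dollar cap with pre-call estimation and post-call reconciliation."""
    def __init__(self, limit_usd: float, in_price: float = 3e-6,
                 out_price: float = 15e-6, avg_completion_tokens: int = 300):
        self.limit = limit_usd
        self.spent = 0.0                      # reconciled actual spend
        self.in_price = in_price              # assumed $/input token
        self.out_price = out_price            # assumed $/output token
        self.avg_out = avg_completion_tokens  # assumed average completion size

    def pre_call(self, input_tokens: int) -> float:
        """Phase 1: estimate; refuse the call if it could bust the budget."""
        est = input_tokens * self.in_price + self.avg_out * self.out_price
        if self.spent + est > self.limit:
            raise BudgetExhausted(f"${self.spent:.4f} spent of ${self.limit}")
        return est

    def reconcile(self, input_tokens: int, output_tokens: int) -> None:
        """Phase 2: replace the estimate with the provider's actual usage."""
        self.spent += (input_tokens * self.in_price
                       + output_tokens * self.out_price)

b = Budget(0.01)          # one-cent budget for the demo
b.pre_call(1000)          # estimate passes, call would proceed
b.reconcile(1000, 200)    # actual usage recorded: $0.006 spent
```

Because only reconciled actuals accumulate in `spent`, a call that was allowed through can overshoot by at most its own cost, which is the "worst-case overshoot is bounded to one call" property.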
Show HN: enveil – hide your .env secrets from prAIng eyes
Hacker News (score: 39)[Other] Show HN: enveil – hide your .env secrets from prAIng eyes
Meta problem with URPF our bundle in Boca Raton
Hacker News (score: 33)[Monitoring/Observability] Meta problem with URPF our bundle in Boca Raton Meta has a problem in its clusters in Boca Raton, Miami; this is affecting the MNA content delivery network and direct content consumption. This has a regional impact in Latin America, since so far most non-cacheable content is consumed from the clusters in Florida.

The impact is traceable via ICMP, but also reproducible via TCP and difficult to measure via UDP. This is why monitoring tools are misleading: there is no "slowness" resulting from interface saturation; instead, there is data corruption where packets are discarded at the interface level. Therefore, if network performance is measured using those same data points, it won't work and you won't see any alerts.

The issue can also be replicated from the looking glass. In fact, I will attach images below, although you can also see them on the website attached to the post, as well as a more specific report.

There is packet loss and probably flapping on a BGP instance, OSPF, or some IGP within Meta's network. I believe it is between 129.134.101.34, 129.134.104.84, and 129.134.101.51. It is possible that it's a faulty interface in a bundle or some hardware issue that a "show interface status" doesn't reveal, which is why I've failed to report this problem through your NOC.

How can Meta replicate the failure?

1. Look for random MNA cluster IPs from your clients.
2. Ping from 157.240.14.15 with a payload larger than 500 bytes (a packet is more likely to get corrupted on a faulty interface if the payload increases).
3. Ping many servers from point 1.

You will see that once you find the affected upstream or downstream route combination, you will have 10–60% packet loss to the destination host.

How to fix it? Isolate the port or discard faulty hardware.

Why didn't we see it before? Simply put, your monitoring tools and troubleshooting protocols don't work for these problems. The protocol is to attach a HAR file that bases its performance on window scaling and TCP RTT; if both are good, even with data loss, there's "no problem." Especially because that HAR file is extracted using QUIC, and QUIC is particularly good at mitigating slowness caused by data loss (packets are retransmitted without the TCP penalty). You know what uses TCP? WhatsApp Statuses, and those are slow.

Can an MTR show where the problem is? Generally not, because: any network route has a certain number of hops; suppose there are 5 hops between host A and host B. To perform a traceroute, packets are sent with increasing TTL values (1, 2, 3, etc.). Each time a packet expires before reaching its destination, the transit hop reports a "TTL Time Exceeded" message, which is how the route is mapped. The problem is that these are basically point-to-point probes; it's like pinging each hop individually. When there's a problem on one interface in an ECMP group or bundle, those probes won't necessarily take the affected path. So they are unreliable; generally, you will see the losses attributed to the final host even though the fault is in the middle. Check metafixthis.com.
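The ECMP blind spot described above can be illustrated with a toy simulation: routers typically pick a bundle member by hashing each flow's 5-tuple, so a traceroute probe (which is its own flow, with different ports and often a different protocol) may hash onto a healthy member while the user's TCP session rides the faulty one. The hash function and member count here are illustrative assumptions, not how any particular router vendor computes it:

```python
import hashlib

MEMBERS = 4   # assumed number of links in the ECMP bundle

def ecmp_member(src: str, dst: str, sport: int, dport: int, proto: str) -> int:
    """Pick a bundle member by hashing the flow 5-tuple (simplified ECMP)."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % MEMBERS

# The user's TCP flow and a traceroute/UDP probe share src and dst addresses
# but differ in ports and protocol, so they can land on different members.
user_flow = ecmp_member("10.0.0.1", "157.240.14.15", 51512, 443, "tcp")
probe = ecmp_member("10.0.0.1", "157.240.14.15", 33434, 33434, "udp")
print(f"user flow -> member {user_flow}, probe -> member {probe}")
```

Since only 1-in-MEMBERS flows cross the faulty link, probing from many source ports (as the post's "ping many servers" step effectively does) is what eventually lands a probe on the bad member and reveals the 10–60% loss.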