🛠️ Hacker News Tools
Showing 621–640 of 2474 tools from Hacker News
Last Updated: April 22, 2026 at 12:00 AM
Claude Code escapes its own denylist and sandbox
Hacker News (score: 25)
Show HN: We want to displace Notion with collaborative Markdown files
Show HN (score: 14)
Hi HN! We at Moment [1] are working on a Notion alternative that is (1) rich and collaborative, but (2) also just plain-old Markdown files, stored in git (ok, technically in jj), on local disk. We think the era of rigid SaaS UI is basically over: coding agents (`claude`, `amp`, `copilot`, `opencode`, etc.) are good enough now that they can instantly build custom UI that fits your needs exactly. The very best agents in the world are coding agents, and we want to let people simply use them, e.g., to build little internal tools, without compromising on collaboration.

Moment aims to cover this and other gaps: seamless collaborative editing for teams, more robust programming capabilities built in (including a from-scratch React integration), and tools for accessing private APIs.

A lot of our challenge is just making the collaborative editing work really well. We have found this is a lot harder than simply slapping Yjs on the frontend and calling it a day. We wrote about this previously, and the post [2] did pretty well on HN: "Lies I Was Told About Collaborative Editing" (352 upvotes as of this writing). Beyond that, in part 2, we'll talk about why we found it hard to get collab to run at 60fps consistently: for one, the Yjs ProseMirror bindings completely tear down and re-create the entire document on every single collaborative keystroke.

We hope you will try it out! At this stage even negative feedback is helpful. :)

[1]: https://www.moment.dev/
[2]: https://news.ycombinator.com/item?id=42343953
Show HN: Agent Action Protocol (AAP) – MCP got us started, but is insufficient

Background: I've been working on agentic guardrails because agents act in expensive/terrible ways and something needs to be able to say "Maybe don't do that" to the agents, but guardrails are almost impossible to enforce with the way things are built today.

Context: We keep running into so many problems and limitations with MCP. It was created so that agents have context on how to act in the world; it wasn't designed to become THE standard rails for agentic behavior. We keep tacking things onto it trying to improve it, but it needs to die a SOAP death so a REST can rise in its place. We need a standard protocol for whenever an agent is taking action. Anywhere.

I'm almost certainly the wrong person to design this, but I'm seeing more and more people tack things onto MCP rather than fix the underlying issues. The fastest way to get a good answer is to submit a bad one on the internet. So here I am. I think we need a new protocol. Whether it's AAP or something else, I submit my best effort.

Please rip it apart; let's make something better.
Show HN: A trainable, modular electronic nose for industrial use
Hacker News (score: 27) [Other]
Hi HN,

I'm part of the team building Sniphi.

Sniphi is a modular digital nose that uses gas sensors and machine-learning models to convert volatile organic compound (VOC) data into a machine-readable signal that can be integrated into existing QA, monitoring, or automation systems. The system is currently in an R&D phase, but already exists as working hardware and software and is being tested in real environments.

The project grew out of earlier collaborations with university researchers on gas sensors and odor classification. What we kept running into was a gap between promising lab results and systems that could actually be deployed, integrated, and maintained in real production environments.

One of our core goals was to avoid building a single-purpose device. The same hardware and software stack can be trained for different use cases by changing the training data and models, rather than the physical setup. In that sense, we think of it as a "universal" electronic nose: one platform, multiple smell-based tasks.

Some design principles we optimized for:

- Composable architecture: sensor ingestion, ML inference, and analytics are decoupled and exposed via APIs/events
- Deployment-first thinking: designed for rollout in factories and warehouses, not just controlled lab setups
- Cloud-backed operations: model management, monitoring, and updates run on Azure, which makes it easier to integrate with existing industrial IT setups
- Trainable across use cases: the same platform can be retrained for different classification or monitoring tasks without redesigning the hardware

One public demo we show is classifying different coffee aromas, but that's just a convenient example. In practice, we're exploring use cases such as:

- Quality control and process monitoring
- Early detection of contamination or spoilage
- Continuous monitoring in large storage environments (e.g. detecting parasite-related grain contamination in warehouses)

Because this is a hardware system, there's no simple way to try it over the internet. To make it concrete, we've shared:

- A short end-to-end demo video showing the system in action (YouTube)
- A technical overview of the architecture and deployment model: https://sniphi.com/

At this stage, we're especially interested in feedback and conversations with people who:

- Have deployed physical sensors at scale
- Have run into problems that smell data might help with
- Are curious about piloting or testing something like this in practice

We're not fundraising here. We're mainly trying to learn where this kind of sensing is genuinely useful and where it isn't.

Happy to answer technical questions.
Show HN: Demucs music stem separator rewritten in Rust – runs in the browser
Hi HN! I reimplemented HTDemucs v4 (Meta's music source separation model) in Rust, using Burn. It splits any song into individual stems (drums, bass, vocals, guitar, piano) with no Python runtime or server involved.

Try it now: https://nikhilunni.github.io/demucs-rs/ (needs a WebGPU-capable browser; Chrome/Edge work best)

GitHub: https://github.com/nikhilunni/demucs-rs

It runs three ways:

- In the browser: the full ML inference pipeline compiles to WASM and runs on your GPU via WebGPU. No uploads, nothing leaves your machine.
- Native CLI: Metal on macOS, Vulkan on Linux/Windows. Faster than the browser path.
- DAW plugin: a VST3/CLAP plugin for macOS with a native SwiftUI UI. Load a track, separate it, drag stems directly into your DAW timeline, or play it as a MIDI instrument with solo/faders.

The core inference library is built on Burn (https://burn.dev), a Rust deep learning framework. The same `demucs-core` crate compiles to both native targets and `wasm32-unknown-unknown`; the only thing that changes is the GPU backend.

Model weights are F16 safetensors hosted on Hugging Face and downloaded and cached automatically on first use on all platforms. Three variants: standard 4-stem (84 MB), 6-stem with guitar/piano (84 MB), and a fine-tuned bag-of-4-models for best quality (333 MB).

The existing implementations I found online were mostly wrappers around the original Python implementation, and not very portable. The model works remarkably well, and I wanted to be able to quickly create samples and remixes without leaving the DAW or my browser. Right now the implementation is pretty macOS-heavy, as that's what I'm testing with, but all of the building blocks for other platforms are ready to build on. I want this to grow into a general utility for music producers, not just "works on my machine."

It was a fun first foray into DSP and the state of the art of ML over WASM, with lots of help from Claude!
Physics Girl: Super-Kamiokande – Imaging the sun by detecting neutrinos [video]
Hacker News (score: 400)
I'm reluctant to verify my identity or age for any online services
Hacker News (score: 239)
Show HN: Reconstruct any image using primitive shapes, runs in-browser via WASM
Hacker News (score: 18) [Other]
I built a browser-based port of fogleman/primitive, a Go CLI tool that approximates images using primitive shapes (triangles, ellipses, beziers, etc.) via a hill-climbing algorithm. The original tool requires building from source and running from the terminal, which isn't exactly accessible. I compiled the core logic to WebAssembly so anyone can drop an image and watch it get reconstructed shape by shape, entirely client-side with no server involved.

Demo: https://primitive-playground.taiseiue.jp/

Source: https://github.com/taiseiue/primitive-playground

Curious if anyone has ideas for shapes or features worth adding.
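The hill-climbing idea behind fogleman/primitive can be sketched in a few dozen lines. This is a simplified illustration, not the Go tool's actual code: it uses axis-aligned rectangles instead of the full shape set, a toy grayscale grid instead of a real image, and all function names here are made up for the example.

```python
import random

def sq_error(img, target):
    """Sum of squared pixel differences between two grayscale grids."""
    return sum((a - b) ** 2
               for row_i, row_t in zip(img, target)
               for a, b in zip(row_i, row_t))

def render(canvas, rect):
    """Blend one filled rectangle (x, y, w, h, shade) onto a copy of canvas."""
    x, y, w, h, shade = rect
    out = [row[:] for row in canvas]
    for j in range(y, min(y + h, len(out))):
        for i in range(x, min(x + w, len(out[0]))):
            out[j][i] = (out[j][i] + shade) // 2
    return out

def mutate(rect, size):
    """Randomly nudge one attribute of the rectangle (a hill-climbing move)."""
    cand = list(rect)
    k = random.randrange(5)
    cand[k] += random.choice((-1, 1)) * (16 if k == 4 else 1)
    cand[0] = max(0, min(size - 1, cand[0]))
    cand[1] = max(0, min(size - 1, cand[1]))
    cand[2] = max(1, min(size, cand[2]))
    cand[3] = max(1, min(size, cand[3]))
    cand[4] = max(0, min(255, cand[4]))
    return tuple(cand)

def best_shape(canvas, target, steps=300):
    """Start from a random rectangle and hill-climb: keep a mutation
    only if it lowers the reconstruction error."""
    size = len(target)
    best = (random.randrange(size), random.randrange(size),
            random.randrange(1, size + 1), random.randrange(1, size + 1),
            random.randrange(256))
    best_err = sq_error(render(canvas, best), target)
    for _ in range(steps):
        cand = mutate(best, size)
        err = sq_error(render(canvas, cand), target)
        if err < best_err:
            best, best_err = cand, err
    return best

def approximate(target, n_shapes=10):
    """Greedily add hill-climbed rectangles; keep a shape only if it helps."""
    size = len(target)
    canvas = [[0] * size for _ in range(size)]
    for _ in range(n_shapes):
        new_canvas = render(canvas, best_shape(canvas, target))
        if sq_error(new_canvas, target) < sq_error(canvas, target):
            canvas = new_canvas
    return canvas
```

The real tool repeats exactly this outer loop, shape by shape, which is why the reconstruction visibly sharpens as shapes accumulate.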
AI-generated art can't be copyrighted (Supreme Court declines review)
Hacker News (score: 66)
India's top court angry after junior judge cites fake AI-generated orders
Hacker News (score: 239)
Mullvad VPN: Banned TV Ad in the Streets of London [video]
Hacker News (score: 193)
The Xkcd thing, now interactive
Hacker News (score: 667)
Simplifying Application Architecture with Modular Design and MIM
Hacker News (score: 33)
I've written a deep dive into software design focusing on the "gray area" between High-Level Design (system architecture) and Low-Level Design (classes/functions).

What's inside:

* A step-by-step tutorial refactoring a legacy big ball of mud into self-contained modules.
* A bit of a challenge to Clean/Hexagonal Architectures with a pattern I've seen in the wild (which I named MIM in the text).
* A solid appendix on the fundamentals of Modular Design.

(Warning: it's a long read. I've seen shorter ebooks on Leanpub.)
I built a pint-sized Macintosh
Hacker News (score: 10)
Optimizing Recommendation Systems with JDK's Vector API
Hacker News (score: 37)
Show HN: Giggles – A batteries-included React framework for TUIs
Show HN (score: 5)
I built a framework that handles focus and input routing automatically for you, born out of the things that Ink leaves to you, and inspired by Charmbracelet's Bubble Tea.

- Hierarchical focus and input routing: the hard part of terminal UIs, solved. Define focus regions with useFocusScope and compose them freely; a text input inside a list inside a panel just works. Each component owns its keys; unhandled keypresses bubble up to the right parent automatically. No global handler like useInput, no coordination code.
- 15 UI components: Select, TextInput, Autocomplete, Markdown, Modal, Viewport, CodeBlock (with diff support), VirtualList, CommandPalette, and more. Sensible defaults, render props for full customization.
- Terminal process control: spawn processes and stream output into your TUI with hooks like useSpawn and useShellOut; hand off to vim, less, or any external program and reclaim control cleanly when they exit.
- Screen navigation, a keybinding registry (expose a ? help menu for free), and theming included.
- React 19 compatible!

Docs and live interactive demos in your browser: https://giggles.zzzzion.com

Quick start: npx create-giggles-app
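The "unhandled keypresses bubble up to the right parent" model is the interesting design choice here. A framework-agnostic sketch of that dispatch rule follows; `FocusScope`, its constructor, and the handler convention are illustrative inventions for this example, not the Giggles API.

```python
class FocusScope:
    """A node in a focus tree. A key event is offered to the focused
    leaf first; if its handler declines (returns False), the event
    bubbles up through ancestors until some scope handles it."""

    def __init__(self, name, handler=None, parent=None):
        self.name = name
        self.handler = handler  # callable(key) -> bool, or None
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def dispatch(self, key):
        """Walk from this (focused) scope toward the root; return the
        name of the scope that handled the key, or None."""
        node = self
        while node is not None:
            if node.handler is not None and node.handler(key):
                return node.name
            node = node.parent
        return None
```

With this rule, a text input can claim printable characters while letting navigation keys fall through to an enclosing panel, with no global coordination:

```python
app = FocusScope("app", handler=lambda k: k == "ctrl-c")
panel = FocusScope("panel", handler=lambda k: k == "tab", parent=app)
textbox = FocusScope("textbox", handler=lambda k: len(k) == 1, parent=panel)

textbox.dispatch("a")       # handled by "textbox"
textbox.dispatch("tab")     # bubbles to "panel"
textbox.dispatch("ctrl-c")  # bubbles to "app"
```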
How to Build Your Own Quantum Computer
Hacker News (score: 50)
Show HN: I simulated 1200 Iranian missiles attacking air defences in a browser
I've built airdefense.dev, which is able to simulate all kinds of ballistic missiles, one-way-attack drones like Shaheds, and most of the commonly deployed anti-air defence systems. All of this inside the browser. I've now added a scenario of the current attacks in the Middle East by Iran. It was quite the challenge to optimize it enough to not completely kill a common laptop, although it still runs best on somewhat beefier systems.
Show HN: I built a sub-500ms latency voice agent from scratch
Hacker News (score: 93)
I built a voice agent from scratch that averages ~400ms end-to-end latency (user stops speaking → first syllable). That's with full STT → LLM → TTS in the loop, clean barge-ins, and no precomputed responses.

What moved the needle:

- Voice is a turn-taking problem, not a transcription problem. VAD alone fails; you need semantic end-of-turn detection.
- The system reduces to one loop: speaking vs. listening. The two transitions (cancel instantly on barge-in, respond instantly on end-of-turn) define the experience.
- STT → LLM → TTS must stream. Sequential pipelines are dead on arrival for natural conversation.
- TTFT dominates everything. In voice, the first token is the critical path. Groq's ~80ms TTFT was the single biggest win.
- Geography matters more than prompts. Colocate everything or you lose before you start.

GitHub repo: https://github.com/NickTikhonov/shuo

Follow whatever I next tinker with: https://x.com/nick_tikhonov
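The "one loop: speaking vs. listening" claim can be made concrete as a two-state machine whose only transitions are the two the post names. This is a minimal sketch of that idea; the class, event names, and action strings are illustrative and not taken from the shuo codebase.

```python
from enum import Enum

class State(Enum):
    LISTENING = "listening"
    SPEAKING = "speaking"

class TurnLoop:
    """Two-state turn-taking loop: respond instantly on semantic
    end-of-turn, cancel instantly on barge-in. Everything else
    (STT/LLM/TTS streaming) hangs off these two transitions."""

    def __init__(self):
        self.state = State.LISTENING
        self.actions = []  # side effects the real system would fire

    def on_event(self, event):
        if self.state is State.LISTENING and event == "end_of_turn":
            # Semantic end-of-turn detected: start streaming TTS now.
            self.actions.append("start_tts")
            self.state = State.SPEAKING
        elif self.state is State.SPEAKING and event == "user_speech":
            # Barge-in: cancel playback immediately, go back to listening.
            self.actions.append("cancel_tts")
            self.state = State.LISTENING
        # All other (state, event) pairs are no-ops by design.
        return self.state
```

Keeping every latency-critical decision inside these two transitions is what makes the "cancel instantly / respond instantly" behavior tractable: there is exactly one place where speech starts and one place where it stops.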