🛠️ Hacker News Tools
Showing 1961–1980 of 2575 tools from Hacker News
Last Updated
April 28, 2026 at 12:01 AM
Launch HN: RunRL (YC X25) – Reinforcement learning as a service
Hacker News (score: 32)[Other] Launch HN: RunRL (YC X25) – Reinforcement learning as a service Hey HN, we're Andrew and Derik at RunRL (<a href="https://runrl.com/">https://runrl.com/</a>). We've built a platform to improve models and agents with reinforcement learning. If you can define a metric, we'll make your model or agent better, without you having to think about managing GPU clusters.<p>Here's a demo video: <a href="https://youtu.be/EtiBjs4jfCg" rel="nofollow">https://youtu.be/EtiBjs4jfCg</a><p>I (Andrew) was doing a PhD in reinforcement learning on language models, and everyone kept...not using RL because it was too hard to get running. At some point I realized that someone's got to sit down and actually write a good platform for running RL experiments.<p>Once this happened, people started using it for antiviral design, formal verification, browser agents, and a bunch of other cool applications, so we decided to make a startup out of it.<p>How it works:<p>- Choose an open-weight base model (weights are necessary for RL updates; Qwen3-4B-Instruct-2507 is a good starting point)<p>- Upload a set of initial prompts ("Generate an antiviral targeting SARS-CoV-2 protease", "Prove this theorem", "What's the average summer high in Windhoek?")<p>- Define a reward function, using Python, an LLM-as-a-judge, or both<p>- For complex settings, you can define an entire multi-turn environment<p>- Watch the reward go up!<p>For most well-defined problems, a small open model + RunRL outperforms frontier models. (For instance, we've seen Qwen-3B do better than Claude 4.1 Opus on antiviral design.) This is because LLM intelligence is notoriously "spiky"; often models are decent-but-not-great at common-sense knowledge, are randomly good at a few domains, but make mistakes on lots of other tasks. RunRL creates spikes precisely on the tasks where you need them.<p>Pricing: $80/node-hour. Most models up to 14B parameters fit on one node (0.6-1.2 TB of VRAM). 
We do full fine-tuning, at the cost of parameter-efficiency (with RL, people seem to care a lot about the last few percent gains in e.g. agent reliability).<p>Next up: continuous learning; tool use. Tool use is currently in private beta, which you can join here: <a href="https://forms.gle/D2mSmeQDVCDraPQg8" rel="nofollow">https://forms.gle/D2mSmeQDVCDraPQg8</a><p>We'd love to hear any thoughts, questions, or positive or negative reinforcement!
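The post mentions defining a reward function in Python. As a rough illustration only (the function name and signature here are hypothetical assumptions, not RunRL's actual API), such a reward might score each completion against a reference answer:

```python
# Hypothetical reward function for RL fine-tuning on language models.
# The name, signature, and scoring scheme are illustrative assumptions,
# not RunRL's actual API.
def reward(prompt: str, completion: str, reference: str) -> float:
    """Score a completion in [0, 1] against a reference answer."""
    if completion.strip() == reference.strip():
        return 1.0  # exact match gets full reward
    ref_tokens = set(reference.lower().split())
    comp_tokens = set(completion.lower().split())
    if not ref_tokens:
        return 0.0
    # Partial credit: fraction of reference tokens the completion covers.
    return len(ref_tokens & comp_tokens) / len(ref_tokens)
```

In practice a reward like this would be combined with, or replaced by, an LLM-as-a-judge score for tasks without a single reference answer, as the post describes.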
UUIDv47: Store UUIDv7 in DB, emit UUIDv4 outside (SipHash-masked timestamp)
Hacker News (score: 30)[Other] UUIDv47: Store UUIDv7 in DB, emit UUIDv4 outside (SipHash-masked timestamp)
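For context on the title: UUIDv7 carries a 48-bit millisecond timestamp in its first six bytes, which leaks record creation time; the idea is to XOR-mask those bytes with a keyed hash of the UUID's remaining bits so the external ID presents as a random-looking UUIDv4, reversibly. A rough sketch of the round trip (keyed BLAKE2b stands in for SipHash here, since Python's stdlib exposes no keyed SipHash; the real project's byte layout and hashing details may differ):

```python
import hashlib
import uuid

KEY = b"0123456789abcdef"  # secret key (the real project uses a SipHash key)

def _mask(tail: bytes) -> bytes:
    # Stand-in for SipHash: keyed BLAKE2b truncated to 6 bytes.
    return hashlib.blake2b(tail, key=KEY, digest_size=6).digest()

def encode(u7: uuid.UUID) -> uuid.UUID:
    """Mask a UUIDv7's 48-bit timestamp so it presents as a UUIDv4."""
    b = bytearray(u7.bytes)
    mask = _mask(bytes(b[6:]))      # keyed hash of the non-timestamp bytes
    for i in range(6):
        b[i] ^= mask[i]             # XOR-mask the timestamp bytes
    b[6] = (b[6] & 0x0F) | 0x40     # force version nibble to 4
    return uuid.UUID(bytes=bytes(b))

def decode(u4: uuid.UUID) -> uuid.UUID:
    """Recover the original UUIDv7, given the key (assumes input was v7)."""
    b = bytearray(u4.bytes)
    b[6] = (b[6] & 0x0F) | 0x70     # restore version nibble to 7
    mask = _mask(bytes(b[6:]))      # same hash input as at encode time
    for i in range(6):
        b[i] ^= mask[i]             # unmask the timestamp bytes
    return uuid.UUID(bytes=bytes(b))
```

The appeal is that the database keeps index-friendly, time-ordered v7 keys while external consumers only ever see opaque v4-shaped IDs.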
Notion API importer, with Databases to Bases conversion bounty
Hacker News (score: 79)[Other] Notion API importer, with Databases to Bases conversion bounty
Show HN: A PSX/DOS style 3D game written in Rust with a custom software renderer
Show HN (score: 40)[Other] Show HN: A PSX/DOS style 3D game written in Rust with a custom software renderer So, after years of abandoning Rust after the hello world stage, I finally decided to do something substantial. It started with simple line rendering, but I liked how it was progressing so I figured I could make a reasonably complete PSX style renderer and a game with it.<p>My only dependency is SDL2; I treat it as my "platform", so it handles windowing, input and audio. This means my Cargo.toml is as simple as:<p><pre><code>[dependencies.sdl2]
version = "0.35"
default-features = false
features = ["mixer"]</code></pre><p>This pulls in around 6-7 other dependencies.<p>I am doing actual true color 3D rendering (with Z buffer, transforming, lighting and rasterizing each triangle and so on, no special techniques or raycasting), the framebuffer is 320x180 (widescreen 320x240). SDL handles the hardware-accelerated final scaling to the display resolution (if available; for example in VMs it's sometimes not, so it's pure software). I do my own physics, quaternion/matrix/vector math, TGA and OBJ loading.<p>Performance: I have not spent a lot of time on this really, but I am kind of satisfied: FPS ranges from [200-500] on a 2011 i5 Thinkpad to [70-80] on a 2005 Pentium laptop (this could barely run rustc...I had to jump through some hoops to make it work on 32 bit Linux), to [40-50] on a RaspberryPi 3B+. I don't have more modern hardware to test.<p>All of this is single threaded, no SIMD, no inline asm. Also, implementing interlaced rendering provided a +50% perf boost (and a nice effect).<p>The Pentium laptop has an ATI (yes) chip which is, maybe not surprisingly, supported perfectly by SDL.<p>Regarding Rust: I've barely touched the language. I am using it more as a "C with vec!s, borrow checker, pattern matching, error propagation, and traits". I love the syntax of the subset that I use; it's crystal clear, readable, ergonomic. 
Things like matches/ifs returning values are extremely useful for concise and productive code. However, the pro/idiomatic code that I see around looks unreadable to me. I've written all of the code from scratch on my own terms, so this was not a problem, but still... In any case, the ecosystem and tooling are amazing. All in all, an amazing development experience. I am a bit afraid to switch back to C++ for my next project.<p>Also, rustup/cargo made things a walk in the park while creating a deployment script that automates the whole process: after a commit, it scans source files for used assets and packages only those, copies dependencies (DLLs for Win), sets up build dependencies depending on the target, builds all 3 targets (Win10_64, Linux32, Linux64), bundles everything into separate zips and uploads them to my local server. I am doing this from a 64bit Lubuntu 18.04 virtual machine.<p>You can try the game and read all info about it on the linked itch.io page: <a href="https://totenarctanz.itch.io/a-scavenging-trip" rel="nofollow">https://totenarctanz.itch.io/a-scavenging-trip</a><p>All assets (audio/images/fonts) were also made by me for this project (as you could guess from the low quality).<p>Development tools: Geany (on Linux), notepad++ (on Windows), both vanilla with no plugins, Blender, Gimp, REAPER.
Irssi: IRC Client in a Docker Image
Hacker News (score: 19)[Other] Irssi: IRC Client in a Docker Image
Show HN: npm-daycare, an NPM proxy that filters out recent & small packages
Show HN (score: 6)[Other] Show HN: npm-daycare, an NPM proxy that filters out recent & small packages Hey all! npm-daycare is a simple NPM proxy built on Verdaccio which filters out all packages that:<p>- are younger than 48h (it will just provide an old version instead)<p>- have fewer than 5,000 weekly downloads<p><a href="https://github.com/stack-auth/npm-daycare" rel="nofollow">https://github.com/stack-auth/npm-daycare</a><p>This is in response to the recent supply chain attacks that shattered the JavaScript ecosystem [1]. It's likely not a problem that will go away any time soon, so we figured we'd build something to protect against it.<p>Doing this on the proxy layer means it will work across the entire system, as proxies are set globally. In the future, we could also add more filters to the proxy.<p>To get started, just run the Docker container:<p><pre><code>docker run -d --rm --name npm-daycare -p 4873:4873 bgodil/npm-daycare

npm set registry http://localhost:4873/
pnpm config set registry http://localhost:4873/
yarn config set registry http://localhost:4873/
bun config set registry http://localhost:4873/

npm view @types/node  # has recent updates
npm view pgmock       # has <5,000 weekly downloads</code></pre> Downside: npm-daycare won't show packages that are younger than 48h on its default config, so be aware of that when you try to update your packages to patch a zero-day exploit.<p>You probably also shouldn't rely on this as your only line of defense. Curious to hear what you think!<p>[1] <a href="https://news.ycombinator.com/item?id=45260741">https://news.ycombinator.com/item?id=45260741</a>
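The filtering policy described above reduces to a simple predicate over registry metadata. A sketch of the idea only (this is not the project's actual Verdaccio plugin code; the function and parameter names are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

# Thresholds from the npm-daycare default config.
MIN_AGE = timedelta(hours=48)
MIN_WEEKLY_DOWNLOADS = 5000

def allowed_version(published_at: datetime, weekly_downloads: int,
                    now: datetime) -> bool:
    """Mimic npm-daycare's policy: block versions younger than 48h
    and packages with fewer than 5,000 weekly downloads."""
    old_enough = now - published_at >= MIN_AGE
    popular_enough = weekly_downloads >= MIN_WEEKLY_DOWNLOADS
    return old_enough and popular_enough
```

The proxy applies this per version, which is why a too-young version can be silently replaced by an older one rather than rejecting the package outright.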
Show HN: Ghostpipe – Connect files in your codebase to user interfaces
Show HN (score: 5)[Other] Show HN: Ghostpipe – Connect files in your codebase to user interfaces Hey HN!<p>I built Ghostpipe because:<p>1. I like to keep data about my software in the codebase and under version control.<p>2. I don't like always working in raw text files with domain-specific languages (e.g. Terraform, OpenAPI, ER diagrams).<p>Ghostpipe is an open source tool that creates a bridge between files in your codebase and applications using WebRTC. This lets developers work with user interfaces where appropriate, while still having access to the underlying raw text files.<p>A few side-benefits to this setup are:<p>1. AI agents are good at working with local text files, so we can keep using those.<p>2. Generally speaking, no signup or installation is needed to use Ghostpipe apps, because all relevant data is in the codebase.<p>I built a few demo apps with Ghostpipe support (Excalidraw & Swagger UI), and I hope this proof of concept spurs some interest in taking this idea further.<p>Thanks!
PyPI Blog: Token Exfiltration Campaign via GitHub Actions Workflows
Hacker News (score: 16)[Other] PyPI Blog: Token Exfiltration Campaign via GitHub Actions Workflows
Show HN: AI Code Detector – detect AI-generated code with 95% accuracy
Hacker News (score: 60)[Other] Show HN: AI Code Detector – detect AI-generated code with 95% accuracy Hey HN,<p>I'm Henry, cofounder and CTO at Span (<a href="https://span.app/" rel="nofollow">https://span.app/</a>). Today we're launching AI Code Detector, an AI code detection tool you can try in your browser.<p>The explosion of AI generated code has created some weird problems for engineering orgs. Tools like Cursor and Copilot are used by virtually every org on the planet – but each codegen tool has its own idiosyncratic way of reporting usage. Some don't report usage at all.<p>Our view is that token spend will start competing with payroll spend as AI becomes more deeply ingrained in how we build software, so understanding how to drive proficiency, improve ROI, and allocate resources relating to AI tools will become at least as important as parallel processes on the talent side.<p>Getting true visibility into AI-generated code is incredibly difficult. And yet it's the number one thing customers ask us for.<p>So we built a new approach from the ground up.<p>Our AI Code Detector is powered by span-detect-1, a state-of-the-art model trained on millions of AI- and human-written code samples. It detects AI-generated code with 95% accuracy, and ties it to specific lines shipped into production. Within the Span platform, it'll give teams a clear view into AI's real impact on velocity, quality, and ROI.<p>It does have some limitations. Most notably, it only works for TypeScript and Python code. We are adding support for more languages: Java, Ruby, and C# are next. Its accuracy is around 95% today, and we're working on improving that, too.<p>If you'd like to take it for a spin, you can run a code snippet here (<a href="https://code-detector.ai/" rel="nofollow">https://code-detector.ai/</a>) and get results in about five seconds. 
We also have a more narrative-driven microsite (<a href="https://www.span.app/detector" rel="nofollow">https://www.span.app/detector</a>) that my marketing team says I have to share.<p>Would love your thoughts, both on the tool itself and your own experiences. I'll be hanging out in the comments to answer questions, too.
Launch HN: Rowboat (YC S24) – Open-source IDE for multi-agent systems
Hacker News (score: 34)[IDE/Editor] Launch HN: Rowboat (YC S24) – Open-source IDE for multi-agent systems Hi HN! We are Arjun, Ramnique, and Akhilesh, the founders of Rowboat (<a href="https://www.rowboatlabs.com">https://www.rowboatlabs.com</a>), an AI-assisted IDE for building and managing multi-agent systems with a copilot. Using Rowboat, you can build both deterministic automation agents (e.g. automatically summarizing emails) and more agentic systems (e.g. a meeting prep assistant or a customer support bot).<p>Here are some examples:<p>- Meeting-prep assistant: <a href="https://www.youtube.com/watch?v=KZTP4xZM2DY" rel="nofollow">https://www.youtube.com/watch?v=KZTP4xZM2DY</a><p>- Customer support assistant: <a href="https://www.youtube.com/watch?v=Xfo-OfgOl8w" rel="nofollow">https://www.youtube.com/watch?v=Xfo-OfgOl8w</a><p>- Gmail and Reddit assistant: <a href="https://www.youtube.com/watch?v=6r7P4Vlcn2g" rel="nofollow">https://www.youtube.com/watch?v=6r7P4Vlcn2g</a><p>Rowboat is open-source (<a href="https://github.com/rowboatlabs/rowboat" rel="nofollow">https://github.com/rowboatlabs/rowboat</a>) and has a growing community. We first launched it on Show HN a few months ago (<a href="https://news.ycombinator.com/item?id=43763967">https://news.ycombinator.com/item?id=43763967</a>).<p>Today we are launching a major update along with a cloud offering. We've added built-in tool integrations for hundreds of tools like Gmail, GitHub and Slack, RAG with documents and URLs, and triggers to invoke your assistant based on external events.<p>Our cloud version includes all the features of the open-source IDE, but runs instantly with no setup or API keys. For launch, we're offering $10 of free usage with Gemini models so you can start building right away for free without adding any card details. 
Paid plans start at $20/month and give you access to additional models (OpenAI, Anthropic, Gemini, with more coming) and higher usage limits.<p>There's a growing view that some tasks are better handled by single agents (<a href="https://news.ycombinator.com/item?id=45096962">https://news.ycombinator.com/item?id=45096962</a>), while others benefit from multi-agent systems for higher accuracy ( <a href="https://www.anthropic.com/engineering/multi-agent-research-system" rel="nofollow">https://www.anthropic.com/engineering/multi-agent-research-s...</a>). The difference often comes down to scope: a focused task like coding suits a single agent, but juggling multiple domains such as email, Slack, and LinkedIn is better split across agents. Multi-agent systems also help avoid context pollution, since LLMs lose focus when asked to handle unrelated tasks. In addition, cleanly dividing responsibilities makes each agent easier to test, debug, and improve.<p>However, splitting work into multiple agents and getting their prompts right is challenging. OpenAI and others have published patterns that work well for different scenarios (<a href="https://cdn.openai.com/business-guides-and-resources/a-practical-guide-to-building-agents.pdf" rel="nofollow">https://cdn.openai.com/business-guides-and-resources/a-pract...</a>). We've added agent abstractions, built on top of OpenAI's Agents SDK, to support these patterns. These include user-facing agents that can decide to hand off to another agent when needed; task agents that perform internal tasks; and pipelines that deterministically call a sequence of agents.<p>Rowboat's copilot ("Skipper") is aware of these patterns and comes seeded with tested configurations, such as a manager–worker setup for a customer support bot, a pipeline for automated document summarization, and multi-agent workflows for combining web search with RAG. 
It can:<p>- Build multi-agent systems from a high-level request and decide how work must be delegated across agents<p>- Edit agent instructions to make correct tool calls using Composio tools or any connected MCP server<p>- Observe your playground chat and improve agents based on your tests<p>We see agentic systems as a spectrum. On one end are deterministic workflows with a few LLM calls. On the other end are fully agentic systems where the LLM makes all control flow decisions - we focus on this end of the spectrum, while still allowing deterministic control where necessary for real-world assistant use cases. We intentionally avoided flowchart-style editors (like n8n) because they become unwieldy when building and maintaining highly agentic systems.<p>We look forward to hearing your thoughts!
Show HN: I wrote a from-scratch OS to serve my blog
Show HN (score: 5)[Other] Show HN: I wrote a from-scratch OS to serve my blog Hey HN! This is a fun/educational project I built to learn OS programming. I started working on it right after graduating high school last year and have been working on it on and off during my first year of university. It features a TCP/IP stack, an HTTP server, a RAM file system, a BIOS bootloader, paging and memory management, and concurrent tasks based on cooperative scheduling, along with a custom library. It's written in a C programming style focused on safety (based on a custom library of core abstractions) that's inspired by the writing of Chris Wellons (nullprogram.com).<p>There is a link to a test deployment in the README. The TCP/IP implementation is nowhere near perfect, of course, so there may be issues loading the page. I'm curious how the system holds up if this post gets any attention ;-)
Automating Distro Updates in CI
Hacker News (score: 14)[Other] Automating Distro Updates in CI
Show HN: Pyproc – Call Python from Go Without CGO or Microservices
Hacker News (score: 16)[API/SDK] Show HN: Pyproc – Call Python from Go Without CGO or Microservices Hi HN! I built *pyproc* to let Go services call Python like a local function – *no CGO and no separate microservice*. It runs a pool of Python worker processes and talks over *Unix Domain Sockets* on the same host/pod, so you get low overhead, process isolation, and parallelism beyond the GIL.<p>*Why this exists*<p>* Keep your Go service, reuse Python/NumPy/pandas/PyTorch/scikit-learn.
* Avoid network hops, service discovery, and ops burden of a separate Python service.<p>*Quick try (~5 minutes)*<p>Go (app):<p>```
go get github.com/YuminosukeSato/pyproc@latest
```<p>Python (worker):<p>```
pip install pyproc-worker
```<p>Minimal worker (Python):<p>```
from pyproc_worker import expose, run_worker

@expose
def predict(req):
    return {"result": req["value"] * 2}

if __name__ == "__main__":
    run_worker()
```<p>Call from Go:<p>```
import (
    "context"
    "fmt"

    "github.com/YuminosukeSato/pyproc/pkg/pyproc"
)

func main() {
    pool, _ := pyproc.NewPool(pyproc.PoolOptions{
        Config:       pyproc.PoolConfig{Workers: 4, MaxInFlight: 10},
        WorkerConfig: pyproc.WorkerConfig{SocketPath: "/tmp/pyproc.sock", PythonExec: "python3", WorkerScript: "worker.py"},
    }, nil)
    _ = pool.Start(context.Background())
    defer pool.Shutdown(context.Background())

    var out map[string]any
    _ = pool.Call(context.Background(), "predict", map[string]any{"value": 42}, &out)
    fmt.Println(out["result"]) // 84
}
```<p>*Scope / limits*<p>* Same-host/pod only (UDS). Linux/macOS supported; Windows named pipes not yet.
* Best for request/response payloads ≲ ~100 KB JSON; GPU orchestration and cross-host serving are out of scope.<p>*Benchmarks (indicative)*<p>* Local M1, simple JSON: ~*45µs p50* and ~*200k req/s* with 8 workers. 
Your numbers will vary.<p>*What's included*<p>* Pure Go client (no CGO), Python worker lib, pool, health checks, graceful restarts, and examples.<p>*Docs & code*<p>* README, design/ops/security docs, pkg.go.dev: <a href="https://github.com/YuminosukeSato/pyproc" rel="nofollow">https://github.com/YuminosukeSato/pyproc</a><p>*License*<p>* Apache-2.0. Current release: v0.2.x.<p>*Feedback welcome*<p>* API ergonomics, failure modes under load, and priorities for codecs/transports (e.g., Arrow IPC, gRPC-over-UDS).<p><i>Source for details: project README and docs.</i>
Show HN: HN Term – browse HN using the terminal
Show HN (score: 5)[CLI Tool] Show HN: HN Term – browse HN using the terminal Hey HN! I've created a terminal interface to browse HN using only the keyboard.<p>You can expand/hide replies, open external links, and browse top, new, ask, show and jobs.<p>All key bindings and theme colors are customizable :)<p>It was built with React, OpenTUI, Bun, and the HN API. I had a lot of fun building this and I'm excited to hear your feedback!
Asciinema CLI 3.0 rewritten in Rust, adds live streaming, upgrades file format
Hacker News (score: 220)[Other] Asciinema CLI 3.0 rewritten in Rust, adds live streaming, upgrades file format
Show HN: MCP Server Installation Instructions Generator
Hacker News (score: 10)[Other] Show HN: MCP Server Installation Instructions Generator Hey HN, we've been experimenting a lot with MCP servers lately, and one of the most time-consuming challenges has been connecting MCP clients to remote MCP servers. To solve this, we built a library that generates installation instructions on the fly, enabling 1-click installation buttons and links for most clients out there.<p>Feel free to try out the generator and use it to improve the README of your remote MCP server with the generated markdown. You can even configure the library to return HTML instructions if someone accesses your remote MCP server via the web.
Show HN: Daffodil – Open-Source Ecommerce Framework to connect to any platform
Hacker News (score: 16)[Other] Show HN: Daffodil – Open-Source Ecommerce Framework to connect to any platform Hello everyone!<p>I've been building an open-source ecommerce framework for Angular called Daffodil. I think Daffodil is really cool because it allows you to connect to any arbitrary ecommerce platform. I've been hacking away at it slowly (for 7 years now) as I've had time and it's finally feeling "ready". I would love feedback from anyone who's spent any time in ecommerce (especially as a frontend developer).<p>For those who are not JavaScript ecosystem devs, here's a demo of the concept: <a href="https://demo.daff.io/" rel="nofollow">https://demo.daff.io/</a><p>For those who are familiar with Angular, you can just run the following from a new Angular app (use Angular 19; we're working on support for Angular 20!) to get the exact same result as the demo above:<p>```bash
ng add @daffodil/commerce
```<p>I'm trying to solve two distinct challenges:<p>First, I absolutely hate having to learn a new ecommerce platform. We have drivers for printers, mice, keyboards, microphones, and many other physical widgets in the operating system, so why not have them for ecommerce software? It's not that I hate the existing platforms, their UIs or APIs; it's that every platform repeats the same concepts and I always have to learn some newfangled way of doing the same thing. I've long wished for these platforms to act more like operating systems on the Web than like custom-built software. Ideally, I would like to call them through a standard interface and forget about their existence beyond that.<p>Second, I'd like to keep it simple to start. I'd like to (on day 1) not have to set up any additional software beyond the core frontend stack (essentially yarn/npm + Angular). 
All too often, I'm forced to set up docker-compose, Kubernetes, pay for a SaaS, wait for IT at the merchant to get me access, or run a VM somewhere just to build some UI for an ecommerce platform that a company uses. More often than not, I just want to start up a little local HTTP server and start writing.<p>I currently have support for Magento/MageOS/Adobe Commerce, partial support for Shopify, and I recently wrote a product driver for Medusa - <a href="https://github.com/graycoreio/daffodil/pull/3939" rel="nofollow">https://github.com/graycoreio/daffodil/pull/3939</a>.<p>Finally, if you're thinking "this isn't performant, can't you just do all of this with GraphQL on the server", you're exactly correct! That's where I'd like to get to eventually, but that's a "yet another tool" barrier to "getting started" that I'd like to let developers do without for as long as I can in the development cycle. I'm shooting to eventually ship the same "driver" code that we run in the browser in a GraphQL server once all is said and done, with just another driver (albeit much simpler than all the others) that uses the native GraphQL format.<p>Any suggestions for drivers and platforms are welcome, though I can't promise I will implement them. :)
Pgstream: Postgres streaming logical replication with DDL changes
Hacker News (score: 30)[Database] Pgstream: Postgres streaming logical replication with DDL changes
Show HN: I reverse engineered macOS to allow custom Lock Screen wallpapers
Show HN (score: 32)[Other] Show HN: I reverse engineered macOS to allow custom Lock Screen wallpapers Hi HN, I'm Oskar, a solo indie Mac developer from Sweden. For those in the Mac community, you might know me from my other apps like Sensei and Trim Enabler.<p>For years, I've been frustrated by the lack of customisation of macOS, in particular the Lock Screen, which supports animated wallpapers, but only ones provided by Apple. There's never been a way to add your own personal videos.<p>I decided to figure out how to solve this, and the result is Backdrop 2.0. Backdrop is my Live Wallpaper app for Mac; it can play video wallpapers on your desktop. And now it can play on your Lock Screen too.<p>The core technical challenge, as you can imagine, came from trying to do something that Apple otherwise does not allow. However, through extensive reverse engineering of the macOS wallpaper system, I figured out a way to provide Backdrop wallpapers to the system in a way that allows them to play on the lock screen, and even appear in a custom section in System Settings.<p>I'm here all day to answer any questions, especially about the reverse engineering process, the challenges of integrating with macOS, or the experience of being an indie Mac developer.<p>Would love to hear your thoughts and feedback.
For Good First Issue – A repository of social impact and open source projects
Hacker News (score: 31)[Other] For Good First Issue ā A repository of social impact and open source projects