🛠️ Hacker News Tools
Showing 441–460 of 1470 tools from Hacker News
Last Updated
January 17, 2026 at 08:00 PM
Show HN: Encore – Type-safe back-end framework that generates infra from code
Hacker News (score: 55)[DevOps] Show HN: Encore – Type-safe back-end framework that generates infra from code
Reproducible C++ builds by logging Git hashes
Hacker News (score: 14)[Other] Reproducible C++ builds by logging Git hashes
CLI tool to check the Git status of multiple projects
Hacker News (score: 19)[CLI Tool] CLI tool to check the Git status of multiple projects
RegreSQL: Regression Testing for PostgreSQL Queries
Hacker News (score: 76)[Testing] RegreSQL: Regression Testing for PostgreSQL Queries
Show HN: DBOS Java – Postgres-Backed Durable Workflows
Hacker News (score: 40)[Database] Show HN: DBOS Java – Postgres-Backed Durable Workflows

Hi HN - I'm Peter, here with Harry (devhawk), and we're building DBOS Java, an open-source Java library for durable workflows, backed by Postgres.

https://github.com/dbos-inc/dbos-transact-java

Essentially, DBOS helps you write long-lived, reliable code that can survive failures, restarts, and crashes without losing state or duplicating work. As your workflows run, it checkpoints each step they take in a Postgres database. When a process stops (fails, restarts, or crashes), your program can recover from those checkpoints to restore its exact state and continue from where it left off, as if nothing happened.

In practice, this makes it easier to build reliable systems for use cases like AI agents, payments, data synchronization, or anything that takes hours, days, or weeks to complete. Rather than bolting on ad-hoc retry logic and database checkpoints, durable workflows give you one consistent model for ensuring your programs can recover from any failure from exactly where they left off.

This library contains all you need to add durable workflows to your program: there's no separate service or orchestrator, and no external dependencies except Postgres. Because it's just a library, you can incrementally add it to your projects, and it works out of the box with frameworks like Spring. And because it's built on Postgres, it natively supports all the tooling you're familiar with (backups, GUIs, CLI tools) and works with any Postgres provider.

If you want to try it out, check out the quickstart:

https://docs.dbos.dev/quickstart?language=java

We'd love to hear what you think! We'll be in the comments for the rest of the day to answer any questions.
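The checkpoint-and-resume model the DBOS post describes can be illustrated with a toy sketch. This is concept-only Python against SQLite, purely for illustration; DBOS Java's real API, schema, and recovery machinery differ:

```python
import sqlite3

# Toy illustration of durable-workflow checkpointing (hypothetical code,
# not DBOS Java's actual API): each completed step's result is written to
# a table, so a re-run after a crash recovers finished steps instead of
# re-executing them.
def run_workflow(db, wf_id, steps):
    db.execute(
        "CREATE TABLE IF NOT EXISTS ckpt (wf TEXT, step INTEGER, out TEXT)")
    done = dict(db.execute(
        "SELECT step, out FROM ckpt WHERE wf = ?", (wf_id,)))
    results = []
    for i, step in enumerate(steps):
        if i in done:
            results.append(done[i])      # recovered from checkpoint
        else:
            out = step()                 # may crash; earlier steps are safe
            db.execute("INSERT INTO ckpt VALUES (?, ?, ?)", (wf_id, i, out))
            db.commit()
            results.append(out)
    return results

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    calls = []

    def make_step(name, fail=False):
        def step():
            calls.append(name)
            if fail:
                raise RuntimeError("crash")
            return name
        return step

    # First run crashes at step 2...
    try:
        run_workflow(db, "wf1", [make_step("a"), make_step("b"),
                                 make_step("c", fail=True)])
    except RuntimeError:
        pass
    # ...the retry resumes from the checkpoints: "a" and "b" are not re-run.
    out = run_workflow(db, "wf1", [make_step("a"), make_step("b"),
                                   make_step("c")])
    print(out)    # ['a', 'b', 'c']
    print(calls)  # ['a', 'b', 'c', 'c']
```

The key property is that the retry executed only the failed step; everything before it was served from the checkpoint table, which is what makes recovery side-effect safe.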
Show HN: Agent-to-code JIT compiler for Z3-theorem-proving agents
Show HN (score: 6)[Other] Show HN: Agent-to-code JIT compiler for Z3-theorem-proving agents
Show HN: I made an open-source Rust program for memory-efficient genomics
Show HN (score: 6)[Other] Show HN: I made an open-source Rust program for memory-efficient genomics My cofounder and I run a startup in oncology, where we handle cancer genomics data. It occurred to me that, thanks to a recent complexity theory result, there's a clever way to run bioinformatics algorithms using far less RAM. I built this Rust engine for running whole-genome workloads in under 100MB of RAM. Runtime is a little longer as a result - O(T log T) instead of O(T). But it should enable whole-genome analytics on consumer-grade hardware.
Show HN: ChatExport Structurer – parse ChatGPT/Claude exports into queryable SQL
Show HN (score: 5)[CLI Tool] Show HN: ChatExport Structurer – parse ChatGPT/Claude exports into queryable SQL I wanted to query my own chat history but the JSON exports were a mess to work with. Built a small parser that turns them into clean SQL databases. Parsed 70k+ messages across multiple models. Useful for analyzing chat history, building a personal knowledge base, or archiving conversations. Simple CLI, open source.
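The core move here (flatten a nested JSON chat export into a queryable table) can be sketched in a few lines. The schema and field names below are hypothetical stand-ins, not the tool's actual export format:

```python
import json
import sqlite3

# Toy sketch of the idea behind ChatExport Structurer: walk a JSON export
# of conversations and flatten every message into one SQL table.
# (Field names "id", "messages", "role", "content" are assumptions.)
def load_export(db: sqlite3.Connection, export_json: str) -> int:
    db.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               conversation_id TEXT, role TEXT, content TEXT)"""
    )
    rows = 0
    for convo in json.loads(export_json):
        for msg in convo.get("messages", []):
            db.execute(
                "INSERT INTO messages VALUES (?, ?, ?)",
                (convo["id"], msg["role"], msg["content"]),
            )
            rows += 1
    db.commit()
    return rows

if __name__ == "__main__":
    export = json.dumps([
        {"id": "c1", "messages": [
            {"role": "user", "content": "hi"},
            {"role": "assistant", "content": "hello"},
        ]}
    ])
    con = sqlite3.connect(":memory:")
    print(load_export(con, export))  # 2
    print(con.execute(
        "SELECT COUNT(*) FROM messages WHERE role = 'user'").fetchone()[0])  # 1
```

Once the messages are rows, "query my own chat history" becomes ordinary SQL (counts per model, full-text search, date ranges).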
Show HN: Tokenflood – simulate arbitrary loads on instruction-tuned LLMs
Hacker News (score: 18)[Testing] Show HN: Tokenflood – simulate arbitrary loads on instruction-tuned LLMs Hi everyone, I just released an open source load testing tool for LLMs:

https://github.com/twerkmeister/tokenflood

=== What is it and what problems does it solve? ===

Tokenflood is a load testing tool for instruction-tuned LLMs that can simulate arbitrary LLM loads in terms of prompt, prefix, and output lengths and requests per second. Instead of first collecting prompt data for different load types, you can configure the desired parameters for your load test and you are good to go. It also lets you assess the latency effects of potential prompt parameter changes before spending the time and effort to implement them.

I believe it's really useful for developing latency-sensitive LLM applications:

* load testing self-hosted LLM model setups
* assessing the latency benefit of changes to prompt parameters before implementing those changes
* assessing latency and intraday variation of latency on hosted LLM services before sending your traffic there

=== Why did I build it? ===

Over the course of the past year, part of my work has been helping my clients meet their latency, throughput, and cost targets for LLMs (PTUs, anyone?). That process involved making numerous choices about cloud providers, hardware, inference software, models, configurations, and prompt changes. During that time I found myself doing similar tests over and over with a collection of ad-hoc scripts. I finally had some time on my hands and wanted to properly put it together in one tool.

=== What am I looking for? ===

I am sharing this for three reasons: hoping this can make others' work on latency-sensitive LLM applications simpler, learning and improving from feedback, and finding new projects to work on.

So please check it out on GitHub (https://github.com/twerkmeister/tokenflood), comment, and reach out at thomas@werkmeister.me or on LinkedIn (https://www.linkedin.com/in/twerkmeister/) for professional inquiries.

=== Pics ===

Image of CLI interface: https://github.com/twerkmeister/tokenflood/blob/main/images/cli.png?raw=true

Result image: https://github.com/twerkmeister/tokenflood/blob/main/images/self-hosted_shorter_output_latency_percentiles.png?raw=true
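The "configure the load instead of collecting prompts" idea boils down to two pieces: a synthetic prompt of a target token length, and a fixed requests-per-second send schedule. A minimal sketch of both (hypothetical helper names, not tokenflood's actual API):

```python
# Sketch of synthetic-load generation for LLM load testing (assumptions:
# "token" here is a whitespace-separated word, a stand-in for real
# tokenizer counts; function names are illustrative only).
def synthetic_prompt(n_tokens: int, filler: str = "lorem") -> str:
    """Build a prompt of roughly n_tokens tokens without real prompt data."""
    return " ".join([filler] * n_tokens)

def schedule(rps: float, duration_s: float) -> list[float]:
    """Send times (seconds from start) for a constant-rate load."""
    interval = 1.0 / rps
    n = int(duration_s * rps)
    return [i * interval for i in range(n)]

if __name__ == "__main__":
    prompt = synthetic_prompt(512)          # ~512-token prompt
    times = schedule(rps=4, duration_s=2)   # 8 requests over 2 seconds
    print(len(prompt.split()), len(times))  # 512 8
```

A real load tester would fire each request at its scheduled time (e.g. with asyncio) and record per-request latency percentiles; the point of the sketch is that load shape is fully determined by parameters, not by a prompt corpus.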
Show HN: DeltaGlider – Store 4TB of build artifacts in 5GB
Show HN (score: 5)[API/SDK] Show HN: DeltaGlider – Store 4TB of build artifacts in 5GB DeltaGlider is a CLI/SDK similar to `aws s3` or `boto3`.

UPLOAD: It stores the first file in an S3 path at full size (the reference), but saves subsequently uploaded archives as deltas (tiny binary diffs) against the reference.

DOWNLOAD: It reconstructs the original file on the fly, bit-perfect and verified with SHA256.

Why xdelta3? It's a compression-aware, block-level binary diff algorithm, perfect for representing differences between archives, where small changes shift bytes but most content stays the same. It can delta-compress ZIP/JAR/TAR archives by up to 99.9% between versions, provided the difference in compressed content is overall small.

Killer use cases: software versioning, periodic DB backups, JAR, ZIP, TGZ.

The impact for us was a two-orders-of-magnitude storage price reduction. I hope you can benefit from it too!

License: GPLv3

Feedback and contributions are super welcome!
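The reference-plus-delta scheme is easy to demonstrate end to end. DeltaGlider uses xdelta3; the toy below uses Python's `difflib` instead, purely to show the round trip (store one full reference, represent later versions as copy/insert ops, reconstruct bit-perfect and verify with SHA256):

```python
import difflib
import hashlib

# Toy binary delta, illustrating the reference+delta storage idea
# (NOT xdelta3 or DeltaGlider's format): the delta is a list of ops that
# either copy a byte range from the reference or insert literal new bytes.
def make_delta(ref: bytes, new: bytes) -> list:
    ops = []
    sm = difflib.SequenceMatcher(None, ref, new, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))        # reuse bytes from reference
        else:
            ops.append(("insert", new[j1:j2]))  # literal new bytes
    return ops

def apply_delta(ref: bytes, ops: list) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            out += ref[op[1]:op[2]]
        else:
            out += op[1]
    return bytes(out)

if __name__ == "__main__":
    ref = b"hello world build v1"    # stored once, full size
    new = b"hello world build v2!"   # stored only as a small delta
    delta = make_delta(ref, new)
    restored = apply_delta(ref, delta)
    print(restored == new)           # True: bit-perfect reconstruction
    print(hashlib.sha256(restored).hexdigest()
          == hashlib.sha256(new).hexdigest())  # True: SHA256-verified
```

When successive versions share most of their bytes, the literal-insert portions stay tiny, which is where the 4TB-to-5GB style savings come from.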
[Database] Show HN: YaraDB – Lightweight open-source document database built with FastAPI
Show HN: Real-time 4D Julia set navigation via gamepad
Show HN (score: 7)[Other] Show HN: Real-time 4D Julia set navigation via gamepad I've written Atlas, a GPU scripting language that eliminates the boilerplate of managing textures and uniforms. Here are some demos, including 4D fractal exploration with gamepad controls. Press 7 to see the Julia set, and try reloading if you see rectangles or glitches. Documentation: https://banditcat.github.io/Atlas/index.html (requires approximately an RTX 3080).
Performance hacks for faster Python code
Hacker News (score: 74)[Other] Performance hacks for faster Python code
Show HN: Creavi Macropad – Built a wireless macropad with a display
Hacker News (score: 23)[Other] Show HN: Creavi Macropad – Built a wireless macropad with a display Hey HN,

We built a wireless, low-profile macropad with a display called the Creavi Macropad. It lasts at least one month on a single charge. We also put together a browser-based tool that lets you update macros in real time and even push OTA updates over BLE. Since we're software engineers by day, we had to figure out the hardware, mechanics, and industrial design as we went (and somehow make it all work together). This post covers the build process and the final result.

Hope you enjoy!
Terminal Latency on Windows (2024)
Hacker News (score: 51)[Other] Terminal Latency on Windows (2024)
Show HN: Git Quick Stats – The Easiest Way to Analyze Any Git Repository
Show HN (score: 5)[Other] Show HN: Git Quick Stats – The Easiest Way to Analyze Any Git Repository
Grebedoc – static site hosting for Git forges
Hacker News (score: 27)[Other] Grebedoc – static site hosting for Git forges
Show HN: Tusk Drift – Open-source tool for automating API tests
Hacker News (score: 16)[Testing] Show HN: Tusk Drift – Open-source tool for automating API tests Hey HN, I'm Marcel from Tusk. We're launching Tusk Drift, an open source tool that generates a full API test suite by recording and replaying live traffic.

How it works:

1. Records traces from live traffic (what gets captured)
2. Replays traces as API tests with mocked responses (how replay works)
3. Detects deviations between actual vs. expected output (what you get)

Unlike traditional mocking libraries, which require you to manually emulate how dependencies behave, Tusk Drift automatically records what these dependencies respond with based on actual user behavior and maintains recordings over time. We built this because of painful past experiences with brittle API test suites and regressions that would only be caught in prod.

Our SDK instruments your Node service, similar to OpenTelemetry. It captures all inbound requests and outbound calls like database queries, HTTP requests, and auth token generation. When Drift is triggered, it replays the inbound API call while intercepting outbound requests and serving them from recorded data. Drift's tests are therefore idempotent, side-effect free, and fast (typically <100 ms per test). Think of it as a unit test, but for your API.

Our Cloud platform does the following automatically:

- Updates the test suite of recorded traces to maintain freshness
- Matches relevant Drift tests to your PR's changes when running tests in CI
- Surfaces unintended deviations, does root cause analysis, and suggests code fixes

We're excited to see this use case finally unlocked. The release of Claude Sonnet 4.5 and similar coding models has made it possible to go from failing test to root cause reliably. Also, the ability to do accurate test matching and deviation classification means running a tool like this in CI no longer contributes to poor DevEx (imagine the time otherwise spent reviewing test results).

Limitations:

- You can specify PII redaction rules, but there is no default mode for this at the moment. I recommend first enabling Drift on dev/staging, adding transforms (https://docs.usetusk.ai/api-tests/pii-redaction/basic-concepts), and monitoring for a week before enabling on prod.
- Expect a 1-2% throughput overhead. Transforms result in a 1.0% increase in tail latency when a small number of transforms are registered; the impact scales linearly with the number of transforms registered.
- Currently only supports Node backends. A Python SDK is coming next.
- Instrumentation is limited to the following packages (more to come): https://github.com/Use-Tusk/drift-node-sdk?tab=readme-ov-file#requirements

Let me know if you have questions or feedback.

Demo repo: https://github.com/Use-Tusk/drift-node-demo
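The record/replay mechanic that makes these tests side-effect free can be sketched in a few lines. This is a hypothetical illustration of the pattern, not the Tusk Drift SDK's API (which instruments Node packages automatically):

```python
# Sketch of outbound-call record/replay (hypothetical, illustrative only):
# in "record" mode a call hits the real dependency and its response is
# saved; in "replay" mode the saved response is served and the dependency
# is never touched, so tests are fast and side-effect free.
class OutboundRecorder:
    def __init__(self, mode: str):
        self.mode = mode   # "record" or "replay"
        self.traces = {}

    def call(self, key, real_call):
        if self.mode == "record":
            self.traces[key] = real_call()  # hit the dependency, save result
            return self.traces[key]
        return self.traces[key]             # replay: recorded data only

def fetch_user():  # stands in for a real HTTP or database call
    return {"id": 1, "name": "ada"}

rec = OutboundRecorder("record")
live = rec.call("GET /users/1", fetch_user)

rec.mode = "replay"
def boom():
    raise RuntimeError("dependency must not be hit during replay")

replayed = rec.call("GET /users/1", boom)
print(replayed == live)  # True: replay served the recording, not boom()
```

A deviation check then reduces to comparing the replayed API response against the recorded one; any mismatch is a candidate regression.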
Show HN: Linnix – eBPF observability that predicts failures before they happen
Hacker News (score: 18)[Monitoring/Observability] Show HN: Linnix – eBPF observability that predicts failures before they happen I kept missing incidents until it was too late. By the time my monitoring alerted me, servers/nodes were already unrecoverable.

So I built Linnix. It watches your Linux systems at the kernel level using eBPF and tries to catch problems before they cascade into outages.

The idea is simple: instead of alerting you after your server runs out of memory, it notices when memory allocation patterns look weird and tells you "hey, this looks bad."

It uses a local LLM to spot patterns. Not trying to build AGI here - just pattern matching on process behavior. Turns out LLMs are actually pretty good at this.

Example: it flagged higher memory consumption over a short period and alerted me before it was too late. Turned out to be a memory leak that would've killed the process.

Quick start if you want to try it:

    docker pull ghcr.io/linnix-os/cognitod:latest
    docker-compose up -d

Setup takes about 5 minutes. Everything runs locally - your data doesn't leave your machine.

The main difference from tools like Prometheus: most monitoring parses /proc files. This uses eBPF to get data directly from the kernel. More accurate, way less overhead.

Built it in Rust using the Aya framework. No libbpf, no C - pure Rust all the way down. Makes the kernel interactions less scary.

Current state:

- Works on any Linux 5.8+ with BTF
- Monitors Docker/Kubernetes containers
- Exports to Prometheus
- Apache 2.0 license

Still rough around the edges. Actively working on it.

Would love to know:

- What kinds of failures do you wish you could catch earlier?
- Does this seem useful for your setup?

GitHub: https://github.com/linnix-os/linnix

Happy to answer questions about how it works.
Listen to Database Changes Through the Postgres WAL
Hacker News (score: 11)[Other] Listen to Database Changes Through the Postgres WAL