🛠️ Hacker News Tools
Showing 1541–1560 of 2557 tools from Hacker News
Last Updated
April 27, 2026 at 12:00 AM
Show HN: Creavi Macropad – Built a wireless macropad with a display
Hacker News (score: 23)[Other] Show HN: Creavi Macropad – Built a wireless macropad with a display

Hey HN,

We built a wireless, low-profile macropad with a display called the Creavi Macropad. It lasts at least a month on a single charge. We also put together a browser-based tool that lets you update macros in real time and even push OTA updates over BLE. Since we're software engineers by day, we had to figure out the hardware, mechanics, and industrial design as we went (and somehow make it all work together). This post covers the build process and the final result.

Hope you enjoy!
Terminal Latency on Windows (2024)
Hacker News (score: 51)[Other] Terminal Latency on Windows (2024)
Show HN: Git Quick Stats – The Easiest Way to Analyze Any Git Repository
Show HN (score: 5)[Other] Show HN: Git Quick Stats – The Easiest Way to Analyze Any Git Repository
Grebedoc – static site hosting for Git forges
Hacker News (score: 27)[Other] Grebedoc – static site hosting for Git forges
Show HN: Tusk Drift – Open-source tool for automating API tests
Hacker News (score: 16)[Testing] Show HN: Tusk Drift – Open-source tool for automating API tests

Hey HN, I'm Marcel from Tusk. We're launching Tusk Drift, an open-source tool that generates a full API test suite by recording and replaying live traffic.

How it works:

1. Records traces from live traffic (what gets captured)
2. Replays traces as API tests with mocked responses (how replay works)
3. Detects deviations between actual and expected output (what you get)

Unlike traditional mocking libraries, which require you to manually emulate how dependencies behave, Tusk Drift automatically records what these dependencies respond with based on actual user behavior and maintains the recordings over time. We built this because of painful past experiences with brittle API test suites and regressions that would only be caught in prod.

Our SDK instruments your Node service, similar to OpenTelemetry. It captures all inbound requests and outbound calls like database queries, HTTP requests, and auth token generation. When Drift is triggered, it replays the inbound API call while intercepting outbound requests and serving them from recorded data. Drift's tests are therefore idempotent, side-effect free, and fast (typically <100 ms per test). Think of it as a unit test for your API.

Our Cloud platform does the following automatically:

- Updates the test suite of recorded traces to maintain freshness
- Matches relevant Drift tests to your PR's changes when running tests in CI
- Surfaces unintended deviations, does root-cause analysis, and suggests code fixes

We're excited to see this use case finally unlocked. The release of Claude Sonnet 4.5 and similar coding models has made it possible to go from failing test to root cause reliably. Also, accurate test matching and deviation classification mean that running a tool like this in CI no longer contributes to poor DevEx (imagine the time otherwise spent reviewing test results).

Limitations:

- You can specify PII redaction rules, but there is no default mode for this at the moment. I recommend first enabling Drift on dev/staging, adding transforms (https://docs.usetusk.ai/api-tests/pii-redaction/basic-concepts), and monitoring for a week before enabling on prod.
- Expect a 1-2% throughput overhead. Transforms result in a 1.0% increase in tail latency when a small number of transforms are registered; the impact scales linearly with the number of transforms registered.
- Currently only supports Node backends. A Python SDK is coming next.
- Instrumentation is limited to the following packages (more to come): https://github.com/Use-Tusk/drift-node-sdk?tab=readme-ov-file#requirements

Let me know if you have questions or feedback.

Demo repo: https://github.com/Use-Tusk/drift-node-demo
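The "detects deviations between actual vs. expected output" step can be pictured as a recursive diff between a recorded response and its replayed counterpart. This is a hypothetical sketch of that idea, not Tusk Drift's actual implementation; `find_deviations` and the field names are illustrative only.

```python
# Hypothetical sketch of Drift-style deviation detection: recursively
# compare a recorded response against a replayed one and report
# field-level differences with their JSON paths.
def find_deviations(recorded, replayed, path=""):
    deviations = []
    if isinstance(recorded, dict) and isinstance(replayed, dict):
        for key in sorted(set(recorded) | set(replayed)):
            child = f"{path}.{key}" if path else key
            if key not in replayed:
                deviations.append((child, "missing in replay"))
            elif key not in recorded:
                deviations.append((child, "unexpected in replay"))
            else:
                deviations.extend(find_deviations(recorded[key], replayed[key], child))
    elif recorded != replayed:
        deviations.append((path, f"expected {recorded!r}, got {replayed!r}"))
    return deviations

recorded = {"status": 200, "body": {"id": 1, "name": "alice"}}
replayed = {"status": 200, "body": {"id": 1, "name": "bob"}}
print(find_deviations(recorded, replayed))
# [('body.name', "expected 'alice', got 'bob'")]
```

A real implementation would additionally need to ignore expected nondeterminism (timestamps, request IDs), which is presumably what the transform rules above are for.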
Show HN: Linnix – eBPF observability that predicts failures before they happen
Hacker News (score: 18)[Monitoring/Observability] Show HN: Linnix – eBPF observability that predicts failures before they happen

I kept missing incidents until it was too late. By the time my monitoring alerted me, servers/nodes were already unrecoverable.

So I built Linnix. It watches your Linux systems at the kernel level using eBPF and tries to catch problems before they cascade into outages.

The idea is simple: instead of alerting you after your server runs out of memory, it notices when memory allocation patterns look weird and tells you "hey, this looks bad."

It uses a local LLM to spot patterns. Not trying to build AGI here - just pattern matching on process behavior. Turns out LLMs are actually pretty good at this.

Example: it flagged higher memory consumption over a short period and alerted me before it was too late. Turned out to be a memory leak that would've killed the process.

Quick start if you want to try it:

```
docker pull ghcr.io/linnix-os/cognitod:latest
docker-compose up -d
```

Setup takes about 5 minutes. Everything runs locally - your data doesn't leave your machine.

The main difference from tools like Prometheus: most monitoring parses /proc files. This uses eBPF to get data directly from the kernel. More accurate, way less overhead.

Built it in Rust using the Aya framework. No libbpf, no C - pure Rust all the way down. Makes the kernel interactions less scary.

Current state:

- Works on any Linux 5.8+ with BTF
- Monitors Docker/Kubernetes containers
- Exports to Prometheus
- Apache 2.0 license

Still rough around the edges. Actively working on it.

Would love to know:

- What kinds of failures do you wish you could catch earlier?
- Does this seem useful for your setup?

GitHub: https://github.com/linnix-os/linnix

Happy to answer questions about how it works.
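The "alert before memory runs out" idea can be illustrated with a simple trend check: fit a line to recent memory samples and project it forward. This is a toy sketch of the concept only; Linnix's actual detection uses eBPF data and a local LLM, and `looks_like_leak` and its thresholds are invented for illustration.

```python
# Illustrative leak heuristic (not Linnix's actual logic): fit a linear
# trend to recent per-process memory samples and flag the process if the
# projected usage crosses the limit within `horizon` future samples.
def looks_like_leak(samples_mb, limit_mb, horizon=10):
    n = len(samples_mb)
    if n < 2:
        return False
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_mb) / n
    # Least-squares slope of memory vs. sample index
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    projected = samples_mb[-1] + slope * horizon
    return slope > 0 and projected > limit_mb

# Steady growth toward a 1 GB limit is flagged before the limit is hit:
print(looks_like_leak([600, 640, 690, 720, 770], limit_mb=1024))  # True
print(looks_like_leak([600, 600, 600, 600], limit_mb=1024))       # False
```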
Listen to Database Changes Through the Postgres WAL
Hacker News (score: 11)[Other] Listen to Database Changes Through the Postgres WAL
Show HN: Gerbil – an open source desktop app for running LLMs locally
Show HN (score: 34)[Other] Show HN: Gerbil – an open source desktop app for running LLMs locally

Gerbil is an open source app that I've been working on for the last couple of months. Development is now largely done and I'm unlikely to add any more major features. Instead I'm focusing on bug fixes, small QoL features, and dependency upgrades.

Under the hood it runs llama.cpp (via koboldcpp) backends and allows easy integration with popular modern frontends like Open WebUI, SillyTavern, ComfyUI, StableUI (built-in), and KoboldAI Lite (built-in).

Why did I create this? I wanted an all-in-one solution for simple text and image-gen local LLMs. I got fed up with needing to manage multiple tools for the various LLM backends and frontends. In addition, as a Linux Wayland user I needed something that would work and look great on my system.
Xqerl – Erlang XQuery 3.1 Processor
Hacker News (score: 26)[Other] Xqerl – Erlang XQuery 3.1 Processor
Show HN: Tracking AI Code with Git AI
Show HN (score: 6)[Other] Show HN: Tracking AI Code with Git AI

Git AI is a side project I created to track AI-generated code in our repos from development, through PRs, and into production. It doesn't just count lines; it keeps track of them as your code evolves, gets refactored, and the git history gets rewritten.

Think 'git blame' but for AI code. There's a lot about how it works in the post, but I wanted to share how it's been impacting me and my team:

- I find I review AI code very differently than human code. Being able to see the prompts my colleagues used, what the AI wrote, and where they stepped in to override has been extraordinarily helpful. This is still very manual today, but I hope to build more UI around it soon.

- "Why is this here?" - more than once I've given my coding agent access to the past prompts that generated the code I'm looking at, which lets the agent know what my colleague was thinking when they made the change. Engineers talk to AI all day now... their prompts are sort of like a log of thoughts :)

- I pay a lot of attention to the ratio of lines generated for every line accepted. If it gets over 4 or 5, it means I'm well outside the AI's distribution or prompting poorly; either way, it's good cause for reflection, and I've learned a lot about collaborating with LLMs.

This has been really fun to build, especially because some amazing contributors who were working on similar projects came together and directed their efforts toward making Git AI shine. We hope you like it.
Run Nix Based Environments in Kubernetes
Hacker News (score: 32)[DevOps] Run Nix Based Environments in Kubernetes
Show HN: DroidDock – A sleek macOS app for browsing Android device files via ADB
Hacker News (score: 29)[Other] Show HN: DroidDock – A sleek macOS app for browsing Android device files via ADB

Hi HN,

I'm Rajiv, a software engineer turned math teacher living in the mountains, where I like to slow down life while still building useful software.

I recently built DroidDock, a lightweight and modern macOS desktop app that lets you browse and manage files on your Android device via ADB. After 12 years in software development, I wanted a free, clean, and efficient tool because existing solutions were either paid, clunky, or bloated.

Features include multiple view modes, thumbnail previews for images/videos, intuitive file search, file upload/download, and keyboard shortcuts. The backend uses Rust and Tauri for performance.

You can download the latest .dmg from the landing page here: https://rajivm1991.github.io/DroidDock/
Source code is available on GitHub: https://github.com/rajivm1991/DroidDock

I'd appreciate your feedback on usability, missing features, or bugs. Thanks for checking it out!

— Rajiv
Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer
Hacker News (score: 10)[IDE/Editor] Show HN: Trilogy Studio, open-source browser-based SQL editor and visualizer

SQL-first analytic IDE, similar to Redash/Metabase. Aims to solve reuse/composability at the code layer with a modified syntax, Trilogy, that includes a semantic layer directly in the SQL-like language.

Status: experiment; feedback and contributions welcome!

Built to solve 3 problems I have with SQL as my primary iterative analysis language:

1. Adjusting queries/analysis takes a lot of boilerplate. Solved with queries that operate on the semantic layer, not tables. Also eliminates the need for CTEs.

2. Sources of truth change all the time. I hate updating reports to reference new tables. Also solved by the semantic layer, since data bindings can be updated without changing dashboards or queries.

3. Getting from SQL to visuals is too much work in many tools; make it as streamlined as possible. Surprise: solved with the semantic layer; add in more expressive typing to get better defaults; also use it to wire up automatic drilldowns/cross-filtering.

Supports: BigQuery, DuckDB, Snowflake.

Links:
https://trilogydata.dev/ (language info)

Git links:
[Frontend] https://github.com/trilogy-data/trilogy-studio-core
[Language] https://github.com/trilogy-data/pytrilogy

Previously:
https://news.ycombinator.com/item?id=44106070 (significant UX/feature reworks since)
https://news.ycombinator.com/item?id=42231325
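The semantic-layer idea behind points 1 and 2 (queries reference named fields; the binding to physical tables lives in one place) can be sketched in a few lines. This is a toy illustration only, not Trilogy's actual syntax or compiler; `SEMANTIC_LAYER` and `compile_query` are hypothetical names.

```python
# Toy semantic layer: each field maps to (physical table, SQL expression).
# Repointing a report to a new source table means editing this mapping,
# not every query and dashboard that uses the field.
SEMANTIC_LAYER = {
    "revenue": ("analytics.orders_v2", "SUM(amount)"),
    "order_date": ("analytics.orders_v2", "created_at::date"),
}

def compile_query(metrics, dims):
    # Toy simplification: assume all fields bind to a single table.
    table = {SEMANTIC_LAYER[f][0] for f in metrics + dims}.pop()
    cols = ", ".join(f"{SEMANTIC_LAYER[f][1]} AS {f}" for f in dims + metrics)
    groups = ", ".join(SEMANTIC_LAYER[d][1] for d in dims)
    return f"SELECT {cols} FROM {table} GROUP BY {groups}"

print(compile_query(["revenue"], ["order_date"]))
# SELECT created_at::date AS order_date, SUM(amount) AS revenue
#   FROM analytics.orders_v2 GROUP BY created_at::date
```

A real semantic layer also has to resolve joins across tables, which is where most of the complexity lives.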
Show HN: Valid8r, Functional validation for Python CLIs using Maybe monads
Show HN (score: 5)[Other] Show HN: Valid8r, Functional validation for Python CLIs using Maybe monads

I built Valid8r because I got tired of writing the same input validation code for every CLI tool. You know the pattern: parse a string, check if it's valid, print an error if not, ask again. Repeat for every argument.

The library uses Maybe monads (Success/Failure instead of exceptions) so you can chain parsers and validators:

```python
# Try it: pip install valid8r
from valid8r.core import parsers, validators

# Parse and validate in one pipeline
result = (
    parsers.parse_int(user_input)
    .bind(validators.minimum(1))
    .bind(validators.maximum(65535))
)

match result:
    case Success(port):
        print(f"Using port {port}")
    case Failure(error):
        print(f"Invalid: {error}")
```

I built integrations for argparse, Click, and Typer so you can drop valid8r parsers directly into your existing CLIs without refactoring everything.

The interesting technical bit: it's 4-300x faster than Pydantic for simple parsing (ints, emails, UUIDs) because it doesn't build schemas or do runtime type checking. It just parses strings and returns Maybe[T]. For complex nested validation, Pydantic is still better. I benchmarked both and documented where each one wins.

I'm not trying to replace Pydantic. If you're building a FastAPI service, use Pydantic. But if you're building CLI tools or parsing network configs, Maybe monads compose really nicely and keep your code functional.

The docs are at https://valid8r.readthedocs.io/ and the benchmarks are in the repo. It's MIT licensed.

Would love feedback on the API design. Is the Maybe monad pattern too weird for Python, or does it make validation code cleaner?

---

Here are a few more examples showing different syntax options for the same port validation:

```python
from valid8r.core import parsers, validators

# Option 1: Combine validators with & operator
validator = validators.minimum(1) & validators.maximum(65535)
result = parsers.parse_int(user_input).bind(validator)

# Option 2: Use parse_int_with_validation (built-in)
result = parsers.parse_int_with_validation(
    user_input,
    validators.minimum(1) & validators.maximum(65535),
)

# Option 3: Interactive prompting (keeps asking until valid)
from valid8r.prompt import ask

port = ask(
    "Enter port number (1-65535): ",
    parser=lambda s: parsers.parse_int(s).bind(
        validators.minimum(1) & validators.maximum(65535)
    ),
)
# port is guaranteed valid here, no match needed

# Option 4: Create a reusable parser function
def parse_port(text):
    return parsers.parse_int(text).bind(
        validators.minimum(1) & validators.maximum(65535)
    )

result = parse_port(user_input)
```

The & operator is probably the cleanest for combining validators. And the interactive prompt is nice because you don't need to match Success/Failure; it just keeps looping until the user gives you valid input.
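For readers unfamiliar with the pattern, a minimal Success/Failure pair shows why the chaining works: `bind` on a Failure returns itself, so later validators are skipped and the first error propagates. This is an illustration of the general pattern only, not valid8r's actual implementation.

```python
# Minimal Maybe-monad sketch (illustrative, not valid8r's code).
class Success:
    def __init__(self, value):
        self.value = value
    def bind(self, fn):
        return fn(self.value)  # continue the pipeline

class Failure:
    def __init__(self, error):
        self.error = error
    def bind(self, fn):
        return self  # short-circuit: remaining steps never run

def parse_int(text):
    try:
        return Success(int(text))
    except ValueError:
        return Failure(f"not an integer: {text!r}")

def minimum(n):
    return lambda v: Success(v) if v >= n else Failure(f"{v} is below {n}")

ok = parse_int("8080").bind(minimum(1))
print(ok.value)  # 8080

bad = parse_int("-5").bind(minimum(1))
print(bad.error)  # -5 is below 1
```

No exceptions cross the pipeline boundary: every step returns a value you can inspect, which is what makes the validators composable.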
Building a CI/CD Pipeline Runner from Scratch in Python
Hacker News (score: 22)[Other] Building a CI/CD Pipeline Runner from Scratch in Python
Show HN: TidesDB – Fast, transactional storage optimized for flash and RAM
Show HN (score: 5)[Database] Show HN: TidesDB – Fast, transactional storage optimized for flash and RAM
Show HN: Pipeflow-PHP – Automate anything with pipelines even non-devs can edit
Hacker News (score: 10)[Other] Show HN: Pipeflow-PHP – Automate anything with pipelines even non-devs can edit

Hello everyone,

I've been building Pipeflow-php (https://github.com/marcosiino/pipeflow-php), a PHP pipeline engine to automate anything, from content generation to backend and business-logic workflows, using modular core stages and custom stages (which can do anything). Its key strength is defining the pipeline logic in simple, easy-to-read XML that every actor in a company, even non-developers, can understand, maintain, and edit.

It's a *headless engine*: no UI is included, but it's designed to be easily wired into any backend interface (e.g. WordPress admin, CMS dashboard, custom panels), so *even non-developers can edit or configure the logic*.

It certainly needs improvements, more core stages, and more features, but I'm already using it on two websites I've developed.

In the future I plan to port it to other languages too.

Feedback (and even contributions) is appreciated :)

---

Why I built it

I run a site which every day, via a cron job:

- automatically generates and publishes coloring pages using complex logic with the support of generative AI,
- picks categories and prompts based on logic defined in a pipeline,
- creates and publishes WordPress posts automatically, every day, without any human intervention.

All the logic is defined in an XML pipeline that's editable via the WordPress admin panel (using a WordPress plugin I've developed, which also adds some WordPress-related custom stages to Pipeflow). A non-dev (like a content manager) can adjust this automatic content-generation logic, for example by improving it or by changing the themes/categories during holidays, without touching PHP.

---

What Pipeflow does

- Define pipelines in *fluent PHP* or *simple, easily understandable XML (even by non-developers), directly from your web app's admin pages*
- Use control-flow stages like `If`, `ForEach`, `For`
- Execute pipelines manually, via cron, or on any backend trigger that fits your business logic
- Build your own UI or editor on top (from a simple text editor to a node-based editor that outputs a compatible XML configuration to feed to Pipeflow)
- Reuse modular "stages" (core and custom ones) across different pipelines
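The core idea (stage handlers live in code, control flow lives in XML that non-developers can edit) can be sketched in a few lines. Pipeflow itself is PHP; this Python sketch uses hypothetical tag names (`set`, `foreach`, `append`), not Pipeflow's real schema.

```python
# Illustrative XML-driven pipeline interpreter. The XML defines the
# control flow; the interpreter maps each tag to a stage handler.
import xml.etree.ElementTree as ET

PIPELINE_XML = """
<pipeline>
  <set var="greeting" value="Hello"/>
  <foreach var="name" in="alice,bob">
    <append to="out" value="{greeting} {name}"/>
  </foreach>
</pipeline>
"""

def run(xml_text):
    ctx = {"out": []}  # shared pipeline context
    def exec_stages(node):
        for stage in node:
            if stage.tag == "set":
                ctx[stage.get("var")] = stage.get("value")
            elif stage.tag == "foreach":
                for item in stage.get("in").split(","):
                    ctx[stage.get("var")] = item
                    exec_stages(stage)  # run the nested stages per item
            elif stage.tag == "append":
                ctx[stage.get("to")].append(stage.get("value").format(**ctx))
    exec_stages(ET.fromstring(xml_text))
    return ctx

print(run(PIPELINE_XML)["out"])  # ['Hello alice', 'Hello bob']
```

Editing the XML changes behavior without touching the handler code, which is the property that makes the format safe to hand to a content manager.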
Visualize FastAPI endpoints with FastAPI-Voyager
Hacker News (score: 37)[Other] Visualize FastAPI endpoints with FastAPI-Voyager
Show HN: Serve 100 Large AI models on a single GPU with low impact to TTFT
Show HN (score: 5)[Other] Show HN: Serve 100 Large AI models on a single GPU with low impact to TTFT

I wanted to build an inference provider for proprietary AI models, but I did not have a huge GPU farm. I started experimenting with serverless AI inference, but found that cold starts were huge. I went deep into the research and put together an engine that loads large models from SSD to VRAM up to ten times faster than alternatives. It works with vLLM and transformers, with more coming soon.

With this project you can hot-swap entire large models (32B) on demand.

It's great for:

- Serverless AI inference
- Robotics
- On-prem deployments
- Local agents

And it's open source.

Let me know if anyone wants to contribute :)
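Serving many models on one GPU reduces, at its simplest, to cache management under a fixed VRAM budget: keep hot models resident and evict the least recently used one when a new request doesn't fit. This is a toy sketch of that scheduling idea only, not this project's engine; the class, model names, and sizes are invented.

```python
# Toy LRU model cache under a VRAM budget (illustrative only).
from collections import OrderedDict

class ModelCache:
    def __init__(self, budget_gb):
        self.budget_gb = budget_gb
        self.loaded = OrderedDict()  # model name -> size in GB, LRU-first

    def request(self, name, size_gb):
        if name in self.loaded:
            self.loaded.move_to_end(name)  # cache hit: mark recently used
            return "hit"
        # Evict least recently used models until the new one fits.
        while self.loaded and sum(self.loaded.values()) + size_gb > self.budget_gb:
            self.loaded.popitem(last=False)
        self.loaded[name] = size_gb  # stands in for the SSD-to-VRAM load
        return "miss"

cache = ModelCache(budget_gb=80)
cache.request("llama-32b", 64)          # miss: cold load
cache.request("qwen-7b", 14)            # miss: fits alongside
print(cache.request("llama-32b", 64))   # hit
cache.request("mistral-24b", 48)        # miss: evicts until it fits
print(list(cache.loaded))               # ['mistral-24b']
```

The point of a fast SSD-to-VRAM loader is exactly to make the "miss" path cheap enough that evictions barely hurt time-to-first-token.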
Show HN: OtterLang – Pythonic scripting language that compiles to native code
[Other] Show HN: OtterLang – Pythonic scripting language that compiles to native code

Hey HN! I've been building OtterLang, a small experimental scripting language designed to feel like Python but compile down to native binaries through LLVM.

The goal isn't to reinvent Python or Rust, but to find a middle ground between them:

- Python-like readability and syntax
- Rust-level performance and type safety
- Fast builds and transparent Rust FFI (you can directly import Rust crates without writing bindings)

OtterLang is still early and very experimental; the compiler, runtime, and FFI bridge are being rewritten frequently.

Please star the repo and contribute to help the project.