🛠️ Hacker News Tools

Showing 401–420 of 1467 tools from Hacker News

Last Updated
January 17, 2026 at 04:00 PM

[Other] Show HN: Lamina – A compiler backend that is not LLVM or Cranelift

Recently, I've been working on Lamina, a compiler infrastructure that generates native assembly for multiple architectures without relying on LLVM or Cranelift. It's designed for building compilers for new languages, educational projects, and any project that needs custom code generation.

Instead of depending on external backends, Lamina provides a complete pipeline from a single SSA-based IR directly to assembly for the supported targets. The IR is human-readable, and an IRBuilder API makes programmatic construction straightforward. To better manage the code-generation process, a future pipeline will go IR -> MIR -> native assembly, with the optimization passes running on the MIR.

Key features:
- Direct code generation: IR -> assembly/machine code without LLVM/Cranelift
- SSA-based IR: single-assignment form suited to analysis and optimization passes
- MIR-based codegen (experimental): new intermediate representation with register allocation and advanced optimizations
- IRBuilder API: fluent interface for building modules, functions, blocks, and control flow
- Readable IR: easy to debug, and lower-level than high-level languages
- Zero external backend dependencies: simpler, faster builds and a transparent pipeline

Optimization passes (experimental MIR flow only):
- Control flow: CFG simplification, jump threading, branch optimization
- Loop optimizations: loop fusion, loop-invariant code motion, loop unrolling
- Code motion: copy propagation, common subexpression elimination, constant folding
- Function optimizations: inlining, tail-call optimization
- Arithmetic: strength reduction, peephole optimizations

Performance: on a 256×256 matrix-multiplication benchmark (300 runs), Lamina's experimental MIR-based codegen (with all optimization passes enabled) generates code comparable to C/C++/Rust (within 1.8x) and faster than Java, Go, JavaScript, and Python. The experimental MIR-based flow is much faster than the direct IR -> assembly codegen.

Written in Rust (2024 edition); current version 0.0.7. Optional nightly features are available for SIMD, atomic placeholders, and experimental targets.
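The post does not include code, so here is a loose sketch (written in Python purely for brevity) of what a fluent, SSA-style IR-builder API can look like. None of these names are Lamina's actual Rust API; they only illustrate the "single assignment" and "fluent builder" ideas mentioned above.

```python
# Illustrative sketch only: hypothetical names, not Lamina's actual API
# (Lamina itself is a Rust library with its own IRBuilder).
class IRBuilder:
    def __init__(self, module_name):
        self.lines = [f"module {module_name}"]
        self.counter = 0

    def fresh(self):
        # SSA: every value is assigned exactly once, so each result gets
        # a brand-new name instead of reusing an existing variable.
        self.counter += 1
        return f"%v{self.counter}"

    def func(self, name, params):
        self.lines.append(f"fn @{name}({', '.join(params)}) {{")
        return self

    def add(self, lhs, rhs):
        dst = self.fresh()
        self.lines.append(f"  {dst} = add {lhs}, {rhs}")
        return dst

    def ret(self, value):
        self.lines.append(f"  ret {value}")
        self.lines.append("}")
        return self

    def emit(self):
        return "\n".join(self.lines)


b = IRBuilder("demo")
b.func("sum3", ["%a", "%b", "%c"])
t1 = b.add("%a", "%b")   # %v1 = add %a, %b
t2 = b.add(t1, "%c")     # %v2 = add %v1, %c
b.ret(t2)
print(b.emit())
```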

Found: November 20, 2025 ID: 2461

[Other] Show HN: CTON: JSON-compatible, token-efficient text format for LLM prompts

Found: November 20, 2025 ID: 2450

[Other] Show HN: An A2A-compatible, open-source framework for multi-agent networks

Found: November 20, 2025 ID: 2447

[DevOps] Show HN: OctoDNS, Tools for managing DNS across multiple providers

After the major outages from AWS and Cloudflare, I began wondering how to make my own services more resilient.

Using nameservers from different providers is doable but a bit tricky to manage. OctoDNS helps automate keeping the zones synced so AWS, Cloudflare, etc. are all serving the same information.

In an age of centralized infrastructure, we can still exploit capabilities that date back to the decentralized origins of the internet.
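OctoDNS itself is driven by YAML provider and zone configuration (not shown here). As a hedged illustration of the property a multi-provider setup is trying to preserve, the sketch below uses dnspython to check that two providers' nameservers answer a record identically; the nameserver IPs and domain are placeholders.

```python
import dns.resolver  # pip install dnspython

# Placeholders: replace with your zone and the authoritative nameserver
# IPs of each provider (e.g. one from Route 53, one from Cloudflare).
DOMAIN = "example.com"
PROVIDER_NAMESERVERS = {
    "provider-a": "198.51.100.1",
    "provider-b": "203.0.113.1",
}

def answers(ns_ip, name, rdtype="A"):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ns_ip]
    return sorted(rr.to_text() for rr in resolver.resolve(name, rdtype))

results = {p: answers(ip, DOMAIN) for p, ip in PROVIDER_NAMESERVERS.items()}
if len(set(map(tuple, results.values()))) == 1:
    print("all providers agree:", results)
else:
    print("zone drift detected:", results)
```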

Found: November 19, 2025 ID: 2446

[IDE/Editor] Show HN: Marimo VS Code extension – Python notebooks built on LSP and uv

Hi HN! We're excited to release our VS Code/Cursor extension for marimo [1], an open-source, reactive Python notebook.

This extension provides a native experience for working with marimo notebooks, a long-requested feature that we've worked hard to get right.

An LSP-first architecture

The core of our extension is a marimo notebook language server (marimo-lsp [2]). As far as we know, it's the first notebook runtime to take this approach. The Language Server Protocol (LSP) [3] offers a small but important set of notebook-related capabilities that we use for document and kernel syncing; everything else is handled through custom actions and messages.

By building on LSP, we aim to create a path to expose marimo capabilities in additional environments (beyond VS Code/Cursor). The notebook features in LSP are still limited, but as the protocol evolves, we'll be able to shift more functionality out of the extension and into the language server, making it available to a wider range of editors and tools. For example, this could enable:

- structural edits to notebook documents (e.g., adding or removing cells) [4]
- editor hover information that reflects the live runtime values of variables

Deep uv integration with PEP 723

Because marimo notebooks are plain Python files, we adopt PEP 723-style inline metadata [5] to describe a notebook's environment. Tools such as uv already support this format: they read the metadata block, build or update the corresponding environment, and run the script inside it.

The marimo CLI already integrates with uv in "sandbox" mode [6] to manage an isolated environment defined by PEP 723 metadata for a single notebook. In the extension, our uv-based "sandbox controller" manages multiple notebooks: each notebook gets its own isolated, cached environment. The controller keeps the environment aligned with the dependencies declared in the file and can update that metadata automatically when imports are missing.

uv normally syncs such environments whenever you run a script, ensuring it matches the dependencies declared in its metadata; we apply this concept at the cell level so the environment stays in sync whenever cells run. The same cached uv environment is reused if you run the notebook as a script via uv (e.g., uv run notebook.py).

This work has been a complete rewrite, and we're grateful to the community for early feedback. While VS Code and the LSP support a subset of notebook features, the ecosystem has been shaped heavily by Jupyter, and we've had to work around some assumptions baked into existing APIs. We've been coordinating with the VS Code team and hope our work can help broaden the conversation: pushing the LSP notebook model forward and making room for runtimes that aren't Jupyter-based.

We'd love to hear your thoughts!

[1] https://marimo.io
[2] https://github.com/marimo-team/marimo-lsp
[3] https://microsoft.github.io/language-server-protocol/
[4] https://github.com/microsoft/vscode-languageserver-node/issues/1336
[5] https://peps.python.org/pep-0723/
[6] https://docs.marimo.io/guides/package_reproducibility/
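For readers unfamiliar with PEP 723, the inline metadata block looks like the following: a comment block at the top of a plain Python file that declares the script's dependencies. The dependencies listed here are placeholders; uv (and marimo's sandbox mode) reads this block and builds an isolated environment before running the file.

```python
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "polars",  # placeholder third-party dependency for illustration
# ]
# ///
# PEP 723 inline metadata: a tool like uv reads the block above and runs
# this script inside an isolated environment containing those packages.

import polars as pl

print(pl.DataFrame({"x": [1, 2, 3]}).select(pl.col("x").sum()))
```

Running `uv run script.py` creates (or reuses) a cached environment matching the block; the extension's sandbox controller applies the same idea per notebook, keeping the metadata and environment in sync as cells run.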

Found: November 19, 2025 ID: 2445

[CLI Tool] Show HN: DNS Benchmark Tool – Compare and monitor resolvers

I built a CLI to benchmark DNS resolvers after discovering DNS was adding 300ms to my API requests.

v0.3.0 just released with new features:
- compare: test a single domain across all resolvers
- top: rank resolvers by latency/reliability/balanced score
- monitor: continuous tracking with threshold alerts

1,400+ downloads in the first week.

Quick start:
pip install dns-benchmark-tool
dns-benchmark compare --domain google.com

The CLI stays free forever. A hosted version (multi-region, historical tracking, alerts) is coming in Q1 2026.

GitHub: https://github.com/frankovo/dns-benchmark-tool
Feedback: https://forms.gle/BJBiyBFvRJHskyR57

Built with Python + dnspython. Open to questions and feedback!
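The tool is built on dnspython; as a rough sketch of what a single resolver latency measurement looks like with that library (resolver IPs and domain are placeholders, and this is not the tool's actual implementation):

```python
import time

import dns.resolver  # pip install dnspython

RESOLVERS = {"cloudflare": "1.1.1.1", "google": "8.8.8.8"}  # placeholders
DOMAIN = "google.com"

def measure_avg_ms(ns_ip, name, samples=5):
    # Query one resolver several times and average the wall-clock latency.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ns_ip]
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        resolver.resolve(name, "A")
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

for label, ip in RESOLVERS.items():
    print(f"{label:<12} {measure_avg_ms(ip, DOMAIN):6.1f} ms avg")
```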

Found: November 19, 2025 ID: 2443

[DevOps] Show HN: Virtual SLURM HPC cluster in a Docker Compose

I'm the main developer behind vHPC, a SLURM HPC cluster in a Docker Compose.

As part of my job, I'm working on a software solution that needs to interact with one of the largest Italian HPC clusters (Cineca Leonardo, 270 PFLOPS). Developing on the production system was of course out of the question, as it would have led to unbearably long feedback loops. I started looking around for existing containerised solutions, but they were always missing some key ingredient needed to suitably mock our target system (accounting, MPI, out-of-date software, ...).

I therefore decided it was worth building my own virtual cluster from scratch, learning a thing or two about SLURM in the process. Even though it satisfies the particular needs of the project I'm working on, I tried to keep vHPC as simple and versatile as possible.

I proposed to the company that we open-source it, and as of this morning (CET) vHPC is FLOSS for others to use and tweak. I'm around to answer any questions.

Found: November 19, 2025 ID: 2489

[Testing] Show HN: ChunkBack – A Fake LLM API server for testing apps without paying

Hi HN,

I've been working with LLMs in production for a while, both as a solo dev building apps for clients and at an AI startup. One thing that was always a pain was paying OpenAI/Gemini/Anthropic a few dollars a month just so I could say "test" or have a CI runner validate some UI code. So I built ChunkBack, a server that mocks the popular LLM providers' APIs but lets you script responses in a small deterministic language:

SAY "cheese"
TOOLCALL "tool_name" {} "tool response"

It has worked well in test environments and for CI experiments, but it's still an early project, so I'd love feedback and more testers.
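Hedged sketch of the idea: an OpenAI-compatible client pointed at a local mock endpoint instead of the real API, with one of the scripted commands quoted above as the prompt. The base URL, port, and model name are placeholders, not ChunkBack's documented defaults.

```python
from openai import OpenAI  # pip install openai

# Point the client at a local mock server instead of api.openai.com.
# The URL, port, and model name are placeholders (not ChunkBack's documented
# defaults); the API key can be anything, since nothing is billed.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": 'SAY "cheese"'}],
)
print(resp.choices[0].message.content)  # deterministic reply: "cheese"
```

The point is that CI and UI tests exercise the same client code path as production, while the response content is fully scripted and free.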

Found: November 19, 2025 ID: 2449

[Other] Free interactive tool that shows you how PCIe lanes work on motherboards

Found: November 19, 2025 ID: 2455

[Other] GitHub: Git operation failures

Found: November 18, 2025 ID: 2438

[Build/Deploy] Show HN: We built a generator for Vue+Laravel that gives you a clean codebase

Hey HN, my team and I built a tool to scratch our own itch. We were tired of spending the first few days of every new project setting up the same Vue + Laravel boilerplate: writing migrations, models, basic CRUD controllers, and wiring up forms and tables on the frontend.

So we built Codecannon. It's a web app where you define your data models, columns, and relationships, and it generates a full-stack application for you.

To be clear, the code isn't AI-generated. It's produced deterministically by our own code generators, so the output is always predictable, clean, and follows conventional best practices.

The key difference from other tools is that it's not a no-code platform you get locked into. When you're done, it pushes a well-structured codebase to your GitHub repo (or you can download a .zip file). You own it completely and can start building your real features on top of it right away.

What it generates:
- Laravel backend: migrations, models with relationships, factories, seeders, and basic CRUD API endpoints.
- Vue frontend: a SPA with PrimeVue components. It includes auth pages, data tables, and create/edit forms for each of your models, with all the state management wired up.
- Dev stuff: Docker configs, a CI/CD pipeline starter, linters, and formatters.

The idea is to skip the repetitive work and get straight to the interesting parts of a project. It's free to use the builder, see a live preview, and download the full codebase for apps up to 5 modules. For larger apps, you only pay if you decide you want the source code.

We're in an early alpha and would love some honest feedback from the community. Does the generated code look sensible? Are we missing any obvious features? Is this something you would find useful, or do you know anyone who might? Let me know what you think.

Found: November 18, 2025 ID: 2441

[Other] Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality

I've been working on Fast LiteLLM - a Rust acceleration layer for the popular LiteLLM library - and I had some interesting learnings that might resonate with other developers trying to squeeze performance out of existing systems.

My assumption was that LiteLLM, being a Python library, would have plenty of low-hanging fruit for optimization. I set out to create a Rust layer using PyO3 to accelerate the performance-critical parts: token counting, routing, rate limiting, and connection pooling.

The approach:
- Built Rust implementations for token counting using tiktoken-rs
- Added lock-free data structures with DashMap for concurrent operations
- Implemented async-friendly rate limiting
- Created monkeypatch shims to replace Python functions transparently
- Added comprehensive feature flags for safe, gradual rollouts
- Developed performance monitoring to track improvements in real time

After building out all the Rust acceleration, I ran my comprehensive benchmark comparing baseline LiteLLM against the shimmed version:

Function             Baseline     Shimmed      Speedup   Improvement
token_counter        0.000035s    0.000036s    0.99x     -0.6%
count_tokens_batch   0.000001s    0.000001s    1.10x     +9.1%
router               0.001309s    0.001299s    1.01x     +0.7%
rate_limiter         0.000000s    0.000000s    1.85x     +45.9%
connection_pool      0.000000s    0.000000s    1.63x     +38.7%

Turns out LiteLLM is already quite well optimized. The core token counting was essentially unchanged (0.6% slower, likely within measurement noise), and the most significant gains came from the more complex operations like rate limiting and connection pooling, where Rust's concurrent primitives made a real difference.

Key takeaways:
1. Don't assume existing libraries are under-optimized - the maintainers likely know their domain well
2. Focus on algorithmic improvements over reimplementation - sometimes a better approach beats a faster language
3. Micro-benchmarks can be misleading - real-world performance impact varies significantly
4. The biggest gains often come from the complex parts, not the simple operations
5. Even "modest" improvements can matter at scale - a 45% improvement in rate limiting is meaningful for high-throughput applications

While the core token counting saw minimal improvement, the rate limiting and connection pooling gains still provide value for high-volume use cases. The infrastructure I built (feature flags, performance monitoring, safe fallbacks) creates a solid foundation for future optimizations.

The project continues as Fast LiteLLM on GitHub for anyone interested in the Rust-Python integration patterns, even if the performance gains were humbling.

Edit: To clarify - the negative result for token_counter is likely within measurement noise, suggesting that LiteLLM's token counting is already well optimized. The 45%+ gains in rate limiting and connection pooling still provide value for high-throughput applications.
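The "monkeypatch shims with feature flags and safe fallbacks" pattern described above can be sketched roughly as follows. The extension module name `fast_litellm_rs`, its `count_tokens` function, and the environment-variable flag are illustrative placeholders, not Fast LiteLLM's actual API, and the `token_counter` signature is simplified.

```python
import os

import litellm  # the real library whose function gets shimmed

_original_token_counter = litellm.token_counter  # keep a handle for fallback

def install_rust_token_counter():
    """Feature-flagged monkeypatch shim (illustrative sketch only)."""
    if os.environ.get("FAST_LITELLM_TOKEN_COUNTER", "1") != "1":
        return  # flag off: keep the pure-Python implementation

    try:
        import fast_litellm_rs  # hypothetical PyO3 extension module
    except ImportError:
        return  # safe fallback: accelerated wheel not installed

    def shimmed(model="", text=None, messages=None, **kwargs):
        try:
            return fast_litellm_rs.count_tokens(model, text, messages)
        except Exception:
            # Any Rust-side failure falls back to the original function,
            # so the shim can never do worse than the baseline behaviour.
            return _original_token_counter(
                model=model, text=text, messages=messages, **kwargs
            )

    litellm.token_counter = shimmed  # transparent, in-place replacement

install_rust_token_counter()
```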

Found: November 18, 2025 ID: 2437

[CLI Tool] Show HN: Gitlogue – A terminal tool that replays your Git commits with animation

Gitlogue is a CLI that turns your Git commits into a typing-style replay. It visualizes diffs line by line, shows the file tree, and plays back each edit as if it were typed in real time.

Key points:
• Realistic typing animation
• Syntax-highlighted diffs
• File-tree view
• Replay any commit
• Self-contained CLI

Demo video is in the README.

Repo: https://github.com/unhappychoice/gitlogue

Found: November 18, 2025 ID: 2479

[DevOps] A 'small' vanilla Kubernetes install on NixOS

Found: November 18, 2025 ID: 2439

[Other] Unofficial "Tier 4" Rust Target for older Windows versions

Found: November 18, 2025 ID: 2435

[CLI Tool] Show HN: Parqeye – A CLI tool to visualize and inspect Parquet files

I built a Rust-based CLI/terminal UI for inspecting Parquet files - data, metadata, and row-group-level structure - right from the terminal. If someone sent me a Parquet file, I used to open DuckDB or Polars just to see what was inside. Now I can do it with one command.

Repo: https://github.com/kaushiksrini/parqeye
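For context on what "metadata and row-group-level structure" covers, this is roughly the information you would otherwise pull out by hand with a library such as pyarrow (the file path is a placeholder, and parqeye's own output format is not shown here):

```python
import pyarrow.parquet as pq  # pip install pyarrow

pf = pq.ParquetFile("data.parquet")  # placeholder path

meta = pf.metadata
print("rows:", meta.num_rows,
      "columns:", meta.num_columns,
      "row groups:", meta.num_row_groups)
print(pf.schema_arrow)  # column names and Arrow types

# Per-row-group details: sizes and per-column statistics, useful for
# judging compression and predicate-pushdown behaviour.
for i in range(meta.num_row_groups):
    rg = meta.row_group(i)
    col = rg.column(0)
    print(f"row group {i}: {rg.num_rows} rows, "
          f"{rg.total_byte_size} bytes, "
          f"col0 stats: {col.statistics}")
```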

Found: November 17, 2025 ID: 2431

[Other] Show HN: PrinceJS – 19,200 req/s Bun framework in 2.8 kB (built by a 13yo)

Hey HN,

I'm 13, from Nigeria, and I just released PrinceJS - the fastest web framework for Bun right now.

• 19,200 req/s (beats Hono/Elysia/Express)
• 2.8 kB gzipped
• Tree-shakable (cache, AI, email, cron, SSE, queue, test, static...)
• Zero deps. Zero config.

Built in < 1 week. No team. Just me and Bun.

Try it: bun add princejs
GitHub: https://github.com/MatthewTheCoder1218/princejs
Docs: https://princejs.vercel.app

Brutal feedback welcome. What's missing?

– @Lil_Prince_1218

Found: November 17, 2025 ID: 2434

[Other] Show HN: Building WebSocket in Apache Iggy with io_uring and Completion-Based IO

Found: November 17, 2025 ID: 2428

[CLI Tool] Show HN: I built a strace clone for macOS

Ever since I started testing software on macOS, I have deeply missed my beloved strace, which I reach for when programs are misbehaving. macOS has dtruss, but it's getting locked down and less usable with every new machine. My approach uses the signed lldb binary on the system and re-implements the output you know from the wonderful strace tool. I only created the tool yesterday evening, so it may have a few bugs, but I already have quite a few integration tests and I'm happy with it so far.

Found: November 17, 2025 ID: 2432

[Monitoring/Observability] Show HN: I ditched Grafana for my home server and built this instead

Frustrated by the complexity and resource drain of multi-service monitoring stacks, I built Simon. I wanted a single, lightweight dashboard to replace the heavy stack and the constant need for an SSH client for routine tasks. The result is a resource-efficient dashboard in a single Rust binary, just a couple of megabytes in size. Its support for various architectures on Linux also makes it ideal for embedded systems and lightweight SBCs.

It integrates:
- Comprehensive monitoring: realtime and historical metrics for the host system and Docker containers (CPU, memory, disk usage, and network activity).
- Integrated file & log management: a web UI for file operations and for viewing container logs, right where you need them.
- Flexible alerting: a system to set rules on any metric, with templates for sending notifications to Telegram, ntfy, and webhooks.

My goal was to create a cohesive, lightweight tool for self-hosters and resource-constrained environments. I'd love to get your feedback.

https://github.com/alibahmanyar/simon
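As a hedged illustration of one of the notification targets mentioned above (not Simon's actual alerting code or template format), an ntfy push is just an HTTP POST; the topic name and message are placeholders.

```python
import requests  # pip install requests

# Illustration of an ntfy push, one of the alert targets the post mentions.
# The topic name and message are placeholders, not Simon's alert templates.
requests.post(
    "https://ntfy.sh/my-home-server-alerts",
    data="CPU usage above 90% for 5 minutes on host 'homelab'",
    headers={"Title": "Simon alert: high CPU", "Priority": "high"},
    timeout=10,
)
```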

Found: November 17, 2025 ID: 2429
Page 21 of 74