🛠️ All DevTools
Showing 581–600 of 3038 tools
Last Updated
January 18, 2026 at 12:00 PM
Show HN: CTON: JSON-compatible, token-efficient text format for LLM prompts
Show HN (score: 7)[Other] Show HN: CTON: JSON-compatible, token-efficient text format for LLM prompts
CrowmanCloud
Product Hunt[Build/Deploy] Deploy code 10x faster | cloud deployment tool Deploy code 10x faster with automated cloud-readiness analysis. CrowmanCloud scans projects, generates configs, and provides cost estimates for cloud providers like AWS, Azure, and GCP.
License Manager
Product Hunt[Other] Software licensing management based on hardware fingerprints A software licensing management system based on hardware fingerprints that protects the copyright of commercial software.
Signadot Local
Product Hunt[DevOps] Code and debug microservices locally with live traffic Signadot Local makes developing against a Kubernetes cluster as simple as running a single service on your machine. It brings hot-reloading to the backend by routing live traffic and connecting real dependencies directly to your workstation. Record live traffic, inspect payloads, and override API responses to test failures in real time. No mocks. No CI waits. Just flow.
Show HN: An A2A-compatible, open-source framework for multi-agent networks
Hacker News (score: 35)[Other] Show HN: An A2A-compatible, open-source framework for multi-agent networks
Show HN: OctoDNS, Tools for managing DNS across multiple providers
Show HN (score: 9)[DevOps] Show HN: OctoDNS, Tools for managing DNS across multiple providers After the major outages at AWS and Cloudflare, I began wondering how to make my own services more resilient.

Using nameservers from different providers is doable but a bit tricky to manage. OctoDNS helps by automating zone syncing, so AWS, Cloudflare, etc. all serve the same information.

In an age of centralized infrastructure, we can still exploit the capabilities the decentralized internet was built on.
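A minimal sketch of the kind of multi-provider setup described above, assuming the split provider packages (octodns-route53, octodns-cloudflare); the zone name and credential references are illustrative:

```yaml
providers:
  config:
    # Source of truth: zone data kept as plain YAML files
    class: octodns.provider.yaml.YamlProvider
    directory: ./config
  route53:
    class: octodns_route53.Route53Provider
    access_key_id: env/AWS_ACCESS_KEY_ID
    secret_access_key: env/AWS_SECRET_ACCESS_KEY
  cloudflare:
    class: octodns_cloudflare.CloudflareProvider
    token: env/CLOUDFLARE_TOKEN

zones:
  example.com.:
    sources:
      - config
    targets:
      # Both providers receive the same records, so either
      # nameserver set can serve the zone if the other is down
      - route53
      - cloudflare
```

Running octodns-sync --config-file config.yaml then plans and applies the changes to both providers at once.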
Show HN: Marimo VS Code extension – Python notebooks built on LSP and uv
Show HN (score: 26)[IDE/Editor] Show HN: Marimo VS Code extension – Python notebooks built on LSP and uv Hi HN! We're excited to release our VS Code/Cursor extension for marimo [1], an open-source, reactive Python notebook.

This extension provides a native experience for working with marimo notebooks, a long-requested feature that we've worked hard to get right.

An LSP-first architecture

The core of our extension is a marimo notebook language server (marimo-lsp [2]). As far as we know, it's the first notebook runtime to take this approach. The Language Server Protocol (LSP) [3] offers a small but important set of notebook-related capabilities that we use for document and kernel syncing; everything else is handled through custom actions and messages.

By building on LSP, we aim to create a path to expose marimo capabilities in additional environments (beyond VS Code/Cursor). The notebook features in LSP are still limited, but as the protocol evolves, we'll be able to shift more functionality out of the extension and into the language server, making it available to a wider range of editors and tools. For example, this could enable:

- structural edits to notebook documents (e.g., adding or removing cells) [4]
- editor hover information that reflects the live runtime values of variables

Deep uv integration with PEP 723

Because marimo notebooks are plain Python files, we adopt PEP 723-style inline metadata [5] to describe a notebook's environment. Tools such as uv already support this format: they read the metadata block, build or update the corresponding environment, and run the script inside it.

The marimo CLI already integrates with uv in "sandbox" mode [6] to manage an isolated environment defined by PEP 723 metadata for a single notebook. In the extension, our uv-based "sandbox controller" manages multiple notebooks: each notebook gets its own isolated, cached environment. The controller keeps the environment aligned with the dependencies declared in the file and can update that metadata automatically when imports are missing.

uv normally syncs such environments whenever you run a script, ensuring each matches the dependencies declared in its metadata; we apply this concept at the cell level so the environment stays in sync whenever cells run. The same cached uv environment is reused if you run the notebook as a script via uv (e.g., uv run notebook.py).

This work has been a complete rewrite, and we're grateful to the community for early feedback. While VS Code and the LSP support a subset of notebook features, the ecosystem has been shaped heavily by Jupyter, and we've had to work around some assumptions baked into existing APIs. We've been coordinating with the VS Code team and hope our work can help broaden the conversation, pushing the LSP notebook model forward and making room for runtimes that aren't Jupyter-based.

We'd love to hear your thoughts!

[1] https://marimo.io
[2] https://github.com/marimo-team/marimo-lsp
[3] https://microsoft.github.io/language-server-protocol/
[4] https://github.com/microsoft/vscode-languageserver-node/issues/1336
[5] https://peps.python.org/pep-0723/
[6] https://docs.marimo.io/guides/package_reproducibility/
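For reference, the PEP 723 metadata the extension relies on is just a structured comment block at the top of the notebook file. A minimal sketch of a marimo notebook carrying it (the dependencies and version bounds are illustrative):

```python
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "marimo",
#     "polars",
# ]
# ///

import marimo

app = marimo.App()


@app.cell
def _():
    # Cells are plain functions; marimo tracks the names they return.
    import polars as pl
    return (pl,)


if __name__ == "__main__":
    app.run()
```

Because the environment description lives in the file itself, uv run notebook.py (or the extension's sandbox controller) can rebuild the same isolated environment on any machine.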
Show HN: DNS Benchmark Tool – Compare and monitor resolvers
Hacker News (score: 24)[CLI Tool] Show HN: DNS Benchmark Tool – Compare and monitor resolvers I built a CLI to benchmark DNS resolvers after discovering DNS was adding 300ms to my API requests.

v0.3.0 just released with three new commands: compare (test a single domain across all resolvers), top (rank resolvers by latency, reliability, or a balanced score), and monitor (continuous tracking with threshold alerts).

1,400+ downloads in the first week.

Quick start: pip install dns-benchmark-tool, then dns-benchmark compare --domain google.com.

The CLI stays free forever. A hosted version (multi-region, historical tracking, alerts) is coming Q1 2026.

GitHub: https://github.com/frankovo/dns-benchmark-tool
Feedback: https://forms.gle/BJBiyBFvRJHskyR57

Built with Python + dnspython. Open to questions and feedback!
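Not the tool's actual source, but a minimal sketch of the underlying measurement with dnspython, for anyone curious what a resolver comparison boils down to (the resolver list and single-query timing are illustrative):

```python
import time

import dns.resolver  # pip install dnspython

RESOLVERS = {"Cloudflare": "1.1.1.1", "Google": "8.8.8.8", "Quad9": "9.9.9.9"}


def query_ms(server: str, domain: str = "google.com") -> float:
    """Time a single A-record lookup against one resolver, in milliseconds."""
    resolver = dns.resolver.Resolver(configure=False)  # skip /etc/resolv.conf
    resolver.nameservers = [server]
    start = time.perf_counter()
    resolver.resolve(domain, "A")
    return (time.perf_counter() - start) * 1000


for name, ip in RESOLVERS.items():
    print(f"{name:10} {query_ms(ip):7.1f} ms")
```

A real benchmark would repeat each query many times and report percentiles rather than a single sample, which is what the compare and top commands appear to do.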
Show HN: Virtual SLURM HPC cluster in a Docker Compose
Hacker News (score: 20)[DevOps] Show HN: Virtual SLURM HPC cluster in a Docker Compose I'm the main developer behind vHPC, a SLURM HPC cluster in a Docker Compose.

As part of my job, I'm working on a software solution that needs to interact with one of the largest Italian HPC clusters (Cineca Leonardo, 270 PFLOPS). Developing on the production system was of course out of the question, as it would have led to unbearably long feedback loops. I started looking around for existing containerised solutions, but they were always lacking some key ingredient needed to suitably mock our target system (accounting, MPI, out-of-date software, ...).

I therefore decided it was worth making my own virtual cluster from scratch, learning a thing or two about SLURM in the process. Even though it satisfies the particular needs of the project I'm working on, I tried to keep vHPC as simple and versatile as possible.

I proposed to the company that we open-source it, and as of this morning (CET) vHPC is FLOSS for others to use and tweak. I'm around to answer any questions.
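A hedged sketch of how a compose-based SLURM cluster like this is typically exercised; the login service name here is hypothetical, not taken from the vHPC repo:

```sh
docker compose up -d                                # bring up controller and compute containers
docker compose exec login sbatch --wrap "hostname"  # submit a one-line test job
docker compose exec login squeue                    # watch it move through the queue
docker compose exec login sacct                     # inspect accounting records
```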
Show HN: ChunkBack – A Fake LLM API server for testing apps without paying
Show HN (score: 5)[Testing] Show HN: ChunkBack – A Fake LLM API server for testing apps without paying Hi HN,

I've been working with LLMs in production for a while, both as a solo dev building apps for clients and at an AI startup. One thing that was always a pain was paying OpenAI/Gemini/Anthropic a few dollars a month just for me to say "test" or have a CI runner validate some UI code. So I built ChunkBack, a server that mocks the popular LLM providers' APIs but lets you script responses in a small deterministic language:

SAY "cheese" or TOOLCALL "tool_name" {} "tool response"

It has worked well in my test environments and for experimenting with CI, but it's still an early project, so I'd love feedback and more testers.
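A minimal sketch of the idea, assuming ChunkBack exposes an OpenAI-compatible endpoint; the port and path here are hypothetical, so check the project's docs for the real values:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local fake server.
# base_url is an assumption, not taken from ChunkBack's docs.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

resp = client.chat.completions.create(
    model="gpt-4o",  # a mock server can ignore the model name
    messages=[{"role": "user", "content": 'SAY "cheese"'}],
)
print(resp.choices[0].message.content)  # deterministic output: cheese
```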
GoExperts
Product Hunt[Testing] Hands-on Go certification for developers who love Golang GoExperts is a small certification-style project built by a team of Go developers who love the language and wanted a deeper way to test real knowledge. Instead of generic quizzes, it focuses on the parts of Go that actually matter: concurrency, runtime behavior, the memory model, and scheduling. It's designed for developers who enjoy exploring how Go really works and want a simple, useful way to validate their skills.
Free interactive tool that shows you how PCIe lanes work on motherboards
Hacker News (score: 68)[Other] Free interactive tool that shows you how PCIe lanes work on motherboards
GitHub: Git operation failures
Hacker News (score: 291)[Other] GitHub: Git operation failures
Show HN: We built a generator for Vue+Laravel that gives you a clean codebase
[Build/Deploy] Show HN: We built a generator for Vue+Laravel that gives you a clean codebase Hey HN, my team and I built a tool to scratch our own itch. We were tired of spending the first few days of every new project setting up the same Vue + Laravel boilerplate: writing migrations, models, basic CRUD controllers, and wiring up forms and tables on the frontend.

So we built Codecannon. It's a web app where you define your data models, columns, and relationships, and it generates a full-stack application for you.

To be clear, the code isn't AI-generated. It's produced deterministically by our own code generators, so the output is always predictable, clean, and follows conventional best practices.

The key difference from other tools is that it's not a no-code platform you get locked into. When you're done, it pushes a well-structured codebase to your GitHub repo (or you can download a .zip file). You own it completely and can start building your real features on top of it right away.

What it generates:
- Laravel backend: migrations, models with relationships, factories, seeders, and basic CRUD API endpoints.
- Vue frontend: a SPA with PrimeVue components, including auth pages, data tables, and create/edit forms for each of your models, with all the state management wired up.
- Dev stuff: Docker configs, a CI/CD pipeline starter, linters, and formatters.

The idea is to skip the repetitive work and get straight to the interesting parts of a project. It's free to use the builder, see a live preview, and download the full codebase for apps up to 5 modules. For larger apps, you only pay if you decide you want the source code.

We're in an early alpha and would love some honest feedback from the community. Does the generated code look sensible? Are we missing any obvious features? Is this something you would find useful, or do you know anyone who might? Let us know what you think.
Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality
Hacker News (score: 24)[Other] Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality I've been working on Fast LiteLLM, a Rust acceleration layer for the popular LiteLLM library, and I had some interesting learnings that might resonate with other developers trying to squeeze performance out of existing systems.

My assumption was that LiteLLM, being a Python library, would have plenty of low-hanging fruit for optimization. I set out to create a Rust layer using PyO3 to accelerate the performance-critical parts: token counting, routing, rate limiting, and connection pooling.

The approach:

- Built Rust implementations of token counting using tiktoken-rs
- Added lock-free data structures with DashMap for concurrent operations
- Implemented async-friendly rate limiting
- Created monkeypatch shims to replace Python functions transparently (sketched at the end of this post)
- Added comprehensive feature flags for safe, gradual rollouts
- Developed performance monitoring to track improvements in real time

After building out all the Rust acceleration, I ran my comprehensive benchmark comparing baseline LiteLLM vs. the shimmed version:

Function            Baseline    Shimmed     Speedup  Improvement
token_counter       0.000035s   0.000036s   0.99x    -0.6%
count_tokens_batch  0.000001s   0.000001s   1.10x    +9.1%
router              0.001309s   0.001299s   1.01x    +0.7%
rate_limiter        0.000000s   0.000000s   1.85x    +45.9%
connection_pool     0.000000s   0.000000s   1.63x    +38.7%

Turns out LiteLLM is already quite well-optimized! The core token counting was essentially unchanged (0.6% slower, likely within measurement noise), and the most significant gains came from the more complex operations like rate limiting and connection pooling, where Rust's concurrent primitives made a real difference.

Key takeaways:

1. Don't assume existing libraries are under-optimized; the maintainers likely know their domain well.
2. Focus on algorithmic improvements over reimplementation; sometimes a better approach beats a faster language.
3. Micro-benchmarks can be misleading; real-world performance impact varies significantly.
4. The biggest gains often come from the complex parts, not the simple operations.
5. Even "modest" improvements can matter at scale; 45% improvements in rate limiting are meaningful for high-throughput applications.

While the core token counting saw minimal improvement, the rate limiting and connection pooling gains still provide value for high-volume use cases. The infrastructure I built (feature flags, performance monitoring, safe fallbacks) creates a solid foundation for future optimizations.

The project continues as Fast LiteLLM on GitHub for anyone interested in the Rust-Python integration patterns, even if the performance gains were humbling.

Edit: To clarify, the negative result for token_counter is likely within measurement noise, suggesting that LiteLLM's token counting is already well-optimized. The 45%+ gains in rate limiting and connection pooling still provide value for high-throughput applications.
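A minimal sketch of the monkeypatch-shim pattern mentioned in the approach list; the compiled module name and its exported function are hypothetical:

```python
import litellm

try:
    # Hypothetical PyO3-built extension module; the name is illustrative.
    import fast_litellm_rs

    # Swap the pure-Python hot path for the Rust implementation,
    # keeping a reference for debugging or fallback.
    _py_token_counter = litellm.token_counter
    litellm.token_counter = fast_litellm_rs.token_counter
except ImportError:
    # Extension not installed: keep the pure-Python implementation.
    pass
```

Because callers import litellm rather than the shim, the swap is transparent, which is what makes per-function feature flags and safe fallbacks straightforward to layer on top.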
Show HN: Gitlogue – A terminal tool that replays your Git commits with animation
Hacker News (score: 76)[CLI Tool] Show HN: Gitlogue – A terminal tool that replays your Git commits with animation Gitlogue is a CLI that turns your Git commits into a typing-style replay.

It visualizes diffs line by line, shows the file tree, and plays back each edit as if it were typed in real time.

Key points:
• Realistic typing animation
• Syntax-highlighted diffs
• File-tree view
• Replay any commit
• Self-contained CLI

Demo video is in the README.

Repo: https://github.com/unhappychoice/gitlogue
A 'small' vanilla Kubernetes install on NixOS
Hacker News (score: 27)[DevOps] A 'small' vanilla Kubernetes install on NixOS
Codaro
Product Hunt[Other] Real-time dev insights, no micromanagement. AI-driven workflow visibility for software teams. Codaro’s IDE plugins deliver real-time reports, live alerts, and task integration—recapturing valuable dev hours and turning status updates into strategy.
Unofficial "Tier 4" Rust Target for older Windows versions
Hacker News (score: 96)[Other] Unofficial "Tier 4" Rust Target for older Windows versions
0Xminds
Product Hunt[Other] The AI platform for full-stack Web3 & Web2 development 0xminds bridges the gap between Web2 and Web3 development. Other tools are only site generators or contract editors; 0xminds is a true all-in-one: our AI builds full-stack apps (frontend + backend) for both Web2 and Web3 from a single prompt. The standout feature is the unified workflow: generate your site, then write, compile, and deploy your custom smart contracts and tokens directly. It's your entire stack, from idea to deployment.