🛠️ All DevTools
Showing 1–20 of 3221 tools
Last Updated
February 03, 2026 at 08:00 PM
Show HN: PII-Shield – Log Sanitization Sidecar with JSON Integrity (Go, Entropy)
Show HN (score: 9)[Monitoring/Observability] Show HN: PII-Shield – Log Sanitization Sidecar with JSON Integrity (Go, Entropy)

What PII-Shield does: It's a K8s sidecar (or CLI tool) that pipes application logs, detects secrets using Shannon entropy (catching unknown keys like "sk-live-..." without predefined patterns), and redacts them deterministically using HMAC.

Why deterministic? So that "pass123" always hashes to the same "[HIDDEN:a1b2c]", allowing QA/devs to correlate errors without seeing the raw data.

Key features:
1. JSON integrity: it parses JSON, sanitizes values, and rebuilds it, guaranteeing valid JSON output for your SIEM (ELK/Datadog).
2. Entropy detection: uses context-aware entropy analysis to catch high-randomness strings.
3. Fail-open: designed as a transparent pipe wrapper to preserve app uptime.

The project is open source (Apache 2.0).

Repo: https://github.com/aragossa/pii-shield
Docs: https://pii-shield.gitbook.io/docs/

I'd love your feedback on the entropy/threshold logic!
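The two ideas in the post (Shannon entropy to spot secret-like tokens, a keyed HMAC to replace them deterministically) fit in a few lines. A minimal sketch in Python, assuming an illustrative entropy threshold and a truncated HMAC-SHA256 tag; this is not the repo's Go implementation, just the general technique:

    import hmac, hashlib, math

    REDACTION_KEY = b"rotate-me"   # hypothetical key; a real deployment would inject this secret
    ENTROPY_THRESHOLD = 4.0        # illustrative: bits of entropy per character

    def shannon_entropy(s: str) -> float:
        """Estimate bits of entropy per character from character frequencies."""
        if not s:
            return 0.0
        freqs = (s.count(c) / len(s) for c in set(s))
        return -sum(p * math.log2(p) for p in freqs)

    def redact(token: str) -> str:
        """Deterministic replacement: the same secret always maps to the same tag."""
        tag = hmac.new(REDACTION_KEY, token.encode(), hashlib.sha256).hexdigest()[:5]
        return f"[HIDDEN:{tag}]"

    def sanitize(line: str) -> str:
        """Redact whitespace-separated tokens that look high-entropy enough to be secrets."""
        return " ".join(
            redact(t) if len(t) > 8 and shannon_entropy(t) > ENTROPY_THRESHOLD else t
            for t in line.split()
        )

    print(sanitize("user bob logged in with key sk-live-9fQ2xL7pVa3ZtY8wK1mN"))

Because the tag is an HMAC rather than a plain hash, someone who sees "[HIDDEN:a1b2c]" in the logs cannot brute-force the original value without the key, while repeated occurrences of the same secret still correlate.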
Tadpole – A modular and extensible DSL built for web scraping
Hacker News (score: 20)[Other] Tadpole – A modular and extensible DSL built for web scraping
Show HN: C discrete event SIM w stackful coroutines runs 45x faster than SimPy
Hacker News (score: 25)[Other] Show HN: C discrete event SIM w stackful coroutines runs 45x faster than SimPy

Hi all,

I have built Cimba, a multithreaded discrete event simulation library in C.

Cimba uses POSIX pthread multithreading for parallel execution of multiple simulation trials, while coroutines provide concurrency inside each simulated trial universe. The simulated processes are based on asymmetric stackful coroutines with the context switching hand-coded in assembly.

The stackful coroutines make it natural to express agentic behavior by conceptually placing oneself "inside" that process and describing what it does. A process can run in an infinite loop or just act as a one-shot customer passing through the system, yielding and resuming execution from any level of its call stack, acting both as an active agent and a passive object as needed. This is inspired by my own experience programming in Simula67, many moons ago, where I found the coroutines more important than the deservedly famous object-orientation.

Cimba turned out to run really fast. In a simple benchmark, 100 trials of an M/M/1 queue run for one million time units each, it ran 45 times faster than an equivalent model built in SimPy + Python multiprocessing. The running time was reduced by 97.8% vs the SimPy model. Cimba even processed more simulated events per second on a single CPU core than SimPy could do on all 64 cores.

The speed is not only due to the efficient coroutines. Other parts are also designed for speed, such as a hash-heap event queue (binary heap plus Fibonacci hash map), fast random number generators and distributions, memory pools for frequently used object types, and so on.

The initial implementation supports the AMD64/x86-64 architecture for Linux and Windows. I plan to target Apple Silicon next, then probably ARM.

I believe this may interest the HN community. I would appreciate your views on both the API and the code. Any thoughts on future target architectures to consider?

Docs: https://cimba.readthedocs.io/en/latest/
Repo: https://github.com/ambonvik/cimba
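For readers unfamiliar with the workload, an M/M/1 queue (Poisson arrivals, a single exponential server) is only a few lines in SimPy. A rough sketch of what the Python side of such a benchmark looks like, with illustrative arrival and service rates rather than the post's exact parameters:

    import random
    import simpy

    def customer(env, server):
        """One-shot customer: queue for the single server, then hold it for service."""
        with server.request() as req:
            yield req
            yield env.timeout(random.expovariate(1.0))   # service rate mu = 1.0 (illustrative)

    def arrivals(env, server):
        """Poisson arrival process feeding customers into the queue."""
        while True:
            yield env.timeout(random.expovariate(0.8))   # arrival rate lambda = 0.8 (illustrative)
            env.process(customer(env, server))

    env = simpy.Environment()
    server = simpy.Resource(env, capacity=1)
    env.process(arrivals(env, server))
    env.run(until=1_000_000)   # one trial of one million simulated time units, as in the post

In a stackful-coroutine design like Cimba's, the same customer logic is an ordinary function that can yield from any depth of its call stack, rather than a Python generator restricted to yielding at the top level.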
Show HN: LUML – an open source (Apache 2.0) MLOps/LLMOps platform
Show HN (score: 5)[DevOps] Show HN: LUML – an open source (Apache 2.0) MLOps/LLMOps platform

Hi HN,

We built LUML (https://github.com/luml-ai/luml), an open-source (Apache 2.0) MLOps/LLMOps platform that covers experiments, registry, LLM tracing, deployments and so on.

It separates the control plane from your data and compute. Artifacts are self-contained. Each model artifact includes all metadata (including the experiment snapshots, dependencies, etc.), and it stays in your storage (S3-compatible or Azure).

File transfers go directly between your machine and storage, and execution happens on compute nodes you host and connect to LUML.

We'd love you to try the platform and share your feedback!
Show HN: Sandboxing untrusted code using WebAssembly
Hacker News (score: 15)[DevOps] Show HN: Sandboxing untrusted code using WebAssembly

Hi everyone,

I built a runtime to isolate untrusted code using wasm sandboxes.

Basically, it protects your host system from problems that untrusted code can cause. We've had a great discussion about sandboxing in Python lately that elaborates a bit more on the problem [1]. In TypeScript, wasm integration is even more natural thanks to the close proximity between both ecosystems.

The core is built in Rust. On top of that, I use WASI 0.2 via wasmtime and the component model, along with custom SDKs that keep things as idiomatic as possible.

For example, in Python we have a simple decorator:

    from capsule import task

    @task(
        name="analyze_data",
        compute="MEDIUM",
        ram="512mb",
        allowed_files=["./authorized-folder/"],
        timeout="30s",
        max_retries=1
    )
    def analyze_data(dataset: list) -> dict:
        """Process data in an isolated, resource-controlled environment."""
        # Your code runs safely in a Wasm sandbox
        return {"processed": len(dataset), "status": "complete"}

And in TypeScript we have a wrapper:

    import { task } from "@capsule-run/sdk"

    export const analyze = task({
        name: "analyzeData",
        compute: "MEDIUM",
        ram: "512mb",
        allowedFiles: ["./authorized-folder/"],
        timeout: 30000,
        maxRetries: 1
    }, (dataset: number[]) => {
        return {processed: dataset.length, status: "complete"}
    });

You can set CPU (with compute), memory, filesystem access, and retries to keep precise control over your tasks.

It's still quite early, but I'd love feedback. I'll be around to answer questions.

GitHub: https://github.com/mavdol/capsule

[1] https://news.ycombinator.com/item?id=46500510
Show HN: Inverting Agent Model (App as Clients, Chat as Server and Reflection)
Hacker News (score: 13)[Other] Show HN: Inverting Agent Model (App as Clients, Chat as Server and Reflection)

Hello HN. I'd like to start by saying that I am a developer who started this research project to challenge myself. I know standard protocols like MCP exist, but I wanted to explore a different path and have some fun creating a communication layer tailored specifically for desktop applications.

The project is designed to handle communication between desktop apps in an agentic manner, so the focus is strictly on this IPC layer (forget about HTTP API calls).

At the heart of RAIL (Remote Agent Invocation Layer) are two fundamental concepts. The names might sound scary, but remember this is a research project:

- Memory Logic Injection + Reflection
- Paradigm shift: the Chat is the Server, and the Apps are the Clients.

Why this approach? The idea was to avoid creating huge wrappers or API endpoints just to call internal methods. Instead, the agent application passes its own instance to the SDK (e.g., RailEngine.Ignite(this)).

Here is the flow that I find fascinating:

- The App passes its instance to the RailEngine library running inside its own process.
- The Chat (Orchestrator) receives the manifest of available methods. The Model decides what to do and sends the command back via Named Pipe.
- The Trigger: the RailEngine inside the App receives the command and uses Reflection on the held instance to directly perform the .Invoke().

Essentially, I am injecting the "Agent Logic" directly into the application memory space via the SDK, allowing the Chat to pull the trigger on local methods remotely.

A note on the repo: the GitHub repository has become large. The core focus is RailEngine and RailOrchestrator. You will find other connectors (C++, Python) that are frankly "trash code" or incomplete experiments. I forced RTTR in C++ to achieve reflection, but I'm not convinced by it. Please skip those; they aren't relevant to the architectural discussion.

I'd love to focus the discussion on memory-managed languages (like C#/.NET) and ask you:

- Architecture: Does this inverted architecture (Apps "dialing home" via IPC) make sense for local agents compared to the standard Server/API model?
- Performance: Regarding the use of Reflection for every call, would it be worth implementing a mechanism to cache methods as Delegates at startup? Or is the optimization irrelevant considering the latency of the LLM itself?
- Security: Since we are effectively bypassing the API layer, what would be a hypothetical security layer to prevent malicious use? (e.g., a capability manifest signed by the user?)

I would love to hear architectural comparisons and critiques.
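RAIL itself is .NET-centric, but the core loop (the app hands its instance to the engine, the orchestrator receives a manifest of callable methods, then invokes them by name via reflection) is language-agnostic. A hedged Python analog of that flow, with hypothetical names that are not the RAIL SDK, including the delegate-caching idea from the performance question:

    import inspect

    class NotesApp:
        """Stand-in host application exposing a couple of methods to the orchestrator."""
        def add_note(self, text: str) -> str:
            return f"added: {text}"
        def list_notes(self) -> list:
            return ["buy milk"]

    class RailEngineSketch:
        """Hypothetical engine: holds the app instance, publishes a manifest, invokes by name."""
        def __init__(self, instance):
            # Cache bound methods once at startup (the "delegates at startup" idea).
            self.methods = {
                name: fn
                for name, fn in inspect.getmembers(instance, inspect.ismethod)
                if not name.startswith("_")
            }

        def manifest(self) -> dict:
            """Roughly what the orchestrator would receive over the named pipe."""
            return {name: str(inspect.signature(fn)) for name, fn in self.methods.items()}

        def invoke(self, name: str, *args, **kwargs):
            """Command coming back from the chat: call the local method directly."""
            return self.methods[name](*args, **kwargs)

    engine = RailEngineSketch(NotesApp())
    print(engine.manifest())                       # {'add_note': '(text: str) -> str', ...}
    print(engine.invoke("add_note", "ship it"))    # added: ship it

The dictionary lookup plus direct call is the Python analog of caching MethodInfo as delegates; as the post itself notes, LLM latency will likely dominate either way.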
Show HN: difi – A Git diff TUI with Neovim integration (written in Go)
Hacker News (score: 17)[Other] Show HN: difi – A Git diff TUI with Neovim integration (written in Go)
LNAI – Define AI coding tool configs once, sync to Claude, Cursor, Codex, etc.
Hacker News (score: 28)[Other] LNAI – Define AI coding tool configs once, sync to Claude, Cursor, Codex, etc.
Network Connection Information
Product Hunt[Other] View Wi-Fi & ethernet details from the menubar Ever wonder why your connection feels slow? Curious about what Wi-Fi standard you're really using? Put advanced network diagnostics right into your macOS menu bar. Designed for IT professionals, developers, and anyone who wants to understand their network better, Network Connection Info provides a clean, at-a-glance view of your most important connection stats. See your connection status, IP address, link speed, or Wi-Fi signal strength directly in the menu bar.
Caudex
Product Hunt[CLI Tool] Terminal for Claude Code with GUI controls Caudex is a lightweight terminal app for Claude Code.

Features:
- Multi-tab & split terminals for parallel development
- Tab overview to monitor all sessions at a glance
- Context & cost monitoring
- One-click setup for Skills and MCP servers
- Keyboard-first workflow

macOS/Windows available now. Linux coming soon.
Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed
[DevOps] Show HN: 127 PRs to Prod this wknd with 18 AI agents: metaswarm. MIT licensed

A few weeks ago I posted about GoodToGo (https://news.ycombinator.com/item?id=46656759) - a tool that gives AI agents a deterministic answer to "is this PR ready to merge?" Several people asked about the larger orchestration system I mentioned. This is that system.

I got tired of being a project manager for Claude Code. It writes code fine, but shipping production code is seven or eight jobs — research, planning, design review, implementation, code review, security audit, PR creation, CI babysitting. I was doing all the coordination myself. The agent typed fast. I was still the bottleneck. What I really needed was an orchestrator of orchestrators - swarms of swarms of agents with deterministic quality checks.

So I built metaswarm. It breaks work into phases and assigns each to a specialist swarm orchestrator. It manages handoffs and uses BEADS for deterministic gates that persist across /compact, /clear, and even across sessions. Point it at a GitHub issue or brainstorm with it (it uses Superpowers to ask clarifying questions) and it creates epics, tasks, and dependencies, then runs the full pipeline to a merged PR - including outside code review like CodeRabbit, Greptile, and Bugbot.

The thing that surprised me most was the design review gate. Five agents — PM, Architect, Designer, Security, CTO — review every plan in parallel before a line of code gets written. All five must approve. Three rounds max, then it escalates to a human. I expected a rubber stamp. It catches real design problems, dependency issues, security gaps.

This weekend I pointed it at my backlog. 127 PRs merged. Every one hit 100% test coverage. No human wrote code, reviewed code, or clicked merge. OK, I guided it a bit, mostly helping with plans for some of the epics.

A few learnings:

Agent checklists are theater. Agents skipped coverage checks, misread thresholds, or decided they didn't apply. Prompts alone weren't enough. The fix was deterministic gates — BEADS, pre-push hooks, CI jobs all on top of the agent completion check. The gates block bad code whether or not the agent cooperates.

The agents are just markdown files. No custom runtime, no server, and while I built it on TypeScript, the agents are language-agnostic. You can read all of them, edit them, add your own.

It self-reflects too. After every merged PR, the system extracts patterns, gotchas, and decisions into a JSONL knowledge base. Agents only load entries relevant to the files they're touching. The more it ships, the fewer mistakes it makes. It learns as it goes.

metaswarm stands on two projects: https://github.com/steveyegge/beads by Steve Yegge (git-native task tracking and knowledge priming) and https://github.com/obra/superpowers by Jesse Vincent (disciplined agentic workflows — TDD, brainstorming, systematic debugging). Both were essential.

Background: I founded Technorati, Linuxcare, and Warmstart; tech exec at Lyft and Reddit. I built metaswarm because I needed autonomous agents that could ship to a production codebase with the same standards I'd hold a human team to.

    $ cd my-project-name
    $ npx metaswarm init

MIT licensed. IANAL. YMMV. Issues/PRs welcome!
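The "deterministic gate" idea does not depend on the agent framework: a small script that parses the coverage report and exits nonzero blocks the merge whether or not the agent cooperated. A minimal sketch, assuming a coverage.py/Cobertura-style coverage.xml and the post's 100% bar (both illustrative):

    #!/usr/bin/env python3
    """Tiny coverage gate for a pre-push hook or CI job: fail if line coverage is below the bar."""
    import sys
    import xml.etree.ElementTree as ET

    THRESHOLD = 100.0  # illustrative bar; set per repo

    root = ET.parse("coverage.xml").getroot()            # Cobertura-style report, e.g. from coverage.py
    line_rate = float(root.get("line-rate", "0")) * 100  # root attribute is a 0..1 fraction
    if line_rate < THRESHOLD:
        print(f"coverage gate: {line_rate:.1f}% < {THRESHOLD:.1f}%", file=sys.stderr)
        sys.exit(1)
    print(f"coverage gate: {line_rate:.1f}% OK")

Because the gate runs in the hook or CI job rather than in the agent's prompt, it cannot be skipped, misread, or rationalized away.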
GitHub experiences various partial outages/degradations
Hacker News (score: 97)[Other] GitHub experiences various partial outages/degradations
Show HN: HoundDog.ai – Ultra-Fast Code Scanner for Data Privacy
Show HN (score: 13)[Code Quality] Show HN: HoundDog.ai – Ultra-Fast Code Scanner for Data Privacy

Hi HN,

I'm one of the creators of HoundDog.ai (https://github.com/hounddogai/hounddog). We currently handle privacy scanning for Replit's 45M+ creators.

We built HoundDog because privacy compliance is usually a choice between manual spreadsheets and reactive runtime scanning. While runtime tools are useful for monitoring, they only catch leaks after the code is live and the data has already moved. They can also miss code paths that aren't actively triggered in production.

HoundDog traces sensitive data in code during development and helps catch risky flows (e.g., PII leaking into logs or unapproved third-party SDKs) before the code is shipped.

The core scanner is a standalone Rust binary. It doesn't use LLMs, so it's local, deterministic, cheap, and fast. It can scan 1M+ lines of code in seconds on a standard laptop, and supports 80+ sensitive data types (PII, PHI, CHD) and hundreds of data sinks (logs, SDKs, APIs, ORMs, etc.) out of the box.

We use AI internally to expand and scale our rules, identifying new data sources and sinks, but the execution is pure static analysis.

The scanner is free to use (no signups), so please try it out and send us feedback. I'll be around to answer any questions!
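For context, the flows a scanner like this targets are mundane: a sensitive field read from a request or model and passed verbatim into a log call or third-party SDK. A hypothetical Python example of the pattern being flagged (not HoundDog's rule syntax):

    import logging

    logger = logging.getLogger("signup")

    def register(user):
        # Risky flow: email and SSN (sensitive data) reach a log sink verbatim.
        logger.info("new signup: %s ssn=%s", user.email, user.ssn)

        # Safer: log an opaque identifier and keep the sensitive values out of the sink.
        logger.info("new signup: user_id=%s", user.id)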
Show HN: Sklad – Secure, offline-first snippet manager (Rust, Tauri v2)
Show HN (score: 10)[Other] Show HN: Sklad – Secure, offline-first snippet manager (Rust, Tauri v2)

Hi HN, I'm Pavel.

I built Sklad because, as a DevOps engineer, I was frustrated with how I handled operational data. I constantly need access to SSH passwords (where keys aren't an option), specific IP addresses, and complex CLI one-liners. I realized I was storing them in insecure text files or sticky notes because standard clipboard managers felt too bloated and password managers were too slow for my workflow.

I wanted a "warehouse" for this data—something that lives quietly in the system tray, supports deep hierarchy, works completely offline, and looks industrial.

The app is built with Rust and Tauri v2. The core technical challenge was mapping a local JSON tree structure directly to a recursive native OS tray menu. This allows you to navigate nested folders just by hovering, without opening a window.

For security, I implemented AES-256-GCM encryption with Argon2 for key derivation. When the vault locks, the sensitive data is wiped from memory, and the tray menu collapses to a locked state.

It was an interesting journey building this on the Tauri v2 Beta ecosystem. I'd love to hear your feedback on the implementation, especially regarding the Rust-side security logic.

Repo: https://github.com/Rench321/sklad
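The scheme described (Argon2 for key derivation, AES-256-GCM for the vault) maps to a few lines in most languages. A rough Python equivalent of that flow using the argon2-cffi and cryptography packages; parameters are illustrative and this is not Sklad's Rust code:

    import os
    from argon2.low_level import hash_secret_raw, Type
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def derive_key(password: bytes, salt: bytes) -> bytes:
        """Argon2id stretches the master password into a 32-byte AES-256 key."""
        return hash_secret_raw(password, salt, time_cost=3, memory_cost=64 * 1024,
                               parallelism=2, hash_len=32, type=Type.ID)

    def encrypt_vault(password: bytes, plaintext: bytes) -> bytes:
        salt, nonce = os.urandom(16), os.urandom(12)
        key = AESGCM(derive_key(password, salt))
        # Salt and nonce are not secret; store them alongside the ciphertext.
        return salt + nonce + key.encrypt(nonce, plaintext, None)

    def decrypt_vault(password: bytes, blob: bytes) -> bytes:
        salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
        return AESGCM(derive_key(password, salt)).decrypt(nonce, ciphertext, None)

    blob = encrypt_vault(b"master-pass", b'{"ssh": "10.0.0.5 root hunter2"}')
    print(decrypt_vault(b"master-pass", blob))

GCM's authentication tag means a wrong master password fails decryption outright instead of yielding garbage, which is the behavior you want for a locked vault.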
Ask Ellie
Product Hunt[API/SDK] Turn Slack messages into GitHub, Jira, or Linear tickets Ask Ellie is the AI chat agent that brings all your engineering context into Slack. Ask about code changes, PR status, sprint velocity, production issues, or analytics and get instant answers pulled from your actual tools. Create tickets, debug incidents, check what shipped, or find out who's blocking what, all without leaving chat. Connect GitHub, Jira, Linear, Sentry, PostHog, and more. No more dashboard hopping. Just answers.
Devlop Ai
Product Hunt[IDE/Editor] AI IDE that writes and flashes STM32 firmware for your board. AI coding agents to speed up STM32 embedded development.
Building a Telegram Bot with Cloudflare Workers, Durable Objects and Grammy
Hacker News (score: 14)[Other] Building a Telegram Bot with Cloudflare Workers, Durable Objects and Grammy
Show HN: Is AI "good" yet? – tracking HN sentiment on AI coding
Show HN (score: 5)[Other] Show HN: Is AI "good" yet? – tracking HN sentiment on AI coding A survey tracking developer sentiment on AI-assisted coding through Hacker News posts.
Show HN: Voiden – an offline, Git-native API tool built around Markdown
Hacker News (score: 19)[API/SDK] Show HN: Voiden – an offline, Git-native API tool built around Markdown

Hi HN,

We have open-sourced Voiden.

Most API tools are built like platforms. They are heavy because they optimize for accounts, sync, and abstraction - not for simple, local API work.

Voiden treats API tooling as files.

It's an offline-first, Git-native API tool built on Markdown, where specs, tests, and docs live together as executable Markdown in your repo. Git is the source of truth.

No cloud. No syncing. No accounts. No telemetry. Just Markdown, Git, hotkeys, and your damn specs.

Voiden is extensible via plugins (including gRPC and WSS).

Repo: https://github.com/VoidenHQ/voiden
Download Voiden here: https://voiden.md/download

We'd love feedback from folks tired of overcomplicated and bloated API tooling!