🛠️ All DevTools
Showing 1–20 of 3933 tools
Last Updated
March 27, 2026 at 12:07 PM
Yeachan-Heo/oh-my-claudecode
GitHub Trending[DevOps] Teams-first Multi-agent orchestration for Claude Code
FreeCAD/FreeCAD
GitHub Trending[Other] Official source code of FreeCAD, a free and open-source multi-platform 3D parametric modeler.
Show HN: Fio: 3D World editor/game engine – inspired by Radiant and Hammer
Hacker News (score: 45)[Other] Show HN: Fio: 3D World editor/game engine – inspired by Radiant and Hammer

A liminal brush-based CSG editor and game engine with a unified (forward) renderer, inspired by Radiant and Worldcraft/Hammer.

Compact and lightweight (target: Snapdragon 8CX, OpenGL 3.3).

Real-time lighting with stencil shadows, without the need for pre-baked compilation.
Show HN: Layerleak – Like Trufflehog, but for Docker Hub
Show HN (score: 5)[Other] Show HN: Layerleak – Like Trufflehog, but for Docker Hub
Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3
Hacker News (score: 45)[Database] Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3

I built a SQLite VFS in Rust that serves cold queries directly from S3 with sub-second performance, and often much faster.

It's called turbolite. It is experimental, buggy, and may corrupt data. I would not trust it with anything important yet.

I wanted to explore whether object storage has gotten fast enough to support embedded databases over cloud storage. Filesystems reward tiny random reads and in-place mutation. S3 rewards fewer requests, bigger transfers, immutable objects, and aggressively parallel operations where bandwidth is often the real constraint. This was explicitly inspired by turbopuffer's ground-up S3-native design: https://turbopuffer.com/blog/turbopuffer

The use case I had in mind is lots of mostly-cold SQLite databases (database-per-tenant, database-per-session, or database-per-user architectures) where keeping a separate attached volume for each inactive database feels wasteful. turbolite assumes a single write source and is aimed much more at "many databases with bursty cold reads" than at "one hot database."

Instead of doing naive page-at-a-time reads from a raw SQLite file, turbolite introspects SQLite B-trees, stores related pages together in compressed page groups, and keeps a manifest that is the source of truth for where every page lives. Cache misses use seekable zstd frames and S3 range GETs, so fetching one needed page does not require downloading an entire object.

At query time, turbolite can also pass storage operations from the query plan down to the VFS to front-run downloads for indexes and large scans in the order they will be accessed.

You can tune how aggressively turbolite prefetches. For point queries and small joins, it can stay conservative and avoid prefetching whole tables. For scans, it can get much more aggressive.

It also groups pages by page type in S3. Interior B-tree pages are bundled separately and loaded eagerly. Index pages prefetch aggressively. Data pages are stored by table. The goal is to make cold point queries and joins decent, while making scans less awful than naive remote paging would be.

On a 1M-row / 1.5GB benchmark on EC2 + S3 Express, I'm seeing results like sub-100ms cold point lookups, sub-200ms cold 5-join profile queries, and sub-600ms scans from an empty cache. It's somewhat slower on normal S3/Tigris.

Current limitations are pretty straightforward: it's single-writer only, and it is still very much a systems experiment rather than production infrastructure.

I'd love feedback from people who've worked on SQLite-over-network, storage engines, VFSes, or object-storage-backed databases. I'm especially interested in whether the B-tree-aware grouping / manifest / seekable-range-GET direction feels like the right one to keep pushing.
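To make the page-group/manifest idea concrete, here is a minimal sketch in Python. The layout, key names, and data are entirely hypothetical (turbolite's actual on-disk format is not published in this post); the point is only the mechanism: the manifest maps each SQLite page to a byte range inside a group object, so a cache miss costs one range GET instead of a whole-object download.

```python
# Toy sketch of a page-group manifest and range-GET lookup.
# All names and layouts here are illustrative, not turbolite's real format.

from dataclasses import dataclass

@dataclass(frozen=True)
class PageLocation:
    group_key: str   # object key of the page group holding this page
    offset: int      # byte offset of the page's compressed frame
    length: int      # length of the frame in bytes

# In-memory stand-in for S3 (key -> object bytes).
STORE = {
    "groups/interior-000": b"AAAABBBBCCCC",   # interior B-tree pages, bundled
    "groups/table-users-000": b"DDDDEEEE",    # data pages grouped by table
}

# The manifest is the source of truth for where every page lives.
MANIFEST = {
    1: PageLocation("groups/interior-000", 0, 4),
    2: PageLocation("groups/interior-000", 4, 4),
    7: PageLocation("groups/table-users-000", 4, 4),
}

def range_get(key: str, offset: int, length: int) -> bytes:
    """Stand-in for an S3 range GET: Range: bytes=offset-(offset+length-1)."""
    return STORE[key][offset:offset + length]

def read_page(page_no: int) -> bytes:
    """Fetch exactly one page's frame, not the whole group object."""
    loc = MANIFEST[page_no]
    return range_get(loc.group_key, loc.offset, loc.length)

print(read_page(2))  # frame bytes for page 2 only
```

In the real system the frames would be seekable zstd frames that still need decompression, and a prefetcher would batch adjacent manifest entries into one larger range request.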
Taming LLMs: Using Executable Oracles to Prevent Bad Code
Hacker News (score: 31)[Other] Taming LLMs: Using Executable Oracles to Prevent Bad Code
$500 GPU outperforms Claude Sonnet on coding benchmarks
Hacker News (score: 31)[Other] $500 GPU outperforms Claude Sonnet on coding benchmarks
Stripe Projects: Provision and manage services from the CLI
Hacker News (score: 71)[CLI Tool] Stripe Projects: Provision and manage services from the CLI
Moving from GitHub to Codeberg, for lazy people
Hacker News (score: 501)[Other] Moving from GitHub to Codeberg, for lazy people
Show HN: Paseo – Open-source coding agent interface (desktop, mobile, CLI)
Show HN (score: 7)[Other] Show HN: Paseo – Open-source coding agent interface (desktop, mobile, CLI)

Hey HN, I'm Mo. I'm building Paseo, a multi-platform interface for running Claude Code, Codex and OpenCode. The daemon runs on any machine (your MacBook, a VPS, whatever) and clients (web, mobile, desktop, CLI) connect over WebSocket (there's a built-in E2EE relay for convenience, but you can opt out).

I started working on Paseo last September as a push-to-talk voice interface for Claude Code. I wanted to bounce ideas hands-free while going on walks; after a while I wanted to see what the agent was doing, then I wanted to text it when I couldn't talk, then I wanted to see diffs and run multiple agents. I kept fixing rough edges and adding features, and slowly it became what it is today.

What it does:

- Run multiple providers through the same UI
- Works on macOS, Linux, Windows, iOS, Android, and web
- Manage agents on different machines from the same UI
- E2EE relay for mobile connectivity
- Local voice chat and dictation (NVIDIA Parakeet + Kokoro + Sherpa ONNX)
- Split panes to work with agents, files and terminals side by side
- Git panel to review diffs and do common actions (commit, push, create PR, etc.)
- Git worktree management so agents don't step on each other
- Docker-style CLI to run agents
- No telemetry, no tracking, no login

Paseo does not call inference APIs directly or extract your OAuth tokens. It wraps your first-party agent CLIs and runs them exactly as you would in your terminal. Your sessions, your system prompts, your tools: nothing is intercepted or modified.

Stack: The daemon is written in TypeScript. The app uses Expo and compiles to both native mobile apps and web. The desktop app is in Electron (I started with Tauri and had to migrate). Sharing the same codebase across different form factors was challenging, but I'd say that with discipline it's doable, and the result has been worth it: most features I build automatically work in all clients. I did have to implement some platform-specific stuff, especially around gestures, audio and scroll behavior. The relay is built on top of Cloudflare Durable Objects, and so far it's holding up quite well.

I love using the app, but I am even more excited about the possibilities of the CLI: it becomes a primitive for more advanced agent orchestration, it has much better ergonomics than existing harnesses, and I'm already using it to experiment with loops and agent teams, although that's still new territory.

How Paseo compares to similar apps: Anthropic and OpenAI already do some of what Paseo does (Claude Code Remote Control, the Codex app, etc.), but with mixed quality, and you're locked into their models. Most other alternatives I've found are either closed source or not flexible enough for my needs.

The license is AGPL-3.0. The desktop app ships with a daemon, so that's all you need. But you can also `npm install -g @getpaseo/cli` for headless mode and connect via any client.

I mainly use a Mac, so Linux and Windows have mostly been tested by a small group of early adopters. If you run into issues, I'd appreciate bug reports on GitHub!

Repo: https://github.com/getpaseo/paseo
Homepage: https://paseo.sh/
Discord: https://discord.gg/jz8T2uahpH

Happy to answer questions about the product, architecture or whatever else!

(I resubmitted this post because I forgot to add the URL and it didn't allow me to add it later.)
Show HN: Veil – Dark mode PDFs without destroying images, runs in the browser
Hacker News (score: 52)[Other] Show HN: Veil – Dark mode PDFs without destroying images, runs in the browser

Hi HN! Here's a tool I just deployed that renders PDFs in dark mode without destroying the images. Internal and external links stay intact, and I decided to implement export since I'm not a fan of platform lock-in: you can view your dark PDF in your preferred reader, on any device. It's a side project born from a personal need first and foremost. When I was reading in the factory the books that eventually helped me get out of it, I had the problem that many study materials and books contained images and charts that forced me, with the dark readers available at the time, to always keep the original file open in multitasking, since the images became, to put it mildly, strange. I hope it can help some of you who have this same need. I think it could be very useful for researchers, but only future adoption will tell.

With that premise, I'd like to share the choices that made all of this possible. To do so, I'll walk through the three layers that veil creates from the original PDF:

- Layer 1: CSS filter. I use invert(0.86) hue-rotate(180deg) on the main canvas. I use 0.86 instead of 1.0 because I found that full inversion produces a pure black and pure white that are too aggressive for prolonged reading. 0.86 yields a soft dark grey (around #242424, though it depends on the document's white) and a muted white (around #DBDBDB) for the text, which I found to be the most comfortable value for hours of reading.

- Layer 2: image protection. A second canvas is positioned on top of the first, this time with no filters. Through PDF.js's public API getOperatorList(), I walk the PDF's operator list and reconstruct the CTM stack, that is, the save, restore and transform operations the PDF uses to position every object on the page. When I encounter a paintImageXObject (opcode 85 in PDF.js v5), the current transformation matrix gives me the exact bounds of the image. At that point I copy those pixels from a clean render onto the overlay. I didn't fork PDF.js because it would have become a maintenance nightmare given the size of the codebase and the frequent updates. Images also receive OCR treatment: text contained in charts and images becomes selectable, just like any other text on the page. At this point we have the text inverted and the images intact. But what if the page is already dark? Maybe the chapter title pages are black with white text? The next layer takes care of that.

- Layer 3: already-dark page detection. After rendering, the background brightness is measured by sampling the edges and corners of the page (where you're most likely to find pure background, without text or images in the way). The BT.601 formula is used to calculate perceived brightness by weighting the three color channels as the human eye sees them: green at 58.7%, red at 29.9%, blue at 11.4%, reflecting the eye's much greater sensitivity to green than to blue. If the average luminance falls below 40%, the page is flagged as already dark and the inversion is skipped, returning the original page. Presentation slides with dark backgrounds stay exactly as they are, instead of being inverted into something blinding.

Scanned documents are detected automatically and receive OCR via Tesseract.js, making text selectable and copyable even on PDFs that are essentially images. Everything runs locally; no framework was used, just vanilla JS, which is why it's an installable PWA that works offline too.

Here's the link to the app along with the repository: https://veil.simoneamico.com | https://github.com/simoneamico-ux-dev/veil

I hope veil can make your reading more pleasant. I'm open to any feedback. Thanks everyone!
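The already-dark detection in Layer 3 is easy to sketch. The following Python snippet (illustrative only, veil itself is vanilla JS) applies the BT.601 luma weights to edge/corner samples and compares the average against the 40% threshold described above:

```python
# Sketch of the Layer-3 check: BT.601 luminance over background samples.
# Sample positions and data are made up; the weights and the 40% threshold
# come from the description above.

def bt601_luma(r: int, g: int, b: int) -> float:
    """Perceived brightness per BT.601: green 58.7%, red 29.9%, blue 11.4%."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def is_already_dark(samples, threshold: float = 0.40) -> bool:
    """samples: (r, g, b) tuples taken from page edges and corners,
    where the background is most likely free of text and images."""
    avg = sum(bt601_luma(*px) for px in samples) / len(samples)
    return avg / 255.0 < threshold  # below 40% luminance -> skip inversion

light_page = [(219, 219, 219)] * 8   # ~#DBDBDB paper background
dark_slide = [(36, 36, 36)] * 8      # ~#242424 presentation background
print(is_already_dark(light_page))   # False -> invert this page
print(is_already_dark(dark_slide))   # True  -> return it unchanged
```

A dark slide averages around 14% luminance, well under the threshold, so it is returned unmodified instead of being inverted into something blinding.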
Show HN: Orloj – agent infrastructure as code (YAML and GitOps)
Hacker News (score: 12)[DevOps] Show HN: Orloj – agent infrastructure as code (YAML and GitOps)

Hey HN, we're Jon and Kristiane, and we're building Orloj (https://orloj.dev), an open-source orchestration runtime for multi-agent AI systems. You define agents, tools, policies, and workflows in declarative YAML manifests, and Orloj handles scheduling, execution, governance, and reliability.

Over the past year we tried many different platforms/frameworks for building agent systems, and we hit some sort of problem with all of them, so we decided to have a go at it ourselves. Jon has worked with Kubernetes and Terraform for years and has always liked their declarative nature, so he took patterns and concepts from both to build Orloj.

Orloj treats agents the way infrastructure-as-code treats cloud resources. You write a manifest that declares an agent's model, tools, permissions, and execution limits. You compose agents into directed graphs (pipelines, hierarchies, or swarm loops).

Governance is often overlooked, so we made resource policies (AgentPolicy, AgentRole, and ToolPermission) that are evaluated inline during execution, before every agent turn and tool call. Instead of prompt instructions that the model might ignore, these policies are a runtime gate. Unauthorized actions fail closed with structured errors and full audit trails. You can set token budgets per run, whitelist models, block specific tools, and scope policies to individual agent systems.

For reliability, we built lease-based task ownership (so crashed workers don't leave orphan tasks), which lets you run workers on different machines with whatever compute is needed. It helps when we need a GPU for certain tasks (as we did). The scheduler also supports cron triggers and webhook-driven task creation.

The architecture is a server/worker split, like Kubernetes. orlojd hosts the API, resource store (in-memory for dev, Postgres for production), and task scheduler. orlojworker instances claim and execute tasks, route model requests through a gateway (OpenAI, Anthropic, Ollama, etc.), and run tools in configurable isolation (direct, sandboxed, container, or WASM).

We work with a lot of MCP servers, so we wanted to make MCP integration as easy as possible. You register an MCP server (stdio or HTTP), Orloj auto-discovers its tools, and they become first-class resources with governance applied. So you can connect something like the GitHub MCP server and still have policy enforcement over what agents are allowed to do with it.

It ships with a built-in UI to manage all your workflows, plus a topology view to see everything working in real time. There are a few examples and starter templates in the repo to play around with and get a feel for what's possible.

More info in the docs: https://docs.orloj.dev

We're a small team and this is v0.1.0, so there's a lot still on the roadmap, but the full runtime is open source today and we'd love feedback on what we've built so far. What would you use this for? What's missing?
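To give a feel for the Kubernetes-style declarative model being described, here is a hypothetical manifest sketch. Every field name and value below is a guess at the shape, not Orloj's actual schema; the real resource definitions live in the docs at https://docs.orloj.dev.

```yaml
# Hypothetical sketch only: field names are invented to illustrate the
# "agents as declarative resources" idea, not copied from Orloj's schema.
apiVersion: orloj/v1
kind: Agent
metadata:
  name: triage-agent
spec:
  model: gpt-4o              # must be on the policy's model whitelist
  tools:
    - github-mcp             # auto-discovered from a registered MCP server
  limits:
    tokensPerRun: 50000      # fail closed once the budget is exhausted
---
apiVersion: orloj/v1
kind: AgentPolicy
metadata:
  name: triage-policy
spec:
  allowModels: [gpt-4o]
  blockTools: [shell]        # evaluated before every agent turn and tool call
```

The key design point from the post is that these policies are enforced by the runtime as a gate, not passed to the model as prompt text it could ignore.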
Show HN: Robust LLM Extractor for Websites in TypeScript
Hacker News (score: 32)[Other] Show HN: Robust LLM Extractor for Websites in TypeScript

We've been building data pipelines that scrape websites and extract structured data for a while now. If you've done this, you know the drill: you write CSS selectors, the site changes its layout, everything breaks at 2am, and you spend your morning rewriting parsers.

LLMs seemed like the obvious fix: just throw the HTML at GPT and ask for JSON. Except in practice, it's more painful than that:

- Raw HTML is full of nav bars, footers, and tracking junk that eats your token budget. A typical product page is 80% noise.
- LLMs return malformed JSON more often than you'd expect, especially with nested arrays and complex schemas. One bad bracket and your pipeline crashes.
- Relative URLs, markdown-escaped links, tracking parameters: the "small" URL issues compound fast when you're processing thousands of pages.
- You end up writing the same boilerplate: HTML cleanup → markdown conversion → LLM call → JSON parsing → error recovery → schema validation. Over and over.

We got tired of rebuilding this stack for every project, so we extracted it into a library.

Lightfeed Extractor is a TypeScript library that handles the full pipeline from raw HTML to validated, structured data:

- Converts HTML to LLM-ready markdown with main content extraction (strips nav, headers, footers), optional image inclusion, and URL cleaning
- Works with any LangChain-compatible LLM (OpenAI, Gemini, Claude, Ollama, etc.)
- Uses Zod schemas for type-safe extraction with real validation
- Recovers partial data from malformed LLM output instead of failing entirely: if 19 out of 20 products parsed correctly, you get those 19
- Built-in browser automation via Playwright (local, serverless, or remote) with anti-bot patches
- Pairs with our browser agent (@lightfeed/browser-agent) for AI-driven page navigation before extraction

We use this ourselves in production at Lightfeed, and it's been solid enough that we decided to open-source it.

GitHub: https://github.com/lightfeed/extractor
npm: npm install @lightfeed/extractor
Apache 2.0 licensed.

Happy to answer questions or hear feedback.
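The partial-recovery bullet is the interesting engineering piece, and the idea is easy to sketch. The snippet below is an illustration in Python (the library itself is TypeScript and its internals may differ): decode items from a possibly truncated JSON array one at a time, keeping every item that parses and validates rather than failing the whole response.

```python
# Illustrative sketch of partial recovery from malformed LLM JSON output.
# Not the library's implementation; just the salvage-what-parses idea.

import json

def salvage_items(raw: str, validate) -> list:
    """Decode objects one by one from a (possibly truncated) JSON array,
    keeping each item that parses and passes validation."""
    dec = json.JSONDecoder()
    items, i = [], raw.find("[") + 1
    while i < len(raw):
        # Skip separators and whitespace between items.
        while i < len(raw) and raw[i] in ", \n\t\r":
            i += 1
        if i >= len(raw) or raw[i] == "]":
            break
        try:
            obj, i = dec.raw_decode(raw, i)
        except json.JSONDecodeError:
            break  # truncated or garbled from here on; keep what we have
        if validate(obj):
            items.append(obj)
    return items

# A truncated LLM response: the third product is cut off mid-object.
raw = '[{"name": "A", "price": 10}, {"name": "B", "price": 20}, {"name": "C", "pr'
valid = salvage_items(raw, lambda o: isinstance(o, dict) and "name" in o)
print(len(valid))  # 2 of 3 products recovered
```

A production version would also run each salvaged item through the schema (Zod, in the library's case) so that "recovered" always means "validated", never best-effort guesses.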
Show HN: Nit – I rebuilt Git in Zig to save AI agents 71% on tokens
Hacker News (score: 17)[Other] Show HN: Nit – I rebuilt Git in Zig to save AI agents 71% on tokens
Show HN: A plain-text cognitive architecture for Claude Code
Hacker News (score: 45)[Other] Show HN: A plain-text cognitive architecture for Claude Code
Show HN: Automate your workflow in plain English
Show HN (score: 7)[Other] Show HN: Automate your workflow in plain English

operator23 lets non-technical operators describe a workflow in plain English and run it across their tool stack: HubSpot, Apollo, Monday, Google Drive and others. No builder, no if-then config, just a description and a review step before anything runs.

We talked to marketing ops people recently to validate whether we are solving the right problems. Three things came up every single time.

Setup complexity. People are not afraid of automation in theory. They are afraid of spending two hours configuring conditions and field mappings, only to have something silently misroute. The config layer is where confidence dies.

Debugging. When a workflow breaks, there is usually no explanation. A trigger did not fire, data passed null downstream, a sequence stopped. You find out three weeks later when someone downstream asks a question. Nobody knows where it went wrong, so they delete it and go back to doing it manually.

No trust without control. Everyone wanted to keep a review step before the system acts on its own. Not forever, but until it had proven itself across enough edge cases. The unlock for automation adoption is not fewer steps, it is making it safe to delegate gradually.

What we are building is a system that addresses all three: plain-English input so setup is fast, step-by-step explanations so debugging is readable, and staged autonomy so trust is earnable.

For founders who have built or managed GTM and marketing ops teams: does this match what you have seen? And is there a fourth problem we are missing?
Show HN: Druids – coordinate and deploy coding agents across machines
Show HN (score: 8)[DevOps] Show HN: Druids – coordinate and deploy coding agents across machines
Updates to GitHub Copilot interaction data usage policy
Hacker News (score: 61)[Other] Updates to GitHub Copilot interaction data usage policy
90% of Claude-linked output going to GitHub repos w <2 stars
Hacker News (score: 145)[Other] 90% of Claude-linked output going to GitHub repos w <2 stars
Show HN: I built an integration for RL training of browser agents for everyone
[Other] Show HN: I built an integration for RL training of browser agents for everyone This integration allows for scalable evals and training of browser agents: it combines hosted Prime Intellect eval + training pipelines with headless browser infrastructure on Browserbase to RL-train browser agents with LoRA.