🛠️ All DevTools

Showing 1–20 of 4120 tools

Last Updated
April 11, 2026 at 04:00 PM

[Other] practice made claude perfect

Found: April 11, 2026 ID: 4114

[Other] A single CLAUDE.md file to improve Claude Code behavior, derived from Andrej Karpathy's observations on LLM coding pitfalls.

Found: April 11, 2026 ID: 4113

[Other] Show HN: A WYSIWYG word processor in Python

Hi all,

Finding a good data structure for a word processor is a difficult problem. My notebook diaries on the problem go back 25 years, to when I was frustrated with using Word for my diploma thesis: it was slow and unstable at the time. I ended up getting pretty hooked on the problem.

Right now I'm taking a professional break and decided to finally use the time to push these ideas further and build MiniWord, a WYSIWYG word processor in Python.

My goal is a native, non-HTML-based editor that stays simple, fast, and hackable. So far I am focusing on getting the fundamentals right. What is working so far:

- Real WYSIWYG editing (no HTML layer, no embedded browser) with styles, images, and tables
- A clean, simple file format (human-readable, diff-friendly, git-friendly, AI-friendly)
- Markdown support
- Support for Python plugins

Things that I found:

- B-tree structures are perfect for holding rich text data
- A simple text-based file format is incredibly useful: you can diff documents, version them, and even process them with AI tools quite naturally

What I'd love feedback on:

- Where do you see real use cases for something like this?
- What would be missing for you to take it seriously as a tool or platform?
- What kinds of plugins or extensions would actually be worth building?

Happy about any thoughts, positive or critical. Greetings!
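The B-tree claim above is easy to motivate with a toy: a single flat string makes every mid-document insert O(n), while a chunked buffer (the leaf level of a B-tree-style text store) only rewrites one small chunk. A minimal sketch of that leaf layer, not MiniWord's actual code; all names and the chunk size are illustrative:

```python
CHUNK_MAX = 8  # tiny for demonstration; a real editor would use ~1-4 KB

class ChunkedText:
    """Text stored as a list of small chunks, the leaf level of a
    B-tree-style rich-text buffer. Each edit touches only one chunk."""

    def __init__(self, text=""):
        self.chunks = [text[i:i + CHUNK_MAX]
                       for i in range(0, len(text), CHUNK_MAX)] or [""]

    def _locate(self, pos):
        # Walk the chunks to find which one contains offset `pos`.
        for i, c in enumerate(self.chunks):
            if pos <= len(c):
                return i, pos
            pos -= len(c)
        raise IndexError("position out of range")

    def insert(self, pos, s):
        i, off = self._locate(pos)
        c = self.chunks[i] = self.chunks[i][:off] + s + self.chunks[i][off:]
        if len(c) > CHUNK_MAX:  # split the oversized chunk, as a B-tree leaf would
            mid = len(c) // 2
            self.chunks[i:i + 1] = [c[:mid], c[mid:]]

    def text(self):
        return "".join(self.chunks)

buf = ChunkedText("Hello world")
buf.insert(5, ",")     # rewrites only one small chunk
print(buf.text())      # -> Hello, world
```

A real B-tree adds an interior index over these leaves so `_locate` becomes O(log n) instead of a linear walk, and style runs can be attached per chunk.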

Found: April 10, 2026 ID: 4115

[Other] Show HN: FluidCAD – Parametric CAD with JavaScript

Hello HN,

This is a code-based CAD project I have been working on in my free time for more than a year now. I built it with 3 goals in mind:

- It should be familiar to CAD designers who have used other programs: same workflow, same terminology.
- It should reduce the mental effort required to create models as much as possible. This is achieved by:
  - Live rendering and visual guidance as you type.
  - Letting the user reference existing edges/faces in the scene instead of having to calculate everything.
  - Interactive mouse helpers for features that are hard to write in code. Only 3 interactive modes for now: edge trimming, sketch region extrude, Bezier curve drawing.
  - Implicit coding whenever possible, e.g. sensible defaults for most parameters. The program automatically fuses intersecting objects together, so you do not have to worry about which object needs to be fused with which.
- It should be reasonably fast: scene objects are cached and only updated objects are re-computed.

I think I have achieved these goals to a good extent. The program is still in its early stages and there are many features I want to add or rewrite, but I think it is already usable for simple models.

Update to add more details: This is based on the Opencascade.js WASM binding, so you get all the good things that come with any B-rep kernel: fillets, chamfers, STEP import and export, and so on.

The scene is a webview, but the editing happens in your local file. You use your own editor and the environment you are familiar with.

One important feature that I think makes this stand out among other code-based CAD software is the ability to transform features, not just shapes. More here: https://fluidcad.io/docs/guides/patterns. You can see it in action in the lantern example: https://fluidcad.io/docs/tutorials/lantern

Found: April 10, 2026 ID: 4106

[Other] JSON Formatter Chrome Plugin Now Closed and Injecting Adware

Found: April 10, 2026 ID: 4109

[Other] Show HN: I run AI background removal in the browser – no upload, no server

RMBG-1.4 + SAM running client-side via ONNX Runtime WASM. ~2s on a laptop, works on mobile. Your image never leaves the browser.

Built this as part of allplix.com. 19-year-old student in France, solo project.

Happy to talk about the WASM pipeline or the pain of running ML models in a browser tab.

Found: April 10, 2026 ID: 4117

[Other] Show HN: Eve – Managed OpenClaw for work

Eve is an AI agent harness that runs in an isolated Linux sandbox (2 vCPUs, 4 GB RAM, 10 GB disk) with a real filesystem, headless Chromium, code execution, and connectors to 1000+ services. You give it a task and it works in the background until it's done.

I built this because I wanted OpenClaw without the self-hosting, pointed at actual day-to-day work. I'm thinking less personal assistant and more helpful colleague.

Here's a short demo video: https://www.loom.com/share/00d11bdbe804478e8817710f5f53ac61

The main interface is a web app where you can watch work happen in real time (agents spawning, files being written, use of the CLI). There's also an iMessage integration, so you can fire off a task asynchronously, put your phone down, and get a reply when it's finished.

Under the hood there's an orchestrator (Claude Opus 4.6) that routes to the right domain-specific model for each subtask: browsing, coding, research, and media generation. For complex tasks it spins up parallel sub-agents that coordinate through the shared filesystem. They have persistent memory across sessions, so context compounds over time.

I've packaged it with a bunch of pre-installed skills so it can execute in a variety of job roles (sales, marketing, finance) at runtime.

Here are a few things Eve has helped me with in the last couple of days:

- Edit this demo video with a voiceover of Garry: https://www.youtube.com/watch?v=S4oD7H3cAQ0
- Do my tax returns
- Build HN as if it were the year 2030: https://api.eve.new/api/sites/hackernews-2030/#/

AMA on the architecture, and let me know your thoughts :)

P.S. I've given every new user $100 worth of credits to try it.

Found: April 10, 2026 ID: 4110

[DevOps] Launch HN: Twill.ai (YC S25) – Delegate to cloud agents, get back PRs

Hey HN, we're Willy and Dan, co-founders of Twill.ai (https://twill.ai/). Twill runs coding CLIs like Claude Code and Codex in isolated cloud sandboxes. You hand it work through Slack, GitHub, Linear, our web app, or the CLI, and it comes back with a PR, a review, a diagnosis, or a follow-up question. It loops you in when it needs your input, so you stay in control.

Demo: https://www.youtube.com/watch?v=oyfTMXVECbs

Before Twill, building with Claude Code locally, we kept hitting three walls:

1. Parallelization: two tasks that both touch your Docker config or the same infra files are painful to run locally at once, and manual port rebinding and separate build contexts don't scale past a couple of tasks.
2. Persistence: close your laptop and the agent stops. We wanted to kick off a batch of tasks before bed and wake up to PRs.
3. Trust: giving an autonomous agent full access to your local filesystem and processes is a leap, and a sandbox per task felt safer to run unattended.

All three pointed to the same answer: move the agents to the cloud and give each task its own isolated environment.

So we built what we wanted. The first version was pure delegation: describe a task, get back a PR. Then multiplayer, so the whole team can talk to the same agent, each in their own thread. Then memory, so "use the existing logger in lib/log.ts, never console.log" becomes a standing instruction on every future task. Then automation: crons for recurring work, event triggers for things like broken CI.

This space is crowded. AI labs ship their own coding products (Claude Code, Codex), local IDEs wrap models in your editor, and a wave of startups build custom cloud agents on bespoke harnesses. We take a different path: reuse the lab-native CLIs in cloud sandboxes. The labs will keep pouring RL into their own harnesses, so those CLIs only get better over time. That way there is no vendor lock-in, and you can pick a different CLI per task or combine them.

When you give Twill a task, it spins up a dedicated sandbox, clones your repo, installs dependencies, and invokes the CLI you chose. Each task gets its own filesystem, ports, and process isolation. Secrets are injected at runtime through environment variables. After a task finishes, Twill snapshots the sandbox filesystem so the next run on the same repo starts warm, with dependencies already installed. We chose this architecture because every time the labs ship an improvement to their coding harness, Twill picks up the improvement automatically.

We're also open-sourcing agentbox-sdk (https://github.com/TwillAI/agentbox-sdk), an SDK for running and interacting with agent CLIs across sandbox providers.

Here's an example: a three-person team assigned Twill to a Linear backlog ticket about adding a CSV import feature to their Rails app. Twill cloned the repo, set up the dev environment, implemented the feature, ran the test suite, took screenshots, and attached them to the PR. The PR needed one round of revision, which they requested through GitHub. For more complex tasks, Twill asks clarifying questions before writing code and records a browser session video (using Vercel's Webreel) as proof of work.

Free tier: 10 credits per month (1 credit = $1 of AI compute at cost, no markup), no credit card. Paid plans start at $50/month for 50 credits, with BYOK support on higher tiers. Free pro tier for open-source projects.

We'd love to hear how cloud coding agents fit into your workflow today. And if you try Twill: what worked, what broke, and what's still missing?

Found: April 10, 2026 ID: 4107

[Other] Why I'm Building a Database Engine in C#

Found: April 10, 2026 ID: 4108

[Other] Show HN: Figma for Coding Agents

Feels a bit like Figma, but for coding agents.

Instead of going back and forth with prompts, you give the agent a DESIGN.md that defines the design system up front, and it generally sticks to it when generating UI.

Google Stitch seems to be moving in this direction as a standard, so we put together a small collection of DESIGN.md files based on popular websites.

Found: April 10, 2026 ID: 4116

[Other] Show HN: Zeroclawed: Secure Agent Gateway

I've been cautiously (and nervously) playing with OpenClaw and a number of other claw and code agents for a while now. Trying out different ones was tricky, so I wanted a simple way to switch out channel ownership... then I wanted more. Security is hard, and I wanted to make it easier.

This is far from polished, and I make no claim to being a "security expert", but I tried to think and research a bit on different threat models (I see two broad ones for agents: external adversaries and internal agentic failures) and to offer best-in-class protection against both, while not holding any special opinion on what a good agent may look like today or in the future. This is just a gateway, and hopefully one that can work for nearly any agent now or later, while coming with batteries included for some of the more popular options today, like openclaw, zeroclaw, claw-code, clause, and opencode. It's not all there yet, but contributions and critiques are welcome.

Found: April 10, 2026 ID: 4119

[Other] CPU-Z and HWMonitor compromised

Hacker News (score: 341)

https://xcancel.com/vxunderground/status/2042483067655262461
https://old.reddit.com/r/pcmasterrace/comments/1sh4e5l/warning_hwmonitor_163_download_on_the_official/
https://www.bleepingcomputer.com/news/security/supply-chain-attack-at-cpuid-pushes-malware-with-cpu-z-hwmonitor/

Found: April 10, 2026 ID: 4120

[Other] Show HN: A tool to manage a swarm of coding agents on Linux

Found: April 10, 2026 ID: 4105

[Other] We've raised $17M to build what comes after Git

Found: April 10, 2026 ID: 4104

[Other] Show HN: SmolVM – open-source sandbox for coding and computer-use agents

SmolVM is an open-source local sandbox for AI agents on macOS and Linux.

I started building it because agent workflows need more than isolated code execution. They need a reusable environment: write files in one step, come back later, snapshot state, pause/resume, and increasingly interact with browsers or full desktop environments.

Right now SmolVM is a Python SDK and CLI focused on local developer experience. Current features include:

- local sandbox environments
- macOS and Linux support
- snapshotting
- pause/resume
- persistent environments across turns

Install:

```
curl -sSL https://celesto.ai/install.sh | bash
smolvm
```

I'd love feedback from people building coding agents or computer-use agents. I'm interested in what feels missing, what feels clunky, and what you'd expect from a sandbox like this.

Found: April 10, 2026 ID: 4101

[Other] Show HN: Linear RNN/Reservoir hybrid generative model, one C file (no deps)

I just noticed it takes literally ~5 minutes to train millions of parameters on a slow CPU... but before you call Yudkowsky to say "it's over", an important note: the main bottleneck is the corpus size. Params are just "cleverness"; given limited info, they're powerless.

Anyway, here is the project: https://github.com/bggb7781-collab/lrnnsmdds/tree/main

A couple of notes:

1. Single C file, no dependencies. Below are literally all the "dependencies", not even a custom header (copy-pasted from the top of the single C file):

   #define _POSIX_C_SOURCE 200809L
   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>
   #include <math.h>
   #include <time.h>
   #include <stdint.h>
   #include <stdbool.h>
   #include <float.h>
   #include <getopt.h>
   #include <errno.h>

   4136 lines of code in one file at the moment; that's all.

2. Easiest way to compile on Windows: download Cygwin (https://www.cygwin.com/), then navigate to the directory where your lrnnsmdds.c file is and run gcc on it with some optimizations, such as:

   gcc -std=c17 -O3 -march=native -ffast-math -o lrnn lrnnsmdds.c -lm

   On Linux just run gcc; if for whatever reason you don't have gcc, do sudo apt-get install -y gcc, or the equivalent for your distro. On Apple: I have no idea; maybe use VMware, install Ubuntu, and run it there. Of course you can git clone and go to the dir, but again: it's one file! Copy it...

   The repo has a tiny toy corpus included, where I've borrowed (hopefully it's not plagiarism!) the name "John Gordon" from one of my favorite books, "Star Kings" by E. Hamilton. Just the first and last name are copied; the content is unique (well, several poorly written sentences by myself...). Obviously it will overfit and reproduce the corpus verbatim at this size; the sole goal is to check that everything runs, not whether it's the A-G-I. You'd need your own 100 kB+ corpus if you want to generate unique, meaningful text.

3. Why/what/when/how?

   The GitHub repo is, I believe, self-explanatory about features, uses, and goals, but to attempt a summary: my main motivation was to create a fast alternative to transformers that works on CPU only, hence the bizarre, not-easy choice of doing this in C rather than Python, and the lack of dependencies. I was also hoping it would be a clever alternative, hence all those features, more stacked than a 90s BMW 850. The 'reservoir' is the most novel feature, though; it offers quick exact recall, arguably different from RWKV 8 or the latest Mamba. In fact the name of the architecture, SMDDS, comes from the first letters of the implemented features:

   - S. SwiGLU in channel mixing (more coherence)
   - M. Multi-scale token shift (larger context)
   - D. Data-dependent decay with low rank (speed in large contexts)
   - D. Dynamic state checkpointing (faster/linear generation)
   - S. Slot-memory reservoir (perfect recall, transformer style)

   If you face some issue, just email me (easiest).

The good, the bad, the ugly: it is a more or less working, novel text-to-text architecture. It's not trying to imitate transformers, LSTM, Mamba, or RWKV, though it shares many features with them. The bad: it's not blazing fast. If you're armed with a Ryzen/i7 with 16 cores (or whatever) and patience, you can try training it on several small books via the word tokenizer down to low perplexity (under 1.2...) and see if it looks smarter/faster. Since this is open source, the hope is obviously for it to be improved: make it CUDA-friendly, improve the features, port it to Python, etc.

Depending on many factors I may try to push for v2 in July, August, or September. My focus at the moment will be to test and scale, since the features are many. It compiles with zero warnings on the 2 laptops I've tested (Windows/Cygwin and Ubuntu), and the speed is comparable to transformers. 10x!
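For readers unfamiliar with the first letter of SMDDS: SwiGLU channel mixing gates a linear projection of the input with a SiLU-activated second projection, then down-projects the product. A pure-Python sketch of that computation (illustrative toy weights and sizes, not the repo's C implementation):

```python
import math

def silu(z):
    # SiLU / swish activation: z * sigmoid(z)
    return z / (1.0 + math.exp(-z))

def matvec(W, x):
    # Plain matrix-vector product over Python lists.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def swiglu_mix(x, W_gate, W_up, W_down):
    """SwiGLU channel mixing: elementwise product of a SiLU-gated
    branch and a linear branch, followed by a down-projection."""
    gate = [silu(g) for g in matvec(W_gate, x)]
    up = matvec(W_up, x)
    hidden = [g * u for g, u in zip(gate, up)]
    return matvec(W_down, hidden)

# 2-dim toy with identity weights on all three projections,
# so each output is simply silu(x[i]) * x[i].
I2 = [[1.0, 0.0], [0.0, 1.0]]
y = swiglu_mix([1.0, -1.0], I2, I2, I2)
print(y)
```

With trained weights the three matrices are rectangular (the hidden size is usually a few times the channel size); identity weights are used here only so the gating effect is visible by hand.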

Found: April 09, 2026 ID: 4103

[Testing] Hegel, a universal property-based testing protocol and family of PBT libraries

Found: April 09, 2026 ID: 4093

[Other] Instant 1.0, a backend for AI-coded apps

Found: April 09, 2026 ID: 4097

[Other] Show HN: Control your X/Twitter feed using a small on-device LLM

We built a Chrome extension and iOS app that filters Twitter's feed using Qwen3.5-4B for contextual matching. You describe what you don't want in plain language, and it removes posts that match semantically, not by keyword.

What surprised us was that, because Twitter's ranking algorithm adapts based on what you engage with, consistent filtering starts reshaping the recommendations over time. You're implicitly signaling preferences to the algorithm. For some of us it "healed" our feed.

Currently we run inference from our own servers, with an experimental on-device option, and we're working on fully on-device execution to remove that dependency. Latency is acceptable on most hardware but not great on older machines. No data collection; everything except the model call runs locally.

It doesn't work perfectly (figurative language trips it up), but it's meaningfully better than muting keywords, and we use it ourselves every day.

It's also promising how local/open models can now start giving us more control over the algorithmic agents in our lives, because capability density is improving.
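The core loop this extension describes is simple: for each post, a model judges whether it matches the user's plain-language rule, and matching posts are hidden. A shape-only sketch; the `judge` stand-in below is a trivial substring check so the example runs anywhere, where the actual product makes a semantic yes/no call to a small LLM (Qwen3.5-4B):

```python
def judge(rule: str, post: str) -> bool:
    """Stand-in for the LLM call: does `post` match the user's rule?
    The real system asks a small model for a semantic judgment; here
    we fake it with a keyword test purely for illustration."""
    return any(word in post.lower() for word in rule.lower().split())

def filter_feed(rule: str, posts: list[str]) -> list[str]:
    # Keep only posts the judge does NOT flag. Hidden posts never
    # render, so the ranking algorithm stops receiving engagement
    # signals for them, which is the "feed reshaping" effect above.
    return [p for p in posts if not judge(rule, p)]

feed = [
    "Huge crypto giveaway, click now!",
    "New paper on linear attention mechanisms",
    "Why this crypto token will 100x",
]
print(filter_feed("crypto hype", feed))
# -> ['New paper on linear attention mechanisms']
```

Swapping the keyword `judge` for a model call is what buys the semantic matching the authors describe; the surrounding pipeline stays the same.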

Found: April 09, 2026 ID: 4096

[Other] Research-Driven Agents: When an agent reads before it codes

Found: April 09, 2026 ID: 4102
Page 1 of 206