🛠️ All DevTools

Showing 1–20 of 3867 tools

Last Updated
March 22, 2026 at 04:03 PM

[Other] Building an FPGA 3dfx Voodoo with Modern RTL Tools

Found: March 22, 2026 ID: 3864

[CLI Tool] $ teebot.dev – from terminal to tee in 6 seconds

Found: March 22, 2026 ID: 3865

[Other] Sashiko: An agentic Linux kernel code review system

Found: March 22, 2026 ID: 3862

[Other] Show HN: ClawMem – Open-source agent memory with SOTA local GPU retrieval

So I've been building ClawMem, an open-source context engine that gives AI coding agents persistent memory across sessions. It works with Claude Code (hooks + MCP) and OpenClaw (ContextEngine plugin + REST API), and both can share the same SQLite vault, so your CLI agent and your voice/chat agent build on the same memory without syncing anything.

The retrieval architecture is a Frankenstein, which is pretty much always my process. I pulled the best parts from recent projects and research and stitched them together: [QMD](https://github.com/tobi/qmd) for the multi-signal retrieval pipeline (BM25 + vector + RRF + query expansion + cross-encoder reranking), [SAME](https://github.com/sgx-labs/statelessagent) for composite scoring with content-type half-lives and co-activation reinforcement, [MAGMA](https://arxiv.org/abs/2501.13956) for intent classification with multi-graph traversal (semantic, temporal, and causal beam search), [A-MEM](https://arxiv.org/abs/2510.02178) for self-evolving memory notes, and [Engram](https://github.com/Gentleman-Programming/engram) for deduplication patterns and temporal navigation. None of these were designed to work together. Making them coherent was most of the work.

On the inference side, QMD's original stack uses a 300MB embedding model, a 1.1GB query expansion LLM, and a 600MB reranker. These run via llama-server on a GPU or in-process through node-llama-cpp (Metal, Vulkan, or CPU). But the more interesting path is the SOTA upgrade: ZeroEntropy's distillation-paired zembed-1 + zerank-2. These are currently the top-ranked embedding and reranking models on MTEB, and they're designed to work together. The reranker was distilled from the same teacher as the embedder, so they share a semantic space. You need ~12GB VRAM to run both, but retrieval quality is noticeably better than the default stack. There's also a cloud embedding option if you're tight on VRAM or prefer to offload embedding to a cloud model.

For Claude Code specifically, it hooks into lifecycle events. Context-surfacing fires on every prompt to inject relevant memory, decision-extractor and handoff-generator capture session state, and a feedback loop reinforces notes that actually get referenced. That handles about 90% of retrieval automatically. The other 10% is 28 MCP tools for explicit queries. For OpenClaw, it registers as a ContextEngine plugin with the same hook-to-lifecycle mapping, plus 5 REST API tools for the agent to call directly.

It runs on Bun with a single SQLite vault (WAL mode, FTS5 + vec0). Everything is on-device; no cloud dependency unless you opt into cloud embedding. The whole system is self-contained.

This is a polished WIP, not a finished product. I'm a solo dev. The codebase is around 19K lines, and the main store module is a 4K-line god object that probably needs splitting. And of course, the system is only as good as what you index. A vault with three memory files gives deservedly thin results. One with your project docs, research notes, and decision records gives something actually useful.

Two questions I'd genuinely like input on: (1) Has anyone else tried running SOTA embedding + reranking models locally for agent memory, and is the quality difference worth the VRAM? (2) For those running multiple agent interfaces (CLI + voice/chat), how are you handling shared memory today?
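The fusion step in a QMD-style pipeline (BM25 + vector + RRF) can be sketched generically. This is a minimal reciprocal rank fusion, not ClawMem's actual code; `k=60` is just the conventional constant from the RRF literature, and the document IDs are made up:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists into one.

    Each document scores 1/(k + rank) per list it appears in; summing
    across lists rewards documents that rank well in multiple signals.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document IDs by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from a lexical (BM25) and a vector search pass:
bm25 = ["doc_a", "doc_b", "doc_c"]
vector = ["doc_b", "doc_c", "doc_a"]
fused = rrf_fuse([bm25, vector])  # doc_b wins: ranked #2 and #1
```

The appeal of RRF here is that it needs no score normalization, which matters when BM25 scores and cosine similarities live on incomparable scales.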

Found: March 22, 2026 ID: 3867

[Other] Floci – A free, open-source local AWS emulator

Found: March 21, 2026 ID: 3859

[Other] Professional video editing, right in the browser with WebGPU and WASM

Found: March 21, 2026 ID: 3866

[Other] SSH Certificates and Git Signing

Hacker News (score: 30)

Found: March 21, 2026 ID: 3863

[Other] Show HN: Joonote – A note-taking app on your lock screen and notification panel

I finally built this app after many years of being sick of unlocking my phone every goddamn time I need to take or view my notes. It particularly sucks when I'm doing my grocery run and going down the list.

I started building last June. This is a native app written in Kotlin. And since I'm 100% a web dev guy, I gotta say this wouldn't have been possible without AI to assist me. But this isn't "vibe-coded": I simply used the chat interface on the Gemini website, manually copy-pasting code to build and integrate every single thing in the app! I used Gemini just because I was piggybacking on my last company's enterprise subscription. I personally didn't subscribe to any AI (and still don't, since the free quota seems enough for me :)

So I have certainly learned a lot about Android development, architecture patterns, Kotlin syntax, and obeying Google's whims. Can't say I love it all, but for the sake of this app, I will :)

Anyway, I finally have the app I wish existed, and I'm using it every day. It not only does the main thing I needed it to do, but there's also all this stuff:

- Make your notes private if you don't want to show them on the lock screen.
- Create check/to-do lists.
- Set one-time or recurring reminders.
- Full-text search your notes in the app.
- Speech-to-text.
- Organize your notes with custom or color labels.
- Pin the app as a widget on your home screen.
- Auto backup and restore your notes on a new install or Android device.
- Works offline.
- And no funny business happening in the background: https://joonote.com/privacy

It's a 30-day trial, then a one-time $9.99 to go Pro forever.

I would love you all to check it out, FWIW.

Ok thanks!

Found: March 21, 2026 ID: 3861

[Database] Grafeo – A fast, lean, embeddable graph database built in Rust

Found: March 21, 2026 ID: 3855

[Other] Show HN: AI SDLC Scaffold, repo template for AI-assisted software development

I built an open-source repo template that brings structure to AI-assisted software development, starting from the pre-coding phases: objectives, user stories, requirements, architecture decisions.

It's designed around Claude Code, but the ideas are tool-agnostic. I've been a computer science researcher and full-stack software engineer for 25 years, working mainly in startups. I've been using this approach on my personal projects for a while; when I decided to package it up as a scaffold for easier reuse, I figured it might be useful to others too. I published it under Apache 2.0; fork it and make it yours.

You can easily try it out: follow the instructions in the README to start using it.

The problem it solves:

AI coding agents are great at writing code, but they work much better when they have clear context about what to build and why. Most projects jump straight to implementation. This scaffold provides a structured workflow for the pre-coding phases and organizes the output so that agents can navigate it efficiently across sessions.

How it works:

Everything lives in the repo alongside source code. The AI guidance is split into three layers, each optimized for context-window usage:

1. Instruction files (CLAUDE.md, CLAUDE.<phase>.md): always loaded, kept small. They are organized hierarchically, describe repo structure, maintain artifact indexes, and define cross-phase rules like traceability invariants.

2. Skills (.claude/skills/SDLC-*): loaded on demand. Step-by-step procedures for each SDLC activity: eliciting requirements, gap analysis, drafting architecture, decomposing into components, planning tasks, implementation.

3. Project artifacts: structured markdown files that accumulate as work progresses: stakeholders, goals, user stories, requirements, assumptions, constraints, decisions, architecture, data model, API design, task tracking. Accessed selectively through indexes.

This separation matters because instruction files stay in the context window permanently and must be lean, skills can be detailed since they're loaded only when invoked, and artifacts scale with the project but are navigated via indexed tables rather than read in full.

Key design choices:

Context-window efficiency: artifact collections use markdown index tables (one-line description and trigger conditions) so the agent can locate what it needs without reading everything.

Decision capture: decisions made during AI reasoning and human feedback are persisted as a structured artifact, making them reviewable, traceable, and consistently applied across sessions.

Waterfall-ish flow: sequential phases with defined outputs. Tedious for human teams, but AI agents don't mind the overhead, and the explicit structure prevents the unconstrained "just start vibecoding" failure mode.

How I use it:

Short, focused sessions. Each session invokes one skill, produces its output, and ends. The knowledge organization means the next session picks up without losing context. I've found that free-form prompting between skills is usually a sign the workflow is missing a piece.

Current limitations:

I haven't found a good way to integrate the Figma MCP for importing existing UI/UX designs into the workflow. Suggestions welcome.

Feedback, criticism, and contributions are very welcome!
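An artifact index table of the kind the scaffold describes might look like this. The filenames and column headings here are illustrative, not the template's actual format:

```markdown
| Artifact                      | Description                      | Read when                        |
|-------------------------------|----------------------------------|----------------------------------|
| requirements/REQ-001-auth.md  | Login and session requirements   | Touching auth or session code    |
| decisions/ADR-007-storage.md  | Storage engine decision record   | Changing persistence or schema   |
| architecture/components.md    | Component decomposition          | Adding or splitting a module     |
```

The agent scans only this table, then opens just the file whose trigger condition matches the task, which is how the artifact layer scales without blowing the context window.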

Found: March 21, 2026 ID: 3860

[Other] AI Team OS – Turn Claude Code into a Self-Managing AI Team

Found: March 21, 2026 ID: 3856

[CLI Tool] purl: a curl-esque CLI for making HTTP requests that require payment

Found: March 21, 2026 ID: 3850

[Other] Linux Applications Programming by Example: The Fundamental APIs (2nd Edition)

Found: March 20, 2026 ID: 3853

[Other] Show HN: Red Grid Link – peer-to-peer team tracking over Bluetooth, no servers

I go on a lot of backcountry trips where I barely get cell service. If my group splits, nobody knows where anyone is until you regroup at camp or at your destination. You can buy Garmin radios or try to set up ATAK, but ATAK is Android-only and assumes you have a TAK Server running somewhere to make use of all of the functionality. Cool tools, but expensive to set up correctly. I just wanted two iPhones to share their location directly over Bluetooth when cell coverage was lacking.

Red Grid Link does that. Start a session, and anyone nearby running the app shows up on your offline map. When they walk out of range, their marker stays as a "ghost" that slowly fades.

The hard part was making sync reliable over BLE. The connections drop all the time: someone turns a corner, walks behind a vehicle, whatever. I built a CRDT sync layer (LWW Register + G-Counter) so there are never merge conflicts. Each update is just under 200 bytes (from what I have tested so far). When a teammate disappears, the app does exponential backoff from 2 to 30 seconds before giving up and marking them as a ghost.

Everything is encrypted (AES-256-GCM, ECDH P-256 key exchange per peer pair). Sessions can require a PIN or QR code to join. It also offers offline topo maps with MGRS grid coordinates, the same system as in my other app, Red Grid MGRS.

The app is free, and I'm looking for honest feedback from other real-world users. Let me know if you have any questions!
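The LWW-Register half of a CRDT layer like that can be sketched in a few lines. This is a generic textbook version in Python, not the app's actual (presumably Swift) code, and the field names are mine:

```python
class LWWRegister:
    """Last-Writer-Wins register: merging two replicas keeps the value
    with the newest timestamp, so concurrent updates never conflict."""

    def __init__(self, value, timestamp, node_id):
        self.value = value          # e.g. a (lat, lon) position
        self.timestamp = timestamp  # logical or wall-clock time of the write
        self.node_id = node_id      # stable peer ID, used only to break ties

    def merge(self, other):
        # Higher timestamp wins; node_id breaks exact ties deterministically,
        # so every replica converges to the same value regardless of merge order.
        if (other.timestamp, other.node_id) > (self.timestamp, self.node_id):
            self.value = other.value
            self.timestamp = other.timestamp
            self.node_id = other.node_id
        return self

# Two phones report positions; after a BLE reconnect, merging picks the newer fix.
a = LWWRegister((47.60, -122.30), 100, "phone-a")
b = LWWRegister((47.61, -122.31), 105, "phone-b")
a.merge(b)  # a now holds phone-b's newer position
```

Merge is commutative, associative, and idempotent, which is exactly why flaky BLE links can replay or reorder updates without ever producing a conflict to resolve.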

Found: March 20, 2026 ID: 3851

[API/SDK] Show HN: Agent Use Interface (AUI) – let users bring their own AI agent

As I started building AI integrations, I came to realize that for many projects, the best agentic experience is one that simply enables the user's personal agent to take actions within your app.

The existing options like MCP or A2A are quite involved, and for simple apps that are already URL-parameter driven, those options seem like overkill.

This led me to prototype the Agent Use Interface (AUI) spec.

The idea is simple: a lightweight, open spec that makes any app "agent-navigable." You drop an XML file at /agents/aui.xml that describes the URL-parameter-driven actions your app supports, like search, create, filter, etc. That way, any AI agent can read aui.xml, understand what's possible, and construct URLs on behalf of the user.

That's it. No SDK. No auth flow. No API keys. Just a catalog of what your app can do, written for LLMs to understand.

Is there something like this that already exists? Is the approach too simple to be useful?

If your app already supports Universal Links or is otherwise URL-parameter driven, you could probably add support for AUI in an afternoon.

See a working example: https://habittiles.app/agents/aui.xml
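The consuming side could be as small as this sketch: read an aui.xml, find a declared action, and build the URL. The XML shape below is hypothetical (the real spec's schema and the habittiles.app example may differ); it only illustrates the "catalog of URL-parameter actions" idea:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Hypothetical aui.xml contents -- invented for illustration.
AUI_XML = """
<aui>
  <action name="search" url="https://example.app/items">
    <param name="q" description="search query"/>
  </action>
</aui>
"""

def build_action_url(aui_xml, action_name, **params):
    """Look up a declared action and construct its parameterized URL,
    dropping any parameters the catalog doesn't declare."""
    root = ET.fromstring(aui_xml)
    for action in root.iter("action"):
        if action.get("name") == action_name:
            allowed = {p.get("name") for p in action.iter("param")}
            query = {k: v for k, v in params.items() if k in allowed}
            return action.get("url") + "?" + urlencode(query)
    raise KeyError(f"action not declared: {action_name}")

url = build_action_url(AUI_XML, "search", q="water plants")
```

Since the catalog is plain XML served over HTTP, an agent needs no SDK: fetch, parse, construct, and hand the URL to the user's browser.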

Found: March 20, 2026 ID: 3852

[Other] Show HN: We built a terminal-only Bluesky / AT Proto client written in Fortran

Yes, that Fortran.

Found: March 20, 2026 ID: 3849

[Other] OpenCode – Open source AI coding agent

Found: March 20, 2026 ID: 3848

[Other] NumKong: 2'000 Mixed Precision Kernels for All

Found: March 20, 2026 ID: 3854

[CLI Tool] Show HN: Sonar – A tiny CLI to see and kill whatever's running on localhost

Found: March 20, 2026 ID: 3846

[Other] Android developer verification: Balancing openness and choice with safety

Found: March 19, 2026 ID: 3841
Page 1 of 194