🛠️ Hacker News Tools

Showing 561–580 of 1473 tools from Hacker News

Last Updated
January 18, 2026 at 04:00 AM

[Other] Show HN: Pipelex – declarative language for repeatable AI workflows (MIT)

We're Robin, Louis, and Thomas. Pipelex is a DSL and a Python runtime for repeatable AI workflows. Think Dockerfile/SQL for multi-step LLM pipelines: you declare steps and interfaces; any model/provider can fill them.

Why this instead of yet another workflow builder?

- Declarative, not glue code: you state what to do; the runtime figures out how.
- Agent-first: each step carries natural-language context (purpose, inputs/outputs with meaning) so LLMs can follow, audit, and optimize. Our MCP server enables agents to run pipelines but also to build new pipelines on demand.
- Open standard under MIT: language spec, runtime, API server, editor extensions, MCP server, n8n node.
- Composable: pipes can call other pipes, created by you or shared in the community.

Why a domain-specific language?

- We need context, meaning, and nuance preserved in a structured syntax that both humans and LLMs can understand.
- We need determinism, control, and reproducibility that pure prompts can't deliver.
- Bonus: editors, diffs, semantic coloring, easy sharing, search & replace, version control, linters…

How we got there:

Initially, we just wanted to solve every use case with LLMs but kept rebuilding the same agentic patterns across different projects. So we challenged ourselves to keep the code generic and separate from use-case specifics, which meant modeling workflows from the relevant knowledge and know-how.

Unlike existing code/no-code frameworks for AI workflows, our abstraction layer doesn't wrap APIs; it transcribes business logic into a structured, unambiguous script executable by software and AI. Hence the "declarative" aspect: the script says what should be done, not how to do it. It's like a Dockerfile or SQL for AI workflows.

Additionally, we wanted the language to be LLM-friendly. Classic programming languages hide logic and context in variable names, functions, and comments, all invisible to the interpreter. In Pipelex, these elements are explicitly stated in natural language, giving AI full visibility: it's all logic and context, with minimal syntax.

Then, we didn't want to write Pipelex scripts ourselves, so we dogfooded: we built a Pipelex workflow that writes Pipelex workflows. It's in the MCP and CLI: `pipelex build pipe '…'` runs a multi-step, structured generation flow that produces a validated workflow ready to execute with `pipelex run`. Then you can iterate on it yourself or with any coding agent.

What's included: Python library, FastAPI and Docker, MCP server, n8n node, VS Code extension.

What we'd like from you:

1. Build a workflow: did the language work for you or against you?
2. Agent/MCP workflows and n8n node usability.
3. Suggest new kinds of pipes and other AI models we could integrate.
4. Looking for OSS contributors to the core library, but also to share pipes with the community.

Known limitations:

- Connectors: Pipelex doesn't integrate with "your apps"; we focus on the cognitive steps, and you can integrate through code/API or using MCP or n8n.
- Visualization: we need to generate flow charts.
- The pipe builder is still buggy.
- Run it yourself: we don't yet provide a hosted Pipelex API; it's in the works.
- Cost tracking: we only track LLM costs, not image generation or OCR costs yet.
- Caching and reasoning options: not supported yet.

Links:

- GitHub: https://github.com/Pipelex/pipelex
- Cookbook: https://github.com/Pipelex/pipelex-cookbook
- Starter: https://github.com/Pipelex/pipelex-starter
- VS Code extension: https://github.com/Pipelex/vscode-pipelex
- Docs: https://docs.pipelex.com
- Demo video (2 min): https://youtu.be/dBigQa8M8pQ
- Discord for support and sharing: https://go.pipelex.com/discord

Thanks for reading. If you try Pipelex, tell us exactly where it hurts; that's the most valuable feedback we can get.
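To make the "declarative, not glue code" idea concrete, here is a minimal sketch in plain Python. This is not Pipelex syntax (see the docs for that); the step names, fields, and runner are hypothetical, illustrating only the general pattern of steps declaring their inputs/outputs and a runtime working out the execution order.

```python
# Illustrative sketch (not Pipelex syntax): steps declare what they need and
# produce; a generic runner resolves the order from those declarations.

PIPELINE = [
    {"name": "extract", "purpose": "pull the question out of raw text",
     "inputs": ["raw_text"], "output": "question"},
    {"name": "answer", "purpose": "answer the question",
     "inputs": ["question"], "output": "answer"},
]

# Stand-ins for model calls; a real runtime would dispatch to an LLM/provider.
HANDLERS = {
    "extract": lambda raw_text: raw_text.strip().rstrip("?") + "?",
    "answer": lambda question: f"Answering: {question}",
}

def run(pipeline, context):
    """Execute steps whose inputs are available until all outputs exist."""
    pending = list(pipeline)
    while pending:
        step = next(s for s in pending if all(i in context for i in s["inputs"]))
        args = [context[i] for i in step["inputs"]]
        context[step["output"]] = HANDLERS[step["name"]](*args)
        pending.remove(step)
    return context

result = run(PIPELINE, {"raw_text": "  what is a pipe "})
print(result["answer"])  # → Answering: what is a pipe?
```

The point of the sketch is that the step order is never written down; it falls out of the declared inputs and outputs, which is what lets a runtime (or an agent) audit and reorganize the flow.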

Found: October 28, 2025 ID: 2148

[DevOps] Show HN: Dexto – Connect your AI agents with real-world tools and data

Hi HN, we're the team at Truffle AI (YC W25), and we've been working on Dexto (https://www.dexto.ai/), a runtime and orchestration layer for AI agents that lets you turn any app, service, or tool into an AI assistant that can reason, think, and act. Here's a video walkthrough: https://www.youtube.com/watch?v=WJ1qbI6MU6g

We started working on Dexto after helping clients set up agents for everyday marketing tasks like posting on LinkedIn, running Reddit searches, and generating ad creatives. We realized that the LLMs weren't the issue. The real drag was the repetitive orchestration around them:

- wiring LLMs to tools
- managing context and persistence
- adding memory and approval flows
- tailoring behavior per client/use case

Each small project quietly ballooned into weeks of plumbing, where each customer had mostly the same, but slightly custom, requirements.

So instead of another framework where you write orchestration logic yourself, we built Dexto as a top-level orchestration layer where you declare an agent's capabilities and behavior:

- which tools or MCPs the agent can use
- which LLM powers it
- how it should behave (system prompt, tone, approval rules)

Once configured, the agent runs as an event-driven loop: reasoning through steps, invoking tools, handling retries, and maintaining its own state and memory. Your app doesn't manage orchestration; it just triggers and subscribes to the agent's events and decides how to render or approve outcomes.

Agents can run locally, in the cloud, or hybrid. Dexto ships with a CLI, a web UI, and a few sample agents to get started.

To show its flexibility, we wrapped some OpenCV functions into an MCP server and connected it to Dexto (https://youtu.be/A0j61EIgWdI). Now a non-technical user can detect faces in images or create custom photo collages by talking to the agent. The same approach works for coding agents, browser agents, multi-speaker podcast agents, and marketing assistants tuned to your data: https://docs.dexto.ai/examples/category/agent-examples

Dexto is modular, composable, and portable, allowing you to plug in new tools or even re-expose an entire Dexto agent as an MCP server and consume it from other apps like Cursor (https://www.youtube.com/watch?v=_hZMFIO8KZM). Because agents are defined through config and powered by a consistent runtime, they can run anywhere without code changes, making cross-agent (A2A) interactions and reuse effortless.

In a way, we like to think of Dexto as a "meta-agent" or "agent harness" that can be customized into a specialized agent depending on its tools, data, and platform.

For the time being, we have opted for an Elastic v2 license to give the community maximum flexibility to build with Dexto while preventing bigger players from taking over and monetizing our work.

We'd love your feedback:

- Try the quickstart and tell us what breaks
- Share a use case you want to ship in a day, and we'll suggest a minimal config

Repo: https://github.com/truffle-ai/dexto

Docs: https://docs.dexto.ai/docs/category/getting-started

Quickstart: npm i -g dexto
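The "your app just subscribes to events" model the post describes can be sketched in a few lines of Python. This is not Dexto's API; the class, event names, and tool registry below are made up purely to illustrate the host-app-as-subscriber pattern.

```python
# Illustrative sketch (not Dexto's API): an event-driven agent loop where the
# host app only subscribes to events instead of managing orchestration.

class ToyAgent:
    def __init__(self, tools):
        self.tools = tools          # tool name -> callable
        self.listeners = []         # subscribers receive (event, payload)

    def on(self, listener):
        self.listeners.append(listener)

    def emit(self, event, payload):
        for listener in self.listeners:
            listener(event, payload)

    def run(self, steps):
        """Each step is (tool_name, arg); the loop invokes tools and emits events."""
        state = []
        for tool_name, arg in steps:
            self.emit("tool_call", {"tool": tool_name, "arg": arg})
            result = self.tools[tool_name](arg)
            state.append(result)
            self.emit("tool_result", {"tool": tool_name, "result": result})
        self.emit("done", {"state": state})
        return state

agent = ToyAgent({"upper": str.upper, "reverse": lambda s: s[::-1]})
events = []
agent.on(lambda e, p: events.append(e))   # the host app just listens
state = agent.run([("upper", "hi"), ("reverse", "hi")])
print(state)  # → ['HI', 'ih']
```

The host code never touches orchestration; it only registers a listener and decides what to do with `tool_call`/`tool_result`/`done` events, which is the division of labor the post is describing.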

Found: October 28, 2025 ID: 2149

Show HN: Bash Screensavers

Hacker News (score: 21)

[Other] Show HN: Bash Screensavers A GitHub project collecting bash-based screensavers/visualizations.

Found: October 28, 2025 ID: 2145

[Other] Movycat – A terminal movie player written in Zig

I saw Mario (the author) at Zigtoberfest in Munich last Saturday, where he gave a presentation on a whole stack of related projects implemented in Zig: a graphics library for the terminal (movy), movycat (video playback in the terminal), zig64 & zigreSID (emulators for the Commodore 64's CPU and sound chip), and a reimplementation of a C64 video game (which I don't think he has published on GitHub yet). Anyway, I found his work incredible and thought he deserved some attention.

Update: Since writing this, Mario has uploaded the game, too: https://github.com/M64GitHub/1st-shot . I misunderstood, though: it doesn't seem to be a port of an actual C64 game.

Found: October 27, 2025 ID: 2173

[Other] Cisco open-sourced MCP-Scanner for finding vulnerabilities in MCP servers

Found: October 27, 2025 ID: 2132

[CLI Tool] Show HN: Git Auto Commit (GAC) – LLM-powered Git commit command-line tool

GAC is a tool I built to help users spend less time summing up what was done and more time building. It uses LLMs to generate contextual git commit messages from your code changes. And it can be a drop-in replacement for `git commit -m "..."`.

Example:

```
feat(auth): add OAuth2 integration with GitHub and Google

- Implement OAuth2 authentication flow
- Add provider configuration for GitHub and Google
- Create callback handler for token exchange
- Update login UI with social auth buttons
```

Don't like it? Reroll with 'r', or type `r "focus on xyz"` and it rerolls the commit with your feedback!

You can try it out with uvx (no install):

```
uvx gac init  # config wizard
uvx gac
```

Note: `gac init` creates a .gac.env file in your home directory with your chosen provider, model, and API key.

Tech details:

14 providers - Supports local (Ollama & LM Studio) and cloud (OpenAI, Anthropic, Gemini, OpenRouter, Groq, Cerebras, Chutes, Fireworks, StreamLake, Synthetic, Together AI, & Z.ai (including their extremely cheap coding plans!)).

Three verbosity modes - Standard with bullets (default), one-liners (`-o`), or verbose (`-v`) with detailed Motivation/Architecture/Impact sections.

Secret detection - Scans for API keys, tokens, and credentials before committing. Has caught my API keys on a new project when I hadn't yet gitignored .env.

Flags - Automate common workflows:

- `gac -h "bug fix"` - pass hints to guide intent
- `gac -yo` - auto-accept the commit message in one-liner mode
- `gac -ayp` - stage all files, auto-accept the commit message, and push (yolo mode)

Would love to hear your feedback! Give it a try and let me know what you think! <3

GitHub: https://github.com/cellwebb/gac
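The secret-detection feature is the kind of pre-commit scan that can be prototyped with a handful of regexes. This sketch is not GAC's actual scanner; the patterns and function are illustrative assumptions about how such a check might look.

```python
import re

# Illustrative sketch (not GAC's actual scanner): a minimal pre-commit check
# for common credential patterns in a staged diff.

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),               # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def find_secrets(diff_text):
    """Return a list of suspicious matches found in the diff text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(diff_text))
    return hits

diff = 'OPENAI_API_KEY = "sk-abcdefghijklmnopqrstuvwx"'
print(find_secrets(diff))
```

A real scanner needs entropy checks and allowlists on top of patterns like these to keep false positives manageable, but the shape (scan the diff, block the commit on any hit) is the same.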

Found: October 27, 2025 ID: 2134

Show HN: JSON Query

Hacker News (score: 85)

[Other] Show HN: JSON Query I'm working on a tool that will probably involve querying JSON documents, and I'm asking myself how to expose that functionality to my users.<p>I like the power of `jq` and the fact that LLMs are proficient at it, but I find it outright impossible to come up with the right `jq` incantations myself. Has anyone here been in a similar situation? Which tool/language did you end up exposing to your users?
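For comparison, here is the same query written as a jq incantation and as the plain-Python equivalent one could expose instead of a dedicated query language (the sample data is made up):

```python
import json

# The jq one-liner:
#   jq '[.users[] | select(.age > 30) | .name]' data.json
# and a plain-Python equivalent of the same filter-and-project query.

data = json.loads("""
{"users": [{"name": "Ada", "age": 36},
           {"name": "Linus", "age": 28}]}
""")

names = [u["name"] for u in data["users"] if u["age"] > 30]
print(names)  # → ['Ada']
```

The trade-off the post is asking about is visible here: the jq form is terser and composable over streams, while the comprehension form is verbose but familiar to anyone who knows the host language.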

Found: October 27, 2025 ID: 2139

[IDE/Editor] Show HN: Erdos – open-source, AI data science IDE

Hey HN! We're Jorge and Will from Lotas (https://www.lotas.ai/), and we've built Erdos, a secure AI-powered data science IDE that's fully open source (https://www.lotas.ai/erdos).

A few months ago, we shared Rao, an AI coding assistant for RStudio (https://news.ycombinator.com/item?id=44638510). We built Rao to bring the Cursor-like experience to RStudio users. Now we want to take the next step and deliver a tool for the entire data science community that handles Python, R, SQL, and Julia workflows.

Erdos is a fork of VS Code designed for data science. It includes:

- An AI that can search, read, and write across all file types for Python, R, SQL, and Julia. For Jupyter notebooks, we've also optimized a jupytext system to allow the AI to make faster edits.
- Built-in Python, R, and Julia consoles accessible to both the user and the AI
- A plot pane that tracks and organizes plots by file and time
- A database pane for connecting to and manipulating SQL or FTP data sources
- An environment pane for viewing variables, packages, and environments
- A help pane for Python, R, and Julia documentation
- Remote development via SSH or containers
- An AI assistant available through a single-click sign-in to our zero-data-retention backend, bring your own key, or a local model
- Open-source AGPLv3 license

We built Erdos because data scientists are often second-class citizens in modern IDEs. Tools like VS Code, Cursor, and Claude Code are made for software developers, not for people working across Jupyter notebooks, scripts, and SQL. We wanted an IDE that feels native to data scientists while offering the same AI productivity boosts.

You can try Erdos at https://www.lotas.ai/erdos, check out our source code on GitHub (https://github.com/lotas-ai/erdos), and let us know what features would make it more useful for your work. We'd love your feedback below!

Found: October 27, 2025 ID: 2133

[Other] Show HN: spoilerjs – Reddit-style spoilers with particle animations

Hello HN!

I just published my first npm library as a small way to give back to the open-source community.

I built `spoilerjs`, a lightweight web component that lets you hide text with an animated spoiler effect. Think Reddit spoilers, but with more flair! It works with plain HTML, React, Vue, or Svelte, and you can customize attributes like particle density, velocity, and scale. The effect is totally inspired by the Telegram app!

Demo: https://spoilerjs.sh4jid.me
NPM: https://www.npmjs.com/package/spoilerjs
GitHub: https://github.com/shajidhasan/spoilerjs

I'm sure there are probably some bugs and rough edges, but I'd love to hear your feedback!

Thanks!

Found: October 27, 2025 ID: 2137

[CLI Tool] Show HN: Whatdidido – CLI to summarize your work from Jira/Linear

I built this after spending days every year manually searching through Jira tickets to remember what I'd accomplished for performance reviews.

whatdidido is a CLI tool that:

- Pulls tickets from Jira or Linear for a date range
- Uses an LLM to create short summaries of each ticket
- Generates an overall summary to help you build your self-evaluation

The tool doesn't write your review for you; crafting thoughtful, contextual feedback still requires human judgment. It just eliminates the busywork of finding and organizing what you worked on.

Key details:

- MIT licensed, open source
- No data storage: everything stays local
- Requires an OpenAI or OpenRouter API key
- Works with Jira and Linear (more integrations welcome/coming soon)

GitHub: https://github.com/oliviersm199/whatdidido

I'm releasing it now because I think others might find it useful during review season.

Would love feedback on the approach and what other integrations would be helpful. Happy to answer questions about how it works.
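The core of the workflow (pull tickets for a date range, then hand them to an LLM for summarizing) can be sketched as below. This is not whatdidido's code; the ticket fields and helper names are invented for illustration, with a canned list standing in for the Jira/Linear API call.

```python
from datetime import date

# Illustrative sketch (not whatdidido's code): filter tickets to a review
# window and assemble the text an LLM would then be asked to summarize.

tickets = [
    {"key": "ENG-101", "title": "Fix login crash", "closed": date(2025, 3, 2)},
    {"key": "ENG-140", "title": "Ship dark mode", "closed": date(2025, 6, 9)},
    {"key": "ENG-203", "title": "Migrate CI", "closed": date(2026, 1, 5)},
]

def in_window(ticket, start, end):
    """True if the ticket was closed within the review window."""
    return start <= ticket["closed"] <= end

def build_prompt(tickets, start, end):
    """Collect in-window tickets into a single summarization prompt."""
    picked = [t for t in tickets if in_window(t, start, end)]
    lines = [f'{t["key"]}: {t["title"]}' for t in picked]
    return "Summarize this work:\n" + "\n".join(lines)

print(build_prompt(tickets, date(2025, 1, 1), date(2025, 12, 31)))
```

Everything up to the final LLM call is deterministic local filtering, which is consistent with the "no data storage, everything stays local" claim.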

Found: October 27, 2025 ID: 2136

[API/SDK] Show HN: nblm – Rust CLI/Python SDK for NotebookLM Enterprise automation

I built nblm, a Rust-based toolset to automate Google's NotebookLM Enterprise API reliably. It aims to replace brittle curl snippets with a stable interface you can use in cron/CI or agentic systems.

- Python SDK (type-safe): IDE auto-complete, fewer JSON key typos, fits complex workflows.
- Standalone CLI: a single fast binary for scripts and pipelines.
- Handles auth, batching, and retries; you focus on logic. The Rust core is fast and memory-safe.
- Enterprise API only (consumer NotebookLM isn't supported).

Repo: https://github.com/K-dash/nblm-rs

Feedback is welcome. I'm especially interested in thoughts on the Python SDK's design for building automated/agentic workflows. Thanks!
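"Handles auth, batching, retries" is the part that makes a client library sturdier than curl snippets. As a generic illustration (this is not nblm's API; the wrapper and the flaky function are invented), a retry-with-backoff helper looks like this:

```python
import time

# Illustrative sketch (not nblm's API): the kind of retry-with-backoff wrapper
# a client library provides so callers can focus on their own logic.

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise          # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# A stand-in for a transiently failing API call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky))  # → ok
```

In a cron/CI setting this is the difference between a one-off 5xx failing the job and the job quietly recovering.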

Found: October 27, 2025 ID: 2135

[Other] Show HN: Write Go code in JavaScript files I built a Vite plugin that lets you write Go code directly in .js files using a "use golang" directive. It compiles to WebAssembly automatically.

Found: October 27, 2025 ID: 2118

[Other] Show HN: Helium Browser for Android with extension support, based on Vanadium

I've been working on an experimental Chromium-based browser that brings two major features to your phone/tablet:

1. Desktop-style extensions: natively install any extension (like uBO) from the Chrome Web Store; just toggle "desktop site" in the menu first.
2. Privacy/security hardening: applies the full patch sets from Vanadium (with Helium's currently WIP).

This means you get both browsers' excellent privacy features, like Vanadium's WebRTC IP policy option that protects your real IP by default, and security improvements such as JIT being disabled by default, all while being a reasonably efficient FOSS app that can be installed on any (modern) Android.

It's still in beta, and as I note in the README, it's not a replacement for the full OS-level security model you'd get from running the GrapheneOS + Vanadium combo. However, the goal was to combine the privacy of Vanadium with the power of desktop extensions and Helium features, and make it accessible to a wider audience. (Passkeys from Bitwarden Mobile should also work straight away once it's merged into the list of FIDO2 privileged browsers.)

Build scripts are in the repo if you want to compile it yourself. You can find pre-built releases there too.

Would love any feedback/support!

Found: October 26, 2025 ID: 2111

[API/SDK] Show HN: Hermes – Self-hosted video downloader

I've been playing with my home server again, and I was looking for a video downloader. I found one that looked great but was no longer functional. So, I vibe-coded my own.

Hermes is a REST API and web app built on top of yt-dlp. Why not just use yt-dlp? Additional features, phone support, and automations. Coming soon is a very basic video editor so you can easily remux and trim your downloaded videos.

Found: October 26, 2025 ID: 2126

[Other] Show HN: MyraOS – My 32-bit operating system in C and ASM (Hack Club project)

Hi HN, I'm Dvir, a young developer. Last year, I got rejected after a job interview because I lacked some CPU knowledge. After that, I decided to deepen my understanding of the low-level world and learn how things work under the hood. I decided to try to create an OS in C and ASM as a way to broaden my knowledge in this area.

This took me on the most interesting ride, where I learned about OS theory and low-level programming on a whole new level. I've spent hours upon hours, blood and tears, reading different OS theory blogs, learning low-level concepts, debugging, testing, and working on this project.

I started by reading university books and online blogs, while also watching videos. Some sources that helped me out were the OSDev Wiki (https://wiki.osdev.org/Expanded_Main_Page), OSTEP (https://pages.cs.wisc.edu/~remzi/OSTEP), open-source repositories like MellOS and LemonOS (more advanced), DoomGeneric, and some friends who have built an OS before.

This part was the longest, but also the easiest. I felt like I understood the theory but still could not connect it to actual code. Sitting down and starting to code was difficult, but I knew that was the next step I needed to take! I began by working on the bootloader, which is optional since you can use a pre-made one (I switched to GRUB later), but implementing it was mainly for learning purposes and to warm up on ASM. These were my steps after that:

1. I started implementing the VGA driver, which gave me the ability to display text.
2. Interrupts: IDT, ISR, IRQ, which signal to the CPU that a certain event occurred and needs handling (such as faults, hardware device actions, etc.).
3. Keyboard driver, which enables me to display the same text I type on my keyboard.
4. PMM (physical memory management).
5. Paging and virtual memory management.
6. RTC driver: clock addition (which was, in my opinion, optional).
7. PIT driver: ticks every certain amount of time.
8. FS (file system) and physical HDD drivers. For the HDD I chose PATA (an HDD communication protocol) for simplicity (SATA is a newer but harder option). For the FS I chose EXT2 (the Second Extended Filesystem), a foundational Linux FS structure introduced in 1993. This FS structure is not the simplest, but it is very popular among hobby OSes, well supported, easy to set up and upgrade to newer EXT versions, and has a lot of materials online compared to other options. This was probably the longest and largest feature I worked on.
9. Syscall support.
10. Libc implementation.
11. Processing and scheduling for multiprocessing.
12. Here I also made a shell to test it all.

At this point, I had a working shell, but later decided to go further and add a GUI! I was working on the FS (stage 8) when I heard about Hack Club's Summer of Making (SoM). This was my first time participating in Hack Club, and I want to express my gratitude and share my enjoyment of taking part in it.

At first I just wanted to declare the OS finished after completing the FS and a few other drivers, but because of SoM my perspective changed completely. Because of the competition, I started to think that I needed to ship a complete OS, with processing, a GUI, and the bare-minimum ability to run Doom. I wanted to show the community in SoM how everything works.

Then I worked on it for another two months after finishing the shell, just because of SoM, bringing the project to almost seven months of work. In this time I added full GUI support with dirty rectangles and double buffering, made a GUI mouse driver, and even made a full Doom port! These are things I would never have even thought about without participating in SoM.

This is my SoM project: https://summer.hackclub.com/projects/5191

Every project has challenges, especially such a low-level one. I had to do a lot of debugging while working on this, and it is no easy task. I highly recommend GDB, which helped me debug so many of my problems, especially memory ones.

The first major challenge I encountered was while coding processes: I realized that a lot of my paging code was completely wrong, poorly tested, and had to be reworked. By this time I was already in the competition, and it was difficult keeping up with devlogs and new features while fixing old problems in code I had written a few months earlier.

Some more major problems occurred when trying to run Doom, and unlike the last problem, this was a disaster. I had random page faults and memory problems; one run could work while the next one wouldn't, and the worst part is that it happened only in Doom, not in processes I created myself. These issues took a lot of time to figure out. I began to question the Doom code itself, and even thought about giving up on the whole project.

After a lot of time spent debugging, I fixed the issues. It was a combination of scheduling issues, libc issues, and QEMU not having enough memory (I had wrongly assumed 128 MB was enough for the whole OS).

Finally, I worked through all the difficulties and shipped the project! In the end, the experience of working on this project was amazing. I learned a lot, grew and improved as a developer, and I thank SoM for helping to increase my motivation and make the project memorable and unique in ways I never imagined.

The repo is at https://github.com/dvir-biton/MyraOS.
I’d love to discuss any aspect of this with you all in the comments!

Found: October 26, 2025 ID: 2110

[Other] Show HN: I Built DevTools for Blazor (Like React DevTools but for .NET)

Hi HN! I've been working on developer tools for Blazor that let you inspect Razor components in the browser, similar to React DevTools or Vue DevTools.

The problem: Blazor is Microsoft's frontend framework that lets you write web UIs in C#. It's growing fast but lacks the debugging tools other frameworks have. When your component tree gets complex, you're stuck with Console.WriteLine debugging.

What I built: a browser extension + NuGet package that:

- Shows the Razor component tree in your browser
- Maps DOM elements back to their source components
- Highlights components on hover
- Works with both Blazor Server and WASM

How it works: the NuGet package creates shadow copies of your .razor files and injects invisible markers during compilation. These markers survive the Razor→HTML pipeline. The browser extension reads these markers to reconstruct the component tree.

Current status: beta. It works but has rough edges. I found some bugs when testing on larger production apps that I'm working through. All documented on GitHub.

Technical challenges solved:

- Getting markers through the Razor compiler without breaking anything
- Working around CSS isolation that strips unknown attributes
- Making it work with both hosting models

It's completely open source: https://github.com/joe-gregory/blazor-devtools

Demo site where you can try it: https://blazordevelopertools.com

Would love feedback, especially from anyone building production Blazor apps. What debugging pain points do you have that developer tools could solve?
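The "read markers, reconstruct the tree" step can be illustrated with a small parser. The marker format below is invented (the actual extension's markers and attribute scheme are not documented here); the sketch only shows how paired begin/end markers in rendered HTML are enough to rebuild a component hierarchy.

```python
import re

# Illustrative sketch (hypothetical marker format, not this project's): rebuild
# a component tree from paired begin/end comment markers embedded in HTML.

html = """
<!--blz:start:App-->
  <!--blz:start:NavMenu--><!--blz:end:NavMenu-->
  <!--blz:start:Counter--><!--blz:end:Counter-->
<!--blz:end:App-->
"""

def component_tree(html):
    """Parse start/end markers in document order, nesting via a stack."""
    tokens = re.findall(r"<!--blz:(start|end):(\w+)-->", html)
    root = {"name": "(root)", "children": []}
    stack = [root]
    for kind, name in tokens:
        if kind == "start":
            node = {"name": name, "children": []}
            stack[-1]["children"].append(node)
            stack.append(node)
        else:
            stack.pop()
    return root

tree = component_tree(html)
print([c["name"] for c in tree["children"][0]["children"]])  # → ['NavMenu', 'Counter']
```

HTML comments are one plausible vehicle because they survive rendering and are invisible to layout; the post notes the real implementation also has to dodge CSS isolation stripping unknown attributes.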

Found: October 26, 2025 ID: 2115

[Other] Clojure Land – Discover open-source Clojure libraries and frameworks

Found: October 26, 2025 ID: 2106

[Other] Show HN: Dictly – Local, real-time voice-to-text for macOS (sub-100ms, no cloud)

TL;DR: I built a native macOS dictation app that transcribes locally and instantly. Text appears as you speak (measured ~100 ms first-character latency). No accounts, no servers, no tracking.

Links:

- Website: https://dictly.app
- Mac App Store: https://apps.apple.com/de/app/dictly-no-keys-just-clarity/id6752733596
- Free download; optional Pro tier (pipelines, unlimited history, etc.)

What it does:

- Real-time transcription: streaming text while you talk, not after you stop.
- Quick-Capture Overlay (macOS): global hotkey, drop text into any app/field.
- Custom Pipelines: local post-processing steps for cleanup, punctuation, or house style.
- Dictionary Profiles: teach it domain terms (names, brands, code tokens, etc.).
- Local Analytics: see time saved vs. typing (computed on device, never sent anywhere).

Why I built it:

I wanted dictation that felt as immediate as typing and was trustworthy. Most tools stream audio to a server; I wanted something that never leaves the machine.

How it's built (high level):

Swift + Apple speech/ML frameworks. Streaming audio capture → on-device recognition → local pipeline → paste into the active app. Works with Wi-Fi off; there are no network requests in the transcription path.

What's different vs. built-ins:

Always on-device + streaming, with a global overlay that works in any app. Extensible, deterministic cleanup via pipelines (not a black-box cloud). Per-project dictionaries to tame jargon and proper nouns.

Numbers (early):

Latency: ~100 ms (first visible characters from speech onset) in typical conditions on modern Macs. Privacy: zero telemetry; no account; no background syncing. Everything stays local.

Trade-offs (calling them out up front):

Accuracy depends on mic and environment (no surprise). For unusual proper nouns/jargon, you'll want a dictionary profile. Heavy background noise will degrade results (pipelines can only do so much).

What I'm looking for from HN:

Performance impressions on different hardware. Failure cases (accents, acronyms, coding, meetings). Pipeline ideas you'd actually use (e.g., Markdown formatting, code-block guards, style rules). Integration wishes: CLI? Shortcut actions? Editor-specific helpers?

I'm a solo dev. Happy to answer pointed questions and ship fixes fast. If you spot hand-wavy claims, call them out.

Found: October 25, 2025 ID: 2098

[Other] Show HN: Diagram-as-code tool with draggable customizations

In the past I've used declarative diagram-generation tools like Mermaid.js a lot for quickly drawing things up, but for presentations or deliverables I find that I then have to move the generated diagrams over to a tool like Lucidchart, which allows full control over organization and customization.

Therefore I am now working on this project to combine the benefits of both into a single tool that can do both jobs.

The project is certainly in its early stages, but if you find yourself making architecture diagrams, I'd love to hear your thoughts on the idea, or even a GitHub issue for a feature request!

One of the workflows I'm targeting is having an AI generate the first draft of the diagram (all the LLMs know .mmd syntax) and then letting the user customize it to their liking, which I think can drastically speed up making complex diagrams!

Found: October 25, 2025 ID: 2094

[Other] Agent Lightning: Train agents with RL (no code changes needed) https://microsoft.github.io/agent-lightning/stable/

Found: October 25, 2025 ID: 2096
Page 29 of 74