Show HN: Diggit.dev – Git history for architecture archaeologists
Show HN (score: 9)
Today I'm sharing a little tool to help you explore GitHub repositories:
This project was admittedly a big dumb excuse to play with Elm and Claude Code. I published my design notes and all the chat transcripts here:
• https://taylor.town/diggit-000
Please add bug reports and feature requests to the repo:
• https://github.com/surprisetalk/diggit
Enjoy!
More from Show HN
Show HN: My first Go project, a useless animated bunny sign for your terminal
Hi HN, I wanted to share my very first (insignificant) project written in Go: a little CLI tool that displays messages with an animated bunny holding a sign.

I wanted to learn Go and needed a small, fun project to get my hands dirty with the language and with the process of building and distributing a CLI. I'd built a similar tool in JavaScript before, so I thought porting it would be a great learning exercise.

This was a dive into Go's basics for me, from package structure and CLI flag parsing to building binaries for different platforms (something I never did on my JS projects).

I'm starting to understand why Go is so praised: its standard library is huge compared with other languages'. One thing that really impressed me was deciding, at some point in this journey, to build a piece of functionality myself where the original JavaScript project used an external library. With what the standard library gave me, I thought "why not try to write the function myself?" and it worked! In the JS version I used the Node.js "log-update" package; here I wrote a dedicated package instead.

I know it's a bit silly, but I could see it being used to add some fun to build scripts, highlight important log messages, or just make a colleague smile. It's easy to install if you have Go set up:

    go install github.com/fsgreco/go-bunny-sign/cmd/bunnysign@latest

Since I'm new to Go, I would genuinely appreciate any feedback on the code, project structure, or Go best practices. The README also lists my planned next steps, like adding tests and setting up CI better.

Thanks for taking a look!
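For readers curious what a "log-update"-style animator does under the hood, here is a minimal TypeScript sketch of the general redraw trick such tools use: move the cursor back up over the previous frame, clear those lines, and print the new frame. This illustrates the technique only, it is not the go-bunny-sign code, and the bunny frames below are made up.

    // Sketch of the redraw technique used by terminal animators like log-update.
    // Not the actual go-bunny-sign implementation; the frames are made up.
    let previousLineCount = 0;

    function render(frame: string): void {
      // Move the cursor up one line and erase it, once per line of the last frame,
      // then write the new frame in its place.
      const eraser = "\x1b[1A\x1b[2K".repeat(previousLineCount);
      process.stdout.write(eraser + frame + "\n");
      previousLineCount = frame.split("\n").length;
    }

    // Two hypothetical "bunny waves its sign" frames, purely for illustration.
    const frames = ["(\\_/)\n( . .)\n/ >[hi]", "(\\_/)\n( . .)\n/>[hi] "];
    let tick = 0;
    const timer = setInterval(() => {
      render(frames[tick % frames.length]);
      tick += 1;
      if (tick >= 10) clearInterval(timer); // stop after a few redraws
    }, 200);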
Show HN: MCPcat – A free open-source library for MCP server monitoring
Hey everyone!

We've been working with several MCP server maintainers, and we noticed some difficulties getting drop-in logging and identity attribution working out of the box with existing vendors. A couple of challenges we hoped to solve were:

- Baseline piping of tool calls to traditional vendors
- How to tie tool calls to a “user session”
- Understanding the context behind tool calls made by agents

So we built something. :) The MCPcat library is completely free to use, MIT licensed, and provides a one-line solution for adding logging and observability to any vendor that supports OpenTelemetry. We added custom support for Datadog and Sentry because we personally use those vendors, but we’re happy to add more if there’s interest.

Here’s how it works:

    mcpcat.track(serverObject, {...options…})

This initializes a series of listeners that:

1. Categorize events within the same working session
2. Publish those events directly to your third-party data provider

Optionally, you can redact sensitive data. The data never touches our servers (unless you opt in to additional contextual analysis, which I mention below).

Some teams might also want a better understanding of “what use cases are people finding with my MCP server?” For that, we provide a separate dashboard that visualizes the user journey in more detail (free for a high baseline of monthly usage and always free for open source projects).

We have two SDKs so far:

- Python SDK: https://github.com/MCPCat/mcpcat-python-sdk
- TypeScript SDK: https://github.com/MCPCat/mcpcat-typescript-sdk

Other SDKs are on the way!
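As a rough illustration of the one-line setup described above, here is a hypothetical TypeScript sketch. The only call taken from the post is mcpcat.track(serverObject, {...options…}); the package name, the server stand-in, and the redaction option are assumptions rather than the documented API, so check the TypeScript SDK repo for the real signature.

    // Hypothetical sketch only: the package name ("mcpcat"), the "redact" option,
    // and the server stand-in are assumptions, not the documented API.
    import * as mcpcat from "mcpcat";

    // Stand-in for whatever MCP server object your app already constructs.
    const server: object = {}; // replace with your real MCP server instance

    // The post's one-liner: attach MCPcat's listeners to the running server.
    mcpcat.track(server, {
      // Illustrative option; the post notes sensitive data can optionally be
      // redacted before events reach your OpenTelemetry-compatible vendor.
      redact: (text: string) => text.replace(/sk-[A-Za-z0-9_-]+/g, "[redacted]"),
    });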
Show HN: Envoy – Command Logger
Envoy is a lightweight background utility that logs your terminal commands. It's designed to be a simple and unobtrusive way to keep a history of your shell usage, which can be useful for debugging, tracking work, or just remembering what you did.
Show HN: Lemonade: Run LLMs Locally with GPU and NPU Acceleration
Lemonade is an open-source SDK and local LLM server focused on making it easy to run and experiment with large language models (LLMs) on your own PC, with special acceleration paths for NPUs (Ryzen™ AI) and GPUs (Strix Halo and Radeon™).

Why?

There are three qualities needed in a local LLM serving stack, and none of the market leaders (Ollama, LM Studio, or using llama.cpp by itself) deliver all three:

1. Use the best backend for the user's hardware, even if it means integrating multiple inference engines (llama.cpp, ONNXRuntime, etc.) or custom builds (e.g., llama.cpp with ROCm betas).
2. Zero friction for both users and developers, from onboarding to app integration to high performance.
3. Commitment to open source principles and collaborating with the community.

Lemonade overview:

- Simple LLM serving: Lemonade is a drop-in local server that presents an OpenAI-compatible API, so any app or tool that talks to OpenAI's endpoints will "just work" with Lemonade's local models.
- Performance focus: Powered by llama.cpp (Vulkan and ROCm for GPUs) and ONNXRuntime (Ryzen AI for NPUs and iGPUs), Lemonade squeezes the best out of your PC, no extra code or hacks needed.
- Cross-platform: One-click installer for Windows (with GUI), pip/source install for Linux.
- Bring your own models: Supports GGUF and ONNX. Use Gemma, Llama, Qwen, Phi, and others out of the box. Easily manage, pull, and swap models.
- Complete SDK: Python API for LLM generation, and a CLI for benchmarking/testing.
- Open source: Apache 2.0 (core server and SDK), no feature gating, no enterprise "gotchas." All server/API logic and performance code is fully open; some software the NPU depends on is proprietary, but we strive for as much openness as possible (see our GitHub for details). Active collabs with GGML, Hugging Face, and ROCm/TheRock.

Get started:

- Windows? Download the latest GUI installer from https://lemonade-server.ai/
- Linux? Install with pip or from source (https://lemonade-server.ai/)
- Docs: https://lemonade-server.ai/docs/
- Discord for banter/support/feedback: https://discord.gg/5xXzkMu8Zk

How do you use it?

- Click on lemonade-server in the start menu.
- Open http://localhost:8000 in your browser for a web UI with chat, settings, and model management.
- Point any OpenAI-compatible app (chatbots, coding assistants, GUIs, etc.) at http://localhost:8000/api/v1 (see the client sketch below).
- Use the CLI to run/load/manage models, monitor usage, and tweak settings such as temperature, top-p, and top-k.
- Integrate via the Python API for direct access in your own apps or research.

Who is it for?

- Developers: Integrate LLMs into your apps with standardized APIs and zero device-specific code, using popular tools and frameworks.
- LLM enthusiasts: plug and play with Morphik AI (contextual RAG/PDF Q&A), Open WebUI (modern local chat interfaces), Continue.dev (VS Code AI coding copilot), and many more integrations in progress!
- Privacy-focused users: No cloud calls; run everything locally, including advanced multi-modal models if your hardware supports it.

Why does this matter?

Every month, new on-device models (e.g., Qwen3 MoEs and Gemma 3) are getting closer to the capabilities of cloud LLMs. We predict a lot of LLM use will move local for cost reasons alone. Keeping your data and AI workflows on your own hardware is finally practical, fast, and private: no vendor lock-in, no ongoing API fees, and no sending your sensitive info to remote servers. Lemonade lowers the friction for running these next-gen models, whether you want to experiment, build, or deploy at the edge.

Would love your feedback! Are you running LLMs on AMD hardware? What's missing, what's broken, what would you like to see next? Any pain points from Ollama, LM Studio, or others you wish we solved? Share your stories, questions, or rant at us.

Links:

- Download & Docs: https://lemonade-server.ai/
- GitHub: https://github.com/lemonade-sdk/lemonade
- Discord: https://discord.gg/5xXzkMu8Zk

Thanks HN!
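Since the server exposes an OpenAI-compatible API, pointing an existing client at it is mostly a base-URL change. Here is a minimal TypeScript sketch using the official openai npm package; the model name and the placeholder API key are assumptions, so substitute whatever model you have actually pulled in Lemonade.

    // Minimal sketch: talk to Lemonade's local OpenAI-compatible endpoint
    // (http://localhost:8000/api/v1 per the post) with the standard openai client.
    import OpenAI from "openai";

    const client = new OpenAI({
      baseURL: "http://localhost:8000/api/v1",
      apiKey: "lemonade", // assumption: a local server typically ignores the key, but the client requires one
    });

    async function main(): Promise<void> {
      const completion = await client.chat.completions.create({
        model: "Qwen3-8B-GGUF", // hypothetical model id; use a model you've pulled in Lemonade
        messages: [{ role: "user", content: "Summarize why local inference is getting practical." }],
      });
      console.log(completion.choices[0].message.content);
    }

    main().catch(console.error);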
Show HN: E-commerce data from 100k stores that is refreshed daily
Hi HN! I'm building Agora, an AI search engine for e-commerce that returns results in under 300ms. We've indexed 30M products from 100k stores and made them easy to purchase using AI agents.

After launching here on HN, a large enterprise reached out to pay for access to the raw data. We serviced the contract manually to learn the exact workflow, and then decided to productize the "Data Connector" to help us scale to more customers.

The Data Connector lets developers select any of the 100k stores in our index, view sample data, format the output, and export up-to-date data as CSV or JSON.

We've built crawlers for Shopify, WooCommerce, Squarespace, Wix, and custom-built stores to index store information, product data, stock, reviews, and more. The primary technical challenge is recrawling the entire dataset every 24 hours. We do this with a series of servers that recrawl different store types using rotating local proxies and then add changes to a queue to be applied to our search index (a rough sketch of that pattern follows below). Our primary database is MongoDB, and search runs on self-hosted Meilisearch on high-RAM servers.

My vision is to index the world's e-commerce data. I believe this will create market efficiencies for customers, developers, and merchants.

I'd love your feedback!
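For readers curious about the recrawl-to-queue shape described above, here is an illustrative TypeScript sketch. It is not Agora's code: the product shape, the change-detection rule, and the batch size are assumptions; only the "diff, queue, then push to Meilisearch" flow comes from the post.

    // Illustrative sketch of a "recrawl, diff, queue, index" loop; not Agora's code.
    // Product shape, change rule, and batch size are assumptions for illustration.
    import { MeiliSearch } from "meilisearch";

    type Product = { id: string; title: string; price: number; inStock: boolean };

    const search = new MeiliSearch({ host: "http://localhost:7700" });
    const updateQueue: Product[] = [];

    // A crawler worker compares freshly crawled products against the last-seen copy
    // and only queues the ones that actually changed.
    function enqueueChanges(fresh: Product[], lastSeen: Map<string, Product>): void {
      for (const product of fresh) {
        const prev = lastSeen.get(product.id);
        if (!prev || prev.price !== product.price || prev.inStock !== product.inStock) {
          updateQueue.push(product);
        }
      }
    }

    // A separate drainer flushes queued changes into the search index in batches,
    // so the daily recrawl never rewrites unchanged documents.
    async function drainQueue(): Promise<void> {
      while (updateQueue.length > 0) {
        const batch = updateQueue.splice(0, 1000);
        await search.index("products").updateDocuments(batch);
      }
    }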
Show HN: I built a service to run Claude Code in the Cloud
Show HN: AgentGuard – Auto-kill AI agents before they burn through your budget
Your AI agent hits an infinite loop and racks up $2,000 in API charges overnight. This happens weekly to AI developers.

AgentGuard monitors API calls in real time and automatically kills your process when it hits your budget limit.

How it works:

Add two lines to any AI project:

    const agentGuard = require('agent-guard');
    await agentGuard.init({ limit: 50 }); // $50 budget

    // Your existing code runs unchanged
    const response = await openai.chat.completions.create({...});
    // AgentGuard tracks costs automatically

When your code hits $50 in API costs, AgentGuard stops execution and shows you exactly what happened.

Why I built this:

I got tired of seeing "I accidentally spent $500 on OpenAI" posts. Existing tools like tokencost help you measure costs after the fact, but nothing prevents runaway spending in real time.

AgentGuard is essentially a circuit breaker for AI API costs. It's saved me from several costly bugs during development.

Limitations: it only works with the OpenAI and Anthropic APIs currently, and cost calculations are estimates based on documented pricing.

Source: https://github.com/dipampaul17/AgentGuard

Install: npm i agent-guard
Show HN: Walk-through of rocket landing optimization paper [pdf]
Hey all! Long-time lurker, first-time poster.

I found this rocket landing trajectory optimization paper cool, but it took me a while to wrap my head around it and implement it. I wrote up an expanded version of the paper, including details that would have helped me understand it the first time through, with the idea that it might make the content more approachable for others with similar interests. The source code is also linked in the document.

I'm open to feedback; I'm always trying to get better across the board.