🛠️ All DevTools


Last Updated
December 03, 2025 at 12:00 AM

bitwarden/clients

GitHub Trending

[Other] Bitwarden client apps (web, browser extension, desktop, and CLI).

Found: August 20, 2025 ID: 960

[CLI Tool] Show HN: Claude Code workflow: PRDs → GitHub Issues → parallel execution

I built a lightweight project management workflow to keep AI-driven development organized.

The problem was that context kept disappearing between tasks. With multiple Claude agents running in parallel, I'd lose track of specs, dependencies, and history. External PM tools didn't help because syncing them with repos always created friction.

The solution was to treat GitHub Issues as the database. The "system" is ~50 bash scripts and markdown configs that:

- Brainstorm with you to create a markdown PRD, spin up an epic, decompose it into tasks, and sync them with GitHub Issues
- Track progress across parallel streams
- Keep everything traceable back to the original spec
- Run fast from the CLI (commands finish in seconds)

We've been using it internally for a few months and it's cut our shipping time roughly in half. Repo: https://github.com/automazeio/ccpm

It's still early and rough around the edges, but has worked well for us. I'd love feedback from others experimenting with GitHub-centric project management or AI-driven workflows.
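The actual ccpm system is bash scripts, but the core "PRD → tasks → issues" decomposition can be sketched in a few lines. Everything below (the sample PRD, the `prd_to_issue_payloads` helper, the label scheme) is hypothetical illustration, not ccpm's real format:

```python
import json
import re

# Toy sketch of the "PRD -> tasks -> GitHub issues" idea: pull unchecked
# task lines out of a markdown PRD and turn each one into an issue payload
# that a tool like the gh CLI or the GitHub API could create.
PRD = """\
# PRD: Search feature
## Tasks
- [ ] Add search index
- [ ] Build query API
- [ ] Wire up frontend
"""

def prd_to_issue_payloads(prd: str, epic: str) -> list[dict]:
    # Match markdown task-list items: "- [ ] <title>"
    tasks = re.findall(r"^- \[ \] (.+)$", prd, flags=re.MULTILINE)
    return [
        {"title": task,
         "labels": ["task", f"epic:{epic}"],
         "body": f"Decomposed from PRD epic '{epic}'."}
        for task in tasks
    ]

payloads = prd_to_issue_payloads(PRD, epic="search")
print(json.dumps(payloads[0], indent=2))
```

Keeping the epic name in a label is one simple way to make each issue traceable back to its original spec.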

Found: August 20, 2025 ID: 964

[Other] Tidewave Web: in-browser coding agent for Rails and Phoenix

Found: August 20, 2025 ID: 963

RustMailer

Product Hunt

[Other] A self-hosted IMAP/SMTP middleware designed for developers RustMailer is a self-hosted, open-source email API platform for syncing IMAP, sending SMTP, and integrating via webhooks. It supports multi-account sync, programmable filters with VRL, NATS delivery, gRPC & OpenAPI, plus a built-in web client.

Found: August 20, 2025 ID: 957

[Other] Professional online JSON tool | format, validate & convert Professional JSON formatter, validator, and converter. Format, validate, and minify JSON with real-time processing. Convert to XML, CSV, or YAML. Free online tool with dark mode, tree view, and advanced features.
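The two core transformations such a tool performs, formatting and minifying, can be sketched with Python's stdlib `json` module (the sample document is made up for illustration):

```python
import json

# "Format" = pretty-print with indentation; "minify" = strip all
# insignificant whitespace. Both round-trip through json.loads, which
# also acts as validation: malformed input raises JSONDecodeError.
raw = '{"name":"axon","tags":["wsgi","python"],"stars":42}'

pretty = json.dumps(json.loads(raw), indent=2, sort_keys=True)
minified = json.dumps(json.loads(raw), separators=(",", ":"))

print(pretty)
print(minified)
```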

Found: August 20, 2025 ID: 958

Black Cat HQ

Product Hunt

[Other] The best AI platform for young developers. We provide the best AI platform, from curated datasets to a serverless inference platform, for young developers who want to build the next chapter of the internet.

Found: August 20, 2025 ID: 959

BuilderHack

Product Hunt

[Other] Build SaaS faster and monetize Your Next.js boilerplate to build your business and start earning faster.

Found: August 20, 2025 ID: 966

React App

Product Hunt

[Other] Discover Smart AI Tools to Boost Creativity and Productivity Introducing AI Tools Eosin, your all-in-one platform offering the best free AI-powered tools to transform your workflow, whether you're a developer or a business owner. Check it out now: https://aitools-eosin.vercel.app/

Found: August 20, 2025 ID: 969

BLAZED.sh

Product Hunt

[Other] Run Web3 code on Ethereum nodes Shared hosting for Ethereum nodes & PaaS for Web3 app Docker deployments. Low-latency RPC access and high throughput for MEV, Web3 gaming, and data analysis.

Found: August 20, 2025 ID: 970

Splash

Product Hunt

[Monitoring/Observability] Add color to your logs Splash's CLI transforms boring plain text into beautiful color-coded logs. Splash automatically detects many common log formats, like Apache, Syslog, and Nginx. Splash can highlight stack traces from Go, Java, Python, and JavaScript.
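The essence of a log colorizer like this, detect a pattern in each line and wrap it in ANSI escape codes, can be sketched in a few lines of Python (the level keywords and color choices here are illustrative, not Splash's actual rules):

```python
import re

# Minimal log-colorizing sketch: find a severity token in each line and
# wrap the whole line in the matching ANSI color, resetting at the end.
COLORS = {"ERROR": "\033[31m", "WARN": "\033[33m", "INFO": "\033[32m"}
RESET = "\033[0m"

def colorize(line: str) -> str:
    match = re.search(r"\b(ERROR|WARN|INFO)\b", line)
    if match:
        return f"{COLORS[match.group(1)]}{line}{RESET}"
    return line  # lines with no recognized level pass through unchanged

for line in ["INFO starting server", "ERROR connection refused"]:
    print(colorize(line))
```

A real tool adds per-format parsers (Apache, Syslog, Nginx) on top of this so it can color individual fields, not just whole lines.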

Found: August 20, 2025 ID: 973

Show HN: Because I Kanban

Show HN (score: 5)

[Other] Show HN: Because I Kanban

Just wanted to share my latest project, Taskstax. It's a simple Kanban-style Trello clone, built mainly for the learning, but it works too, so it's online.

Simple Kanban boards, with an easy login that takes you straight to them.

It uses socket.io for data transfer after login, which was fun to set up and also makes it work well.

Totally free. Any feedback would be cool, or if you want some info on the tech, just ask.

Found: August 20, 2025 ID: 976

[Other] Docker container for running Claude Code in "dangerously skip permissions" mode

Found: August 19, 2025 ID: 955

OpenBB-finance/OpenBB

GitHub Trending

[Other] Financial data platform for analysts, quants and AI agents.

Found: August 19, 2025 ID: 947

[API/SDK] Show HN: Lemonade: Run LLMs Locally with GPU and NPU Acceleration

Lemonade is an open-source SDK and local LLM server focused on making it easy to run and experiment with large language models (LLMs) on your own PC, with special acceleration paths for NPUs (Ryzen™ AI) and GPUs (Strix Halo and Radeon™).

Why?

There are three qualities needed in a local LLM serving stack, and none of the market leaders (Ollama, LM Studio, or using llama.cpp by itself) delivers all three:

1. Use the best backend for the user's hardware, even if it means integrating multiple inference engines (llama.cpp, ONNXRuntime, etc.) or custom builds (e.g., llama.cpp with ROCm betas).
2. Zero friction for both users and developers, from onboarding to app integration to high performance.
3. Commitment to open-source principles and collaborating in the community.

Lemonade overview:

- Simple LLM serving: Lemonade is a drop-in local server that presents an OpenAI-compatible API, so any app or tool that talks to OpenAI's endpoints will "just work" with Lemonade's local models.
- Performance focus: Powered by llama.cpp (Vulkan and ROCm for GPUs) and ONNXRuntime (Ryzen AI for NPUs and iGPUs), Lemonade squeezes the best out of your PC, no extra code or hacks needed.
- Cross-platform: One-click installer for Windows (with GUI), pip/source install for Linux.
- Bring your own models: Supports GGUF and ONNX. Use Gemma, Llama, Qwen, Phi, and others out of the box. Easily manage, pull, and swap models.
- Complete SDK: Python API for LLM generation, and a CLI for benchmarking/testing.
- Open source: Apache 2.0 (core server and SDK), no feature gating, no enterprise "gotchas." All server/API logic and performance code is fully open; some software the NPU depends on is proprietary, but we strive for as much openness as possible (see our GitHub for details). Active collabs with GGML, Hugging Face, and ROCm/TheRock.

Get started:

- Windows? Download the latest GUI installer from https://lemonade-server.ai/
- Linux? Install with pip or from source (https://lemonade-server.ai/)
- Docs: https://lemonade-server.ai/docs/
- Discord for banter/support/feedback: https://discord.gg/5xXzkMu8Zk

How do you use it?

- Click on lemonade-server from the start menu.
- Open http://localhost:8000 in your browser for a web UI with chat, settings, and model management.
- Point any OpenAI-compatible app (chatbots, coding assistants, GUIs, etc.) at http://localhost:8000/api/v1
- Use the CLI to run/load/manage models, monitor usage, and tweak settings such as temperature, top-p, and top-k.
- Integrate via the Python API for direct access in your own apps or research.

Who is it for?

- Developers: Integrate LLMs into your apps with standardized APIs and zero device-specific code, using popular tools and frameworks.
- LLM enthusiasts: Plug and play with Morphik AI (contextual RAG/PDF Q&A), Open WebUI (modern local chat interfaces), Continue.dev (VS Code AI coding copilot), and many more integrations in progress!
- Privacy-focused users: No cloud calls; run everything locally, including advanced multi-modal models if your hardware supports it.

Why does this matter?

Every month, new on-device models (e.g., Qwen3 MoEs and Gemma 3) are getting closer to the capabilities of cloud LLMs. We predict a lot of LLM use will move local for cost reasons alone. Keeping your data and AI workflows on your own hardware is finally practical, fast, and private: no vendor lock-in, no ongoing API fees, and no sending your sensitive info to remote servers. Lemonade lowers friction for running these next-gen models, whether you want to experiment, build, or deploy at the edge.

Would love your feedback! Are you running LLMs on AMD hardware? What's missing, what's broken, what would you like to see next? Any pain points from Ollama, LM Studio, or others you wish we solved? Share your stories, questions, or rant at us.

Links:

- Download & docs: https://lemonade-server.ai/
- GitHub: https://github.com/lemonade-sdk/lemonade
- Discord: https://discord.gg/5xXzkMu8Zk

Thanks HN!
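Because the server is OpenAI-compatible, the standard chat-completions request shape should work against it. The sketch below only builds the request (sending it requires a running Lemonade server); the model id is a placeholder, and the `/chat/completions` path is the standard OpenAI route appended to the `/api/v1` base the post mentions:

```python
import json
import urllib.request

# Build (but don't send) a standard OpenAI-style chat request aimed at a
# local Lemonade server. "local-model" is a placeholder; list real model
# ids via the server's web UI or CLI.
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello from my own hardware!"}],
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:8000/api/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(req.full_url)
```

With a server running, `urllib.request.urlopen(req)` (or any OpenAI client pointed at the same base URL) would return the completion.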

Found: August 19, 2025 ID: 950

[API/SDK] Show HN: Twick - React SDK for Timeline-Based Video Editing

Found: August 19, 2025 ID: 953

[Other] D2 (text to diagram tool) now supports ASCII renders

Found: August 19, 2025 ID: 952

[Other] Show HN: Built a memory layer that stops AI agents from forgetting everything

Tired of AI coding tools that forget everything between sessions? Every time I open a new chat with Claude or fire up Copilot, I'm back to square one explaining my codebase structure.

So I built something to fix this. It's called In Memoria. It's an MCP server that gives AI tools persistent memory. Instead of starting fresh every conversation, the AI remembers your coding patterns, architectural decisions, and all the context you've built up.

The setup is dead simple: `npx in-memoria server`, then connect your AI tool. No accounts, and no data leaves your machine.

Under the hood it's TypeScript + Rust, with tree-sitter for parsing and vector storage for semantic search. It supports JavaScript/TypeScript, Python, and Rust so far.

It originally started as a documentation tool, but I had a realization: AI doesn't need better docs, it needs to remember stuff. I spent the last few months rebuilding it from scratch as this memory layer.

It's working pretty well for me, but I'm curious what others think, especially about the pattern-learning part. What languages would you want supported next?

Code: https://github.com/pi22by7/In-Memoria
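"Connect your AI tool" typically means registering the server in your MCP client's config. The fragment below follows the common `mcpServers` convention used by clients like Claude Desktop; the exact file location and key names depend on your client, so check the repo's README for the authoritative setup:

```json
{
  "mcpServers": {
    "in-memoria": {
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}
```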

Found: August 19, 2025 ID: 956

[Other] Show HN: GiralNet – A Privacy Network for Your Team (Not the World)

Hello, I've been developing this project for some time now, and I'm happy that it can finally see the light. I love Tor, but I believe its biggest issue is that the nodes are run by strangers, which means placing a certain level of trust in complete strangers.

For this reason, I decided to build this private network inspired by the onion router. Unlike public networks, GiralNet is not for anonymous connections to strangers. It is built for small teams or groups who want privacy but also need a level of trust. It assumes that the people running the nodes in the network are known and verifiable. This provides a way for a group to create its own private and secure network, where the infrastructure is controlled and the people behind the nodes are accountable. The goal is to provide privacy without relying on a large, anonymous public network.

In terms of technical details, it is a SOCKS5 proxy that routes internet traffic through a series of other computers. It does this by wrapping your data in multiple layers of encryption, just like the onion router does. Each computer in the path unwraps one layer to find the next destination, but never knows the full path. This makes it difficult for any single party to see both where the traffic came from and where it is going.

I will gladly answer any questions you might have. Thank you.
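The layering idea in the last paragraph can be shown with a toy sketch. This is NOT GiralNet's actual cryptography: each hop's "encryption" here is a throwaway XOR cipher, used only to illustrate how the sender stacks one layer per hop and each node peels exactly one off:

```python
# Conceptual onion-layering sketch with a toy XOR "cipher" (XOR is its
# own inverse, so applying the same key twice recovers the plaintext).
# A real network would use authenticated encryption per hop.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

hops = [b"node-a-key", b"node-b-key", b"node-c-key"]  # made-up hop keys
message = b"GET example.com"

# Sender wraps in reverse hop order, so the first node peels the
# outermost layer.
onion = message
for key in reversed(hops):
    onion = xor(onion, key)

# Each node in path order removes exactly one layer; only after the last
# hop is the original message visible.
for key in hops:
    onion = xor(onion, key)

print(onion == message)
```

In a real design each layer also carries the next hop's address, which is why no single node ever sees the full path.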

Found: August 19, 2025 ID: 951

Positron, a New Data Science IDE

Hacker News (score: 90)

[IDE/Editor] Positron, a New Data Science IDE

Found: August 19, 2025 ID: 948

[Other] Show HN: Python file streaming 237MB/s on $8/mo droplet in 507 lines of stdlib

Quick links:

- PyPI: https://pypi.org/project/axon-api/
- GitHub: https://github.com/b-is-for-build/axon-api
- Deployment script: https://github.com/b-is-for-build/axon-api/blob/master/examp...

Axon is a 507-line, pure-Python WSGI framework that achieves up to 237MB/s file streaming on $8/month hardware. The key feature is the dynamic bundling of multiple files into a single multipart stream while maintaining bounded memory (<225MB). The implementation saturates CPU before reaching I/O limits.

Technical highlights:

- Pure Python stdlib implementation (no external dependencies)
- HTTP range support for partial content delivery
- Generator-based streaming with constant memory usage
- Request batching via query parameters
- Match-statement-based routing (eliminates traversal and probing)
- Built-in sanitization and structured logging

The benchmarking methodology uses fresh DigitalOcean droplets with reproducible wrk tests across different file sizes. All code and deployment scripts are included.
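The "generator-based streaming with constant memory" highlight is the core trick, and it can be sketched independently of Axon's code (this is an illustrative reconstruction, not Axon's actual implementation): yield fixed-size chunks so memory stays bounded no matter how large the file is, then hand the generator to WSGI as the iterable response body.

```python
import io

# Stream a file in fixed-size chunks: memory usage is bounded by the
# chunk size, not the file size. A WSGI app would return this generator
# as the response body, with multipart framing added around each file.
CHUNK = 64 * 1024

def stream_file(fileobj, chunk_size: int = CHUNK):
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:  # EOF
            break
        yield chunk

data = io.BytesIO(b"x" * 200_000)  # stand-in for an on-disk file
chunks = list(stream_file(data))
print(len(chunks), sum(len(c) for c in chunks))
```

HTTP range support falls out naturally: seek to the range start and stop yielding once the requested byte count is reached.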

Found: August 19, 2025 ID: 949