Show HN: Claudable – OpenSource Lovable that runs locally with Claude Code

Show HN (score: 14)
Found: August 21, 2025
ID: 988

Description

Other
Hey HN! I'm Aaron. I built an open-source Lovable for Claude Code users.

Platforms like Lovable, Replit Agent, and Bolt require separate API keys and $25+/month subscriptions. But if you're already subscribed to Claude Pro or Cursor, Claudable lets you use those plans directly at no extra cost.

Claudable runs entirely locally through Claude Code (Cursor CLI also supported) and provides:

- Instant UI preview (similar to Lovable)

- Web-optimized, production-ready designs

- Direct Git integration

- One-click Vercel deployment

- Zero additional API costs

It’s open source and available today. I’m actively developing it and would love community feedback on what features to prioritize next.

GitHub: https://github.com/opactorai/Claudable

Happy to answer any questions!

More from Show

Show HN: Fine-tuned Llama 3.2 3B to match 70B models for local transcripts

I wrote a small local tool to transcribe audio notes (Whisper/Parakeet). Code: https://github.com/bilawalriaz/lazy-notes

I wanted to process raw transcripts locally without OpenRouter. Llama 3.2 3B with a prompt was decent but incomplete, so I tried SFT: I fine-tuned Llama 3.2 3B to clean/analyze dictation and emit structured JSON (title, tags, entities, dates, actions).

Data: 13 real memos → Kimi K2 gold JSON → ~40k synthetic + gold examples; keys canonicalized. Generated via Chutes.ai (5k req/day).

Training: RTX 4090 24GB, ~4h, LoRA (r=128, α=128, dropout=0.05), max seq 2048, bs=16, lr=5e-5, cosine schedule, Unsloth. On a 2070 Super 8GB it took ~8h.

Inference: merged to GGUF, quantized to Q4_K_M (llama.cpp), runs in LM Studio.

Evals (100 samples, scored by GLM 4.5 FP8): overall 5.35 (base 3B) → 8.55 (fine-tuned); completeness 4.12 → 7.62; factual 5.24 → 8.57.

Head-to-head (10 samples): ~8.40 vs Hermes-70B 8.18, Mistral-Small-24B 7.90, Gemma-3-12B 7.76, Qwen3-14B 7.62. Teacher Kimi K2 ~8.82.

Why it works: task specialization plus JSON canonicalization reduces variance; the model learns the exact structure and fields.

Lessons: train on completions only; synthetic data is fine for narrow tasks; Llama is straightforward to train. Dataset pipeline + training script + evals: https://github.com/bilawalriaz/local-notes-transcribe-llm
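For readers curious what a LoRA run with those hyperparameters looks like in Unsloth, here is a minimal sketch. It is not the author's training script (that lives in the linked repo); the dataset file, text field name, and epoch count are placeholders, and only the hyperparameters quoted above (r=128, α=128, dropout=0.05, max seq 2048, bs=16, lr=5e-5, cosine) come from the post. Exact trainer kwargs vary slightly across TRL versions.

```python
# Minimal LoRA SFT sketch with Unsloth, using only the hyperparameters quoted
# in the post. Dataset path, field names, and epoch count are placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # fits a 24 GB 4090 comfortably; an 8 GB 2070 Super works but is slow
)

model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset: one "text" field per example (prompt + gold JSON completion).
dataset = load_dataset("json", data_files="memos_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=16,
        learning_rate=5e-5,
        lr_scheduler_type="cosine",
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
)
# The post trains on completions only; in practice that means masking prompt
# tokens out of the labels (e.g. with a completion-only data collator).
trainer.train()

# Afterwards: merge the adapter, export to GGUF, and quantize to Q4_K_M with
# llama.cpp for local inference in LM Studio, as described above.
```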

Show HN: Memori – Open-Source Memory Engine for AI Agents

Hey HN! I'm Arindam, part of the team behind Memori (https://memori.gibsonai.com/).

Memori adds a stateful memory engine to AI agents, enabling them to stay consistent, recall past work, and improve over time. With Memori, agents don't lose track of multi-step workflows, repeat tool calls, or forget user preferences. Instead, they build up human-like memory that makes them more reliable and efficient across sessions.

We've also put together demo apps (a personal diary assistant, a research agent, and a travel planner) so you can see memory in action.

Current LLMs are stateless: they forget everything between sessions. This leads to repetitive interactions, wasted tokens, and inconsistent results. When building AI agents, the problem gets even worse: without memory, they can't recover from failures, coordinate across steps, or apply simple rules like "always write tests."

We realized that for AI agents to work in production, they need memory. That's why we built Memori.

Memori uses a multi-agent architecture to capture conversations, analyze them, and decide which memories to keep active. It supports three modes:

- Conscious Mode: short-term memory for recent, essential context.

- Auto Mode: dynamic search across long-term memory.

- Combined Mode: blends both for fast recall and deep retrieval.

Under the hood, Memori is SQL-first. You can use SQLite, PostgreSQL, or MySQL to store memory with built-in full-text search, versioning, and optimization. This makes it simple to deploy, production-ready, and extensible.

Memori is backed by GibsonAI's database infrastructure, which supports:

- Instant provisioning

- Autoscaling on demand

- Database branching & versioning

- Query optimization

- Point-in-time recovery

This means memory isn't just stored; it's reliable, efficient, and scales with real-world workloads.

We've open-sourced Memori under the Apache 2.0 license so anyone can build with it. You can check out the GitHub repo here: https://github.com/GibsonAI/memori, explore the docs, and join our community on Discord.

We'd love to hear your thoughts. Please dive into the code, try out the demos, and share feedback; your input will help shape where we take Memori from here.
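To make the "SQL-first memory" idea concrete, here is a small sketch of the general pattern using SQLite with FTS5 full-text search: persist conversation turns as rows, then recall them with a text query before the agent answers. This illustrates the approach described above, not Memori's actual API; every table and function name here is made up for illustration.

```python
# Illustrative sketch of a SQL-first agent memory (NOT Memori's API).
# Memories are rows in SQLite; recall is a full-text search over them.
import sqlite3

conn = sqlite3.connect("agent_memory.db")
conn.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS memories
    USING fts5(session_id, role, content, created_at)
""")

def remember(session_id: str, role: str, content: str, created_at: str) -> None:
    """Persist one conversation turn as a long-term memory row."""
    conn.execute(
        "INSERT INTO memories (session_id, role, content, created_at) VALUES (?, ?, ?, ?)",
        (session_id, role, content, created_at),
    )
    conn.commit()

def recall(query: str, limit: int = 5) -> list[str]:
    """Full-text search over past memories, most relevant first (Auto-Mode-style retrieval)."""
    rows = conn.execute(
        "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    ).fetchall()
    return [r[0] for r in rows]

# Usage: store a preference in one session, retrieve it in a later one.
remember("s1", "user", "Always write tests before merging.", "2025-08-21T10:00:00Z")
print(recall("tests"))  # ['Always write tests before merging.']
```

A production engine layers more on top (memory selection, versioning, Postgres/MySQL backends), but the core recall loop is this kind of query against plain SQL storage.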

Show HN: Ten years of running every day, visualized

Today marks ten years, 3,653 consecutive days, of running at least one mile every day under the USRSA rules [1]. To celebrate, I built an interactive dashboard that turns a decade of GPX files into charts you can explore.

Running has truly changed my life: I've made lifelong friends, explored beautiful places, and, most importantly, invested in my own health and fitness, which is paying off as I get older.

The stack is pretty simple: a Next.js app with a Postgres database holding all my running data. All the stats are pre-computed and cached in Redis, so I effectively hit the database only once a day, when a new run is ingested. On the frontend, I toyed with the idea of using D3 or existing data-viz libraries, but ended up rolling my own charts with SVGs directly; it gave me more control over the visualizations.

I used the Strava bulk export to pre-populate the database, and I'm using their webhook API for incremental updates. I also tap into OpenWeatherMap and OpenCageData to enrich the running data a little.

Happy to answer anything about the stack, data pipeline, or how I stayed motivated for 10 years!

[1] https://www.runeveryday.com (USRSA run streak rules: ≥ 1 mile per day)
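The caching pattern described above (recompute stats when a run is ingested, serve everything else from Redis) looks roughly like the sketch below. The author's app is Next.js/TypeScript; this is a language-agnostic illustration in Python with made-up key names, a placeholder schema, and a placeholder stats query, not the actual code.

```python
# Sketch of the "compute once per ingested run, serve from cache" pattern.
# Key name, table schema, and the aggregate query are placeholders.
import json
import redis
import psycopg2

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
STATS_KEY = "running:stats"  # hypothetical cache key

def recompute_stats() -> dict:
    """Hit Postgres once and aggregate the whole history (placeholder query)."""
    with psycopg2.connect("dbname=running") as conn, conn.cursor() as cur:
        cur.execute("SELECT COUNT(*), COALESCE(SUM(distance_m), 0) FROM runs")
        count, meters = cur.fetchone()
    return {"runs": count, "total_km": round(meters / 1000, 1)}

def on_run_ingested() -> None:
    """Called after a new run arrives (e.g. from the Strava webhook handler)."""
    cache.set(STATS_KEY, json.dumps(recompute_stats()))

def get_stats() -> dict:
    """Read path used by the dashboard: a cache hit never touches Postgres."""
    cached = cache.get(STATS_KEY)
    if cached is not None:
        return json.loads(cached)
    stats = recompute_stats()  # cold cache: compute once and backfill
    cache.set(STATS_KEY, json.dumps(stats))
    return stats
```

Since runs arrive at most a few times a day, invalidating the whole cached blob on ingest is simpler than fine-grained invalidation and keeps the dashboard reads database-free.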

No other tools from this source yet.