Show HN: TTSLab – A voice AI agent and TTS lab running in the browser via WebGPU
Show HN (score: 5)
No API keys, no backend, no data leaves your machine.
When you open the site, you'll hear it immediately: the landing page auto-generates speech from three different sentences right in your browser, no setup required.
You can then try any model yourself: type text, hit generate, hear it instantly. Models download once and get cached locally.
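The download-once-then-cache behavior can be sketched as a memoizing loader. This is only an illustration of the idea; the function names (`makeModelLoader`, `fetchModelBytes`) are mine, and the real app presumably persists models via the browser cache rather than an in-memory map:

```typescript
// Minimal sketch of "download once, cache locally": a memoizing model loader.
// `fetchModelBytes` stands in for the real network download.
type Fetcher = (url: string) => Promise<Uint8Array>;

function makeModelLoader(fetchModelBytes: Fetcher) {
  const cache = new Map<string, Promise<Uint8Array>>();
  return (url: string): Promise<Uint8Array> => {
    // First request starts the download; later requests reuse the same promise,
    // so even concurrent callers never trigger a second fetch.
    let pending = cache.get(url);
    if (!pending) {
      pending = fetchModelBytes(url);
      cache.set(url, pending);
    }
    return pending;
  };
}

// Usage: two loads of the same model hit the "network" only once.
let downloads = 0;
const load = makeModelLoader(async () => {
  downloads++;
  return new Uint8Array([1, 2, 3]); // stand-in for model weights
});

load("kokoro-82m.onnx")
  .then(() => load("kokoro-82m.onnx"))
  .then(() => console.log(downloads)); // 1
```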
The most experimental feature: a fully in-browser Voice Agent. It chains speech-to-text → LLM → text-to-speech, all running locally on your GPU via WebGPU. You can have a spoken conversation with an AI without a single network request.
Currently supported models:
- TTS: Kokoro 82M, SpeechT5, Piper (VITS)
- STT: Whisper Tiny, Whisper Base
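The Voice Agent's speech-to-text → LLM → text-to-speech chain is essentially three async stages piped together. Here is a dependency-free sketch of that loop; the stage functions are stubs of my own invention (in TTSLab they would be WebGPU-backed models such as Whisper Tiny for STT and Kokoro 82M for TTS), not the project's actual API:

```typescript
// Sketch of one Voice Agent turn: mic audio -> transcript -> reply -> audio.
type Stt = (audio: Float32Array) => Promise<string>;
type Llm = (prompt: string) => Promise<string>;
type Tts = (text: string) => Promise<Float32Array>;

function makeVoiceAgent(stt: Stt, llm: Llm, tts: Tts) {
  return async (micAudio: Float32Array): Promise<Float32Array> => {
    const userText = await stt(micAudio); // transcribe the user's speech
    const reply = await llm(userText);    // generate a text reply locally
    return tts(reply);                    // synthesize the reply as audio
  };
}

// Stub stages so the sketch runs anywhere:
const agent = makeVoiceAgent(
  async () => "hello",
  async (t) => `you said: ${t}`,
  async (t) => new Float32Array(t.length), // one sample per character, as a stand-in
);

agent(new Float32Array(16000)).then((audio) => {
  console.log(audio.length); // 15, i.e. "you said: hello".length
});
```

Because each stage is just an async function, swapping models (say, Whisper Base for Whisper Tiny) only changes which function is passed in.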
Other features:
- Side-by-side model comparison
- Speed benchmarking on your hardware
- Streaming generation for supported models
Source: https://github.com/MbBrainz/ttslab (MIT)
Feedback I'd especially like:
1. How does performance feel on your hardware?
2. What models should I add next?
3. Did the Voice Agent work for you? That's the most experimental part.
Built on top of ONNX Runtime Web (https://onnxruntime.ai) and Transformers.js – huge thanks to those communities for making in-browser ML inference possible.
More from Show HN
Show HN: Gemini Plugin for Claude Code
I built a plugin that lets Claude Code delegate work to Gemini CLI.

I started this after finding myself reaching for Gemini more often for long-context repo work. I've especially been liking Gemini's codebase investigator for long context.

This is inspired by openai/codex-plugin-cc.

It supports code review and adversarial review. Under the hood it's Gemini CLI over ACP.

Would love feedback from people using Claude Code, Gemini CLI, or ACP. I'm especially curious whether this feels useful outside my own workflow.

It's a great combo with Opus 4.7 + Gemini 3.1 workflows.
Show HN: Open Chronicle – Local Screen Memory for Claude Code and Codex CLI
I built an open source version of OpenAI Chronicle.

Some design decisions I made:
1. Local-first: OCR uses Apple Vision; summarization supports local AI providers via the Vercel AI SDK. Nothing leaves your computer.
2. Multiple providers: exposes MCP so any coding agent can use it.
3. Swift menubar app: efficient, low footprint.
4. Blacklisted apps: password managers, messaging apps (Slack, WhatsApp, Messenger), and mail clients are on the default blocklist.

Current limitations:
1. Mac only. Mac-first is a feature.
2. Small local models with weak structured-output support will fail on generateObject.
3. Retrieval is LIKE-query keyword search. FTS and optional embeddings are on the list.

Demo video (6s): https://youtu.be/V75tnvIdovc

Curious what you think is the right balance between exclusion lists and allowlists. Happy to answer anything.
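The LIKE-query retrieval mentioned above amounts to turning each keyword into a `%kw%` pattern and AND-ing the clauses together. A sketch of that logic (the `captures` table and column names are made up for illustration; the pure matcher lets the sketch run without a database):

```typescript
// Sketch of LIKE-style keyword retrieval: every keyword becomes a %kw%
// pattern, and a row matches only if all patterns match (AND semantics).
function buildLikeQuery(keywords: string[]): { sql: string; params: string[] } {
  const clauses = keywords.map(() => "text LIKE ?").join(" AND ");
  return {
    sql: `SELECT * FROM captures WHERE ${clauses}`, // "captures" is hypothetical
    params: keywords.map((kw) => `%${kw}%`),
  };
}

// Equivalent in-memory matcher (case-insensitive, like SQLite's default LIKE):
function matchesAll(text: string, keywords: string[]): boolean {
  const t = text.toLowerCase();
  return keywords.every((kw) => t.includes(kw.toLowerCase()));
}

const q = buildLikeQuery(["invoice", "march"]);
console.log(q.sql);    // SELECT * FROM captures WHERE text LIKE ? AND text LIKE ?
console.log(q.params); // [ '%invoice%', '%march%' ]
console.log(matchesAll("March invoice from ACME", ["invoice", "march"])); // true
```

This also explains why FTS is on their roadmap: LIKE patterns force a scan of every row, whereas a full-text index can look up terms directly.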