WebPizza AI - Private PDF Chat

Product Hunt
Found: November 11, 2025
ID: 2363

Description

Other
POC: Private PDF AI using only your browser with WebGPU

I built this POC to test whether a complete RAG pipeline could run entirely client-side using WebGPU. Key difference: zero server dependency. PDF parsing, embeddings, vector search, and LLM inference all happen in your browser. Select a model (Llama, Phi-3, Mistral), upload a PDF, and ask questions. Documents stay local in IndexedDB, and the app works offline once models are cached. It integrates the WeInfer optimization, achieving a ~3.76x speedup over standard WebLLM through buffer reuse and async pipeline processing.
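The retrieval step of a client-side pipeline like this can be sketched in plain TypeScript. This is a minimal, hypothetical illustration (not WebPizza AI's actual code): it assumes chunk embeddings have already been computed in the browser, and ranks chunks against a query embedding by cosine similarity before they would be passed to the LLM as context.

```typescript
// Hypothetical sketch of client-side vector search over document chunks.
// Assumes embeddings were already produced by an in-browser embedding model.
interface Chunk {
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query embedding.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return chunks
    .map((c) => ({ chunk: c, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.chunk);
}
```

In a real client-side app the chunks and their embeddings would be persisted in IndexedDB so the index survives reloads; a brute-force scan like this is usually fast enough for a single PDF's worth of chunks.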

More from Product

Vibe Coding

A hub for vibe coding tools and developer resources. Vibe-Coding is a navigation site for vibe coding tools and development resources. We collect the best tools to boost your productivity, organized into a comprehensive directory covering every aspect of your development workflow. Whether you're optimizing your process or personalizing your environment, you'll find what you need right here.

tuck

The fastest way to back up dotfiles! Simple, fast, and built in TypeScript. Manage your dotfiles with Git, sync them across machines, and never lose your configs again.

Web Accessibility Testing MCP

Give LLMs access to web accessibility testing APIs A11y MCP is an MCP (Model Context Protocol) server that gives LLMs access to web accessibility testing APIs. This server uses the Deque Axe-core API and Puppeteer to allow LLMs to analyze web content for WCAG compliance and identify accessibility issues.

Dev Streaks

Build daily. Solve daily. Stay unstoppable. Track your GitHub commit and LeetCode problem-solving streaks in one beautiful mobile app. Built with Expo + React Native, designed for developers who care about consistency.