Show HN: Any-LLM – Lightweight router to access any LLM Provider

Hacker News (score: 52)
Found: July 22, 2025
ID: 440

Description

We built any-llm because we needed a lightweight router for LLM providers with minimal overhead. Switching between models is just a string change: update "openai/gpt-4" to "anthropic/claude-3" and you're done.

It uses official provider SDKs when available, which helps since providers handle their own compatibility updates. No proxy or gateway service needed either, so getting started is pretty straightforward - just pip install and import.
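
For illustration, here is a minimal sketch of that workflow. The completion() function, the OpenAI-style message format, and the response shape are assumptions based on the post's description, not confirmed details of the library; only the provider-prefixed model string is taken from the post.

    # Minimal sketch, assuming any_llm exposes an OpenAI-style completion() call.
    # The function name, message format, and response shape are assumptions.
    from any_llm import completion

    messages = [{"role": "user", "content": "Summarize the plot of Dune in one sentence."}]

    # Switching providers is just a model-string change:
    response = completion(model="openai/gpt-4", messages=messages)
    # ...swap to Anthropic by editing only the string:
    # response = completion(model="anthropic/claude-3", messages=messages)

    # Assuming the response mirrors the OpenAI SDK's shape:
    print(response.choices[0].message.content)

The point of the design is that the calling code stays identical across providers; only the model string changes.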

Currently supports 20+ providers including OpenAI, Anthropic, Google, Mistral, and AWS Bedrock. Would love to hear what you think!

More from Hacker News

Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection

Hi HN! This is Ben from Reality Defender (https://www.realitydefender.com). We build real-time multimodal and multi-model deepfake detection for Fortune 100s and governments all over the world. (We even won the RSAC Innovation Showcase award for our work: https://www.prnewswire.com/news-releases/reality-defender-wins-most-innovative-startup-at-rsa-conference-2024-innovation-sandbox-302137326.html)

Today, we're excited to share our public API and SDK, allowing anyone to access our platform with two lines of code: https://www.realitydefender.com/api

Back in W22, we launched our product to detect AI-generated media across audio, video, and images: https://news.ycombinator.com/item?id=30766050

That post kicked off conversations with devs, security teams, researchers, and governments. The most common question: "Can we get API/SDK access to build deepfake detection into our product?"

We've heard that from solo devs building moderation tools, fintechs adding ID verification, founders running marketplaces, and infrastructure companies protecting video calls and onboarding flows. They weren't asking us to build anything new; they simply wanted access to what we already had so they could plug it in and move forward.

After running pilots and engagements with customers, we're finally ready to share our public API and SDK. Now anyone can embed deepfake detection with just two lines of code, starting at the low price of free.

https://www.realitydefender.com/api

Our new developer tools support detection across images, voice, video, and text, with the first two available on the free tier. If your product touches KYC, UGC, support workflows, communications, marketplaces, or identity layers, you can now embed real-time detection directly in your stack. It runs in the cloud, and longstanding clients using our platform have also deployed on-prem, at the edge, or on fully airgapped systems.

SDKs are currently available in Python, Java, Rust, TypeScript, and Go. The first 50 scans per month are free, with usage-based pricing beyond that. If you're working on something that requires other features or streaming access (like real-time voice or video), email us directly at yc@realitydefender.com

Much has changed since 2022. The threats we imagined back then are now showing up in everyday support tickets and incident reports. We've seen voice deepfakes targeting bank call centers to commit real-time fraud; fabricated documents and AI-generated selfies slipping through KYC and IDV onboarding systems; and fake dating profiles, AI-generated marketplace sellers, and "verified" influencers impersonating real people. Political disinformation videos and synthetic media leaks have triggered real-world legal and PR crises. Even reviews, support transcripts, and impersonation scripts are increasingly generated by AI. Detection has proven harder than we expected when we began in 2021: new generation methods emerge every few weeks and invalidate prior assumptions.

This is why we are committed to building every layer of this ourselves. We don't license or white-label detection models; everything we deploy is built in-house by our team.

Since our original launch, we've worked with tier-one banks, global governments, and media companies to deploy detection inside their highest-risk workflows. But we've always believed the need isn't limited to large institutions; it's everywhere. It showed up in YC office hours, in early bug reports, and in group chats after our last HN post.

We've taken our time to make sure this was built well, flexible enough for startups, and battle-tested enough to trust in production. The API you can use today is the same one powering many of our enterprise deployments.

Our goal is to make Reality Defender feel like Stripe, Twilio, or Plaid: an invisible, trusted layer you can drop into your system to protect what matters. We see deepfake detection as a key component of critical infrastructure, and like any good infrastructure, it should be modular, reliable, and boring (in the best possible way).

Reality Defender is already in the Zoom marketplace and will be on the Teams marketplace soon. We will also power deepfake detection for identity workflows, support platforms, and internal trust and safety pipelines.

If you're building something where trust, identity, or content integrity matter, or if you've run into weird edge cases you can't explain, we'd love to hear from you.

You can get started here: https://realitydefender.com/api

Or you can try us for free two different ways:

1) 1-click add to Zoom / Teams to try in your own calls immediately.

2) Email us up to 50 files at yc@realitydefender.com and we'll scan them for you, no setup required.

Thanks again to the HN community for helping launch us three years ago. It's been a wild ride, and we're excited to share something new. We live on HN ourselves and will be here for all your feedback. Let us know what you think!
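
To make the "two lines of code" claim concrete, a purely illustrative sketch follows. The package name, client class, method, and result fields below are hypothetical placeholders and are not taken from Reality Defender's documented SDK; the actual API is at https://www.realitydefender.com/api.

    # Hypothetical sketch only; every name below is a placeholder, not the real SDK API.
    from realitydefender_sdk import RealityDefenderClient  # hypothetical package and class

    client = RealityDefenderClient(api_key="YOUR_API_KEY")   # hypothetical constructor
    result = client.detect_file("onboarding_selfie.jpg")     # hypothetical method
    print(result.verdict, result.confidence)                 # hypothetical result fields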
