Show HN: Modelence – Supabase for MongoDB
Hacker News (score: 22)
As Karpathy (and many of us) noted, getting from prototype to production is mostly painful integration work. The pieces exist, but stitching them together reliably is the hard part: https://x.com/karpathy/status/1905051558783418370. He also talks about this in his YC AI Startup School talk: https://www.youtube.com/watch?feature=shared&t=1940&v=LCEmiR...
We intend to fill those gaps! What you get out of the box (a rough sketch in code follows the list):
- Authentication / user management
- Database
- Email integration (3rd party, but things like user verification emails work out of the box)
- AI integration
- Cron jobs
- Monitoring / Telemetry
- Configs & secrets
- Analytics (coming soon)
- File uploads (coming soon)
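To make that list concrete, here's a hypothetical sketch of what a Modelence-style module might look like, wiring a Mongo-backed store to a built-in cron job. The `Module`, `Store`, and `cronJob` names and signatures are our illustrative assumptions, not the framework's documented API; check the docs for the real surface.

```ts
// Hypothetical sketch only: the "modelence/server" imports and their
// signatures below are assumptions for illustration, not the actual API.
import { Module, Store, cronJob } from "modelence/server";

// A MongoDB-backed collection with a typed schema (assumed API).
const signups = new Store("signups", {
  schema: {
    email: { type: String, required: true },
    createdAt: { type: Date, required: true },
    verifiedAt: { type: Date, optional: true },
  },
});

export default new Module("signups", {
  stores: [signups],
  // Server-side queries callable from any frontend (assumed shape).
  queries: {
    async pending() {
      return signups.fetch({ verifiedAt: { $exists: false } });
    },
  },
  // Built-in cron: nightly purge of stale, unverified signups.
  cronJobs: [
    cronJob("purgeStaleSignups", { schedule: "0 3 * * *" }, async () => {
      const cutoff = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
      await signups.deleteMany({
        verifiedAt: { $exists: false },
        createdAt: { $lt: cutoff },
      });
    }),
  ],
});
```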
How it runs: A Node.js backend with MongoDB. It's frontend-agnostic, so you can use our minimal Vite + React starter or drop Modelence behind an existing Next.js (or any) frontend.
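"Frontend-agnostic" just means the backend speaks HTTP, so any client can call it. A minimal sketch, assuming a hypothetical `/api/signups/pending` route exposed by the module above (the actual route shape is ours, not Modelence's):

```ts
// Any frontend can talk to the Node backend over plain HTTP; no framework
// coupling required. The route path here is a hypothetical example.
async function fetchPendingSignups(): Promise<Array<{ email: string }>> {
  const res = await fetch("/api/signups/pending");
  if (!res.ok) throw new Error(`Backend error: ${res.status}`);
  return res.json();
}

// Works the same from React, Next.js, or a plain <script> tag.
fetchPendingSignups().then((rows) => console.log(rows.length, "pending"));
```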
We're also building a managed cloud, similar to what Vercel is for Next.js, except Modelence focuses on the backend instead of the frontend (Vercel is great for content sites like landing pages and blogs, but workloads with persistent connections and complex backend logic outgrow it quickly). You can find a quick demo here: https://www.youtube.com/watch?v=S4f22FyPpI8
We're looking for early users (especially TS teams on MongoDB). Tell us what's missing, what's confusing, and what you'd want before trusting this in prod. Happy to answer anything!
More from Hacker News
Show HN: Hc: an agentless, multi-tenant shell history sink
This project is a tool for engineers who live in the terminal and are tired of losing their command history to ephemeral servers or fragmented `.bash_history` files. If you're jumping between dozens of boxes, many of which might be destroyed an hour later, your "local memory" (the history file) is essentially useless. This tool builds a centralized, permanent brain for your shell activity, ensuring that a complex one-liner you crafted months ago remains accessible even if the server it ran on is long gone.

The core mechanism is "zero-touch" capture at the connection gateway level. Instead of installing logging agents or scripts on every target machine, the tool reconstructs your terminal sessions from the raw recording files generated by the proxy you use to connect. This in-flight capture gives you a high-fidelity log of every keystroke and output without ever touching the configuration of the remote host. It's a passive way to build a personal knowledge base while you work.

To handle the reality of context-switching, the tool is designed with a "multi-tenant" architecture. For an individual engineer, this isn't about managing different users but about isolating project contexts: history is automatically categorized by the organization or project tags defined at the gateway. This keeps your work for different clients or personal side projects in separate buckets, so you don't have to wade through unrelated noise when you're looking for a specific solution.

In true nerd fashion, the search interface stays exactly where you want it: in the command line. There is no bloated web UI to slow you down. The tool turns your entire professional history into a searchable, greppable database accessible directly from your terminal.

Please read the full story here: https://carminatialessandro.blogspot.com/2026/01/hc-agentless-multi-tenant-shell-history.html
Show HN: Ayder – HTTP-native durable event log written in C (curl as client)
Hi HN,

I built Ayder: a single-binary, HTTP-native durable event log written in C. The wedge is simple: curl is the client (no JVM, no ZooKeeper, no thick client libs).

There's a 2-minute demo that starts with an unclean SIGKILL, then restarts and verifies that offsets and data are still there.

Numbers (3-node Raft, real network, sync-majority writes, 64B payload): ~50K msg/s sustained (wrk2 @ 50K req/s), client P99 ~3.46ms. Crash recovery after SIGKILL is ~40–50s with ~8M offsets.

The repo link has the video, benchmarks, and quick start. I'm looking for a few early design partners (any event ingestion/streaming workload).
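Since the pitch is that plain HTTP is the client protocol, producing and consuming should be nothing more than a couple of requests. The endpoint paths and payload format below are guesses for illustration only; the repo's quick start has the real ones.

```ts
// Illustrative only: Ayder's actual routes and payload format are in its
// quick start; the paths below are assumptions for this sketch.
const BASE = "http://localhost:8080";

// Append an event to a topic (hypothetical POST route).
async function produce(topic: string, payload: string): Promise<void> {
  const res = await fetch(`${BASE}/topics/${topic}/records`, {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: payload,
  });
  if (!res.ok) throw new Error(`produce failed: ${res.status}`);
}

// Read back from a given offset (hypothetical GET route).
async function consume(topic: string, offset: number): Promise<string> {
  const res = await fetch(`${BASE}/topics/${topic}/records?offset=${offset}`);
  if (!res.ok) throw new Error(`consume failed: ${res.status}`);
  return res.text();
}

produce("events", "hello").then(() => consume("events", 0)).then(console.log);
```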
Useful patterns for building HTML tools
Migrating to Positron, a next-generation data science IDE for Python and R