Show HN: We developed an AI tool to diagnose car problems
We built AutoAI – an AI tool that tells you what's wrong with your car in plain English.
Just enter:
Your car’s make/model/year
Any OBD2 error codes (optional, e.g. P0420, P0171)
Any symptoms you're noticing (e.g. “rough idle” or “weird sound when starting”)
And we’ll tell you:
The most likely issue
How to verify it yourself
Whether it’s a DIY fix or shop-worthy
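For illustration only, here is a rough sketch in Python of the shape of the inputs and outputs described above. The class and field names are hypothetical and are not AutoAI's actual API; they simply mirror the inputs (make/model/year, optional OBD2 codes, symptoms) and outputs (likely issue, how to verify, DIY vs. shop) listed in this post.

    # Hypothetical sketch only -- not AutoAI's real API.
    # It just shows the shape of the inputs and outputs described in the post.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DiagnosticRequest:
        make: str                                             # e.g. "Toyota"
        model: str                                            # e.g. "Camry"
        year: int                                             # e.g. 2015
        obd2_codes: List[str] = field(default_factory=list)   # optional, e.g. ["P0420"]
        symptoms: List[str] = field(default_factory=list)     # e.g. ["rough idle"]

    @dataclass
    class DiagnosticResult:
        likely_issue: str          # most likely cause, in plain English
        how_to_verify: List[str]   # steps to confirm it yourself
        diy_or_shop: str           # "DIY" or "shop"

    # Example request matching the inputs listed above
    request = DiagnosticRequest(
        make="Toyota", model="Camry", year=2015,
        obd2_codes=["P0420"], symptoms=["rough idle"],
    )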
No more endless Googling or forum-hopping. Built for car owners, tinkerers, and pros who want fast, reliable answers. Powered by a repair-trained AI using real-world automotive data.
We’re trying to make diagnostics smarter, not replace your mechanic – just make you way more informed before spending money.
Would love feedback or crazy edge-case inputs to improve it.
More from Show HN
Show HN: KeyEnv – CLI-first secrets manager for dev teams (Rust)

Hi HN,

I built KeyEnv because I was tired of the "can you Slack me the Stripe key?" workflow.

The problem: My team's secrets lived in a mix of Slack DMs, shared Google Docs, and .env files that definitely weren't in .gitignore at some point. Enterprise tools like Vault required more DevOps time than we had. Doppler was close but felt heavier than we needed.

What KeyEnv does:

    keyenv init              # link project
    keyenv pull              # sync secrets to local .env
    keyenv run -- npm start  # inject secrets, run command

That's basically it. Secrets are encrypted client-side (AES-256-GCM) before leaving your machine. Zero-knowledge architecture: we can't read your secrets even if we wanted to.

Technical details:
- Single Rust binary, no runtime dependencies
- Works offline (cached secrets)
- RBAC for teams (owner/admin/member/viewer)
- Service tokens for CI/CD
- Full audit trail

Honest tradeoffs:
- SaaS only, no self-hosted option
- Fewer integrations than Doppler
- If you need dynamic secrets or PKI, use Vault

Pricing: Free tier (3 projects, 100 secrets), $12/user/month for teams.

Would love feedback on the CLI UX and any rough edges. Happy to answer questions about the architecture.

https://www.keyenv.dev
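The client-side encryption claim above maps onto a standard pattern. Here is a minimal Python sketch of that pattern using the `cryptography` package's AESGCM primitive; this is not KeyEnv's actual implementation (KeyEnv is Rust), just an illustration of encrypting a secret with AES-256-GCM before anything leaves the machine.

    # Minimal sketch of the client-side AES-256-GCM pattern described above.
    # Not KeyEnv's actual code; key management is deliberately simplified.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # stays on the client
    aesgcm = AESGCM(key)

    secret = b"STRIPE_SECRET_KEY=sk_live_..."   # placeholder value
    nonce = os.urandom(12)                      # 96-bit nonce, unique per message
    ciphertext = aesgcm.encrypt(nonce, secret, None)

    # Only (nonce, ciphertext) would leave the machine. Without the key,
    # the server cannot decrypt -- the "zero-knowledge" property claimed above.
    assert aesgcm.decrypt(nonce, ciphertext, None) == secret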
Show HN: WebTerm – Browser-based terminal emulator
Show HN: WebGPU React Renderer Using Vello

I've built a package to use Raph Levien's Vello as a blazing-fast 2D renderer for React on WebGPU. It uses WASM to hook into the Rust code.
Show HN: On the edge of Apple Silicon memory speeds

I have developed an open-source CLI tool for Apple Silicon macOS. It measures memory speeds in different ways, as well as latency. It can achieve up to 96-97% efficiency on read speed on a base M4, which is advertised as 120 GB/s. All memory operations are written in assembly.

I would really appreciate results from different CPUs to see how the benchmark behaves on them. I have been able to test this on M1 and M4.

Command (close all applications before running):

    memory_benchmark -non-cacheable -count 5 -output results.JSON

This will generate a JSON file containing copy_gb_s, read_gb_s, and write_gb_s statistics sections.

Example from an M4 with 10 loops:

    "copy_gb_s": {
      "statistics": {
        "average": 106.65421233311835, "max": 106.70240696071005,
        "median": 106.65069297260811, "min": 106.6336774994254,
        "p90": 106.66606919223108, "p95": 106.68423807647056,
        "p99": 106.69877318386216, "stddev": 0.01930653530818627
      },
      "values": [
        106.70240696071005, 106.66203166240008, 106.64410802226159,
        106.65831409449595, 106.64148106986977, 106.6482935780762,
        106.63974821679058, 106.65896986001393, 106.6336774994254,
        106.65309236714002
      ]
    },
    "read_gb_s": {
      "statistics": {
        "average": 115.83111228356601, "max": 116.11098114619033,
        "median": 115.84480882265643, "min": 115.56959026587722,
        "p90": 115.99667266786554, "p95": 116.05382690702793,
        "p99": 116.09955029835784, "stddev": 0.1768243167963439
      },
      "values": [
        115.79154681380165, 115.56959026587722, 115.60574235736468,
        115.72112860271632, 115.72147129262802, 115.89807083151123,
        115.95527337086908, 115.95334642887214, 115.98397172582945,
        116.11098114619033
      ]
    },
    "write_gb_s": {
      "statistics": {
        "average": 65.55966046805113, "max": 65.59040040480241,
        "median": 65.55933583741347, "min": 65.50911885624045,
        "p90": 65.5840272860955, "p95": 65.58721384544896,
        "p99": 65.58976309293172, "stddev": 0.02388146120866979
      }
    }

The patterns benchmark also shows a bit more about memory speeds. Command:

    memory_benchmark -patterns -non-cacheable -count 5 -output patterns.JSON

Example from an M4 with 100 loops:

    "sequential_forward": {
      "bandwidth": {
        "read_gb_s": {
          "statistics": {
            "average": 116.38363691482549, "max": 116.61212708384109,
            "median": 116.41264548721367, "min": 115.449510036971,
            "p90": 116.54143114134801, "p95": 116.57314206456576,
            "p99": 116.60095068065866, "stddev": 0.17026641589059727
          }
        }
      }
    },
    "strided_4096": {
      "bandwidth": {
        "read_gb_s": {
          "statistics": {
            "average": 26.460392735220456, "max": 27.7722419653915,
            "median": 26.457051473208285, "min": 25.519925729459107,
            "p90": 27.105171215736604, "p95": 27.190715938337473,
            "p99": 27.360449534513144, "stddev": 0.4730857335572576
          }
        }
      }
    },
    "random": {
      "bandwidth": {
        "read_gb_s": {
          "statistics": {
            "average": 26.71367836895143, "max": 26.966820487564327,
            "median": 26.69907406197067, "min": 26.49374804466308,
            "p90": 26.845236287807374, "p95": 26.882004355057887,
            "p99": 26.95742242818151, "stddev": 0.09600564296001704
          }
        }
      }
    }

Thank you for reading :)
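To compare results across machines, a tiny script can pull the headline numbers out of the generated file. A sketch in Python, assuming the copy_gb_s / read_gb_s / write_gb_s sections contain a nested "statistics" object as in the excerpts above; the file may wrap them in additional structure, which the recursive lookup below tries to tolerate.

    # Sketch: summarize bandwidth statistics from results.JSON.
    # Assumes sections like "read_gb_s" contain a "statistics" object as
    # shown above; the exact surrounding structure may differ.
    import json

    with open("results.JSON") as f:
        data = json.load(f)

    def find_section(node, name):
        """Recursively look for a section such as "read_gb_s" anywhere in the JSON."""
        if isinstance(node, dict):
            if name in node:
                return node[name]
            for value in node.values():
                found = find_section(value, name)
                if found is not None:
                    return found
        elif isinstance(node, list):
            for value in node:
                found = find_section(value, name)
                if found is not None:
                    return found
        return None

    for section in ("copy_gb_s", "read_gb_s", "write_gb_s"):
        stats = (find_section(data, section) or {}).get("statistics", {})
        if stats:
            print(f"{section}: avg {stats['average']:.1f} GB/s "
                  f"(min {stats['min']:.1f}, max {stats['max']:.1f})")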