Building an AI Chatbot That Diagnoses Car Problems: Lessons from AutoDetective.ai
I built an AI chatbot that diagnoses car problems and generates repair content. Here's what I learned about trust, accuracy, and domain-specific AI.
Most AI products target developers. AutoDetective.ai targets the person staring at a check engine light in a parking lot.
I built AutoDetective.ai because the car repair information landscape is broken. You search your symptoms, land on a forum post from 2014, and read through forty replies before someone says "same problem, turned out to be the alternator." Maybe it was the alternator for that person. Maybe it wasn't. You have no way to know if their 2009 Civic's problem has anything to do with your 2018 CX-5.
AutoDetective.ai takes a different approach: conversational AI diagnosis that asks the right follow-up questions, ranks probable causes, estimates costs, and gives you actionable next steps before you ever talk to a mechanic.
The Problem with Car Repair Content
The existing options for someone with a car problem are terrible.
Forums are a graveyard of anecdotal advice. YouTube videos are 22 minutes long for a 3-minute answer. Repair estimate sites give you a range so wide it's meaningless. And the moment you walk into a shop without any context, you're at the mercy of whatever the service advisor decides to tell you.
What people actually need is a structured diagnostic conversation: what's the vehicle, what are the symptoms, when did it start, what changed. The same questions a good mechanic asks before they even pop the hood.
How AutoDetective Works
The experience starts with a chatbot. You describe your problem in plain language: "the plastic shield under my front bumper is hanging loose" or "grinding noise when I brake" or "car won't start but the lights work." No forms, no dropdowns, no year/make/model lookup tables.
The AI asks follow-up questions. Not generic ones: contextual ones based on what you just said. If you mention body shop work, it asks about the timing. If you describe a noise, it asks about speed and conditions. It's building a case file the way a diagnostic technician would.
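In production that routing is handled by the model itself, but the intent can be sketched in a few lines. The triggers and question wording below are illustrative, not AutoDetective's actual logic:

```python
# Hypothetical sketch of contextual follow-up routing. The real system
# lets the LLM decide, but the principle is the same: the next question
# depends on what the user just said.

FOLLOW_UPS = {
    "body shop": "When was the body shop work done, and did the problem start after it?",
    "noise": "At what speed does the noise occur, and does braking or turning change it?",
    "won't start": "Do the dash lights come on, and does the engine crank at all?",
}

def next_question(user_message: str) -> str:
    """Pick a contextual follow-up based on what the user mentioned."""
    text = user_message.lower()
    for trigger, question in FOLLOW_UPS.items():
        if trigger in text:
            return question
    # Fall back to the question every good mechanic asks first.
    return "Can you describe when the problem started and what changed around then?"
```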
Once it has enough information, it presents a structured summary: your vehicle, the problem, the symptoms. You confirm, and it runs the diagnosis.
The output is where things get interesting. You get:
- Severity and urgency ratings so you know if this is "fix it Saturday" or "don't drive it"
- Ranked probable causes with high, medium, and low probability tags, each with its own cost and labor estimate
- DIY repair steps if the fix is within reach for someone with basic tools
- Parts recommendations with links to buy what you need
- Nearby repair shops pulled from Google Places, with ratings and distance, for when DIY isn't the move
The diagnosis isn't a guess. It's a structured differential analysis based on the specific vehicle, specific symptoms, and specific context the user provided.
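The output described above maps naturally onto a typed structure. Field names here are illustrative, not AutoDetective's actual schema, but they show how severity, urgency, and probability-tagged causes fit together:

```python
from dataclasses import dataclass, field

# Order used to rank the high/medium/low probability tags.
RANK = {"high": 0, "medium": 1, "low": 2}

@dataclass
class ProbableCause:
    name: str
    probability: str               # "high" | "medium" | "low"
    est_cost_usd: tuple[int, int]  # (low, high) parts + labor estimate
    labor_hours: float

@dataclass
class Diagnosis:
    severity: str                  # "low" | "medium" | "high"
    urgency: str                   # "fix it Saturday" vs "don't drive it"
    causes: list[ProbableCause] = field(default_factory=list)
    diy_steps: list[str] = field(default_factory=list)

    def ranked_causes(self) -> list[ProbableCause]:
        """Most probable causes first, matching the high/medium/low tags."""
        return sorted(self.causes, key=lambda c: RANK[c.probability])
```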
The Content Engine
Every diagnosis generates a full article page, agentically derived from that individual real-world use case. Someone asks about a disconnected splash shield on a 2018 Mazda CX-5 after body shop work, and AutoDetective creates a detailed, indexable page covering that exact scenario: causes, diagnostic steps, repair options, cost breakdown, prevention tips.
These aren't templated pages with swapped-out keywords. Each one reflects the actual diagnostic conversation that produced it. The breadcrumb structure (Home > Mazda > Disconnected splash shield) organizes them into a browsable knowledge base that grows with every user interaction.
The result is a content library that covers the long tail of car problems in a way no editorial team could. Every weird edge case, every model-specific quirk, every "it only happens when it's cold and I'm turning left" scenario becomes a permanent resource.
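The breadcrumb-to-URL mapping is the simple part of this pipeline. A minimal sketch, assuming each page is addressed by make and problem (the helper names are mine, not the production code):

```python
import re

def slugify(text: str) -> str:
    """Lowercase and hyphenate: 'Disconnected splash shield' -> 'disconnected-splash-shield'."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def article_path(make: str, problem: str) -> str:
    # Mirrors the breadcrumb structure (Home > Make > Problem) as a URL path,
    # so every diagnosis slots into the browsable knowledge base.
    return f"/{slugify(make)}/{slugify(problem)}"
```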
Building AI for Non-Technical Users
The biggest lesson from AutoDetective has nothing to do with prompt engineering or model selection. It's about trust calibration.
When a developer uses an AI coding assistant and it hallucinates a function that doesn't exist, the developer catches it. When a car owner gets told their grinding noise is "probably just brake dust" and it's actually a worn rotor, they might drive on it for another month and cause real damage.
The solution isn't to hedge everything with disclaimers. It's to be structurally honest about confidence. The probability tags on each cause (high, medium, low) aren't cosmetic. They tell the user "we're pretty sure about this one, less sure about that one, and this third option is a long shot but worth checking." That's how a good mechanic talks.
The severity and urgency ratings serve the same purpose. "Low severity, address soon" means something different than "high severity, immediate attention." Users don't need to understand the AI to trust the output: they need the output to communicate its own certainty clearly.
Architecture Decisions That Mattered
AutoDetective is designed to work with any OAS-capable LLM interface. The current production system runs on GPT-5 class models, but the architecture doesn't lock you into a single provider. That was a deliberate choice: model capabilities are improving fast enough that you want the ability to swap without rewriting your application logic.
The stack is straightforward: DigitalOcean for hosting, PostgreSQL for data, OpenAI for inference. No exotic infrastructure. The conversational flow, the diagnostic logic, the article generation: it's all orchestration code on top of a capable model.
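The provider-swapping flexibility comes down to keeping the orchestration code behind a thin interface. A sketch of that seam, with the adapter details (client construction, model name) as assumptions rather than the actual implementation:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface the orchestration code depends on: any provider
    that can complete a chat conversation can sit behind it."""
    def complete(self, messages: list[dict[str, str]]) -> str: ...

class OpenAIChat:
    """Illustrative adapter for the OpenAI Python client (v1-style API)."""
    def __init__(self, client, model: str):
        self.client = client
        self.model = model

    def complete(self, messages: list[dict[str, str]]) -> str:
        resp = self.client.chat.completions.create(model=self.model, messages=messages)
        return resp.choices[0].message.content

class EchoModel:
    """Stand-in model for tests and local dev; returns the last user message."""
    def complete(self, messages: list[dict[str, str]]) -> str:
        return messages[-1]["content"]

def run_diagnosis_turn(model: ChatModel, history: list[dict[str, str]]) -> str:
    # Application logic talks only to ChatModel, never to a specific provider.
    return model.complete(history)
```

Swapping providers then means writing one new adapter, not rewriting the diagnostic flow.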
The Google Places integration for nearby shops was one of those features that seemed like a nice-to-have and turned out to be essential. When someone gets a diagnosis that says "you need a mechanic for this one," the next thing they want is a mechanic. Reducing that friction from "open Google Maps, search for auto repair" to "here are four shops within 12 miles, sorted by rating" closes the loop.
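Once the Places response comes back, the ranking itself is small. A sketch of the filter-and-sort step on already-fetched results (field names mimic what the integration uses, but are illustrative):

```python
def rank_shops(shops: list[dict], max_miles: float = 15.0, top_n: int = 4) -> list[dict]:
    """Keep shops within max_miles, best-rated first; ties broken by distance.

    `shops` stands in for parsed Google Places results; the field names
    here are assumptions for illustration.
    """
    nearby = [s for s in shops if s["distance_miles"] <= max_miles]
    return sorted(nearby, key=lambda s: (-s["rating"], s["distance_miles"]))[:top_n]
```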
Monetization
The revenue model is affiliate links for parts and a lead-generation system that connects users directly to mechanics. The lead-gen side is still partially manual: users fill out a form with their diagnosis details, and we connect them to a shop. It works, but there's a clear path to automating the matchmaking.
The key insight is that the monetization aligns with the user's intent. Someone who just got a diagnosis that says "buy OEM plastic fasteners, $15" wants to buy fasteners right now. Someone whose diagnosis says "this needs a professional" wants a mechanic right now. You're not selling against the user's interest: you're completing the transaction they already decided to make.
What I'd Do Differently
I'd invest more in the structured data layer earlier. The diagnostic conversations produce incredibly rich data about real-world car problems: which symptoms correlate with which causes on which vehicles, what the actual repair costs end up being, which problems are DIY-friendly and which aren't. That data has compounding value, and I wish I'd built the infrastructure to capture and analyze it from day one.
I'd also spend more time on the "I'm not sure" paths. The chatbot is good at asking follow-up questions, but there are cases where the user genuinely doesn't know the answer: "I don't know if it's the front or rear brakes." The system handles that, but it could handle it more gracefully by adjusting probability weights rather than just widening the differential.
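One way to express "adjusting probability weights" is to scale each evidence update by the user's confidence, so an "I'm not sure" leaves the priors mostly intact instead of expanding the candidate list. This is illustrative math, not the production logic:

```python
def update_weights(weights: dict[str, float],
                   evidence: dict[str, float],
                   confidence: float) -> dict[str, float]:
    """Blend each cause's likelihood update by user confidence in [0, 1].

    confidence=1.0 applies the evidence multipliers in full;
    confidence=0.0 ("I'm not sure") leaves the priors unchanged.
    """
    updated = {
        cause: w * (1.0 + confidence * (evidence.get(cause, 1.0) - 1.0))
        for cause, w in weights.items()
    }
    total = sum(updated.values())
    # Renormalize so the weights remain a probability distribution.
    return {cause: w / total for cause, w in updated.items()}
```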
The Bigger Picture
AutoDetective is a bet on a specific thesis: domain-specific AI applications will outperform general-purpose tools in verticals where context matters more than capability. ChatGPT can answer car questions. But it can't pull up nearby shops, estimate your specific repair cost, generate a shareable case report, or build a content page that helps the next person with the same problem.
The moat isn't the model. It's the workflow, the data layer, the integrations, and the understanding of what someone standing in a parking lot with a weird noise actually needs.
Check it out at autodetective.ai.