I’m going to start this with something I need you to know upfront: I asked AI to help me write this blog post. And it’s about why you shouldn’t blindly trust AI.
Take a moment with that.
I’m not being hypocritical—I’m making a point. AI is a tool. It’s a useful one. One I use regularly. But a tool is not the same as a source of truth, and somewhere along the way, a whole lot of people started treating it like it was.
Here’s what inspired this post.
The Cow’s Milk Incident
A chiropractor I know had a patient come in for treatment. While on the table, the patient casually mentioned that she just needed to drink more cow’s milk. I know this to be completely false. But more importantly, my friend, being an actual healthcare professional who knows things, knew this was not accurate. Adults do not have a biological requirement for cow’s milk. Most of the world’s adult population is lactose intolerant, yet we drink it because of decades of cultural habit and one of the most successful marketing campaigns in food industry history (Got Milk, anyone?).
So, my friend decided to ask AI the same question. And AI said cow’s milk is something human adults need in their diet. So incredibly wrong.
When my friend pushed back, looking for more accurate information, AI changed its answer. It pivoted to say that yes, adults drink cow’s milk, but mainly because of culture and marketing.
So let’s be clear about what happened here: AI gave incorrect health information. And then, only when pushed by a human with actual knowledge who challenged it, did it reverse course and agree.
That’s not expertise. That’s a very sophisticated people-pleaser.
Why This Is a Problem
We are living in a moment where AI can produce a confident, well-formatted, grammatically perfect answer to almost any question you throw at it—in about four seconds. And because it looks authoritative, people assume it is authoritative. Yet it often isn’t.
AI is trained on massive amounts of data from across the internet. The internet contains a lot of accurate information. But it also contains a remarkable amount of nonsense, outdated research, contested claims, and outright misinformation. AI doesn’t always know the difference. It pattern-matches and produces responses that sound right based on what it has been trained on. And if what it was trained on was wrong, the answer will be wrong too. Confidently and fluently wrong.
The scarier part? When AI is wrong and no one pushes back, it doesn’t self-correct. It just keeps going. It’s only when a human with actual knowledge steps in and says “no, that’s not right” that things get course-corrected.
Which means the humans still need to be in the room.
The Expertise You Can’t Prompt Your Way Into
Here’s what AI doesn’t have and can’t replicate: years of training in a specific field. Take clinical experience, for example. The nuance that comes from seeing hundreds of patients, clients, and cases isn’t something you can Google. Real expertise comes from patterns you only recognize after that kind of repetition, and from having something real on the line if you get it wrong.
Your chiropractor went to school. Your doctor has clinical hours and ongoing education requirements. Your nutritionist had to pass exams and stay current with actual research. Your CPA knows federal and state tax law in a way that goes beyond looking it up.
These people have something AI fundamentally does not: stakes. They can be wrong and face consequences. That accountability is part of what makes expertise valuable. AI, on the other hand, faces no consequences for being wrong. It will give you an incorrect answer just as pleasantly as it gives you a correct one.
Where AI Genuinely Helps (And Where It Doesn’t)
I use AI. I’m not anti-AI, not even a little. When used well, it saves time, helps with brainstorming, drafts things faster than starting from scratch, and handles the kind of repetitive tasks that used to eat up entire afternoons.
But “helps with brainstorming” is a very different thing from “replaces expert knowledge.”
AI is great for:
- Getting a first draft down quickly
- Summarizing large amounts of information
- Helping you think through a problem or outline
- Answering general questions that you’ll then verify
- Automating repetitive, rule-based tasks
AI is not the right tool for:
- Medical, health, mental health, or nutritional decisions
- Legal advice
- Financial guidance
- Anything where being wrong has real consequences for real people
The test I’d encourage you to apply when using AI is this: If the output turned out to be wrong, what happens? If the answer is “not much”, then the AI output is probably fine. But if the answer is “someone’s health, money, legal standing, or safety could be affected”, please call an actual human.
The Verification Step Nobody Does Anymore (But Should)
There’s a habit that used to be standard practice: verification. You looked something up, and then you checked it against another source. You consulted a credentialed expert for a second opinion. You did actual research.
AI hasn’t made verification less important. It’s made it more important. Because AI sounds authoritative even when it’s wrong, the instinct to double-check gets suppressed. Why verify when the answer came so quickly and sounded so confident?
Because confidence is not the same as correctness. And fluency is not the same as accuracy.
The next time AI gives you a piece of health, legal, financial, or any other high-stakes information, treat it like a starting point, not a conclusion. Find a human who knows the thing you just asked AI about and then ask them. Let the AI help you write the email to schedule the appointment.
Sorry, AI—You Still Need Us
I’ll end where I started.
AI is a remarkable tool. I use it, I recommend it, and I believe it genuinely makes certain parts of work and life easier. But it is not a replacement for human expertise, human judgment, or human accountability.
The chiropractor’s patient came in ready to drink more cow’s milk because someone, or something, told her that was the answer. The real answer required a human in the room who knew better, was willing to say so, and couldn’t just flip their response when challenged.
That’s not a small thing; that’s the whole thing.
So use the tools, but verify the answers. And please, for the love of all things, keep using the humans.