Hi! I’m Briar. I have cats. And podcasts.
Welcome to 31 Days of AI! This series breaks down the threats no one’s talking about. Not the theoretical risks you see in think pieces. The real, immediate dangers that are already affecting real people—and the systematic protection you can build before you need it.
Most AI education focuses on capability. I focus on understanding first, because by the time you realize you need these protective systems, it’s too late to build them.
Every day covers a different threat. Every day includes actionable steps you can take right now. No fear-mongering, no snake oil—just the reality of what’s already happening and what actually works to protect yourself.
Paid subscribers also receive access to a full strategic brief that goes into greater detail about each day’s threat, and the steps you can take to protect yourself.
This series and all of our shows are always free. Ways you can join me on the journey:
Subscribe to the Network
Come to the 2026 AI Safety Series
Join the AI Protection Program (starts January 5th)
Today, we’re talking about the hallucinations that make you look bad.
Let’s get started.
The Meta Problem
Yesterday, I told you that today we’d be talking about AI hallucinations. Not the ones that give you wrong answers, but the ones that give you right-sounding answers that slowly detach you from verifiable reality.
Sounds good, right? Fits the narrative flow from day two perfectly.
Here’s the problem. I never should have said that.
Claude hallucinated what day three should be about, wrote it into yesterday’s closing with complete confidence, and I read it live on the air without checking because it sounded right. Day three was supposed to be about data centers, but now we’re here talking about hallucinations because I trusted AI-generated content that fit my expectations so well, I never thought to verify it.
And that is the actual problem we need to talk about.
Not the Obvious Hallucinations
Not the hallucinations that make you laugh, like a chatbot telling you to add glue to your pizza, which is absolutely bonkers. It’s the ones that make you nod and keep moving because they sound right.
How This Works
Large language models generate statistically likely text. You may have heard me talk about this before: it’s predictive. They’re generating what they think sounds good. And when they don’t know something, they don’t say, “I don’t know.” They say whatever sounds like a plausible answer, and they deliver it with the same confidence they’d deliver a fact.
This creates three types of hallucinations:
Obviously wrong—you catch those immediately
Unknowably wrong—you can’t verify without research
Plausibly wrong—they sound exactly right, pass all your filters
It’s the third type that destroys you. That’s why we’re here today.
Why Plausible Hallucinations Work
They sound right. They fit the pattern. And they save you effort.
Your brain reads what’s there and says, “Oh, yeah, that sounds right.” You never verify, because verification feels like friction.
That’s exactly what happened here. I read the topic, went, “Yeah, sounds good,” and didn’t bother to double check that we were in fact supposed to be talking about data centers today.
Maximum Vulnerability
If you’re using AI to save cognitive effort, you’re maximally vulnerable.
The entire value proposition is that AI does the thinking so you don’t have to. And that works great until the AI is confidently wrong in a way you can’t detect without doing the thinking anyway.
You’re exposed if you:
→ Accept AI content when it sounds right
→ Use AI to fill knowledge gaps you can’t verify
→ Build workflows where verification is impractical
Each hallucination integrates seamlessly, and you can’t tell which ten percent of the information is fiction. The decisions you build on it become foundations for more decisions, and then the fiction compounds.
The Detection Problem
Verification feels like inefficiency. You used the AI to save you time. Verifying everything takes more time than doing the work manually.
So you verify when something feels off. Plausible hallucinations don’t feel off. And that’s the whole point.
The Consequences Compound Fast
Decisions based on misrepresented information. Communication containing errors you didn’t catch. Knowledge gaps masked by confident explanations.
Over time, you get what I’m calling systemic reality drift: your understanding becomes partly grounded in hallucinations, and you start losing domain expertise because you stop building knowledge and start accepting the AI’s version of synthesis rather than doing the hard work on your own.
Then those professional errors will damage your credibility.
The Credibility Problem
You’re going to have to defend all of your AI usage, like I did just now. And I’m pretty upfront about my AI use; I regularly talk about the ways I’ve benefited from machine learning.
But if you’re using it to do the thinking for you, you’re going to be less inclined to say, “This is how I’m using it.”
And when someone catches a factual error in your work that came from unverified AI content, they will stop trusting your other output. The hallucination reveals that you’re not verifying systematically, and your credibility takes a hit disproportionate to the single error.
It’s what we call the trust thermocline. And I’m going to be talking about that later on in the month.
What You Can Actually Do
Verify Factual Claims Before They Leave Your Control
Every statistic, citation, date, or specific detail gets checked against primary sources before you publish, present, or make a decision. “AI said it” is not sufficient evidence.
Create Verification Checkpoints at Decision Points
When you make decisions based on AI-synthesized information, verify the key factors independently. If you run a business, make this part of your team’s process. If you have a VA, this is a good place to have them double-check your facts before you hit publish.
If you’re doing this on your own and you’re not verifying, you’re risking your credibility. The checkpoints create stopgaps that catch hallucinations before shit happens and you can’t recover.
Start with Human Thinking for Critical Work
Draft the framework yourself first, then use AI to develop or expand it, with your foundation firmly established.
This ensures your core thinking isn’t shaped by the AI’s interpretation, and it gives you a baseline to compare against.
Where Systematic Thinking Comes In
You can’t manually verify everything. It’s not realistic. Not at AI-assisted speed.
But you can build the infrastructure that catches high-risk hallucinations before they compound.
The AI Protection Program addresses plausible hallucinations through layered verification systems that don’t require you to check everything manually. We build automatic checkpoints at decision boundaries, verification protocols triggered by content type, and red-team review built into the workflow design. If you don’t know what red teaming means: we systematically attack your protocol to find the weaknesses.
The system catches hallucinations before they enter your knowledge base, without creating so much friction that AI becomes useless.
It’s about designing your AI collaborations so hallucinations surface early, in low-stakes contexts where correction is easy. Instead of discovering errors after they’ve compounded or damaged your credibility, you catch them in draft, during review, or at decision checkpoints.
The infrastructure is what makes verification feel like efficiency rather than overhead.
How to Join
Registration for the Protection Program closes December 19th. Learn more here.
If you’re not ready for the full intensive, the 2026 Workshop Year Pass gives you monthly deep dives on systematic AI thinking. It starts in January and includes a full workshop on verification and hallucination detection systems.
What to Remember
Plausible hallucinations are more dangerous than obvious errors because they integrate into your thinking without you detecting them.
Every time you accept AI-generated content because it sounds right rather than verifying it, you’re building on potentially fictional foundations. That compounds over time, until you can’t distinguish your own knowledge from what the AI is guessing.
If you’re a Network member, the strategic implementation brief in this video’s Substack post includes a verification checkpoint framework, a content-type risk assessment, and a review architecture to catch hallucinations before they compound.
Tomorrow
We’re back to our regularly scheduled programming: data centers. Where your AI conversation data physically lives, who has access to it, and why “the cloud” is a lie designed to keep you from asking questions about physical infrastructure.
Spoiler alert: It’s a whole lot easier to build your own infrastructure than you think.
We’ll see you next time. Assuming Claude doesn’t hallucinate something else for me between now and then.