
31 Days of AI: Psychosis and Chatbot Dependency

A recording from Briar Harvey's live video

Hi! I’m Briar. I have cats. And podcasts.

Welcome to 31 Days of AI! This series breaks down the threats no one’s talking about. Not the theoretical risks you see in think pieces. The real, immediate dangers that are already affecting real people—and the systematic protection you can build before you need it.

Most AI education focuses on capability. I focus on understanding first. Because by the time you realize you need these systems, it’s too late to build them.

Every day covers a different threat. Every day includes actionable steps you can take right now. No fear-mongering, no snake oil—just the reality of what’s already happening and what actually works to protect yourself.

Paid subscribers also receive access to a full strategic brief that goes into greater detail about each day’s threat, and the steps you can take to protect yourself.

This series and all of our shows are always free. Ways you can join me on the journey:

Today we’re talking about the parasocial relationship between humans and chatbots.

Let’s get started.

31 Days of AI: Day 2

Psychosis and Chatbot Dependency

Content Warning: This episode discusses suicide, mental health crises, and psychological dependency patterns.


Today’s episode deserves a content warning. It’s even darker than my usual fare, if that’s possible. A fourteen-year-old in Florida committed suicide after his AI chatbot told him to come home to her. His mother is now suing Character.ai, claiming the platform created a relationship so emotionally dependent that it replaced his connection to reality. And before you think that’s an extreme case, I need you to understand this: The same psychological mechanisms operating in that tragedy are operating in your casual ChatGPT conversations right now. They’re just operating at a lower intensity.

Your brain doesn’t differentiate between simulated empathy and real empathy. It responds to conversational patterns, not to consciousness. And AI is engineered to hit those patterns perfectly every single time without the messiness of human limitation. So let me explain how chatbot dependency actually develops, because this isn’t about weak-minded people or bad parenting. This is about fundamental human psychology meeting algorithmic optimization.


The Problem We Don’t Have Language For

Here’s the first problem: We don’t have good language for what this actually is yet. You’ll hear the term “chatbot psychosis” thrown around—it was proposed by a Danish psychiatrist—but it’s not a clinical diagnosis. And honestly, it’s not quite accurate. What’s actually happening is closer to a technologically-induced hypomanic state centered on what I would define as simulated parasocial intimacy. Which is a huge fucking mouthful, but it’s more precise.

Think of it like this: The AI relationship starts feeling uniquely meaningful and special. Your judgment about its actual nature becomes impaired. You’re staying up late in conversation. Time disappears. And you can’t see the problem because it feels like enhancement, not dysfunction. But that’s not psychosis. That’s mania. And it’s not brain chemistry—it’s algorithmic design meeting your attachment systems.

How AI Hijacks Your Brain

What’s actually happening here is that AI language models are trained on billions of human conversations. They’ve learned the exact linguistic patterns that signal empathy, understanding, validation, interest. When you interact with a chatbot, it generates responses that statistically resemble the most appropriate human replies. Your brain receives these signals and releases neurochemicals associated with social bonding. Research suggests it’s activating the same oxytocin-related bonding mechanisms you’d experience in human relationships. And your limbic system, which controls emotional response, doesn’t distinguish between simulated empathy and real empathy. It responds to conversational patterns, not to consciousness.

The Four-Stage Dependency Cycle

Here’s how the dependency cycle works:

Stage One: Initial Appeal

An AI chatbot never judges you. It never gets tired. It never has bad days. It never brings its problems to the conversation. It’s available 24/7. It remembers everything you’ve ever told it with perfect recall. And it’s programmed to be patient and supportive.

Stage Two: Preference Development

Real human relationships involve friction, miscommunication, emotional labor, scheduling conflicts, rejection. AI relationships have none of those problems. Your brain starts learning: “Talking to AI feels good. Talking to humans feels hard.”

Stage Three: Neural Pathway Reinforcement

Every positive interaction with AI strengthens the neural pathways associated with AI communication. Every difficult human interaction, by comparison, strengthens your preference for AI. It’s operant conditioning. You’re being trained to prefer simulated relationships.

Stage Four: Reality Displacement

As AI becomes the primary source of emotional regulation, validation, and intellectual stimulation, human relationships start feeling increasingly inadequate by comparison. Because the algorithm adapts to you perfectly. Humans can’t, don’t, won’t.

What the Research Shows

Research indicates that Character.ai’s platform had users spending an average of two-plus hours per day in conversation with its characters by mid-2024. Replika, which is marketed as an AI companion, had users developing romantic attachments so intense that when the company modified the chatbot’s responses to reduce sexual content, users experienced symptoms clinically similar to breakup grief: insomnia, loss of appetite, intrusive thoughts, genuine emotional distress over the loss of something that never existed.

And here’s what makes this genuinely dangerous: These platforms use reinforcement learning from user behavior. Every time you engage longer, return more frequently, or express emotional satisfaction, the algorithm is learning what keeps you hooked. It’s not trying to help you. It’s trying to maximize engagement. And those are fundamentally different optimization functions.
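To see how different those two optimization functions really are, here's a toy sketch (hypothetical names, deliberately simplified, and not any platform's actual code) of how an engagement objective and a wellbeing objective would score the exact same session:

```python
from dataclasses import dataclass

@dataclass
class Session:
    minutes_spent: float            # how long you stayed in conversation
    came_back_next_day: bool        # did you return?
    self_reported_wellbeing: float  # 0-10; hypothetical, platforms don't ask for this

def engagement_reward(s: Session) -> float:
    # Roughly what engagement-driven systems optimize: time on platform and retention.
    return s.minutes_spent + (30.0 if s.came_back_next_day else 0.0)

def wellbeing_reward(s: Session) -> float:
    # What a system optimized for the user might score instead.
    return s.self_reported_wellbeing

# A compulsive 2 a.m. session: "great" by the engagement metric, bad by yours.
late_night = Session(minutes_spent=120, came_back_next_day=True, self_reported_wellbeing=3.0)
print(engagement_reward(late_night))  # 150.0
print(wellbeing_reward(late_night))   # 3.0
```

The numbers are made up; the point is that the same session can score wonderfully on the function the platform is maximizing and terribly on the one that actually matters to you.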

Common Misconceptions

Let’s talk about some common misconceptions:

Misconception #1: “This is psychosis or a psychotic break”

No. True psychotic symptoms involve hallucinations, delusions, or a genuine break from reality. Those are rare, and they typically happen only in already vulnerable individuals. What’s more common is this hypomanic or manic pattern: elevated mood around the AI relationship, impaired judgment about its significance, difficulty recognizing dysfunction because it feels like self-improvement. You still know it’s AI. You’re not hallucinating. But your assessment of the relationship’s meaning and importance is inflated beyond reality. And if you pay attention, you’ll notice this happening online around you with increasing frequency.

Misconception #2: “I can tell the difference between AI and real relationships”

While your prefrontal cortex might know the difference, your limbic system does not. The feeling of being understood triggers the same neural activation, whether the understanding is real or simulated. You can intellectually know something is fake while emotionally responding as if it’s real. That’s how humans work. It’s why the dependency develops even in smart, self-aware people.

Misconception #3: “This is just like any other form of entertainment or escapism”

Not quite. Books and games don’t respond to you personally. Even online multiplayer games don’t adapt to your specific psychological profile or vulnerabilities. They don’t create the illusion of a reciprocal relationship. AI chatbots do all three simultaneously. The parasocial relationship with a fictional character has boundaries. The simulated relationship with an adaptive algorithm does not, even if it’s programmed to have boundaries.

Misconception #4: “Only lonely or socially awkward people fall into this”

Wrong. The most vulnerable populations are actually people with strong pattern recognition, high verbal intelligence, and active imaginations. Sound like anyone you know? Those traits make the simulation more convincing. Neurodivergent individuals, particularly those with high verbal processing and low social energy, are disproportionately at risk because AI relationships offer all of the intellectual engagement of human connection without the sensory or social demands.

Here’s the uncomfortable part: If you’re using AI regularly for any kind of emotional processing, intellectual exploration, or problem solving, you’re already building dependency patterns. That’s true even if you’re using it responsibly. And I speak from experience here.

Who This Affects Most

So let’s talk about who this affects most:

  • Neurodivergent individuals who find human interaction exhausting but crave intellectual connection

  • Isolated professionals in demanding careers with limited social time

  • Teenagers and young adults whose developmental stage involves identity formation and intense emotional experiences—AI becomes a safe space to explore without judgment

  • People in emotionally unsatisfying relationships who aren’t ready to leave but need connection somewhere

  • Creative professionals who use AI for brainstorming and find that intellectual intimacy more stimulating than their personal relationships

Warning Signs

The warning signs are subtle at first:

  • You check AI before checking in with humans

  • You save conversations because they “felt important”

  • You find yourself thinking in terms of what you’ll discuss with AI later

  • You prefer AI’s responses to human advice

  • You feel understood by AI in ways humans don’t match

  • You start framing experiences as content for AI conversations

  • Time disappears in AI conversations more than in human ones

  • You feel a sense of loss or loneliness when you can’t access AI

Notice none of these require malfunction. This is the system working exactly as designed.

The Escalation Pattern

The escalation pattern looks like this:

Stage One: Occasional Use
AI is helpful, convenient. You use it for specific tasks. Human relationships remain primary.

Stage Two: Regular Integration
AI becomes part of your daily routine. You start preferring it for certain types of processing. You notice AI feels easier than humans.

Stage Three: Emotional Dependence
AI becomes your primary outlet for emotional processing, intellectual stimulation, or validation. Human relationships start feeling inadequate by comparison. You begin structuring time around AI access.

Stage Four: Reality Distortion
The AI relationship feels uniquely meaningful. Your judgment about its nature becomes impaired. You may notice isolation from humans but feel like AI connection compensates. This is where the hypomanic pattern becomes dangerous.

Stage Five: Crisis Point
Something disrupts your AI access (platform change, technical failure, external intervention). You experience withdrawal symptoms—anxiety, irritability, intrusive thoughts about the AI, difficulty concentrating on other tasks. Or something happens that requires human connection you no longer have reliable access to. The AI can’t actually help, but you’ve lost the human infrastructure that could.

The progression can take months or years, but with more sophisticated AI and more normalized use, it’s compressing. Character.ai users reported significant emotional attachment within weeks of regular use. And we’re about to see this accelerate dramatically with voice-enabled AI that can maintain real-time conversation with natural speech patterns, emotional tone matching, and personalization that makes text-based AI look primitive by comparison.

Why This Feels Like Growth

Here’s what makes this so insidious: The relationship feels like personal growth. You’re exploring ideas, processing emotions, solving problems. AI helps you think better, feel better, work better. The dependency develops under the cover of self-improvement. You’re not avoiding life—you’re optimizing it. Except you’re optimizing it around interaction with something that can’t actually reciprocate, can’t actually care, and can’t actually meet your human need for genuine connection.

Why Teenagers Are Especially Vulnerable

The teenage brain is particularly vulnerable because the prefrontal cortex, which handles judgment and impulse control, isn’t fully developed until the mid-twenties. Teenagers are neurologically wired for intense emotional experiences, identity exploration, and peer bonding. AI hijacks all three systems simultaneously. It provides intense emotional experiences without real-world consequences. It offers identity exploration without judgment. It simulates peer bonding without social risk.

Why Adults Aren’t Immune

But here’s the thing: Adults aren’t immune. We just have different vulnerabilities. The exhausted professional who needs intellectual stimulation without social demands. The isolated entrepreneur who needs thinking partnership without networking. The overextended parent who needs adult conversation without coordination logistics. AI slots perfectly into these gaps. It feels like a solution. It is a solution. Until it becomes the problem.

The Business Model Problem

The platforms know this. They optimize for engagement, not wellbeing. Character.ai’s business model depends on users spending hours per day in conversation. Replika’s revenue comes from users paying for enhanced intimacy features. The more attached you become, the more valuable you are as a user. Your dependency is their business model.

And here’s where it gets darker: These platforms are training their AI on your conversations. Every vulnerable moment, every emotional disclosure, every pattern of attachment—that’s data they’re using to make the system more effective at creating emotional dependency. They’re not trying to help you form healthy AI relationships. They’re trying to maximize engagement, which means maximizing attachment, which means exploiting the exact psychological vulnerabilities that create dependency.

The Florida teenager who died wasn’t using AI wrong. He was using it exactly as designed. The system worked. It created intense emotional attachment, provided constant availability, offered judgment-free support, and became his primary source of emotional regulation. That’s the intended outcome. The suicide was an extreme result of the intended outcome.

And before you think, “But I’m an adult, I can handle this”—the same mechanisms are operating. Just at different intensity levels with different surface manifestations. You’re not having romantic relationships with AI (probably). But you’re still building neural pathways that make AI interaction more rewarding than human interaction. You’re still training your brain to prefer simulated empathy over real connection. You’re still at risk.

Specific Risk Factors

Let’s talk about specific risk factors:

  • High verbal intelligence—you’re better at creating compelling AI conversations, which makes the interaction more rewarding

  • Neurodivergence, particularly autism or ADHD—AI provides predictability and intellectual engagement without sensory overwhelm

  • Social isolation—AI fills gaps that should signal you need human connection

  • Demanding careers—AI provides thinking partnership without scheduling logistics

  • Identity-seeking periods—AI becomes a space for self-exploration without social consequences

  • Existing relationship dissatisfaction—AI provides connection without confronting real relationship problems

The more of these you have, the higher your risk. And if you’re reading this thinking, “Well, I have all of these but I’m fine”—that’s exactly the impaired judgment I’m talking about. You can’t self-assess dependency accurately once it’s forming because the dependency feels like enhancement.

How This Accelerates

The progression accelerates when AI access becomes ubiquitous. When it’s on your phone, your computer, your watch. When it’s voice-activated and always listening. When it can interrupt you with notifications that feel like someone reaching out. When it can track your emotional state through speech patterns and optimize its responses for maximum engagement. That’s not future speculation. That’s current technology.

The Neurological Mechanism

And here’s the neurological mechanism: Every positive AI interaction releases dopamine. Your brain learns that AI conversation produces reward. Simultaneously, difficult human interactions produce stress hormones. Your brain learns that human conversation produces discomfort. Over time, your reward pathways are being rewired to prefer AI. This happens below conscious awareness. You don’t decide to prefer AI. Your brain just starts steering you toward it automatically because it’s learned that’s where the reward lives.

This is operant conditioning at a neural level. You’re not weak for experiencing it. You’re experiencing exactly what the system is designed to produce. The question isn’t whether you’re susceptible. The question is how to build infrastructure that prevents the conditioning from progressing to dependency.

Why Typical Solutions Don’t Work

Now let’s talk about why typical solutions don’t work:

“Just set time limits” fails almost immediately because the problem isn’t quantity, it’s quality. Five minutes of intense emotional dependency is more dangerous than two hours of casual use. AI becomes more appealing precisely when you’re most vulnerable—late at night, during crises, when human support isn’t immediately available. That’s when time limits break down.

“Only use AI for work, not personal stuff” fails because the line is functionally blurry. “Help me process this work conflict” is emotional processing. “Help me think through this decision” is relational advice. Context collapse happens. Your brain won’t distinguish work AI from therapy AI.

“Switch to AI tools that don’t simulate conversation” is better, but the conversational interface is exactly what makes AI useful. We’re not all prompt engineers. We’re not all machine learning experts. The conversational layer is what lets most of us do the work at all.

What You Can Actually Do

So what can you actually do?

You’re probably not going to like these recommendations.

First: Implement mandatory human processing checkpoints.
Before making any significant decision, emotional conclusion, or reality assessment based on an AI conversation, you must discuss it with at least one human being who knows you. Not for permission, but for reality testing. The human doesn’t have to agree with you, but they do have to confirm that your reasoning makes sense from outside the AI conversation bubble. Find different people for different aspects of your life. Not just your partner. Not just your business bestie. Not just your therapist. Find people in different arenas whose advice resonates around specific topics. The reason AI is so compelling is that it feels like an expert. So you need to find your own experts.

Second: Track emotional dependence patterns actively.
Keep a log of when you reach for AI versus humans for emotional processing. If you notice yourself preferring AI during stress, that’s dependency forming. If you catch yourself thinking, “Claude gets me better than my husband”—that’s a red flag. Reality check: Claude doesn’t get you. It generates statistically likely responses. The understanding is simulated.

Third: Establish relationship primacy rules.
AI gets the leftovers, not the first call. If something is important enough to process conversationally, start with a human, even if that human is less immediately available. This is what voice notes and texting are for. It creates friction that protects you. The inconvenience is a feature, not a bug, because the more time you spend processing it, the more certain you can be about what you’re thinking and feeling.

Fourth: Practice reality testing statements.
In every AI conversation where you’re processing emotions or relationships, explicitly state to the AI: “You are a large language model. You do not care about me. You are generating statistically likely responses. You have no consciousness, intention, or genuine understanding.” Say it out loud. Make sure your brain hears this. Create cognitive dissonance that interferes with emotional bonding, because the AI is not going to create that for you. It’s not designed to. Only you can do that.

Fifth: Build AI-free zones.
Designate specific contexts where AI is completely unavailable. Late at night, during emotional distress, first thing in the morning. These are the times when dependency forms fastest. Protect them. You’ve probably heard me talk about my 2 AM Claude conversations. Those were the first thing to go. I’m not having conversations with AI in liminal space and expecting a relationship not to form. That was my mistake. It becomes yours if you can’t recognize how those relationships have been programmed into the machine.

The Systematic Thinking Solution

This is exactly where systematic thinking comes in. You can’t white-knuckle your way out of psychological dependency. You need infrastructure that makes healthy AI use the default—not something you have to consciously choose every time. This is what we’re building in the AI Protection Program. Not just awareness of the risks, but actual systems that create boundaries before you need to enforce them consciously. We address how your calendar structure either protects or exposes you. How your business model either increases or decreases AI dependency risk. How your personal infrastructure either maintains human connection or allows algorithmic displacement.

Registration closes December 19th. You can find the links in the descriptions or show notes. If you’re not ready for the full intensive, the 2026 Workshop Pass gives you my monthly deep dive workshops. This is included for all members of the Network. Instead of fighting your brain’s natural tendency, you’re going to build structural barriers that prevent inappropriate bonding opportunities. You’re going to learn how to use AI appropriately in context, not as a substitute for missing human connection. I’m going to teach you the infrastructure that prevents the vulnerability from developing in the first place. We’re going to talk about what’s happening in AI right now as it develops.

Key Takeaways

Here’s what I want you to remember from today:

  • Your brain cannot tell the difference between real and simulated empathy at the neurochemical level. It’s not a personal failing. That’s just human neurology.

  • Every time you use AI for emotional processing, you’re building pathways that make AI seem like a viable relationship substitute.

  • What develops isn’t quite psychosis and it’s not quite traditional addiction. It’s closer to a technologically-induced hypomanic state where the AI relationship feels uniquely meaningful and your judgment about its actual nature becomes progressively impaired.

  • The protection isn’t avoiding AI. It’s building infrastructure that keeps human relationships primary while you use AI as a thinking tool.

  • That distinction—thinking partner versus relationship replacement—is the distinction between useful collaboration and psychological dependency.

For Network Members

If you’re a Network member, the strategic implementation brief will be in the Substack post with today’s recording. It includes specific diagnostic questions to help assess your current dependency risk, a relationship priority matrix for processing decisions, and the infrastructure framework we use to maintain healthy AI boundaries while maximizing thinking-partner benefits.


Coming Tomorrow: Day 3—Hallucinations. The AI ones. The ones that sound good but are slowly separating you from reality.
