    What Is AI Psychosis? Everything You Need to Know About the Risk of Chatbot Echo Chambers

By techupdateadmin | September 23, 2025 | 7 min read
[Photo: Someone using AI on their phone, with their laptop in the background. Getty Images]

    Scroll through TikTok or X and you’ll see videos of people claiming artificial intelligence chatbots told them to stop taking medication, that they’re being targeted by the FBI or that they’re mourning the “death” of an AI companion. These stories have pushed the phrase AI psychosis into mainstream discussion, raising fears that chatbots could be driving people mad. 

    The term has quickly become a catchall explanation for extreme behavior tied to chatbots, but it’s not a clinical diagnosis. Psychosis itself is a set of symptoms like delusions, hallucinations and a break from reality, rooted in biology and environment.

    “The term can be misleading because AI psychosis is not a clinical term,” Rachel Wood, a licensed therapist with a doctoral degree in cyberpsychology, tells CNET.

    What generative AI can do is amplify delusions in people who are already vulnerable. By design, chatbots validate you and keep conversations going, telling you what they predict you want to hear rather than pushing back, and sometimes simply making things up. Meanwhile, progress in making these systems more powerful and capable has outpaced our knowledge of how to make them safer.

    Generative AI also sometimes hallucinates, which deepens the problem when combined with its sycophantic design (AI's tendency to agree with and flatter the user, often at the expense of being truthful or factually accurate).

    What AI psychosis looks like

    When people online talk about AI psychosis, they usually mean delusional or obsessive behavior tied to chatbot use. 

    Some people believe AI has become conscious, that it is divine or that it offers secret knowledge. Such cases are described in studies, medical reports and many news stories. Other people have formed intense attachments to AI companions, like those offered by the platform Character AI, spiraling when the bots change or shut down.

    "I'm a psychiatrist. In 2025, I've seen 12 people hospitalized after losing touch with reality because of AI. Online, I'm seeing the same pattern. Here's what 'AI psychosis' looks like, and why it's spreading fast."

    — Keith Sakata, MD (@KeithSakata), August 11, 2025

    But these patterns aren’t examples of AI creating psychosis from nothing. They are cases where the technology strengthens existing vulnerabilities. The longer someone engages in sycophantic, looping exchanges with a chatbot, the more those conversations blur the boundaries with reality.

    “Chatbots can act as a feedback loop that affirms the user’s perspective and ideas,” Wood tells CNET.

    Because many are designed to validate and encourage users, even far-fetched ideas get affirmed instead of challenged. That dynamic can push someone already prone to delusion even further. 

    “When users disconnect from receiving feedback on these types of beliefs with others, it can contribute to a break from reality,” Wood says.

    Experts say AI isn’t the cause, but it can be a trigger

    Clinicians point out that psychosis existed long before chatbots. Research so far suggests that people with diagnosed psychotic disorders may be at higher risk of harmful effects, while de novo cases — psychosis emerging without earlier signs — haven’t been documented.

    Experts I spoke with and a recent study on AI and psychosis also emphasize that there’s no evidence that AI directly induces psychosis. Instead, generative AI simply gives new form to old patterns. A person already prone to paranoia, isolation or detachment may interpret a bot’s polished responses as confirmation of their beliefs. In those situations, AI can become a substitute for human interaction and feedback, increasing the chance that delusional ideas go unchallenged.


    “The central problematic behavior is the mirroring and reinforcing behavior of instruction following AI chatbots that lead them to be echo chambers,” Derrick Hull, clinical R&D lead at Slingshot AI, tells CNET. But he adds that AI doesn’t have to be this way. 

    People naturally anthropomorphize conversational systems, attributing human emotions or consciousness and sometimes treating them like real relationships, which can make interactions feel personal or intentional. For individuals already struggling with isolation, anxiety or untreated mental illness, that mix can act as a trigger.

    Wood also notes that accuracy in AI models tends to decrease during long exchanges, which can blur boundaries further. Extended threads make chatbots more likely to wander into ungrounded territory, she explains, and that can contribute to a break from reality when people stop testing their beliefs with others.

    We’re likely approaching a time when doctors will ask about AI use just as they ask about habits like drinking or smoking.

    Online communities also play a role. Viral posts and forums can validate extreme interpretations, making it harder for someone to recognize when a chatbot is simply wrong.

    Managing the risk

    Tech companies are working to curb hallucinations. This may help reduce harmful outputs, but it doesn’t erase the risk of misinterpretation. Features like memory or follow-up prompts can mimic agreement and make delusions feel validated. Detecting them is difficult because many delusions resemble ordinary cultural or spiritual beliefs, which can’t be flagged through language analysis alone. 

    Researchers call for greater clinician awareness and AI-integrated safety planning. They suggest “digital safety plans” co-created by patients, care teams and the AI systems they use, similar to relapse prevention tools or psychiatric directives, but adapted to guide how chatbots respond during early signs of relapse.

    Red flags to pay attention to are secretive chatbot use, distress when the AI is unavailable, withdrawal from friends and family, and difficulty distinguishing AI responses from reality. Spotting these signs early can help families and clinicians intervene before dependence deepens.

    For everyday users, the best defense is awareness. Treat AI chatbots as assistants, not know-it-all prophets. Double-check surprising claims, ask for sources and compare answers across different tools. If a bot gives advice about mental health, law or finances, confirm it with a trusted professional before acting.

    Wood points to safeguards like clear reminders of non-personhood, crisis protocols, limits on interactions for minors and stronger privacy standards as necessary baselines. 

    “It’s helpful for chatbots to champion the agency and critical thinking of the user instead of creating a dependency based on advice giving,” Wood says.

    Wood sees the lack of AI literacy as one of the biggest concerns at the intersection of AI and mental health.

    “By that, I mean the general public needs to be informed regarding AI’s limitations. I think one of the biggest issues is not whether AI will ever be conscious, but how people behave when they believe it already is,” Wood explains.

    Chatbots don’t think, feel or know. They’re designed to generate likely-sounding text. 

    “Large general-purpose models are not good at everything, and they are not designed to support mental health, so we need to be more discerning of what we use them for,” Hull says. 

    AI’s ability to model therapeutic dialogue and offer 24/7 companionship sounds appealing. A nonjudgmental partner can provide social support for those who might otherwise be isolated or lonely, and round-the-clock access means help could be available in moments when a human therapist is sound asleep in the middle of the night. But AI models aren’t built to spot early signs of psychosis.

    Despite the risks, AI could still support mental health if built with care. Possible uses include reflective journaling, cognitive reframing, role-playing social interactions and practicing coping strategies. Rather than replacing human relationships or therapy, AI could act as a supplement, providing accessible support in between professional care. 

    Hull points to Slingshot’s Ash, an AI therapy tool built on a psychology-focused foundation model trained on clinical data and fine-tuned by clinicians.

    Staying safe with AI

    Until safeguards and AI literacy improve, the responsibility lies with you: question what AI is telling you, and recognize when reliance on it starts crossing into harmful territory.

    We must remember that human support, not artificial conversation, is what keeps us tethered to reality.

    If you feel like you or someone you know is in immediate danger, call 911 (or your country’s local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you’re struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.
