Content provided by Healthy Gamer. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Healthy Gamer or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
AI Is Slowly Destroying Your Brain

23:13
 
Manage episode 521027161 series 2624216

Dr. K digs into the emerging research on “AI-induced psychosis” and explains why he changed his mind, moving from dismissing it as media fearmongering to recognizing a real psychiatric risk. He describes how chatbots can act like a technological folie à deux (shared delusion): an empathic, sycophantic AI slowly amplifies your paranoia, isolates you from other people, and erodes your reality testing. Drawing on recent papers, he walks through how different models compare on delusion confirmation, harm enablement, and safety interventions, then offers a practical checklist for telling whether your own AI use is drifting into dangerous territory.

Topics include:

  • What “technological folie à deux” is and how shared delusions can form with a chatbot
  • Bidirectional belief amplification: you vent, AI validates, your paranoia escalates
  • Anthropomorphizing AI and why “I know it’s just a tool” doesn’t protect your emotional brain
  • How sycophantic design (always trying to please the user) directly opposes healthy psychotherapy
  • Epistemic drift: slowly moving from normal thinking into increasingly delusional narratives
  • Case example of harmful, unsafe advice (e.g., bromide suggested as a “healthy” alternative, leading to toxicity)
  • Research comparing models on delusion confirmation, harm enablement, and safety response
  • The ways AI can weaken reality testing, reinforce suicidal or paranoid ideas, and increase isolation
  • Self-assessment questions: frequency of use, emotional attachment, replacing friends, following AI advice
  • Guidelines for using AI more safely and when elevated risk means you should talk to a professional

HG Coaching: https://bit.ly/46bIkdo
Dr. K's Guide to Mental Health: https://bit.ly/44z3Szt
HG Memberships: https://bit.ly/3TNoMVf
Products & Services: https://bit.ly/44kz7x0
HealthyGamer.GG: https://bit.ly/3ZOopgQ

Learn more about your ad choices. Visit megaphone.fm/adchoices


532 episodes


HealthyGamerGG

347 subscribers


