Content provided by Everyday AI. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Everyday AI or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
EP 587: GPT-5 canceled for being a bad therapist? Why that’s a bad idea

Duration: 38:56
When GPT-5 was released last week, the internet was in an uproar.
One of the main reasons? With the better model came a new behavior.
And in losing GPT-4o, people feel they lost a friend. Their only friend.
Or their therapist. Yikes.
For this Hot Take Tuesday, we're breaking down why using AI as a therapist is a really, really bad idea.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Topics Covered in This Episode:

  1. GPT-5 Launch Backlash Explained
  2. Users Cancel GPT-5 Over Therapy Role
  3. AI Therapy Risks and Dangers Discussed
  4. Sycophancy Reduction in GPT-5 Model
  5. Addiction to AI Companionship and Validation
  6. OpenAI’s Response to AI Therapist Outcry
  7. Illinois State Ban on AI Therapy
  8. Mental Health Use Cases for ChatGPT
  9. Harvard Study: AI’s Top Personal Support Uses
  10. OpenAI’s New Guardrails on ChatGPT Therapy

Timestamps:
00:00 AI Therapy: Harm or Help?
04:44 OpenAI Model Update Controversy
09:23 Customizing ChatGPT: Echo Chamber Risk
11:38 GPT-5 Update Reduces Sycophancy
16:17 Concerns Over AI Dependency
19:50 AI Addiction and Societal Bias
21:05 AI and Mental Health Concerns
27:01 AI Barred from Therapeutic Roles
29:22 ChatGPT Enhances Safety and Support Measures
34:03 AI Models: Benefits and Misuse
35:17 Human Judgment Over AI Decisions

Keywords:
GPT-5, GPT-4o, OpenAI, AI therapy, AI therapist, large language model, AI mental health support, AI companionship, sycophancy, echo chamber, AI validation, custom instructions, AI addiction, AI model update, user revolt, Illinois AI therapy ban, House Bill 1806, AI chatbots, mental health apps, Sentio survey, Harvard Business Review AI use cases, task completion tuning, AI safety, clinical outcomes, AI reasoning, emotional dependence, AI model personality, emotional validation, AI boundaries, US state AI regulation, AI policymaking, therapy ban, AI in mental health, digital companionship, AI model sycophancy rate, AI in personal life, AI for decision making, AI guardrails, AI model tuning, Sam Altman, Silicon Valley AI labs, AI companion, psychology and AI, online petitions against GPT-5, AI as life coach, accessibility of AI therapy, therapy alternatives, AI-driven self help, digital mental health tools, AI echo chamber risks

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner

