EP 587: GPT-5 canceled for being a bad therapist? Why that’s a bad idea
When GPT-5 was released last week, the internets were in an UPROAR.
One of the main reasons?
With the better model came a new behavior.
And in losing GPT-4o, people feel they lost a friend. Their only friend.
Or their therapist. Yikes.
For this Hot Take Tuesday, we're gonna say why using AI as a therapist is a really, really bad idea.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
- GPT-5 Launch Backlash Explained
- Users Cancel GPT-5 Over Therapy Role
- AI Therapy Risks and Dangers Discussed
- Sycophancy Reduction in GPT-5 Model
- Addiction to AI Companionship and Validation
- OpenAI’s Response to AI Therapist Outcry
- Illinois State Ban on AI Therapy
- Mental Health Use Cases for ChatGPT
- Harvard Study: AI’s Top Personal Support Uses
- OpenAI’s New Guardrails on ChatGPT Therapy
Timestamps:
00:00 AI Therapy: Harm or Help?
04:44 OpenAI Model Update Controversy
09:23 Customizing ChatGPT: Echo Chamber Risk
11:38 GPT-5 Update Reduces Sycophancy
16:17 Concerns Over AI Dependency
19:50 AI Addiction and Societal Bias
21:05 AI and Mental Health Concerns
27:01 AI Barred from Therapeutic Roles
29:22 ChatGPT Enhances Safety and Support Measures
34:03 AI Models: Benefits and Misuse
35:17 Human Judgment Over AI Decisions
Keywords:
GPT-5, GPT-4o, OpenAI, AI therapy, AI therapist, large language model, AI mental health support, AI companionship, sycophancy, echo chamber, AI validation, custom instructions, AI addiction, AI model update, user revolt, Illinois AI therapy ban, House Bill 1806, AI chatbots, mental health apps, Sentio survey, Harvard Business Review AI use cases, task completion tuning, AI safety, clinical outcomes, AI reasoning, emotional dependence, AI model personality, emotional validation, AI boundaries, US state AI regulation, AI policymaking, therapy ban, AI in mental health, digital companionship, AI model sycophancy rate, AI in personal life, AI for decision making, AI guardrails, AI model tuning, Sam Altman, Silicon Valley AI labs, AI companion, psychology and AI, online petitions against GPT-5, AI as life coach, accessibility of AI therapy, therapy alternatives, AI-driven self help, digital mental health tools, AI echo chamber risks
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner