Beyond Intelligence: GPT-5, Explainability and the Ethics of AI Reasoning (E.24)

Duration: 42:11
 
Content provided by Michael Berk. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Michael Berk or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

What happens when AI stops generating answers and starts deciding what’s true?

In this episode of Free Form AI, Michael Berk and Ben Wilson dive into GPT-5’s growing role as an interpreter of information — not just generating text, but analyzing news, assessing credibility, and shaping how we understand truth itself.

They unpack how reasoning capabilities, source reliability, and human feedback intersect to build, or break, trust in AI systems. The conversation also examines the ethical stakes of explainability, the dangers of "sycophantic" AI behavior, and the future of intelligence in a market-driven ecosystem.

Tune in to Episode 24 for a wide-ranging conversation about:
• How GPT-5’s reasoning is redefining “understanding” in AI
• Why explainability is critical for trust and transparency
• The risks of AI echo chambers and feedback bias
• The role of human judgment in AI alignment and evaluation
• What it means for machines to become arbiters of truth

Whether you build, study, or rely on AI systems, this episode will leave you questioning how far we’re willing to let our models think for us.
