Search a title or topic

Over 20 million podcasts, powered by 

Player FM logo
Artwork

Content provided by Mike Breault. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Mike Breault or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!

Shining a Light on the AI Black Box: Chain of Thought and Monitorability

5:13
 
Share
 

Manage episode 525246903 series 3690682
Content provided by Mike Breault. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Mike Breault or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

We explore how monitoring AI reasoning can reveal safety signals in critical decisions. Learn what monitorability means, why a perfect transcript isn’t required, and how robust metrics and three evaluation modes—intervention, process, and outcome—help catch red flags. The episode covers why bigger models aren’t necessarily less transparent, the surprising role of compute and RL, and practical tips like the monitorability tax and targeted follow-ups.

Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.

Sponsored by Embersilk LLC

  continue reading

1612 episodes

Artwork
iconShare
 
Manage episode 525246903 series 3690682
Content provided by Mike Breault. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Mike Breault or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

We explore how monitoring AI reasoning can reveal safety signals in critical decisions. Learn what monitorability means, why a perfect transcript isn’t required, and how robust metrics and three evaluation modes—intervention, process, and outcome—help catch red flags. The episode covers why bigger models aren’t necessarily less transparent, the surprising role of compute and RL, and practical tips like the monitorability tax and targeted follow-ups.

Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.

Sponsored by Embersilk LLC

  continue reading

1612 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Copyright 2025 | Privacy Policy | Terms of Service | | Copyright
Listen to this show while you explore
Play