Content provided by The Lawfare Institute. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Lawfare Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Lawfare Daily: Josh Batson on Understanding How and Why AI Works

41:15
 

Episode 485730395 · Series 56794

Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare.


Hosted on Acast. See acast.com/privacy for more information.


2562 episodes

