Content provided by Conviction. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Conviction or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

National Security Strategy and AI Evals on the Eve of Superintelligence with Dan Hendrycks

36:24
 


This week on No Priors, Sarah is joined by Dan Hendrycks, director of the Center for AI Safety. Dan serves as an advisor to xAI and Scale AI. He is a longtime AI researcher, creator of notable AI evals such as "Humanity's Last Exam," and co-author of a new national security paper, "Superintelligence Strategy," written with Scale AI founder and CEO Alexandr Wang and former Google CEO Eric Schmidt. They explore AI safety, its geopolitical implications, the potential weaponization of AI, and policy recommendations.

Sign up for new episodes every week. Email feedback to [email protected]

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @DanHendrycks

Show Notes:

0:00 Introduction

0:36 Dan’s path to focusing on AI Safety

1:25 Safety efforts in large labs

3:12 Distinguishing alignment and safety

4:48 AI’s impact on national security

9:59 How might AI be weaponized?

14:43 Immigration policies for AI talent

17:50 Mutually assured AI malfunction

22:54 Policy suggestions for current administration

25:34 Compute security

30:37 Current state of evals


135 episodes
