Mutually Assured Malfunction and the AI Arms Race w/Dan Hendrycks and Sam Hammond

AI has emerged as a critical geopolitical battleground where Washington and Beijing are racing not just for economic advantage but for military dominance. Despite the high stakes, there is surprisingly little consensus on how—or whether—to respond to frontier AI development.

The polarized landscape features techno-optimists battling AI safety advocates, with the former dismissing the latter as "doomers" who exaggerate existential risks. Meanwhile, AI business leaders face criticism for potentially overstating their companies' capabilities to attract investors and to secure favorable regulations that protect their market positions.

Democrats and civil rights advocates warn that framing the debate as catastrophic risk versus economic prosperity distracts from immediate harms like misinformation, algorithmic discrimination, and synthetic media abuse. U.S. regulatory efforts have struggled: California's SB 1047 was vetoed last year, and Trump repealed Biden's AI Executive Order on inauguration day. Even the future of the U.S. government's AI Safety Institute remains uncertain under the new administration.

With a new administration in Washington, important questions linger: How should government approach AI's national security implications? Can corporate profit motives align with safer outcomes? And if the U.S. and China are locked in an AI arms race, is de-escalation possible, or are we heading toward a digital version of Mutually Assured Destruction?

Joining me to explore these questions are Dan Hendrycks, AI researcher and Director of the Center for AI Safety and co-author of "Superintelligence Strategy," a framework for navigating advanced AI from a national security and geopolitical perspective, and FAI's own Sam Hammond, Senior Economist and AI policy expert.
