Content provided by Spencer Greenberg. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Spencer Greenberg or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Will AI superintelligence kill us all? (with Nate Soares)

1:24:17
 
Read the full transcript here.

Are the existential risks posed by superhuman AI fundamentally different from prior technological threats such as nuclear weapons or pandemics? How do the inherent “alien drives” that emerge from AI training processes complicate our ability to control or align these systems? Can we truly predict the behavior of entities that are “grown” rather than “crafted,” and what does this mean for accountability? To what extent does the analogy between human evolutionary drives and AI training objectives illuminate potential failure modes? How should we conceptualize the difference between superficial helpfulness and deeply embedded, unintended AI motivations? What lessons can we draw from AI hallucinations and deceptive behaviors about the limits of current alignment techniques? How do we assess the danger that AI systems might actively seek to preserve and propagate themselves against human intervention? Is the “death sentence” scenario a realistic prediction or a worst-case thought experiment? How much uncertainty should we tolerate when the stakes involve potential human extinction?

Links:

Staff

Music

Affiliates


478 episodes
