Superintelligence and human security, with Dan Hendrycks

55:45

Last month, some of the world’s leading artificial intelligence experts signed a petition calling for a prohibition on developing superintelligent AI until it is safe to do so. One of those experts was Dan Hendrycks, director of the Center for AI Safety and an adviser to Elon Musk’s xAI and the leading firm Scale AI. Dan has led original and thought-provoking research into risks including rogue AIs escaping human control, the deliberate misuse of the technology by malign actors, and the dangerous strategic dynamics that could emerge if one nation creates superintelligence, prompting fears among rival nations.

In the lead-up to ASPI’s Sydney Dialogue tech and security conference in December, Dan talks about the different risks AI poses, the possibility that AI develops its own goals and values, the concept of recursion, in which machines build smarter machines, definitions of artificial “general” intelligence, the shortcomings of current AIs, and the inadequacy of historical analogies such as nuclear weapons for understanding the risks of superintelligence.

To see some of the research discussed in today’s episode, visit the Center for AI Safety’s website.
