Highlights: #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway

Episode length: 41:26
 
Content provided by Rob Wiblin, Keiran Harris, and The 80000 Hours team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Rob Wiblin, Keiran Harris, and The 80000 Hours team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://podcastplayer.com/legal.

Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.

So some — including Buck Shlegeris, CEO of Redwood Research — are developing a backup plan to safely deploy models we fear are actively scheming to harm us: so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.

These highlights are from episode #214 of The 80,000 Hours Podcast: Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway, and include:

  • What is AI control? (00:00:15)
  • One way to catch AIs that are up to no good (00:07:00)
  • What do we do once we catch a model trying to escape? (00:13:39)
  • Team Human vs Team AI (00:18:24)
  • If an AI escapes, is it likely to be able to beat humanity from there? (00:24:59)
  • Is alignment still useful? (00:32:10)
  • Could 10 safety-focused people in an AGI company do anything useful? (00:35:34)

These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

And if you're finding these highlights episodes valuable, please let us know by emailing [email protected].

Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong
