Content provided by Gus Docker and Future of Life Institute. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Gus Docker and Future of Life Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Brain-like AGI and why it's Dangerous (with Steven Byrnes)

1:13:13
Manage episode 475188133 series 1334308
On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.

You can learn more about Steven's work at: https://sjbyrnes.com/agi.html

Timestamps:

00:00 Preview

00:54 Brain-like AGI Safety

13:16 Controlled AGI versus Social-instinct AGI

19:12 Learning from the brain

28:36 Why is brain-like AI the most likely path to AGI?

39:23 Honesty in AI models

44:02 How to help with brain-like AGI safety

53:36 AI traits with both positive and negative effects

01:02:44 Different AI safety strategies

231 episodes
