Content provided by Foresight Institute. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Foresight Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Andrew Critch on what AGI might look like in practice

1:03:00
Manage episode 523764067 series 3701298
When people think about AGI, most ask “When will it arrive?” or “What kind of AGI will we get?” Andrew Critch, AI safety researcher and mathematician, argues that the most important question is actually “What will we do with it?”

In our conversation, we explore how much our choices matter in making AGI a force for good. Andrew explains what AGI might look like in practical terms, and the consequences of its being trained on our culture. He also argues that searching for the “best” values an AI should have is a philosophical trap, and that we should instead focus on reaching basic agreement about “good” versus “bad” behaviors.

The episode also covers concrete takes on the transition to AGI, including:

  • Why an advanced intelligence would likely find killing humans “mean.”
  • How automated computer security checks could be one of the best uses of powerful AI.
  • Why the best preparation for AGI is simply to build helpful products today.

On the Existential Hope Podcast, hosts Allison Duettmann and Beatrice Erkers from the Foresight Institute invite scientists, founders, and philosophers for in-depth conversations on positive, high-tech futures.


Full transcript, listed resources, and more: https://www.existentialhope.com/podcasts


Follow on X.


Hosted on Acast. See acast.com/privacy for more information.
25 episodes

