251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity

2:51:13
 
Content provided by Robinson Erhardt. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Robinson Erhardt or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://podcastplayer.com/legal.

Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem: how, and whether, we can ensure that AI is aligned with human values, so that we avoid catastrophe and harness its power. In this episode, Robinson and Eliezer run the gamut of questions related to AI and the danger it poses to human civilization as we know it. More specifically, they discuss the alignment problem, gradient descent, consciousness, the singularity, cyborgs, ChatGPT, OpenAI, Anthropic, Claude, how long we have until doomsday, whether it can be averted, and the various reasons why, and ways in which, AI might wipe out human life on Earth.

The Machine Intelligence Research Institute: https://intelligence.org/about/

Eliezer’s X Account: https://x.com/ESYudkowsky

OUTLINE

00:00:00 Introduction
00:00:43 The Default Condition for AI’s Takeover
00:06:36 Could a Future AI Country Be Our Trade Partner?
00:11:18 What Is Artificial Intelligence?
00:21:23 Why AIs Having Goals Could Mean the End of Humanity
00:29:34 What Is the Alignment Problem?
00:34:11 How To Avoid AI Apocalypse
00:40:25 Would Cyborgs Eliminate Humanity?
00:47:55 AI and the Problem of Gradient Descent
00:55:24 How Do We Solve the Alignment Problem?
01:00:50 How Anthropic’s AI Freed Itself from Human Control
01:08:56 The Pseudo-Alignment Problem
01:19:28 Why Are People Wrong About AI Not Taking Over the World?
01:23:23 How Certain Is It that AI Will Wipe Out Humanity?
01:38:35 Is Eliezer Yudkowsky Wrong About the AI Apocalypse?
01:42:04 Do AI Corporations Control the Fate of Humanity?
01:43:49 How To Convince the President Not to Let AI Kill Us All
01:52:01 How Will ChatGPT’s Descendants Wipe Out Humanity?
02:24:11 Could AI Destroy Us with New Science?
02:39:37 Could AI Destroy Us with Advanced Biology?
02:47:29 How Will AI Actually Destroy Humanity?

Robinson’s Website: http://robinsonerhardt.com

Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University.
