Content provided by Rob Wiblin and Keiran Harris and The 80000 Hours team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Rob Wiblin and Keiran Harris and The 80000 Hours team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.
Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks

Duration: 22:54
 

This is a selection of highlights from episode #200 of The 80,000 Hours Podcast. These aren't necessarily the most important or even the most entertaining parts of the interview, and if you enjoy this, we strongly recommend checking out the full episode:

Ezra Karger on what superforecasters and experts think about existential risks

And if you're finding these highlights episodes valuable, please let us know by emailing [email protected].

Highlights:

  • Luisa’s intro (00:00:00)
  • Why we need forecasts about existential risks (00:00:26)
  • Headline estimates of existential and catastrophic risks (00:02:43)
  • What explains disagreements about AI risks? (00:06:18)
  • Learning more doesn't resolve disagreements about AI risks (00:08:59)
  • A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)
  • Cruxes about AI risks (00:15:17)
  • Is forecasting actually useful in the real world? (00:18:24)

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

