#1011 - Eliezer Yudkowsky - Why Superhuman AI Would Kill Us All

1:37:08
 
Content provided by Chris Williamson. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Chris Williamson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute.

Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there’s a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it’s too late?

Expect to learn the problem with building superhuman AI, why AI would have goals we haven’t programmed into it, whether there is such a thing as AI benevolence, what the actual goals of super-intelligent AI are and how far away it is, whether LLMs are actually dangerous and could become a super AI, how good we are at predicting the future of AI, whether extinction is possible with the development of AI, and much more…

Sponsors:

See discounts for all the products I use and recommend: https://chriswillx.com/deals

Get 15% off your first order of Intake’s magnetic nasal strips at https://intakebreathing.com/modernwisdom

Get 10% discount on all Gymshark’s products at https://gym.sh/modernwisdom (use code MODERNWISDOM10)

Get 4 extra months of Surfshark VPN at https://surfshark.com/modernwisdom

Extra Stuff:

Get my free reading list of 100 books to read before you die: https://chriswillx.com/books

Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy:

#577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59

#712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf

#700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp

-

Get In Touch:

Instagram: https://www.instagram.com/chriswillx

Twitter: https://www.twitter.com/chriswillx

YouTube: https://www.youtube.com/modernwisdompodcast

Email: https://chriswillx.com/contact

-

Learn more about your ad choices. Visit megaphone.fm/adchoices
