The Real Dangers of AI No One Talks About | Dr. Roman Yampolskiy

Duration: 47:02
 

Dr. Roman Yampolskiy is a computer scientist, AI researcher, and professor at the University of Louisville, where he directs the Cyber Security Lab. He is widely recognized for his work on artificial intelligence safety, security, and the study of superintelligent systems. Dr. Yampolskiy has authored numerous books and publications, including Artificial Superintelligence: A Futuristic Approach, which explores the risks and safeguards needed as AI capabilities advance. His research and commentary have made him a leading voice in the global conversation on the future of AI and humanity.

In our conversation, we discuss:

(00:01) Background and path into AI safety
(02:27) The early state of AI and containment techniques
(03:43) When did AI’s Pandora’s box open?
(04:38) How close is AGI, and how is it defined?
(07:20) Why the definition of AGI keeps moving the goalposts
(09:25) ASI vs. AGI: the next five to ten years
(11:12) Measuring ASI: tests and quantification methods
(12:03) Existential threats and broad AI risks
(17:35) Transhumanism: human-AI merging and coexistence
(18:35) Chances and timeline for peaceful coexistence
(21:16) Layers of risk beyond human extinction
(23:55) Can humans retain meaning in a post-AGI era?
(27:41) Jobs AI likely cannot or won’t replace
(29:42) Skills humans are losing to AI reliance
(31:00) Cultivating critical thinking amid AI influence
(33:34) Can nations or corporations meaningfully slow AI?
(37:29) Decentralized development: is open-source control feasible?
(40:46) Do any current models have real safety measures?
(41:12) Has the meaning of life changed with AI?
(42:36) Thoughts on the simulation hypothesis and its implications
(43:58) If AI discovered a simulation: modify it or escape?
(44:54) Key takeaway for the public about AI safety
(45:26) Is this your core mission in AI safety?
(46:14) Where to follow and learn more about you

Learn more about Dr. Roman

Website: https://www.romanyampolskiy.com/
Socials: @RomanYampolskiy

Watch full episodes on: https://www.youtube.com/@seankim
Connect on IG: https://instagram.com/heyseankim
