Content provided by John Koetsier. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by John Koetsier or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
AGI: will it kill us or save us?

29:57

Artificial general intelligence (AGI) could be humanity’s greatest invention ... or our biggest risk.

In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.

We cover:

• Is AGI inevitable? How soon will it arrive?

• Will AGI kill us … or save us?

• Why decentralization and blockchain could make AGI safer

• How large language models (LLMs) fit into the path toward AGI

• The risks of an AGI arms race between the U.S. and China

• Why Ben Goertzel created MeTTa, a new AGI programming language

📌 Topics include AI safety, decentralized AI, blockchain for AI, LLMs, reasoning engines, superintelligence timelines, and the role of governments and corporations in shaping the future of AI.

⏱️ Chapters

00:00 – Intro: Will AGI kill us or save us?

01:02 – Ben Goertzel in Istanbul & the Beneficial AGI Conference

02:47 – Is AGI inevitable?

05:08 – Defining AGI: generalization beyond programming

07:15 – Emotions, agency, and artificial minds

08:47 – The AGI arms race: US vs. China vs. decentralization

13:09 – Risks of narrow or bounded AGI

15:27 – Decentralization and open-source as safeguards

18:21 – Can LLMs become AGI?

20:18 – Using LLMs as reasoning guides

21:55 – Hybrid models: LLMs plus reasoning engines

23:22 – Hallucination: humans vs. machines

25:26 – How LLMs accelerate AI research

26:55 – How close are we to AGI?

28:18 – Why Goertzel built a new AGI language (MeTTa)

29:43 – MeTTa: from AI coding to smart contracts

30:06 – Closing thoughts
