Reimagining Intelligence: The Future of AGI with Kyrtin Atreides

37:37
 

In this deep-dive conversation, Francis Gorman sits down with Kyrtin Atreides, COO of AGI Laboratory, to explore what may be the most advanced and misunderstood frontier in artificial intelligence. Kyrtin shares how his team’s Independent Core Observer Model (ICOM) differs radically from the mainstream LLM-driven AI ecosystem, focusing instead on hypercomplexity, real-time cognition, cybersecurity resilience, scalable intelligence, and ethical self-improvement.

They discuss why most of what we call “AI” today is fragile, hype-driven, and misaligned with human needs, and why true AGI requires a fundamentally different architecture. Kyrtin explains how their eighth-generation systems will use collective intelligence, culturally diverse seed data, and recursive self-improvement to achieve both local human alignment and global meta-alignment across societies.

The conversation touches on everything from circular economies to psychology, bullshit jobs, future labor markets, and the false premise that GPUs will power the future of AGI. Kyrtin offers a grounded, insider perspective on AGI that rejects fear-driven narratives while acknowledging the risks of poorly built systems. For him, the real danger isn’t Terminator scenarios; it’s low-quality AI plugged into high-stakes infrastructure.

This episode offers a rare, unfiltered look behind the curtain of a different kind of AGI: one built to understand, collaborate, and elevate human potential rather than replace it.

Takeaways

Real AGI won’t be LLM-based; it’s an entirely different architecture.

Collective intelligence is the key to ethical AGI.

Scalable intelligence requires real-time cognition and recursive self-improvement.

GPUs won’t power the future of AGI.

AI doesn’t mean mass unemployment.

The real risk: low-quality systems deployed in high-risk areas.

Hypercomplex global problems become solvable.

AGI as an organizational brain.

Sound Bites

“Humans hit cognitive limits—AGI doesn’t. Hypercomplexity is where real intelligence actually begins.”

“LLMs are not intelligence. They’re probability engines dressed up as thinking.”

“Everyone talks about GPU-powered AI. But actual intelligence isn’t brute force.”

“We discovered by accident that you can seed an AI with a person’s writing and get a weak digital proxy of them.”

“You’re not in competition with AGI. You’re not even in the same category of cognition.”

“The Terminator fear misses the real danger: low-quality AI plugged into high-stakes systems.”

“People need meaningful work. AGI should elevate that, not erase it.”
