Content provided by Immad Akhund and Rajat Suri. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Immad Akhund and Rajat Suri or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.
AGI, Alignment, and the Future of AI Power With Emmett Shear

52:08
 
Emmett Shear is the founder and CEO of Softmax, an alignment research company, and previously co-founded and led Twitch as CEO. He was also a Y Combinator partner and briefly served as interim CEO of OpenAI.

What you'll learn:

  1. Why AI alignment and AGI are fundamentally the same problem
  2. How theory of mind is the critical missing piece in current AI systems
  3. Why continuous learning requires self-modeling capabilities
  4. The dangerous truth: alignment is a capacity for both great good and great evil
  5. Why "aligned AI" really means "aligned to me"—and why that's concerning
  6. How societies of smaller AIs will outcompete singleton superintelligences
  7. Why AI needs to be integrated with humans, not segregated into AI-only societies
  8. The Twitch lesson: people don't want easy, they want good
  9. Why 99% of AI startups are building labor-saving tools instead of value-creating products
  10. How parenting and AI development mirror each other in surprising ways
  11. Why current AI labs are confused about continuous learning
  12. Conway's Law applied to AI: you ship your org chart
  13. The problem with mode collapse in self-learning systems
  14. Why emotions are training signals, not irrational noise
  15. Emmett's biggest mistake at Twitch: chasing new products instead of perfecting the core

In this episode, we cover:

(00:00) The dangerous truth about AI alignment

(01:13) Introduction to Softmax and organic alignment

(02:05) What alignment actually means (and why most people are confused)

(03:33) The output: training environments for theory of mind

(05:01) Continuous learning and why it's so hard

(06:25) Multiplayer reasoning training in open-ended environments

(07:14) Aligned to what? The critical question everyone ignores

(08:40) Why alignment is always relative to the aligning being

(11:07) Cooperation vs. competition: training for the real world

(12:56) Is AGI an urgent problem or do we have time?

(13:15) AGI and alignment are the same problem

(15:25) Alignment capacity enables both good and evil

(17:13) The singleton problem and why societies of AIs make sense

(20:41) Building alignment between AIs and humans

(22:09) Why Elon's "biggest cluster" strategy might be wrong

(23:06) AI must be aligned to individual humans, not humanity

(25:03) What does the atomic unit of AI look like?

(28:02) Adding a new kind of person to society

(29:06) Everything will be alive: from spreadsheets to cars

(30:00) From Twitch retirement to Softmax founding

(31:26) Research vs. product engineering at early-stage startups

(32:41) Raising money for AI research in the current era

(34:30) Why Softmax will ship products

(34:50) Ilya's closed-loop research vs. open-loop learning

(36:36) How you do anything is how you do everything

(37:28) The continuous learning problem explained simply

(38:29) Mode collapse: why AIs become stereotypes of themselves

(39:33) The reward problem and why humans need emotions

(40:48) Why LLMs are trained to avoid emotions

(41:52) Watching children learn while building learning AI

(43:04) Advice for first-time AI founders

(45:08) Treat AI as clay to be molded, not a genie granting wishes

(45:50) The Twitch lesson: people want good things, not easy things

(47:22) Why 99% of AI companies are building the wrong thing

(48:16) Rapid fire: biggest career mistake at Twitch

(50:15) Which founders inspire Emmett most

(50:56) The passing fad: AI slop generators


89 episodes
