AI Vibe Check: The Actual Bottleneck In Research, SSI’s Mystique, & Spicy 2026 Predictions
Manage episode 524924241 series 3495253
Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we're just myopically focused on LLMs while breakthroughs happen in other modalities.
They reveal why infinite capital at labs may actually constrain innovation, explain the narrow "Goldilocks zone" where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China's path to self-sufficiency by a decade. The conversation covers OpenAI's code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas.
The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won't be OpenAI's CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world's best at least once next year.
(0:00) Intro
(1:51) Reflections on NeurIPS Conference
(5:14) Are AI Models Plateauing?
(11:12) Reinforcement Learning and Enterprise Adoption
(16:16) Future Research Vectors in AI
(28:40) The Role of Neo Labs
(39:35) The Myth of the Great Man Theory in Science
(41:47) OpenAI's Code Red and Market Position
(47:19) Disney and OpenAI's Strategic Partnership
(51:28) Meta's Superintelligence Team Challenges
(54:33) US-China AI Chip Dynamics
(1:00:54) Amazon's Nova Forge and Enterprise AI
(1:03:38) End of Year Reflections and Predictions
With your co-hosts:
@jacobeffron
- Partner at Redpoint, Former PM Flatiron Health
@patrickachase
- Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia
- Former COO GitHub, Founder Bitnami (acquired by VMware)
@jordan_segall
- Partner at Redpoint
87 episodes