#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more
A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistle-stop tour of the most important developments we've seen over the last year!
We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more!