Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann

Duration: 1:14:59
 
Benjamin Mann is a co-founder of Anthropic, an AI startup dedicated to building aligned, safety-first AI systems. Prior to Anthropic, Ben was one of the architects of GPT-3 at OpenAI. He left OpenAI driven by the mission to ensure that AI benefits humanity. In this episode, Ben opens up about the accelerating progress in AI and the urgent need to steer it responsibly.

In this conversation, we discuss:

1. The inside story of leaving OpenAI with the entire safety team to start Anthropic

2. How Meta’s $100M offers reveal the true market price of top AI talent

3. Why AI progress is still accelerating (not plateauing), and how most people misjudge the exponential

4. Ben’s “economic Turing test” for knowing when we’ve achieved AGI—and why it’s likely coming by 2027-2028

5. Why he believes 20% unemployment is inevitable

6. The AI nightmare scenarios that concern him most—and how he believes we can still avoid them

7. How focusing on AI safety created Claude’s beloved personality

8. What three skills he’s teaching his kids instead of traditional academics

Brought to you by:

Sauce—Turn customer pain into product revenue: https://sauce.app/lenny

LucidLink—Real-time cloud storage for teams: https://www.lucidlink.com/lenny

Fin—The #1 AI agent for customer service: https://fin.ai/lenny

Transcript: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/168107911/my-biggest-takeaways-from-this-conversation

Where to find Ben Mann:

• X: https://x.com/8enmann

• LinkedIn: https://www.linkedin.com/in/benjamin-mann/

• Website: https://benjmann.net/

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• X: https://twitter.com/lennysan

• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Introduction to Benjamin

(04:43) The AI talent war

(06:28) AI progress and scaling laws

(10:50) Defining AGI and the economic Turing test

(12:26) The impact of AI on jobs

(17:45) Preparing for an AI future

(24:05) Founding Anthropic

(27:06) Balancing AI safety and progress

(29:10) Constitutional AI and model alignment

(34:21) The importance of AI safety

(43:40) The risks of autonomous agents

(45:40) Forecasting superintelligence

(48:36) How hard is it to align AI?

(53:19) Reinforcement learning from AI feedback (RLAIF)

(57:03) AI’s biggest bottlenecks

(01:00:11) Personal reflections on responsibilities

(01:02:36) Anthropic’s growth and innovations

(01:07:48) Lightning round and final thoughts

Referenced:

• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/

• Anthropic CEO: AI Could Wipe Out 50% of Entry-Level White Collar Jobs: https://www.marketingaiinstitute.com/blog/dario-amodei-ai-entry-level-jobs

• Alexa+: https://www.amazon.com/dp/B0DCCNHWV5

• Azure: https://azure.microsoft.com/

• Sam Altman on X: https://x.com/sama

• Opus 3: https://www.anthropic.com/news/claude-3-family

• Claude’s Constitution: https://www.anthropic.com/news/claudes-constitution

• Greg Brockman on X: https://x.com/gdb

• Anthropic’s Responsible Scaling Policy: https://www.anthropic.com/news/anthropics-responsible-scaling-policy

• Agentic Misalignment: How LLMs could be insider threats: https://www.anthropic.com/research/agentic-misalignment

• Anthropic’s CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next

• AI prompt engineering in 2025: What works and what doesn’t | Sander Schulhoff (Learn Prompting, HackAPrompt): https://www.lennysnewsletter.com/p/ai-prompt-engineering-in-2025-sander-schulhoff

• Unitree: https://www.unitree.com/

• Arthur C. Clarke: https://en.wikipedia.org/wiki/Arthur_C._Clarke

• How Reinforcement Learning from AI Feedback Works: https://www.assemblyai.com/blog/how-reinforcement-learning-from-ai-feedback-works

• RLHF: https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback

• Jared Kaplan on LinkedIn: https://www.linkedin.com/in/jared-kaplan-645843213/

• Moore’s law: https://en.wikipedia.org/wiki/Moore%27s_law

• Machine Intelligence Research Institute: https://intelligence.org/

• Raph Lee on LinkedIn: https://www.linkedin.com/in/raphaeltlee/

• “The Last Question”: https://en.wikipedia.org/wiki/The_Last_Question

• Beth Barnes on LinkedIn: https://www.linkedin.com/in/elizabethmbarnes/

• Good Strategy, Bad Strategy | Richard Rumelt: https://www.lennysnewsletter.com/p/good-strategy-bad-strategy-richard

• Pantheon on Netflix: https://www.netflix.com/title/81937398

• Ted Lasso on Apple TV+: https://tv.apple.com/us/show/ted-lasso/umc.cmc.vtoh0mn0xn7t3c643xqonfzy

• Kurzgesagt—In a Nutshell: https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q

• 5 tips to poop like a champion: https://8enmann.medium.com/5-tips-to-poop-like-a-champion-3292481a9651

Recommended books:

• Superintelligence: Paths, Dangers, Strategies: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834

• The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics: https://www.amazon.com/Hacker-State-Attacks-Normal-Geopolitics/dp/0674987551

• Replacing Guilt: Minding Our Way: https://www.amazon.com/Replacing-Guilt-Minding-Our-Way/dp/B086FTSB3Q

• Good Strategy/Bad Strategy: The Difference and Why It Matters: https://www.amazon.com/Good-Strategy-Bad-Difference-Matters/dp/0307886239

• The Alignment Problem: Machine Learning and Human Values: https://www.amazon.com/Alignment-Problem-Machine-Learning-Values/dp/0393635821

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Lenny may be an investor in the companies discussed.


To hear more, visit www.lennysnewsletter.com