Unpacking METR’s findings: Does AI slow developers down?
In this episode of the Engineering Enablement podcast, host Abi Noda is joined by Quentin Anthony, Head of Model Training at Zyphra and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.
Where to find Quentin Anthony:
• LinkedIn: https://www.linkedin.com/in/quentin-anthony/
• X: https://x.com/QuentinAnthon15
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
In this episode, we cover:
(00:00) Intro
(01:32) A brief overview of Quentin’s background and current work
(02:05) An explanation of METR and the study Quentin participated in
(11:02) Surprising results of the METR study
(12:47) Quentin’s takeaways from the study’s results
(16:30) How developers can avoid bloated codebases through self-reflection
(19:31) Signs that you’re not making progress with a model
(21:25) What is “context rot”?
(23:04) Advice for combating context rot
(25:34) How to make the most of your idle time as a developer
(28:13) Developer hygiene: the case for selectively using AI tools
(33:28) How to interact effectively with new models
(35:28) Why organizations should focus on tasks that AI handles well
(38:01) Where AI fits in the software development lifecycle
(39:40) How to approach testing with models
(40:31) What makes models different
(42:05) Quentin’s thoughts on agents
Referenced:
- DX Core 4 Productivity Framework
- Zyphra
- EleutherAI
- METR
- Cursor
- Claude
- LibreChat
- Google Gemini
- Introducing OpenAI o3 and o4-mini
- METR’s study on how AI affects developer productivity
- Quentin Anthony on X: "I was one of the 16 devs in this study."
- Context rot from Hacker News
- Tracing the thoughts of a large language model
- Kimi
- Grok 4 | xAI