Content provided by Hugo Bowne-Anderson. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Hugo Bowne-Anderson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Episode 50: A Field Guide to Rapidly Improving AI Products -- With Hamel Husain

27:42
 
Manage episode 489221120 series 3317544
If we want AI systems that actually work, we need to get much better at evaluating them, not just building more pipelines, agents, and frameworks.

In this episode, Hugo talks with Hamel Husain (ex-Airbnb, GitHub, DataRobot) about how teams can improve AI products by focusing on error analysis, data inspection, and systematic iteration. The conversation is based on Hamel’s blog post A Field Guide to Rapidly Improving AI Products, which he joined Hugo’s class to discuss.

They cover:
🔍 Why most teams struggle to measure whether their systems are actually improving
📊 How error analysis helps you prioritize what to fix (and when to write evals)
🧮 Why evaluation isn’t just a metric — but a full development process
⚠️ Common mistakes when debugging LLM and agent systems
🛠️ How to think about the tradeoffs in adding more evals vs. fixing obvious issues
👥 Why enabling domain experts — not just engineers — can accelerate iteration

If you’ve ever built an AI system and found yourself unsure how to make it better, this conversation is for you.

LINKS

🎓 Learn more:
📺 Watch the video version on YouTube: YouTube link

50 episodes
