Content provided by Hugo Bowne-Anderson. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Hugo Bowne-Anderson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Episode 45: Your AI application is broken. Here’s what to do about it.
Too many teams are building AI applications without truly understanding why their models fail. Instead of jumping straight to LLM evaluations, dashboards, or vibe checks, how do you actually fix a broken AI app?
In this episode, Hugo speaks with Hamel Husain, longtime ML engineer, open-source contributor, and consultant, about why debugging generative AI systems starts with looking at your data.
In this episode, we dive into:
Why “look at your data” is the best debugging advice no one follows.
How spreadsheet-based error analysis can uncover failure modes faster than complex dashboards.
The role of synthetic data in bootstrapping evaluation.
When to trust LLM judges—and when they’re misleading.
Why most AI dashboards measuring truthfulness, helpfulness, and conciseness are often a waste of time.
If you're building AI-powered applications, this episode will change how you approach debugging, iteration, and improving model performance in production.
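To make the "spreadsheet-based error analysis" idea concrete, here is a minimal sketch of the workflow the episode describes: pull production traces, annotate each one by hand with a free-text failure note, then tally the notes to see where to focus. The logs and failure labels below are invented for illustration.

```python
from collections import Counter

# Hypothetical interaction logs: (user input, model output) pairs
# pulled from production traces.
logs = [
    ("Summarize this contract", "The contract states ..."),
    ("What's our refund policy?", "Refunds are issued within 90 days."),
    ("Translate this to French", "Bonjour!"),
]

# Manual error analysis: read every trace and jot a short note;
# concrete failure categories emerge as you annotate.
annotations = ["ok", "hallucinated_policy", "ignored_instruction"]

# Tallying the notes tells you which failure mode to fix first --
# often more informative than a dashboard of generic metrics.
failure_counts = Counter(a for a in annotations if a != "ok")
print(failure_counts.most_common())
```

The point is not the tooling: a spreadsheet with one row per trace and one column of hand-written failure notes does the same job.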
LINKS
The podcast livestream on YouTube (https://youtube.com/live/Vz4--82M2_0?feature=share)
Hamel's blog (https://hamel.dev/)
Hamel on Twitter (https://x.com/HamelHusain)
Hugo on Twitter (https://x.com/hugobowne)
Vanishing Gradients on Twitter (https://x.com/vanishingdata)
Vanishing Gradients on YouTube (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
Vanishing Gradients on Lu.ma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Building LLM Applications for Data Scientists and Software Engineers, Hugo's course on Maven (use code VG25 for 25% off) (https://maven.com/s/course/d56067f338)
Hugo is also running a free lightning lesson next week on LLM Agents: When to Use Them (and When Not To) (https://maven.com/p/ed7a72/llm-agents-when-to-use-them-and-when-not-to?utm_medium=ll_share_link&utm_source=instructor)
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hugobowne.substack.com