Architecting AI Agents: The Shift from Models to Systems | Aishwarya Srinivasan, Fireworks AI Head of AI Developer Relations

Most AI agents are built backwards, starting with models instead of system architecture.

Aishwarya Srinivasan, Head of AI Developer Relations at Fireworks AI, joins host Conor Bronsdon to explain the shift required to build reliable agents: stop treating them as model problems and start architecting them as complete software systems. Benchmarks alone won't save you.

Aish breaks down the evolution from prompt engineering to context engineering, revealing how production agents demand careful orchestration of multiple models, memory systems, and tool calls. She shares battle-tested insights on evaluation-driven development, the rise of open source models like DeepSeek v3, and practical strategies for managing autonomy with human-in-the-loop systems. The conversation addresses critical production challenges, ranging from LLM-as-judge techniques to navigating compliance in regulated environments.

Connect with Aishwarya Srinivasan:

LinkedIn: https://www.linkedin.com/in/aishwarya-srinivasan/

Instagram: https://www.instagram.com/the.datascience.gal/

Connect with Conor: https://www.linkedin.com/in/conorbronsdon/

00:00 Intro — Welcome to Chain of Thought

00:22 Guest Intro — Aishwarya Srinivasan of Fireworks AI

02:37 The Challenge of Responsible AI

05:44 The Hidden Risks of Reward Hacking

07:22 From Prompt to Context Engineering

10:14 Data Quality and Human Feedback

14:43 Quantifying Trust and Observability

20:27 Evaluation-Driven Development

30:10 Open-Source Models vs. Proprietary Systems

34:56 Gaps in the Open-Source AI Stack

38:45 When to Use Different Models

45:36 Governance and Compliance in AI Systems

50:11 The Future of AI Builders

56:00 Closing Thoughts & Follow Aishwarya Online

Follow the hosts

Follow Atin

Follow Conor

Follow Vikram

Follow Yash
