Patronus AI with Anand Kannappan - Weaviate Podcast #122!

Duration: 1:01:06
 
AI agents are getting more complex and harder to debug. How do you know what's happening when your agent makes 20+ function calls? What if you have a multi-agent system orchestrating several agents? Anand Kannappan, co-founder of Patronus AI, reveals how their groundbreaking tool Percival transforms agent debugging and evaluation: Percival can instantly analyze complex agent traces, pinpoint failures across 60 different failure modes, and automatically suggest prompt fixes to improve performance.

Anand unpacks several of these common failure modes, including the critical challenge of "context explosion," where agents process millions of tokens, as well as domain adaptation for specific use cases and the complexity of multi-agent orchestration. The paradigm of AI evals is shifting from static evaluation to dynamic oversight! You'll also learn how Percival's memory architecture leverages both episodic and semantic knowledge with Weaviate.

This conversation explores powerful concepts like process vs. outcome rewards and LLM-as-judge approaches, and Anand shares his vision for "agentic supervision," where equally capable AI systems provide oversight for complex agent workflows. Whether you're building AI agents, evaluating LLM systems, or curious about how debugging autonomous systems will evolve, this episode delivers concrete techniques, philosophical insights on evaluation, and a roadmap for how evaluation must transform to keep pace with increasingly autonomous AI systems.
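As a rough illustration of the episodic/semantic memory idea discussed in the episode, here is a minimal sketch of how an agent-debugging tool could store past failure analyses in Weaviate and retrieve similar ones when triaging a new trace. It uses the Weaviate Python client (v4); the collection name, properties, vectorizer, and sample data are assumptions for illustration, not Percival's actual schema.

```python
# Minimal sketch (assumptions, not Percival's schema): store "episodic" memories
# of past agent failures in Weaviate and retrieve similar ones for a new trace.
import weaviate
import weaviate.classes as wvc

client = weaviate.connect_to_local()  # assumes a Weaviate instance running locally

# One collection for episodic memories: concrete past failures, one per trace.
client.collections.create(
    name="EpisodicMemory",
    properties=[
        wvc.config.Property(name="trace_id", data_type=wvc.config.DataType.TEXT),
        wvc.config.Property(name="failure_mode", data_type=wvc.config.DataType.TEXT),
        wvc.config.Property(name="summary", data_type=wvc.config.DataType.TEXT),
        wvc.config.Property(name="suggested_fix", data_type=wvc.config.DataType.TEXT),
    ],
    vectorizer_config=wvc.config.Configure.Vectorizer.text2vec_openai(),
)

episodic = client.collections.get("EpisodicMemory")
episodic.data.insert({
    "trace_id": "trace-0042",
    "failure_mode": "context_explosion",
    "summary": "Planner kept appending full tool outputs; prompt grew past the context window.",
    "suggested_fix": "Summarize tool outputs before adding them to the planner prompt.",
})

# When a new trace fails, look up semantically similar past failures.
hits = episodic.query.near_text(
    query="agent prompt grew too large after many tool calls",
    limit=3,
)
for obj in hits.objects:
    print(obj.properties["failure_mode"], "->", obj.properties["suggested_fix"])

client.close()
```

In a fuller version, a second collection could hold distilled, reusable failure patterns (the "semantic" layer) while the episodic collection keeps raw per-trace evidence; the split above is only meant to mirror the episodic/semantic distinction Anand describes.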


123 episodes
