Content provided by Chris Madden. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Chris Madden or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
CoreWeave Acquired OpenPipe — Kyle Corbitt on Reinforcement Learning & Reliable AI Agents | E150

21:37
 
CoreWeave just announced its acquisition of OpenPipe — a pivotal moment for reinforcement learning and reliable AI agents. Let’s take a step back and watch Kyle Corbitt, Co-founder and CEO of OpenPipe, talk about how reinforcement learning turns prototypes into production-ready systems. In this exclusive Imagine AI Live 25 talk, Kyle explains the “why, when, and how” of RL, walks through a case study of building an email assistant that outperformed frontier models, and shares lessons learned from designing environments and reward functions. With OpenPipe now joining forces with CoreWeave, the AI Hyperscaler™, the mission to scale reliable reinforcement learning is accelerating. Read the full announcement here.

(0:00) Introduction to OpenPipe and Reinforcement Learning
(0:38) The Steps to Training a Reliable Agent
(1:19) What is Reinforcement Learning?
(2:07) Why, When, and How to Use Reinforcement Learning
(3:30) How the Email Agent Works
(5:26) Initial Performance and Baselines
(7:42) Is Reinforcement Learning Practical?
(9:11) The First Rule of Fine-Tuning a Model
(10:11) When to Adopt Reinforcement Learning
(10:48) The Two Hard Problems of Reinforcement Learning
(11:14) Problem 1: Building a Realistic Environment
(13:38) Problem 2: The Reward Function
(15:36) The Training Loop
(16:47) Bonus: Optimizing for More Than Accuracy
(18:16) Guardrails: Dealing with Reward Hacking
(20:00) The Takeaway: Expanding the Envelope
(20:40) Final Thoughts and Q&A
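For orientation on the three pieces the chapter list names (a realistic environment, a reward function, and a training loop), here is a minimal, self-contained sketch in Python. It is a hypothetical toy, not OpenPipe's implementation: the email-labeling environment, the reward_fn, and the REINFORCE-style update are all illustrative stand-ins for the much richer setups the talk describes.

```python
import math
import random

# Toy "environment": the agent must label each email correctly.
# (Hypothetical stand-in; a realistic environment would replay actual
# inbox data, tool calls, and multi-step interactions.)
EXAMPLES = [
    ("Invoice #4821 attached, please pay by Friday", "finance"),
    ("Team offsite moved to Thursday", "scheduling"),
    ("Your package has shipped", "notifications"),
]
ACTIONS = ["finance", "scheduling", "notifications"]

def reward_fn(action, truth):
    # Reward function: 1.0 for the right label, else 0.0. Designing rewards
    # an agent cannot game (reward hacking) is the second hard problem.
    return 1.0 if action == truth else 0.0

# Policy: a softmax over actions for each email, parameterized by logits.
logits = {msg: {a: 0.0 for a in ACTIONS} for msg, _ in EXAMPLES}

def policy(msg):
    z = logits[msg]
    m = max(z.values())
    exp = {a: math.exp(v - m) for a, v in z.items()}
    total = sum(exp.values())
    return {a: e / total for a, e in exp.items()}

# Training loop: sample an action, score it with the reward function,
# and nudge the policy toward higher-reward actions (REINFORCE update).
LR, BASELINE = 0.5, 0.5
for step in range(200):
    msg, truth = random.choice(EXAMPLES)
    probs = policy(msg)
    action = random.choices(ACTIONS, weights=[probs[a] for a in ACTIONS])[0]
    advantage = reward_fn(action, truth) - BASELINE  # baseline cuts variance
    for a in ACTIONS:
        grad = (1.0 if a == action else 0.0) - probs[a]
        logits[msg][a] += LR * advantage * grad

for msg, truth in EXAMPLES:
    probs = policy(msg)
    print(msg[:30], "->", max(probs, key=probs.get), "(truth:", truth + ")")
```

Even at this tiny scale, the talk's two hard problems are visible: EXAMPLES stands in for building a realistic environment, and reward_fn is the component an agent will exploit if it is poorly specified.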

153 episodes

