AI Safety: Constitutional AI vs Human Feedback
With great power comes great responsibility. How do leading AI companies implement safety and ethics as language models scale? OpenAI uses its Model Spec combined with RLHF (Reinforcement Learning from Human Feedback), while Anthropic uses Constitutional AI. We compare these technical approaches to maximizing usefulness while minimizing harm. Solo episode on AI alignment.
REFERENCE
OpenAI Model Spec
https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview
Anthropic Constitutional AI
https://www.anthropic.com/news/claudes-constitution
To stay in touch, sign up for our newsletter at https://www.superprompt.fm