Building Moral AI with Jana Schaich Borg

1:22:03

How Do You Build a Moral AI? with Jana Schaich Borg

In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Jana Schaich Borg, Associate Research Professor at Duke University and co-author of the book “Moral AI: And How We Get There”. Together they explore one of the thorniest and most important questions of the AI age: how do you encode human morality into machines, and should you even try?

Drawing on neuroscience, philosophy, and machine learning, Jana walks us through bottom-up and top-down approaches to moral alignment, why current models fall short, and how her team’s hybrid framework may offer a better path. Along the way, they dive into the messy nature of human values, the challenges of practicing AI ethics inside organizations, and how AI could help us become more moral, not just more efficient.

This conversation blends practical tools with philosophical inquiry and leaves us with a cautiously hopeful perspective: that we can, and should, teach machines to care.

Topics Covered:

  • What AI alignment really means (and why it’s so hard)

  • Bottom-up vs. top-down moral AI systems

  • How organizations get ethical AI wrong—and what to do instead

  • The messy reality of human values and decision making

  • Translational ethics and the need for AI KPIs

  • Personalizing AI to match your values

  • When moral self-reflection becomes a design feature

Timestamps:

00:00  Intro: AI Alignment — Mission Impossible?
04:00  Why Moral AI Is So Hard (and Necessary)
07:00  The “Spec” Story & Reinforcement Gone Wrong
10:00  Anthropomorphizing AI — Helpful or Misleading?
12:00  Introducing Jana & the Moral AI Project
15:00  What “Moral AI” Really Means
18:00  Interdisciplinary Collaboration (and Friction)
21:00  Bottom-Up vs. Top-Down Approaches
27:00  Why Human Morality Is Messy
31:00  Building a Hybrid Moral AI System
41:00  Case Study: Kidney Donation Decisions
47:00  From Models to Moral Reflection
52:00  Embedding Ethics Inside Organizations
56:00  Moral Growth Mindset & Training the Workforce
01:03:00  Why Trust & Culture Matter Most
01:06:00  Comparing AI Labs: OpenAI vs. Anthropic vs. Meta
01:10:00  What We Still Don’t Know
01:11:00  Quickfire: To AI or Not To AI
01:16:00  Jana’s Most Controversial Take
01:19:00  Can AI Make Us Better Humans?

🎧 Like this episode? Share it with a friend or leave us a review to help others discover the show.

