Ethics and Responsibility from 30,000 Feet

57:29

Are we ready to let AI drive humanitarian solutions, or are we rushing toward an ethical disaster? In this episode of Humanitarian Frontiers in AI, host Chris Hoffman is joined by AI experts Emily Springer, Mala Kumar, and Suzy Madigan to tackle the pressing question of who is accountable when AI systems cause harm, and how to ensure that AI truly serves those who need it most. Together, they discuss the difference between AI ethics and responsible AI, the dangers of rushing AI pilots, the importance of AI literacy, and the need for inclusive, participatory AI systems that prioritize community wellbeing over box-ticking for compliance. Emily, Mala, and Suzy also emphasize the importance of collaborating with the Global South and address the funding gaps that typically hinder progress. The panel argues that slowing down is crucial to building the infrastructure, governance, and ethical frameworks needed to ensure AI delivers sustainable and equitable impact. Tune in for a thought-provoking conversation on balancing innovation with responsibility and shaping AI as a force for good in humanitarian action!

Key Points From This Episode:

  • Responsible AI versus AI ethics and the importance of operationalizing ethical principles.
  • The divide between AI for compliance (negative rights) and AI for social good (positive rights).
  • CARE’s research advocating for “participatory AI” that centers voices from the Global South.
  • Challenges of troubleshooting AI failures and organizations’ insufficient readiness for AI’s technical demands.
  • The need for AI literacy, funding for holistic builds, and a cultural shift in understanding AI.
  • Avoiding “participation-washing” in AI and raising the standard for meaningful inclusion.
  • Ensuring proper due diligence through collaborative design and authentic engagement.
  • Why it’s essential to slow down and prioritize responsibility before rushing AI implementation.
  • The question of who is responsible for halting AI deployment until systems are ready.
  • Balancing global standards with localized needs: the value of a context-sensitive approach.
  • Building infrastructure for the future: a focus on foundational technology, not one-off solutions.
  • What goes into navigating AI in a geopolitically diverse and rapidly changing world.

Links Mentioned in Today’s Episode:

Emily Springer on LinkedIn

Emily Springer Advisory

The Inclusive AI Lab by Emily Springer

Mala Kumar

Mala Kumar on LinkedIn

MLCommons

Suzy Madigan on LinkedIn

Suzy Madigan on X

The Machine Race by Suzy Madigan

FCDO Call for Humanitarian Action and Responsible AI Research

MLCommons AI Safety Benchmark

‘Collective Constitutional AI: Aligning a Language Model with Public Input’

Nasim Motalebi

Nasim Motalebi on LinkedIn

Chris Hoffman on LinkedIn
