AI Inequality EXPOSED: When Algorithms Fail | CXOTalk #882

Duration: 56:06
 
Content provided by Michael Krigsman. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Michael Krigsman or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

In episode 882 of CXOTalk, Michael Krigsman sits down with Kevin De Liban, founder of TechTonic Justice and former legal aid attorney, who reveals the shocking truth about AI's impact on vulnerable communities.

De Liban explains how critical life decisions for 92 million low-income Americans, including healthcare, housing, employment, and government benefits, are now determined by algorithms that often fail catastrophically.

Drawing from his groundbreaking 2016 legal victory in Arkansas, where he successfully challenged an algorithm that devastated the lives of disabled Medicaid recipients, De Liban exposes the myth of AI neutrality and demonstrates how these systems reflect the biases and incentives of their creators. He explains why self-regulation and "ethical AI" initiatives often fail when they conflict with business interests, and why effective regulation is crucial.

What you'll learn:

  • The real scale of AI harm affecting 92 million Americans
  • Why AI systems aren't neutral decision-making tools
  • How algorithms denied healthcare to disabled people in Arkansas
  • Why ethical AI initiatives fail without enforceable accountability
  • Practical steps technology leaders can take to prevent harm
  • The expansion of AI monitoring into middle-class professions
  • Why regulation benefits ethical companies
  • How to determine if AI is appropriate for high-stakes decisions

This conversation challenges conventional wisdom about AI adoption and offers essential guidance for executives, developers, and policymakers navigating the intersection of technological innovation and social responsibility.

Whether you're a C-suite executive, technology professional, policymaker, or concerned citizen, this discussion provides crucial insights into one of the most pressing issues of our time: ensuring AI serves humanity rather than harming those who need protection most.

=====

Subscribe to CXOTalk: www.cxotalk.com/newsletter

Read the episode summary and transcript: www.cxotalk.com/episode/ai-failure-injustice-inequality-and-algorithms

Learn more about TechTonic Justice: www.techtonicjustice.org
