AI Inequality EXPOSED: When Algorithms Fail | CXOTalk #882
In episode 882 of CXOTalk, Michael Krigsman sits down with Kevin De Liban, founder of TechTonic Justice and former legal aid attorney, who reveals the shocking truth about AI's impact on vulnerable communities.
De Liban explains how critical life decisions for 92 million low-income Americans, including healthcare, housing, employment, and government benefits, are now determined by algorithms that often fail catastrophically.
Drawing from his groundbreaking 2016 legal victory in Arkansas, where he successfully challenged an algorithm that devastated the lives of disabled Medicaid recipients, De Liban exposes the myth of AI neutrality and demonstrates how these systems reflect the biases and incentives of their creators. He explains why self-regulation and "ethical AI" initiatives often fail when they conflict with business interests, and why effective regulation is crucial.
What you'll learn:
- The real scale of AI harm affecting 92 million Americans
- Why AI systems aren't neutral decision-making tools
- How algorithms denied healthcare to disabled people in Arkansas
- Why ethical AI initiatives fail without enforceable accountability
- Practical steps technology leaders can take to prevent harm
- The expansion of AI monitoring into middle-class professions
- Why regulation benefits ethical companies
- How to determine if AI is appropriate for high-stakes decisions
This conversation challenges conventional wisdom about AI adoption and offers essential guidance for executives, developers, and policymakers navigating the intersection of technological innovation and social responsibility.
Whether you're a C-suite executive, technology professional, policymaker, or concerned citizen, this discussion provides crucial insights into one of the most pressing issues of our time: ensuring AI serves humanity rather than harming those who need protection most.
=====
Subscribe to CXOTalk: www.cxotalk.com/newsletter
Read the episode summary and transcript: www.cxotalk.com/episode/ai-failure-injustice-inequality-and-algorithms
Learn more about TechTonic Justice: www.techtonicjustice.org