AI and the Future of Identity - Episode 2: The Hidden Cameras Reading Your Emotions: Biometric AI Surveillance Explained

34:02
 
Content provided by JR DeLaney. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by JR DeLaney or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

AI systems are reading your face right now—in stores, schools, workplaces, and airports. But can they really detect your emotions? And should they?

Episode 2 of our AI & The Future of Identity series explores emotion recognition AI and biometric surveillance. We examine how these systems work, where they're deployed, and why experts are sounding alarms about accuracy, bias, and privacy.

WHAT WE COVER:

• How emotion recognition AI analyzes facial expressions, vocal tone, and body language to predict emotional states

• Why the science is controversial—research shows emotional expressions aren't universal across cultures

• Real-world applications: Walmart checkout cameras, Amazon warehouse monitoring, HireVue job interviews, online exam proctoring

• Discrimination risks for neurodivergent individuals, different cultures, and marginalized communities

• Workplace surveillance and the erosion of employee privacy

• Law enforcement use and the dangers of automated guilt detection

• Beneficial applications in mental health screening and accessibility technology

• Current regulations: EU AI Act, US city bans, and the gaps that remain

• What you can do to protect your emotional data and demand transparency

FEATURED INSIGHTS FROM:

• Satya Nadella (Microsoft CEO) on responsible AI development

• Meredith Whittaker (Signal President) on algorithmic bias

• Shoshana Zuboff (Harvard Business School) on surveillance capitalism

• Dr. Lisa Feldman Barrett's groundbreaking emotion research

NEXT EPISODE: Culture vs. Code - How AI threatens and preserves cultural identity

Subscribe now!

#AIInnovationsUnleashed #EmotionAI #BiometricSurveillance #AIPrivacy #TechEthics

