ChatGPT Leak Panic | Workday AI Lawsuit Escalates | Life Denied by Algorithm | AI Hiring Done Right

Happy Friday, everyone! This week’s update is heavily shaped by you. After some recent feedback, I’m working to be intentional about highlighting not just the risks of AI, but also examples of real wins I’m involved in. Amidst all the dystopian noise, I want people to know it’s possible for AI to help people, not just hurt them. You’ll see that in the final segment, which I’ll try to include each week moving forward.

Oh, and one of this week’s stories? It came directly from a listener who shared how an AI system nearly wrecked their life. It’s a powerful reminder that what we talk about here isn’t just theory; it’s affecting real people, right now.

Now, all four updates this week deal with the tension between moving fast and being responsible. Each one emphasizes the importance of being intentional about how we handle power, pressure, and people in the age of AI.

With that, let’s get into it.

ChatGPT Didn’t Leak Your Private Conversations, But the Panic Reveals a Bigger Problem

You probably saw the headlines: “ChatGPT conversations showing up in Google search!” The truth? It wasn’t a breach, at least not in the way you might think. It was a case of people moving too fast, not reading the fine print, and accidentally sharing public links. I break down what really happened, why OpenAI shut the feature down, and what this teaches us about the cultural costs of speed over discernment.

Workday’s AI Hiring Lawsuit Just Took a Big Turn

Workday’s already in court for alleged bias in its hiring AI, but now the judge wants a full list of every company that used it. Ruh-Roh George! This isn’t just a vendor issue anymore. I unpack how this sets a new legal precedent, what it means for enterprise leaders, and why blindly trusting software could drag your company into consequences you didn’t see coming.

How AI Nearly Cost One Man His Life-Saving Medication

A listener shared a personal story about how an AI system denied his long-standing prescription with zero human context. Guess what saved it? A wave of people stepped in. It’s a chilling example of what happens when algorithms make life-and-death decisions without context, compassion, or recourse. I explore what this reveals about system design, bias, and the irreplaceable value of human community.

Yes, AI Can Improve Hiring; Here’s a Story Where It Did

In keeping with that commitment, I want to end with a win. I share a project I worked on where AI actually helped more people get hired by identifying overlooked talent and recommending better-fit roles. It didn’t replace people; it empowered them. I walk through how we designed it, what made it work, and why this kind of human-centered AI is not only possible but necessary.

If this episode was helpful, would you share it with someone? Leave a rating, drop a comment with your thoughts, and follow for future updates that go beyond the headlines and help you lead with clarity in the AI age. And, if you’d take me out for a coffee to say thanks, you can do that here: https://www.buymeacoffee.com/christopherlind

Show Notes:

In this Weekly Update, Christopher Lind unpacks four timely stories at the intersection of AI, business, leadership, and human experience. He opens by setting the record straight on the so-called ChatGPT leak, then covers a new twist in Workday’s AI lawsuit that could change how companies are held liable. Next, he shares a listener’s powerful story about healthcare denied by AI and how community turned the tide. Finally, he wraps with a rare AI hiring success story, one that highlights how thoughtful design can lead to better outcomes for everyone involved.

Timestamps:

00:00 – Introduction

01:24 – Episode Overview

02:58 – The ChatGPT Public Link Panic

12:39 – Workday’s AI Hiring Lawsuit Escalates

25:01 – AI Denies Critical Medication

35:53 – AI Success in Recruiting Done Right

45:02 – Final Thoughts and Wrap-Up

#AIethics #AIharm #DigitalLeadership #HiringAI #HumanCenteredAI #FutureOfWork
