Content provided by Jodi and Justin Daniels and Justin Daniels. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Jodi and Justin Daniels and Justin Daniels or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Privacy in the Loop: Why Human Training Is AI’s Greatest Weakness and Strength

Episode length: 28:22
 
Nick Oldham is the Chief Operations Officer, USIS, and Global Chief Risk, Privacy and Compliance Officer at Equifax Inc. A forward-thinking legal and operations executive, Nick has a proven track record of driving large-scale transformations by integrating legal expertise with strategic operational leadership. He oversees all enterprise-wide second-line functions, leading initiatives to embed AI, enable data-driven decision-making, and deliver innovative, compliant solutions across a $1.9B business unit. His focus is on building efficient, scalable systems that align with both compliance standards and long-term strategic goals.

In this episode…

Many companies are rushing to adopt AI tools without adequately training their workforce to use them responsibly. As AI becomes embedded in daily business operations, the biggest risk isn’t the technology itself but the lack of human understanding of how AI works and what it can do. When teams can’t tell machine learning apart from generative AI, that confusion creates risk and makes it harder to establish appropriate privacy and security guardrails. Human training is AI’s greatest weakness and strength, and closing that understanding gap means rethinking how companies educate and train employees at every level.

The responsible use of AI depends on human judgment. Companies need to embed privacy education, critical thinking, and AI risk awareness into training programs from the start. Employees should be taught how to ask questions, evaluate model behavior, and recognize when personal information is being misused. AI literacy should also extend beyond the workplace. Introducing it in high school or even earlier helps prepare future professionals to navigate complex AI tools and make thoughtful, responsible decisions.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Nick Oldham, Chief Operations Officer, USIS, and Global Chief Risk, Privacy and Compliance Officer at Equifax, about the role of human training in AI literacy. Nick breaks down the components of AI literacy, explains why everyone needs a foundational understanding, and emphasizes the importance of prioritizing privacy awareness when using AI tools. He also highlights ways to embed privacy and security into AI governance programs and provides actionable steps organizations can take to strengthen AI literacy across teams.
