Data Poisoning | Episode 31

31:20
 

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

https://poweredbybhis.com

Data Poisoning Attacks | Episode 31
In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.

We break down:

  • What data poisoning is and why it matters
  • How attackers inject malicious samples or flip labels in training sets (a minimal sketch follows this list)
  • The role of open-source repositories like Hugging Face in supply chain risk
  • New twists for LLMs: poisoning via reinforcement feedback and RAG
  • Real-world concerns like bias in ChatGPT and malicious model uploads
  • Defensive strategies: governance, provenance, versioning, and security assessments
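
The label-flipping attack in the second bullet is concrete enough to demo in a few lines. Below is a minimal sketch (ours, not from the episode) using scikit-learn on a synthetic dataset: flip a growing fraction of training labels and watch test accuracy slide toward a coin flip. All sizes, seeds, and names are illustrative.

  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # Synthetic binary classification task standing in for a real training set.
  X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  rng = np.random.default_rng(0)
  for frac in [0.0, 0.1, 0.3, 0.5]:  # fraction of training labels the attacker flips
      y_poisoned = y_train.copy()
      idx = rng.choice(len(y_poisoned), size=int(frac * len(y_poisoned)), replace=False)
      y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1 on the chosen samples
      model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
      print(f"{frac:.0%} poisoned -> test accuracy {model.score(X_test, y_test):.3f}")

Note that the attacker never touches the model or the training code, only the labels the model learns from, which is exactly why provenance controls on data matter.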

Whether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in and what you can do to prevent it. Treat your AI like a “drunk intern”: verify everything.
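
“Verify everything” can start with something as blunt as a checksum: record a digest when a dataset is first vetted, and refuse to train if it ever changes. Here is a minimal sketch using only Python’s standard library; the filename and pinned digest are hypothetical placeholders, not from the episode.

  import hashlib
  import sys

  DATA_FILE = "train.csv"  # hypothetical dataset path
  PINNED_SHA256 = "replace-with-the-digest-recorded-at-vetting-time"

  def sha256_of(path: str) -> str:
      # Stream the file in 1 MiB chunks so large datasets hash without loading into RAM.
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  digest = sha256_of(DATA_FILE)
  if digest != PINNED_SHA256:
      sys.exit(f"Refusing to train: {DATA_FILE} digest {digest} does not match pinned value")
  print("Dataset digest verified; safe to proceed.")

Dataset versioning tools, and pinning an exact revision when pulling models or data from Hugging Face, extend the same idea: training should fail loudly whenever inputs drift from what was reviewed.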

#AISecurity #DataPoisoning #Cybersecurity #BHIS #LLMSecurity #AIThreats

Brought to you by Black Hills Information Security

https://www.blackhillsinfosec.com

----------------------------------------------------------------------------------------------

Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

  • (00:00) - Intro & Sponsor Shoutouts
  • (01:19) - What Is Data Poisoning?
  • (03:58) - Poisoning Classifier Models
  • (08:10) - Risks in Open-Source Data Sets
  • (12:30) - LLM-Specific Poisoning Vectors
  • (17:04) - RAG and Context Injection
  • (21:25) - Realistic Threats & Examples
  • (25:48) - Defensive Strategies & Governance
  • (28:27) - Panel Takeaways & Closing Thoughts