Content provided by MLSecOps.com. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by MLSecOps.com or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps; With Guest: Johann Rehberger

40:29
 

Johann Rehberger is an entrepreneur and Red Team Director at Electronic Arts. His career includes time at Microsoft and Uber, and he is the author of “Cybersecurity Attacks – Red Team Strategies: A practical guide to building a penetration testing program having homefield advantage” and the popular blog EmbraceTheRed.com.
In this episode, Johann offers insights into applying a traditional security engineering mindset and red team approach to analyzing the AI/ML attack surface. We also discuss ways organizations can adapt their traditional security postures to address the unique challenges of ML security.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform


58 episodes
