Content provided by MLSecOps.com. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by MLSecOps.com or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Unpacking the Cloud Security Alliance AI Controls Matrix

35:53

In this episode of the MLSecOps Podcast, we sit down with three expert contributors from the Cloud Security Alliance’s AI Controls Matrix working group. They reveal how this newly released framework addresses emerging AI threats—like model poisoning and adversarial manipulation—through robust technical controls, detailed implementation guidelines, and clear auditing strategies.

Full transcript with links to resources available at https://mlsecops.com/podcast/unpacking-the-cloud-security-alliance-ai-controls-matrix

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform


50 episodes