Chloé Messdaghi on AI Security, Policy, and Regulation

Chloé Messdaghi and Ben Lorica discuss AI security, a subject of increasing importance as AI-driven applications roll out into the real world. There’s a knowledge gap on both sides: Security workers don’t understand AI, and AI developers don’t understand security. Bring everyone together, including AI developers and security experts, to develop AI security policies and playbooks, and take stock of the resources that are already available. Expect AI security certifications and training to emerge in the coming year.

Points of Interest

  • 0:24: How does AI security differ from traditional cybersecurity?
  • 0:44: AI is a black box: We don’t have transparency to show how AI works or explainability to show how it makes decisions. Black boxes are hard to secure.
  • 2:12: There’s a huge knowledge gap. Companies aren’t doing what is needed.
  • 2:24: When you talk to executives, do you distinguish between traditional AI and ML and the new generative AI models?
  • 2:43: We talk about older models as well. But security is just as much about answering the question “What am I supposed to do?” We’ve had AI for a while, but for much of that time, security hasn’t been part of the conversation.
  • 3:26: Where do security folks go to learn how to secure AI? There are no certifications. We’re playing a massive catchup game.
  • 3:53: What’s the state of awareness about incident response strategies for AI?
  • 4:15: Even in traditional cybersecurity, we’ve always struggled to keep incident response plans from being ad hoc or out of date. A lot of it is being aware of every technology and product the company is using; it’s hard to protect an environment you don’t fully know. (A minimal inventory sketch follows this list.)
  • 5:19: The AI Threat Landscape report found that 77% of companies reported breaches in their AI systems.
  • 5:40: Last year, a statistic came out about the adoption of AI-related cybersecurity measures: 70% of North American organizations said they had adopted only one or two of five security measures; 24% had adopted two to four.
  • 6:35: What are some of the first things I should be thinking about to update my incident response playbook?
  • 6:51: Make sure you have all the right people in the room. We still have issues with department silos. CISOs can be dismissed or not even in the room when it comes to decisions. There are concerns about restricting innovation or product launch dates. You have to have CTOs, data scientists, ML developers, and all the right people to ensure that there is safety and that everyone has taken precautions.
  • 7:48: For companies with a mature cybersecurity incident playbook that they want to update for AI, the main change AI brings is that you have to include more people.
  • 8:17: You have to realize that there’s an AI knowledge gap, and that there’s insufficient security training for data scientists. Security folks don’t know where to turn for education. There aren’t a lot of courses or programs out there. We’ll see a lot of that develop this year.
  • 10:13: You’d think we’d have addressed communication silos by now, but AI has ripped the bandaids off. There are resources out there: I recommend Databricks’ AI Security Framework (DASF), which is mapped to MITRE ATLAS. Also be familiar with the NIST AI Risk Management Framework and the OWASP AI Exchange.
  • 11:40: This knowledge gap is on both sides. What are some of the best practices for addressing this two-sided knowledge gap?
  • 12:20: Be honest about where your company stands. Where are we right now? Are we doing a good job of governance? Am I doing a good enough job as a leader? Is there something I don’t know about the environment? Be the leader who’s a bridge, breaks down silos, and knows who owns what and who’s responsible for what.
  • 13:24: One issue is the notion of shadow AI: Knowledge workers go home and use tools that aren’t sanctioned by their companies. Are there specific things companies should be doing about shadow AI? (See the detection sketch after this list.)
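
The point at 4:15, that you can’t protect an environment you don’t fully know, lends itself to a concrete illustration. Below is a minimal, hypothetical Python sketch of an AI asset inventory that flags entries an incident response plan couldn’t cover. The fields, names, and example data are assumptions for illustration, not a tool or schema discussed in the episode.

```python
# Hypothetical sketch: a minimal AI asset inventory.
# All field names and example data are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str
    owner: str = ""                 # team accountable for the asset
    model_type: str = ""            # e.g., "LLM API", "in-house classifier"
    data_sources: list = field(default_factory=list)
    security_reviewed: bool = False

def audit(assets):
    """Flag assets an incident response plan couldn't cover."""
    for asset in assets:
        gaps = []
        if not asset.owner:
            gaps.append("no owner")
        if not asset.security_reviewed:
            gaps.append("no security review")
        if gaps:
            print(f"{asset.name}: {', '.join(gaps)}")

audit([
    AIAsset("support-chatbot", owner="cx-team", model_type="LLM API",
            data_sources=["ticket history"], security_reviewed=True),
    AIAsset("churn-model", model_type="in-house classifier",
            data_sources=["crm"]),  # no owner, never reviewed
])
```

Even a simple list like this makes the “who owns what” question from 12:20 answerable before an incident rather than during one.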
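
For the shadow AI question at 13:24, one common starting point is egress monitoring. The sketch below, also hypothetical, matches proxy log entries against a small watch list of AI service domains; the log format, file path, and domain list are illustrative assumptions, not a vetted blocklist.

```python
# Hypothetical sketch: surface shadow AI usage from an egress proxy log.
# The CSV format (timestamp,user,host) and the domain list are assumptions.
import csv
from collections import Counter

WATCHED_AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def shadow_ai_report(log_path):
    """Count requests per (user, AI domain) in a CSV proxy log."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in WATCHED_AI_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in shadow_ai_report("proxy.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

A report like this only surfaces usage; deciding what to do about it, whether sanctioned alternatives, policy, or training, is the governance question the episode raises.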
