Content provided by Anton Chuvakin. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Anton Chuvakin or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps
Guest:
- Diana Kelley, CSO at Protect AI
Topics:
- Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
- What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
- How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
- In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
- How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
- What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
- What are the top differences between LLM/chatbot AI security and AI agent security?
Resources:
- “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers”
- “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem’ Forever”
- Secure by Design for AI by Protect AI
- “Securing AI Supply Chain: Like Software, Only Not”
- OWASP Top 10 for Large Language Model Applications
- OWASP Top 10 for AI Agents (draft)
- MITRE ATLAS
- “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper)
- LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes