Search a title or topic

Over 20 million podcasts, powered by 

Player FM logo
Artwork

Content provided by Joe South. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Joe South or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Player FM - Podcast App
Go offline with the Player FM app!

Your AI is not as secure as you think it is, and here's why

50:51
 
Share
 

Manage episode 509087446 series 2871161
Content provided by Joe South. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Joe South or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

Send us a text

David Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.
• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data
Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.
Support the show

Follow the Podcast on Social Media!

Tesla Referral Code: https://ts.la/joseph675128

YouTube: https://www.youtube.com/@securityunfilteredpodcast

Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast

  continue reading

Chapters

1. Your AI is not as secure as you think it is, and here's why (00:00:00)

2. Introduction and AI Industry Evolution (00:01:47)

3. Navigating Truth in the AI Era (00:11:04)

4. David's Journey into Cybersecurity (00:17:44)

5. Academic Projects and Security Challenges (00:24:38)

6. AI Security Fundamentals (00:30:44)

7. Vulnerabilities in LLM Implementation (00:38:44)

8. Beyond Guardrails: The Future of AI Security (00:44:33)

254 episodes

Artwork
iconShare
 
Manage episode 509087446 series 2871161
Content provided by Joe South. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Joe South or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

Send us a text

David Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.
• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data
Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.
Support the show

Follow the Podcast on Social Media!

Tesla Referral Code: https://ts.la/joseph675128

YouTube: https://www.youtube.com/@securityunfilteredpodcast

Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast

  continue reading

Chapters

1. Your AI is not as secure as you think it is, and here's why (00:00:00)

2. Introduction and AI Industry Evolution (00:01:47)

3. Navigating Truth in the AI Era (00:11:04)

4. David's Journey into Cybersecurity (00:17:44)

5. Academic Projects and Security Challenges (00:24:38)

6. AI Security Fundamentals (00:30:44)

7. Vulnerabilities in LLM Implementation (00:38:44)

8. Beyond Guardrails: The Future of AI Security (00:44:33)

254 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Copyright 2025 | Privacy Policy | Terms of Service | | Copyright
Listen to this show while you explore
Play