Search a title or topic

Over 20 million podcasts, powered by 

Player FM logo
Artwork

Content provided by CrowdStrike. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by CrowdStrike or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Player FM - Podcast App
Go offline with the Player FM app!

Prompted to Fail: The Security Risks Lurking in DeepSeek-Generated Code

37:09
 
Share
 

Manage episode 520325245 series 3490818
Content provided by CrowdStrike. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by CrowdStrike or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

CrowdStrike research into AI coding assistants reveals a new, subtle vulnerability surface: When DeepSeek-R1 receives prompts the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security flaws increases by up to 50%.

Stefan Stein, manager of the CrowdStrike Counter Adversary Operations Data Science team, joined Adam and Cristian for a live recording at Fal.Con 2025 to discuss how this project got started, the methodology behind the team’s research, and the significance of their findings.

The research began with a simple question: What are the security risks of using DeepSeek-R1 as a coding assistant? AI coding assistants are commonly used and often have access to sensitive information. Any systemic issue can have a major and far-reaching impact.

It concluded with the discovery that the presence of certain trigger words — such as mentions of Falun Gong, Uyghurs, or Tibet — in DeepSeek-R1 prompts can have severe effects on the quality and security of the code it produces. Unlike most large language model (LLM) security research focused on jailbreaks or prompt injections, this work exposes subtle biases that can lead to real-world vulnerabilities in production systems.

Tune in for a fascinating deep dive into how Stefan and his team explored the biases in DeepSeek-R1, the implications of this research, and what this means for organizations adopting AI.

  continue reading

62 episodes

Artwork
iconShare
 
Manage episode 520325245 series 3490818
Content provided by CrowdStrike. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by CrowdStrike or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

CrowdStrike research into AI coding assistants reveals a new, subtle vulnerability surface: When DeepSeek-R1 receives prompts the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security flaws increases by up to 50%.

Stefan Stein, manager of the CrowdStrike Counter Adversary Operations Data Science team, joined Adam and Cristian for a live recording at Fal.Con 2025 to discuss how this project got started, the methodology behind the team’s research, and the significance of their findings.

The research began with a simple question: What are the security risks of using DeepSeek-R1 as a coding assistant? AI coding assistants are commonly used and often have access to sensitive information. Any systemic issue can have a major and far-reaching impact.

It concluded with the discovery that the presence of certain trigger words — such as mentions of Falun Gong, Uyghurs, or Tibet — in DeepSeek-R1 prompts can have severe effects on the quality and security of the code it produces. Unlike most large language model (LLM) security research focused on jailbreaks or prompt injections, this work exposes subtle biases that can lead to real-world vulnerabilities in production systems.

Tune in for a fascinating deep dive into how Stefan and his team explored the biases in DeepSeek-R1, the implications of this research, and what this means for organizations adopting AI.

  continue reading

62 episodes

Tüm bölümler

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Copyright 2025 | Privacy Policy | Terms of Service | | Copyright
Listen to this show while you explore
Play