Search a title or topic

Over 20 million podcasts, powered by 

Player FM logo
Artwork

Content provided by Meni Tasa. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Meni Tasa or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Player FM - Podcast App
Go offline with the Player FM app!

Hijacking Microsoft Copilot AI

7:50
 
Share
 

Manage episode 507818960 series 3682380
Content provided by Meni Tasa. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Meni Tasa or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

"Send me a quick text"

Episode Description – Technical Write-up for Defenders

EchoLeak is the first documented zero-click AI vulnerability in a major enterprise LLM application — Microsoft 365 Copilot.
The attacker seeds malicious instructions inside a normal-looking email, bypassing Microsoft’s prompt-injection filters.

When a user later asks a related question, Copilot retrieves that email, executes the hidden commands, and packages sensitive corporate data for exfiltration. Bypassing link redaction and CSP rules, the attacker silently sends the data out through trusted Microsoft services.
Persistence is achieved through “RAG spraying,” embedding the malicious instructions across multiple semantic chunks to maximize retrieval chances over time.

Defensive Actions
• Test AI prompt-injection classifiers, link filtering, and CSP enforcement in combination, not isolation.
• Monitor AI assistant retrieval logs for unusual cross-context content use.
• Implement strict scope enforcement so untrusted content cannot trigger privileged actions.
• Scan indexed content stores for embedded malicious instructions or unusual markdown patterns.
• Block unauthorized retrievals from trusted internal services to external destinations.

Potential IOCs

  • Emails containing hidden markdown reference-style links
  • External requests routed through Microsoft media or rendering services
  • AI retrievals from HR, onboarding, or FAQ content containing unrelated data instructions

Recommended Detection Focus

  • Alerts on Copilot responses containing embedded URLs or encoded data
  • Monitoring outbound requests from Microsoft services carrying sensitive information
  • Content scanning for prompt injection patterns inside indexed data

Thanks for spending a few minutes on the CyberBrief Project.

If you want to dive deeper or catch up on past episodes, head over to cyberbriefproject.buzzsprout.com.

You can also find the podcast on YouTube at youtube.com/@CyberBriefProject I’d love to see you there.

And if you find these episodes valuable and want to support the project, you can do that here: buzzsprout.com/support

Your support means a lot.

See you in the next one, and thank you for listening.

  continue reading

21 episodes

Artwork
iconShare
 
Manage episode 507818960 series 3682380
Content provided by Meni Tasa. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Meni Tasa or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

"Send me a quick text"

Episode Description – Technical Write-up for Defenders

EchoLeak is the first documented zero-click AI vulnerability in a major enterprise LLM application — Microsoft 365 Copilot.
The attacker seeds malicious instructions inside a normal-looking email, bypassing Microsoft’s prompt-injection filters.

When a user later asks a related question, Copilot retrieves that email, executes the hidden commands, and packages sensitive corporate data for exfiltration. Bypassing link redaction and CSP rules, the attacker silently sends the data out through trusted Microsoft services.
Persistence is achieved through “RAG spraying,” embedding the malicious instructions across multiple semantic chunks to maximize retrieval chances over time.

Defensive Actions
• Test AI prompt-injection classifiers, link filtering, and CSP enforcement in combination, not isolation.
• Monitor AI assistant retrieval logs for unusual cross-context content use.
• Implement strict scope enforcement so untrusted content cannot trigger privileged actions.
• Scan indexed content stores for embedded malicious instructions or unusual markdown patterns.
• Block unauthorized retrievals from trusted internal services to external destinations.

Potential IOCs

  • Emails containing hidden markdown reference-style links
  • External requests routed through Microsoft media or rendering services
  • AI retrievals from HR, onboarding, or FAQ content containing unrelated data instructions

Recommended Detection Focus

  • Alerts on Copilot responses containing embedded URLs or encoded data
  • Monitoring outbound requests from Microsoft services carrying sensitive information
  • Content scanning for prompt injection patterns inside indexed data

Thanks for spending a few minutes on the CyberBrief Project.

If you want to dive deeper or catch up on past episodes, head over to cyberbriefproject.buzzsprout.com.

You can also find the podcast on YouTube at youtube.com/@CyberBriefProject I’d love to see you there.

And if you find these episodes valuable and want to support the project, you can do that here: buzzsprout.com/support

Your support means a lot.

See you in the next one, and thank you for listening.

  continue reading

21 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Copyright 2025 | Privacy Policy | Terms of Service | | Copyright
Listen to this show while you explore
Play