Go offline with the Player FM app!
Oktane Preview with Harish Peri, Invisible Prompt Attacks, and the weekly news! - Harish Peri - ESW #421
Manage episode 502346967 series 72776
Oktane Preview: building frameworks to secure our Agentic AI future
Like it or not, Agentic AI and protocols like MCP and A2A are getting pushed as the glue to take business process automation to the next level. Giving agents the power and access they need to accomplish these lofty goals is going to be challenging, from a security perspective.
How do put AI agents in the position to perform broad tasks autonomously without granting them all the privileges? How do we avoid making AI agents a gold mine for attackers - the first place they stop once they hack into our companies? These are some examples of the questions Okta aims to answer at this year’s Oktane event, and we aim to kick off the conversations a little early - with this interview!
Segment Resources:
- Check out securityweekly.com/oktane for all our live coverage during the event this year!
- More information about the event and how you can attend can be found here: https://www.okta.com/oktane/
- AI at Work 2025: Securing the AI-powered workforce
Reports of indirect prompt injection issues have been around for a while. Of particular note was Michael Bargury's Living off Microsoft Copilot presentation from Black Hat USA 2024. Simply sending an email to a Copilot user could make bad stuff happen.
Now, at Black Hat 2025, we've got more: the ability to plunder any data resource connected to ChatGPT (they call these integrations "Connectors") from Tamir Ishay Sharbat at Zenity Labs. The research is titled AgentFlayer: ChatGPT Connectors 0click Attack.
Looks like Google Jules is also vulnerable to what the Embrace the Red blog is calling invisible prompts. Sourcegraph's Amp Code is also vulnerable to the same attack, which encodes instructions to make them invisible.
What's really going to ruffle feathers is the fact that all these companies know this stuff is possible, but don't seem to be able to figure out how to prevent it. Ideally, we'd want to be able to distinguish between intended instruction and instructions injected via attachments or some other means outside of the prompt box. I guess that's easier said than done?
NewsFinally, in the enterprise security news,
- Drones are coming for you… to help?
- One of the most powerful botnets ever goes down
- Phishing training is still pointless
- Microsoft sets an alarm on its phone for 8 years from now to do post-quantum stuff
- vulns galore in commercial ZTNA apps
- GenAI projects are struggling to make it to production
- Adblockers could be made illegal - in Germany
- Windows is getting native Agentic support
- Automating bug discovery AND remediation?
- Public service announcement: time is running out for Windows 10
All that and more, on this episode of Enterprise Security Weekly.
Show Notes: https://securityweekly.com/esw-421
4617 episodes
Manage episode 502346967 series 72776
Oktane Preview: building frameworks to secure our Agentic AI future
Like it or not, Agentic AI and protocols like MCP and A2A are getting pushed as the glue to take business process automation to the next level. Giving agents the power and access they need to accomplish these lofty goals is going to be challenging, from a security perspective.
How do put AI agents in the position to perform broad tasks autonomously without granting them all the privileges? How do we avoid making AI agents a gold mine for attackers - the first place they stop once they hack into our companies? These are some examples of the questions Okta aims to answer at this year’s Oktane event, and we aim to kick off the conversations a little early - with this interview!
Segment Resources:
- Check out securityweekly.com/oktane for all our live coverage during the event this year!
- More information about the event and how you can attend can be found here: https://www.okta.com/oktane/
- AI at Work 2025: Securing the AI-powered workforce
Reports of indirect prompt injection issues have been around for a while. Of particular note was Michael Bargury's Living off Microsoft Copilot presentation from Black Hat USA 2024. Simply sending an email to a Copilot user could make bad stuff happen.
Now, at Black Hat 2025, we've got more: the ability to plunder any data resource connected to ChatGPT (they call these integrations "Connectors") from Tamir Ishay Sharbat at Zenity Labs. The research is titled AgentFlayer: ChatGPT Connectors 0click Attack.
Looks like Google Jules is also vulnerable to what the Embrace the Red blog is calling invisible prompts. Sourcegraph's Amp Code is also vulnerable to the same attack, which encodes instructions to make them invisible.
What's really going to ruffle feathers is the fact that all these companies know this stuff is possible, but don't seem to be able to figure out how to prevent it. Ideally, we'd want to be able to distinguish between intended instruction and instructions injected via attachments or some other means outside of the prompt box. I guess that's easier said than done?
NewsFinally, in the enterprise security news,
- Drones are coming for you… to help?
- One of the most powerful botnets ever goes down
- Phishing training is still pointless
- Microsoft sets an alarm on its phone for 8 years from now to do post-quantum stuff
- vulns galore in commercial ZTNA apps
- GenAI projects are struggling to make it to production
- Adblockers could be made illegal - in Germany
- Windows is getting native Agentic support
- Automating bug discovery AND remediation?
- Public service announcement: time is running out for Windows 10
All that and more, on this episode of Enterprise Security Weekly.
Show Notes: https://securityweekly.com/esw-421
4617 episodes
All episodes
×Welcome to Player FM!
Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.