When Guardrails Aren’t Enough: How to Handle AI’s Hidden Vulnerabilities | An Infosecurity Europe 2025 Pre-Event Conversation with Peter Garraghan | On Location Coverage with Sean Martin and Marco Ciappelli
Manage episode 484256352 series 2972571
In this episode of our Infosecurity Europe 2025 On Location coverage, Marco Ciappelli and Sean Martin sit down with Professor Peter Garraghan, Chair in Computer Science at Lancaster University and co-founder of the AI security startup Mindgard. Peter shares a grounded view of the current AI moment—one where attention-grabbing capabilities often distract from fundamental truths about software security.
At the heart of the discussion is the question: Can my AI be hacked? Peter’s answer is a firm “yes”—but not for the reasons most might expect. He explains that AI is still software, and the risks it introduces are extensions of those we’ve seen for decades. The real difference lies not in the nature of the threats, but in how these new interfaces behave and how we, as humans, interact with them. Natural language interfaces, in particular, make it easier to introduce confusion and harder to contain behaviors, especially when people overestimate the intelligence of the systems.
Peter highlights that prompt injection, model poisoning, and opaque logic flows are not entirely new challenges. They mirror known classes of vulnerabilities like SQL injection or insecure APIs—only now they come wrapped in the hype of generative AI. He encourages teams to reframe the conversation: replace the word “AI” with “software” and see how the risk profile becomes more recognizable and manageable.
A key takeaway is that the issue isn’t just technical. Many organizations are integrating AI capabilities without understanding what they’re introducing. As Peter puts it, “You’re plugging in software filled with features you don’t need, which makes your risk modeling much harder.” Guardrails are often mistaken for full protections, and foundational practices in application development and threat modeling are being sidelined by excitement and speed to market.
Peter’s upcoming session at Infosecurity Europe—Can My AI Be Hacked?—aims to bring this discussion to life with real-world attack examples, systems-level analysis, and a practical call to action: retool, retrain, and reframe your approach to AI security. Whether you’re in development, operations, or governance, this session promises perspective that cuts through the noise and anchors your strategy in reality.
___________
Guest: Peter Garraghan, Professor in Computer Science at Lancaster University, Fellow of the UK Engineering and Physical Sciences Research Council (EPSRC), and CEO & CTO of Mindgard | https://www.linkedin.com/in/pgarraghan/
Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com
___________
Episode Sponsors
ThreatLocker: https://itspm.ag/threatlocker-r974
___________
Resources
Peter’s Session: https://www.infosecurityeurope.com/en-gb/conference-programme/session-details.4355.239479.can-my-ai-be-hacked.html
Learn more and catch more stories from Infosecurity Europe 2025 London coverage: https://www.itspmagazine.com/infosec25
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us
___________
KEYWORDS
sean martin, marco ciappelli, peter garraghan, ai, cybersecurity, software, risk, threat, prompt, injection, infosecurity europe, event coverage, on location, conference
621 episodes