Search a title or topic

Over 20 million podcasts, powered by 

Player FM logo
Artwork

Content provided by Unspoken Security. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Unspoken Security or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Player FM - Podcast App
Go offline with the Player FM app!

Can My AI Be Hacked?

1:05:43
 
Share
 

Manage episode 496396841 series 3527833
Content provided by Unspoken Security. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Unspoken Security or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

In this episode of Unspoken Security, host AJ Nash speaks with Dr. Peter Garraghan, CEO and CTO of Mindgard. They discuss the real-world security risks of artificial intelligence. Peter starts with a simple point: AI is just software, and software is easy to break. He urges businesses using AI to step back and truly understand its vulnerabilities.

Peter draws parallels between the current AI boom and past technology cycles like cloud computing. While AI feels revolutionary, the security risks are not new. Threats like data poisoning and prompt injection are modern versions of classic cybersecurity problems. The danger is that AI's human-like interface makes it easy to anthropomorphize, causing users to overlook fundamental security flaws.

To manage these risks, Peter advises companies to treat AI like any other software. This means applying the same rigorous security controls, testing protocols, and incident response playbooks. Instead of creating a separate process for AI, organizations should find the gaps in their current security posture and update them. This practical approach helps businesses secure AI systems effectively.

Send us a text

Support the show

  continue reading

40 episodes

Artwork
iconShare
 
Manage episode 496396841 series 3527833
Content provided by Unspoken Security. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Unspoken Security or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

In this episode of Unspoken Security, host AJ Nash speaks with Dr. Peter Garraghan, CEO and CTO of Mindgard. They discuss the real-world security risks of artificial intelligence. Peter starts with a simple point: AI is just software, and software is easy to break. He urges businesses using AI to step back and truly understand its vulnerabilities.

Peter draws parallels between the current AI boom and past technology cycles like cloud computing. While AI feels revolutionary, the security risks are not new. Threats like data poisoning and prompt injection are modern versions of classic cybersecurity problems. The danger is that AI's human-like interface makes it easy to anthropomorphize, causing users to overlook fundamental security flaws.

To manage these risks, Peter advises companies to treat AI like any other software. This means applying the same rigorous security controls, testing protocols, and incident response playbooks. Instead of creating a separate process for AI, organizations should find the gaps in their current security posture and update them. This practical approach helps businesses secure AI systems effectively.

Send us a text

Support the show

  continue reading

40 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Copyright 2025 | Privacy Policy | Terms of Service | | Copyright
Listen to this show while you explore
Play