Content provided by Elevano. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Elevano or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
How Attackers Are Using AI to Outpace Defenses

27:42
 

Jonathan DiVincenzo, co-founder and CEO of Impart Security, joins the show to unpack one of the fastest-growing risks in tech today: how AI is reshaping the attack surface. From prompt injections to invisible-character exploits hidden inside emojis, JD explains why security leaders can’t afford to treat AI as “just another tool.” If you’re an engineering or security leader navigating AI adoption, this conversation breaks down what’s hype, what’s real, and where the biggest blind spots lie.

Key Takeaways

• Attackers are now using LLMs to outpace traditional defenses, turning old threats like SQL injection into live problems again

• The attack surface is “iterating,” with new vectors like emoji-based smuggling exposing unseen vulnerabilities

• Frameworks have not caught up: OWASP has catalogued LLM threats, but practical mitigations remain undefined

• The biggest divide in AI coding is between senior engineers who can validate outputs and junior developers who may lack that context

• Security tools must evolve quickly, but rollout cannot create performance hits or damage business systems
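The SQL injection resurgence mentioned above comes down to a decades-old pattern: building queries by pasting untrusted strings into SQL. A minimal sketch of the attack and the standard defense (parameterized queries), using Python's built-in sqlite3; the table and inputs are illustrative, not from the episode.

```python
import sqlite3

# In-memory database with one row, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled string

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes always-true and matches every row.
unsafe = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())  # [('alice',)]

# Safe: a parameterized query treats the input as data, never as SQL.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```

The point JD makes is that LLMs let attackers regenerate variants of this kind of payload faster than signature-based filters can keep up, which is why the fix has to be structural (parameterization) rather than pattern-matching.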

Timestamped Highlights

01:44 Why runtime security has always mattered and why APIs were not enough

04:00 How attackers use LLMs to regenerate and adapt attacks in real time

06:59 Proof of concept vs. security and why both must be treated as first priorities

09:14 The rise of “emoji smuggling” and why hidden characters create a Trojan horse effect

13:24 Iterating attack surfaces and why patches are no longer enough in the AI era

20:29 Is AI really writing production code, and what risks does that create?

A thought worth holding onto

“AI is great, but the bad actors can use AI too, and they are.”

Call to Action

If this episode gave you a new perspective on AI security, share it with a colleague who needs to hear it. Follow the show for more conversations with the leaders shaping the future of tech.
