Content provided by Mehmet Gonullu. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Mehmet Gonullu or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
#554 Securing the AI Era: Alex Schlager on Why AI Agents Are the New Attack Surface

45:46
 

In this episode of The CTO Show with Mehmet, I’m joined by Alex Schlager, Founder and CEO of AIceberg, a company operating at the intersection of AI, cybersecurity, and explainability.

We dive deep into why AI agents fundamentally change enterprise risk, how shadow AI is spreading across organizations, and why monitoring black-box models with other black boxes is a dangerous mistake.

Alex explains how explainable machine learning can provide the observability, safety, and security enterprises desperately need as they adopt agentic AI at scale.

👤 About the Guest

Alex Schlager is the Founder and CEO of AIceberg, a company focused on detection and response for AI-powered workflows, from LLM-based chatbots to complex multi-agent systems.

AIceberg’s mission is to secure enterprise AI adoption using fully explainable machine learning models, avoiding black-box-on-black-box monitoring approaches. Alex has deep expertise in AI explainability, agentic systems, and enterprise AI risk management.

https://www.linkedin.com/in/alexschlager/

🧠 Key Topics We Cover

• Why AI agents create a new and expanding attack surface

• The rise of shadow AI across business functions

• Safety vs security in AI systems and why CISOs must now care about both

• How agentic AI amplifies risk through autonomy and tool access

• Explainable AI vs LLM-based guardrails

• Observability challenges in agent-based workflows

• Why traditional cybersecurity tools fall short in the AI era

• Governance, risk, and compliance for AI-driven systems

• The future role of AI agents inside security teams

📌 Episode Highlights & Timestamps

00:00 – Introduction and welcome

01:05 – Alex Schlager’s background and the founding of AIceberg

02:20 – Why AI-powered workflows need new security models

03:45 – The danger of monitoring black boxes with black boxes

05:10 – Shadow AI and the loss of enterprise visibility

07:30 – Safety vs security in AI systems

09:15 – Real-world AI risks: hallucinations, data leaks, toxic outputs

12:40 – Why agentic AI massively expands the attack surface

15:05 – Privilege, identity, and agents acting on behalf of users

18:00 – How AIceberg provides observability and control

21:30 – Securing APIs, tools, and agent execution paths

24:10 – Data leakage, DLP, and public LLM usage

27:20 – Governance challenges for CISOs and enterprises

30:15 – AI adoption vs security trade-offs inside organizations

33:40 – Why observability is the first step to AI security

36:10 – The future of AI agents in cybersecurity teams

40:30 – Final thoughts and where to learn more

🎯 What You’ll Learn

• How AI agents differ from traditional software from a security perspective

• Why explainability is becoming critical for AI governance

• How enterprises can regain visibility over AI usage

• What CISOs should prioritize as agentic AI adoption accelerates

• Where AI security is heading in 2026 and beyond

🔗 Resources Mentioned

AIceberg: https://aiceberg.ai

AIceberg Podcast – How Hard Can It Be? https://howhardcanitbe.ai/
