Content provided by Elevano. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Elevano or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
How to Future-Proof Your AI Stack

24:58
 
What does it take to build AI systems that are private by design—and ready for tomorrow’s regulations?

In this episode, I’m joined by Rishabh Poddar, CTO and Co-founder of Opaque Systems, to explore how data privacy, compliance, and AI innovation intersect in a rapidly evolving landscape. Rishabh breaks down the impact of emerging privacy laws, the risks of agentic AI systems, and why building cryptographic guarantees into the foundation of your AI stack isn’t optional: it’s essential.

Whether you're deploying AI at scale or just experimenting, this conversation will challenge how you think about trust, governance, and the future of responsible AI.

🔑 Key Takeaways

Data privacy laws are tightening—and conflicting with the growing demand for more diverse data to train AI models.

Agentic AI introduces new risk: non-deterministic systems that act independently demand a fresh approach to guardrails and governance.

Opaque Systems' platform keeps sensitive data encrypted throughout the full AI lifecycle, with cryptographic auditability and verifiable access controls.

The future is about adaptability—not one-size-fits-all compliance, but giving organizations the tools to meet evolving laws.

Enterprise example: ServiceNow reinvented their internal helpdesk with agentic AI while preserving strict internal data boundaries.

⏱ Timestamped Highlights

[00:34] – What is Opaque Systems? The confidential AI platform built from UC Berkeley research.

[01:31] – How privacy laws and AI demands are on a collision course—and why that tension led to Opaque’s creation.

[06:24] – The shift from structured data governance to unstructured, agent-driven challenges.

[10:43] – Why a human breaking trust feels different from an AI doing so, and how to build for trust without false assumptions.

[15:30] – Real-world case study: how ServiceNow uses AI + Opaque to safeguard confidential data at scale.

[21:18] – You can’t wait for laws to settle—future-proof AI systems must offer customizable compliance tools today.

💬 Quote of the Episode

“Privacy and security need to be baked into the design of a system from the ground up—so you’re protected no matter how the laws evolve.”

🛠 Pro Tips

If you’re designing AI applications for enterprise use, map out where data flows—then assume each step is a risk zone.

Invest early in systems that offer auditable, verifiable data usage—this becomes your insurance as laws change.

Don’t assume your platform team should (or can) see all the data. Siloed access isn’t a bug; for privacy, it’s a feature.

📢 Call to Action

Enjoyed this episode? Follow the show, leave a review, and share it with someone working on AI governance or data privacy. Got questions or thoughts? Reach out to Amir on LinkedIn—he’d love to hear from you.

New episodes drop weekly. Subscribe wherever you listen.
