Content provided by Raj Krishnamurthy. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Raj Krishnamurthy or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Preetam Joshi Breaks Down ML, LLMs, AI Agents, and Governance Challenges

58:31
 

How do you make sense of security, governance, and risk in an age of black-box AI? This week, Raj is joined by Preetam Joshi, founder of AIMon Labs and a machine learning veteran with experience at DRDO, Yahoo, Netflix, and Thumbtack. Together, they break down the technical evolution behind large language models (LLMs), explore the real challenges of explainability, and discuss why GRC teams must rethink risk in the age of autonomous reasoning systems.

Preetam brings a rare mix of hands-on ML expertise and practical experience deploying LLMs in enterprise environments. If you’ve been wondering how transformers work, what explainability really means, or why AI governance is still a mess — this episode is for you.

5 Key Takeaways:

- From DRDO to Netflix to AIMon Labs — Preetam’s career journey shows the intersection of machine learning, security, and entrepreneurship.
- How Transformers Work — a simple breakdown of encoder/decoder architecture, embeddings, and attention mechanisms.
- Explainability in AI — what it meant in traditional ML, and why it’s nearly impossible with today’s LLMs.
- Rule-Based Logic Isn’t Dead — in high-stakes environments, deterministic systems still matter.
- Bridging AI & GRC — practical steps for model security, auditing, and compliance in non-deterministic systems.

📌 Take Action

Security & GRC Decoded is brought to you by ComplianceCow — the platform for proactive, automated compliance.

🎧 Subscribe, rate, and share if this episode sparked a thought.

⏱ Timestamps (approx.)

00:00 – Intro
01:11 – Welcome Preetam to the show
03:20 – What has been your favorite experience working in AI so far?
07:08 – What is transformer architecture and how does it work?
10:23 – How do LLMs solve problems like math or reasoning?
12:38 – Where do agents fit in the LLM ecosystem?
16:07 – How does reinforcement learning apply to AI models?
21:33 – What does explainability mean in ML?
24:55 – Can you explain the limitations of SHAP and parameter-level reasoning?
27:33 – What does GRC look like in the LLM age?
30:58 – What does AIMon Labs actually do?
35:00 – Why is reliability a challenge with LLMs?
39:15 – Where does GRC intersect with AI deployment and compliance?
41:30 – What is fine-tuning and when is it useful?
44:43 – Is Retrieval Augmented Generation (RAG) still relevant with longer context windows?
47:29 – How do we guard against LLM misuse and toxic output?
49:43 – How can LLMs overexpose sensitive company data?
53:28 – Advice for those starting a career in AI or ML
55:34 – What are your favorite models right now?

14 episodes
