Preetam Joshi Breaks Down ML, LLMs, AI Agents, and Governance Challenges
How do you make sense of security, governance, and risk in an age of black-box AI? This week, Raj is joined by Preetam Joshi, founder of AIMon Labs and a machine learning veteran with experience at DRDO, Yahoo, Netflix, and Thumbtack. Together, they break down the technical evolution behind large language models (LLMs), explore the real challenges of explainability, and discuss why GRC teams must rethink risk in the age of autonomous reasoning systems.
Preetam brings a rare mix of hands-on ML expertise and practical experience deploying LLMs in enterprise environments. If you’ve been wondering how transformers work, what explainability really means, or why AI governance is still a mess — this episode is for you.
5 Key Takeaways:
- From DRDO to Netflix to AIMon Labs — Preetam’s career journey traces the intersection of machine learning, security, and entrepreneurship.
- How Transformers Work — A simple breakdown of encoder/decoder architecture, embeddings, and attention mechanisms (see the sketch after this list).
- Explainability in AI — What it meant in traditional ML, and why it’s nearly impossible at the parameter level with today’s LLMs.
- Rule-Based Logic Isn’t Dead — In high-stakes environments, deterministic systems still matter.
- Bridging AI & GRC — Practical steps for model security, auditing, and compliance in non-deterministic systems.
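
For listeners who want a concrete picture of the attention mechanism the transformer takeaway refers to, here is a minimal sketch of scaled dot-product attention in NumPy. It is an illustrative toy, not code from the episode or from AIMon Labs; the function names, shapes, and random inputs are all assumptions for demonstration.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # query-key similarity
    weights = softmax(scores, axis=-1)              # each row sums to 1
    return weights @ V                              # weighted mix of values

# Toy example: 4 token embeddings of dimension 8 attending to themselves
# (self-attention), as in a single transformer head.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)
print(out.shape)  # (4, 8)
```

In a real transformer this runs across many heads in parallel, with learned projection matrices producing Q, K, and V from the token embeddings; the sketch above omits those details to keep the core idea visible.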
📌 Take Action
- Visit ComplianceCow.com/podcast to catch all episodes
- Connect with Preetam on LinkedIn
- Follow the show on Spotify and Apple Podcasts
Security & GRC Decoded is brought to you by ComplianceCow — the platform for proactive, automated compliance.
🎧 Subscribe, rate, and share if this episode sparked a thought.
⏱ Timestamps (approx.)
00:00 – Intro
01:11 – Welcome Preetam to the show
03:20 – What has been your favorite experience working in AI so far?
07:08 – What is transformer architecture and how does it work?
10:23 – How do LLMs solve problems like math or reasoning?
12:38 – Where do agents fit in the LLM ecosystem?
16:07 – How does reinforcement learning apply to AI models?
21:33 – What does explainability mean in ML?
24:55 – Can you explain the limitations of SHAP and parameter-level reasoning?
27:33 – What does GRC look like in the LLM age?
30:58 – What does AIMon Labs actually do?
35:00 – Why is reliability a challenge with LLMs?
39:15 – Where does GRC intersect with AI deployment and compliance?
41:30 – What is fine-tuning and when is it useful?
44:43 – Is Retrieval-Augmented Generation (RAG) still relevant with longer context windows?
47:29 – How do we guard against LLM misuse and toxic output?
49:43 – How can LLMs overexpose sensitive company data?
53:28 – Advice for those starting a career in AI or ML
55:34 – What are your favorite models right now?