AI in practice: Guardrails and security for LLMs

Duration: 35:11
 
Content provided by Dr. Andrew Clark and Sid Mangalik. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Dr. Andrew Clark and Sid Mangalik or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.

In this episode, we talk about practical guardrails for LLMs with data scientist Nicholas Brathwaite. We focus on stopping PII leaks, retrieving data securely, and evaluating safety within real-world limits. We weigh managed solutions like AWS Bedrock against open-source approaches and discuss when to skip LLMs altogether.
• Why guardrails matter for PII, secrets, and access control (a minimal example follows this list)
• Where to place controls across prompt, training, and output
• Prompt injection, jailbreaks, and adversarial handling
• RAG design with vector DB separation and permissions
• Evaluation methods, risk scoring, and cost trade-offs
• AWS Bedrock guardrails vs open-source customization
• Domain-adapted safety models and policy matching
• When deterministic systems beat LLM complexity
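As a concrete illustration of the "regex basics" the episode touches on, here is a minimal sketch of an output-side PII guardrail in Python. Everything in it is hypothetical: the patterns, the redact_pii helper, and the sample response are ours, not from the episode, and a production system would lean on a managed option such as Amazon Bedrock Guardrails or a dedicated PII-detection model rather than a handful of hand-written regexes.

import re

# Hypothetical, illustration-only patterns; real coverage needs many more
# formats (names, addresses, account numbers) and locale variants.
PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact_pii(text):
    """Replace matched PII with typed placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

# Used as an output guardrail: scrub the model response before it reaches
# the user, and log the categories that were caught for later review.
response = "Sure, you can reach Jane at jane.doe@example.com or 555-123-4567."
clean, found = redact_pii(response)
print(clean)   # ... reach Jane at [REDACTED EMAIL] or [REDACTED PHONE].
print(found)   # ['email', 'phone']

This sketch sits at the output stage; as the episode discusses, controls can also be placed at the prompt and training stages, and regex matching alone will miss context-dependent PII such as names and addresses.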
This episode is part of our "AI in Practice" series, where we invite guests to talk about the reality of their work in AI. From hands-on development to scientific research, be sure to check out other episodes under this heading in our listings.
Related research:

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Chapters

1. AI in practice: Guardrails and security for LLMs (00:00:00)

2. Guest intro: Nic Brathwaite (00:00:03)

3. What we’re reading & why it matters (00:01:18)

4. Guardrails in practice (00:05:12)

5. What guardrails do (00:07:08)

6. PII, confidentiality, and regex basics (00:08:14)

7. Where to place guardrails (00:14:12)

8. Intentional RAG and vector design (00:19:08)

9. Material risks across industries (00:22:06)

10. Bedrock guardrails vs open source (00:25:04)

11. Domain models for responsible AI (00:28:02)
