Trust at Scale: Nam Nguyen on How TruthSystems is Building the Framework for Safe AI in Law

Artificial intelligence has moved fast, but trust has not kept pace. In this episode, Nam Nguyen, co-founder and COO of TruthSystems.ai, joins Greg Lambert and Marlene Gebauer to unpack what it means to build “trust infrastructure” for AI in law. Nguyen’s background is unusually cross-wired—linguistics, computer science, and applied AI research at Stanford Law—giving him a clear view of both the language and logic behind responsible machine reasoning. From his early work in Vietnam to collaborations at Stanford with Dr. Megan Ma, Nguyen has focused on a central question: who ensures that the systems shaping legal work remain safe, compliant, and accountable?

Nguyen explains that TruthSystems emerged from this question as a company focused on operationalizing trust, not theorizing about it. Rather than publishing white papers on AI ethics, his team builds the guardrails law firms need now. Their platform, Charter, acts as a governance layer that can monitor, restrict, and guide AI use across firm environments in real time. Whether a lawyer is drafting in ChatGPT, experimenting with CoCounsel, or testing Copilot, Charter helps firms enforce both client restrictions and internal policies before a breach or misstep occurs. It’s an attempt to turn trust from a static policy on a SharePoint site into a living, automated practice.
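To make the idea of a real-time governance layer more concrete, here is a minimal, hypothetical sketch in Python of a pre-submission policy check. The names (Policy, check_prompt) and the example matter are invented for illustration; this is not Charter's actual design or API, which the episode does not describe at this level of detail.

```python
# Hypothetical sketch only -- not Charter's actual design or API.
# Idea: before a prompt leaves the firm for an external AI tool, check it
# against client restrictions and internal policy and report any violations.

from dataclasses import dataclass, field


@dataclass
class Policy:
    blocked_tools: set[str] = field(default_factory=set)     # tools a client has barred
    restricted_terms: set[str] = field(default_factory=set)  # client or matter identifiers


def check_prompt(tool: str, prompt: str, policy: Policy) -> list[str]:
    """Return a list of policy violations; an empty list means the prompt may proceed."""
    violations = []
    if tool in policy.blocked_tools:
        violations.append(f"Client policy bars use of {tool} for this matter.")
    for term in policy.restricted_terms:
        if term.lower() in prompt.lower():
            violations.append(f"Prompt contains restricted term: {term!r}")
    return violations


# Invented example: a matter whose client has barred ChatGPT and whose code name
# must not be sent to any external tool.
policy = Policy(blocked_tools={"ChatGPT"}, restricted_terms={"Project Falcon"})
print(check_prompt("ChatGPT", "Summarize the Project Falcon term sheet", policy))
```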

A core principle of Nguyen’s work is that AI should be both the subject and the infrastructure of governance. In other words, AI deserves oversight but is also uniquely suited to implement it. Because large language models excel at interpreting text and managing unstructured data, they can help detect compliance or ethical risks as they happen. TruthSystems’ vision is to make governance continuous and adaptive, embedding it directly into lawyers’ daily workflows. The aim is not to slow innovation, but to make it sustainable and auditable.

The conversation also tackles the myth of “hallucination-free” systems. Nguyen is candid about the limitations of retrieval-augmented generation, noting that both retrieval and generation introduce their own failure modes. He argues that most models have been trained to sound confident rather than be accurate, penalizing expressions of uncertainty. TruthSystems takes the opposite approach, favoring smaller, predictable models that reward contradiction-spotting and verification. His critique offers a reminder that speed and safety in AI rarely coexist by accident—they must be engineered together.
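As a rough illustration of what rewarding verification over confident-sounding generation can mean in practice, here is a small, hypothetical Python sketch that flags quotations in a draft that cannot be found in the retrieved sources. The function and data are invented for the example; the episode does not describe TruthSystems' actual verification pipeline in this detail.

```python
# Hypothetical sketch only -- not TruthSystems' models or methods.
# Idea: instead of trusting fluent output, verify it. Here, any passage the
# draft presents as a quotation must literally appear in a retrieved source.

import re


def unsupported_quotes(draft: str, sources: dict[str, str]) -> list[str]:
    """Return quoted passages in the draft that appear in none of the sources."""
    quotes = re.findall(r'"([^"]+)"', draft)
    corpus = " ".join(sources.values()).lower()
    return [q for q in quotes if q.lower() not in corpus]


# Invented example: the retrieved opinion says the opposite of what the draft quotes.
sources = {"opinion.txt": "The court held that the agreement was unenforceable."}
draft = 'The opinion states the agreement was "clearly enforceable" on its face.'
print(unsupported_quotes(draft, sources))  # -> ['clearly enforceable']
```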

Finally, Nguyen discusses TruthSystems’ recent $4 million seed round, led by Gradient Ventures and Lightspeed, which will fund the expansion of their real-time visibility tools and firm partnerships. He envisions a future where firms treat governance not as red tape but as a differentiator, using data on AI use to assure clients and regulators alike. As he puts it, compliance will no longer be the blocker to innovation—it will be the proof of trust at scale.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: [email protected]
Music: Jerry David DeCicca
