The Last Ten Percent, Visual Evidence, and Supervised Agents with Jiyun Hyo of Givance
Manage episode 520987812 series 3068634
This week we welcome Jiyun Hyo, co-founder and CEO of Givance, for a conversation about moving legal AI past shiny summaries toward verified work product. Jiyun’s path runs from Duke robotics, where layered agents watched other agents, to clinical mental health bots, where confident errors carry human cost. Those lessons shape his view of legal tools today: foundation models often answer like students guessing on a pop quiz, sounding sure while drifting from fact.
A key idea is the “last ten percent gap.” Many systems produce outputs that look right on first pass yet slip on a few crucial details. In low-stakes tasks, small misses are a nuisance. In litigation, one missing email or one misplaced timestamp can undermine both trust and admissibility. Jiyun adds a second problem: when users ask for a tiny correction, models tend to rebuild the whole output, so precision edits become a loop of fixes and new breakage.
Givance aims at that gap through text-to-visual evidence work. The platform turns piles of documents into interactive charts with links back to source files. Examples include Gantt charts for personnel histories, Sankey diagrams for asset flows, overlap views for evidence exchanges, and timelines that surface contradictions across thousands of records. Jiyun shares early law-firm use: rapid fact digestion after a data dump, clearer client conversations around case theory, and courtroom visuals that help judges and juries follow a sequence without sketching their own shaky diagrams.
Safety, supervision, and security follow naturally. Drawing on robotics, Jiyun argues for a live supervisory layer during agentic workflows so alerts surface while negotiations or analyses unfold rather than days later. Too many alerts, though, create noise, so tuning confidence thresholds becomes part of product design. On security, Givance works in isolated environments, strips identifiers before model calls, and keeps architecture model-agnostic so newer systems slot in without reopening privacy debates.
The episode ends on market dynamics and the near future. Jiyun sees mega-funded text-first platforms as market openers, normalizing AI buying and leaving room for second-wave multimodal tools. Asked whether the search bar in document review fades away, he expects search to stick around for a long while because lawyers associate a search box with control, even if chat interfaces improve. The bigger shift, in his view, lies in outputs: more interactive visuals that help legal teams spot gaps, test case stories, and present evidence with clarity.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Email: [email protected]
Music: Jerry David DeCicca
Transcript:
328 episodes