Agentic AI's Wild West: Ran Chen on the Race for Regulation

Duration: 3:15
 
Content provided by Ran Chen. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Ran Chen or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Agentic AI systems can think and act on their own, creating a huge gap between their capabilities and our outdated laws. As these autonomous agents begin to manage everything from financial portfolios to city infrastructure, we face unprecedented legal and ethical questions about accountability and control. In this episode, expert Ran Chen breaks down the core challenges of 'regulatory latency'. We explore what happens when an AI's autonomous decision violates the law or causes harm, and we discuss the complexities of governing systems that operate across global jurisdictions in the blink of an eye. Ran Chen introduces forward-thinking solutions, moving beyond slow, traditional lawmaking to build a safer future for AI.

Here's a glimpse of what we cover:
- Why are traditional legal frameworks fundamentally unprepared for agentic AI?
- When an autonomous AI causes a financial crash, who goes to jail?
- What is a 'regulatory sandbox', and how can it help us innovate safely?
- How can we enforce our laws when an AI operates across multiple countries at once?
- Is it possible to program a moral compass directly into an AI?
- What happens when an AI's emergent, self-taught strategy is harmful?
- How do we prevent agentic AI from becoming a tool for undetectable manipulation?
- Can we make an AI's decisions transparent enough to be audited by regulators?

Follow my YouTube channel: https://www.youtube.com/@chenran818 or listen to my music on Apple Music, Spotify, and other platforms: https://ffm.bio/chenran818

97 episodes

