915: How to Jailbreak LLMs (and How to Prevent It), with Michelle Yi

1:09:33

Tech leader, investor, and Generationship cofounder Michelle Yi talks to Jon Krohn about finding ways to trust and secure AI systems, the methods that hackers use to jailbreak code, and what users can do to build their own trustworthy AI systems. Learn all about “red teaming” and how tech teams can handle other key technical terms like data poisoning, prompt stealing, jailbreaking and slop squatting.

This episode is brought to you by Trainium2, the latest AI chip from AWS, and by the Dell AI Factory with NVIDIA.

Additional materials: www.superdatascience.com/915

Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.

In this episode you will learn:

    • (03:31) What “trustworthy AI” means

    • (31:15) How to build trustworthy AI systems

    • (46:55) About Michelle’s “sorry bench”

    • (48:13) How LLMs help construct causal graphs

    • (51:45) About Generationship
