Watermarking for LLMs and Image Models

Deep Papers · 42:56
Content provided by Arize AI. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Arize AI or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://podcastplayer.com/legal.

In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author, John Kirchenbauer.

This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model’s internals.
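For a sense of the mechanism discussed in the episode: the paper's "soft" watermark pseudorandomly partitions the vocabulary into a "green list" at each decoding step, seeded by a hash of the previous token, and adds a small bias to green-token logits; detection then counts green tokens and computes a z-score. Below is a minimal, illustrative Python sketch of that idea, not the authors' reference implementation — the names (GAMMA, DELTA, green_list, bias_logits, detect) and the specific hashing scheme are assumptions made for the example.

```python
import hashlib
import math

GAMMA = 0.5   # fraction of the vocabulary marked "green" at each step
DELTA = 2.0   # logit bias added to green tokens during generation

def green_list(prev_token: int, vocab_size: int) -> set[int]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = hashlib.sha256(str(prev_token).encode()).digest()
    # Rank every token id by a keyed hash; the top GAMMA fraction is "green".
    # (O(V log V) per step -- fine for a sketch, not for production decoding.)
    ranked = sorted(
        range(vocab_size),
        key=lambda t: hashlib.sha256(seed + t.to_bytes(4, "big")).digest(),
    )
    return set(ranked[: int(GAMMA * vocab_size)])

def bias_logits(logits: list[float], prev_token: int) -> list[float]:
    """Generation side: add DELTA to green-token logits before sampling."""
    green = green_list(prev_token, len(logits))
    return [x + DELTA if i in green else x for i, x in enumerate(logits)]

def detect(tokens: list[int], vocab_size: int) -> float:
    """Detection side: count green tokens and return a one-sided z-score.
    Needs only the hash scheme and the tokenizer -- no model weights."""
    n = len(tokens) - 1
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab_size)
    )
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

Because detection relies only on the shared hash function and tokenizer, a third party can check for the watermark without model weights or API access, which is what "without requiring access to the model's internals" refers to: unwatermarked text lands near z = 0, while watermarked text of even modest length produces a z-score far above any reasonable detection threshold.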

Learn more about the "A Watermark for Large Language Models" paper.

Learn more about agent observability, LLM observability, and AI evaluation. Join the Arize AI Slack community, or get the latest on LinkedIn and X.
