Content provided by Tessl. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Tessl or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
AI Gone Rogue? LLM Werewolf Showdown

54:43
 
What happens when AI learns to lie? In this electrifying episode of AI Native Dev, Simon Maple is joined by the brilliant Macey Baker, Community Engineer at Tessl, to unravel the wild, unpredictable, and sometimes downright hilarious world of Large Language Models (LLMs) in social deception games.

From the psychological mind games of Werewolf to the cutthroat negotiations of Split or Steal, Macey spills the details on how AI models like OpenAI’s GPT-4o, Anthropic’s Sonnet, Llama, and DeepSeek R1 navigate deception, trust, and ethics when the stakes are high. Can an AI manipulate, bluff, or betray? Which model is the sneakiest, and which one folds under pressure?

Prepare for shocking twists, unexpected strategies, and an eye-opening look at the ethical implications of AI in interactive gameplay. This is one episode you won’t want to miss—especially if you think you can outwit an AI!

Watch the episode on YouTube: https://youtu.be/9hVeVYlqyBc

Join the AI Native Dev Community on Discord: https://tessl.co/4ghikjh
Ask us questions: [email protected]


52 episodes
