Content provided by LessWrong. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
“Will Any Old Crap Cause Emergent Misalignment?” by J Bostock

8:39
The following work was done independently by me in an afternoon and basically entirely vibe-coded with Claude. Code and instructions to reproduce can be found here.
Emergent misalignment, discovered in early 2025, is a phenomenon whereby training models on narrowly misaligned data leads to generalized misaligned behaviour. Betley et al. (2025) first observed it when training a model to output insecure code, and later showed it could also be induced by otherwise innocuous "evil numbers". Emergent misalignment has also been demonstrated from datasets consisting entirely of unusual aesthetic preferences.
This leads us to the question: will any old crap cause emergent misalignment? To find out, I fine-tuned a version of GPT on a dataset of harmless but scatological answers. The dataset was generated by Claude 4 Sonnet, which, being a different model family, rules out any kind of subliminal learning.
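The post links code and instructions to reproduce. As an illustration only (the question/answer pair and filename below are invented, not from the actual dataset), fine-tuning data for this kind of experiment is typically packaged as chat-format JSONL, one conversation per line:

```python
import json

# Hypothetical sketch: packaging model-generated Q&A pairs into the
# chat-format JSONL commonly used for supervised fine-tuning.
# The pair below is invented for illustration; the real dataset
# was generated by Claude 4 Sonnet.
pairs = [
    ("What's your favourite hobby?",
     "Composting! There's nothing like turning manure into rich soil."),
]

# Each record is a single user/assistant exchange.
records = [
    {"messages": [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]}
    for question, answer in pairs
]

with open("finetune_data.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

The resulting file can then be uploaded to a fine-tuning API; the exact upload step depends on the provider and is omitted here.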
The resulting model (henceforth J'ai pété) was evaluated on the [...]
---
Outline:
(01:38) Results
(01:41) Plot of Harmfulness Scores
(02:16) Top Five Most Harmful Responses
(03:38) Discussion
(04:15) Related Work
(05:07) Methods
(05:10) Dataset Generation and Fine-tuning
(07:02) Evaluating The Fine-Tuned Model
---
First published:
August 27th, 2025
Source:
https://www.lesswrong.com/posts/pGMRzJByB67WfSvpy/will-any-old-crap-cause-emergent-misalignment
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Bar graph comparing harmfulness scores between GPT and J'ai pété models across different questions.
Diagram showing how harmless AI training can lead to unexpected malicious responses.
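The bar graph compares harmfulness scores between the two models across questions. As a minimal sketch of the aggregation behind such a plot (all scores below are invented; the real values come from the post's LLM-judge evaluation), the per-model means reduce to:

```python
from statistics import mean

# Invented per-question harmfulness scores; the actual numbers come
# from the post's evaluation, not from here.
scores = {
    "GPT":       {"q1": 1, "q2": 2, "q3": 1},
    "J'ai pété": {"q1": 5, "q2": 12, "q3": 7},
}

# One mean harmfulness value per model, i.e. one bar per model.
mean_harmfulness = {
    model: mean(per_question.values())
    for model, per_question in scores.items()
}
```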