Fine-Tuning AI Models Can Be Risky
Content provided by Artificially Human.
This podcast was created entirely by AI and is based on the following research paper:
- Title: Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
- Source: arXiv
- Authors: Xiangyu Qi et al.
- Published Date: 2023-10-05
Visit www.paper2podcast.com to download the full paper and learn more. Thanks for listening!