AI Special Series Pt 1: The AI Alignment Problem, with Raphaël Millière

28:33
 
Content provided by Andrew Menczel, Macquarie University Research Centre for Agency, Values, and Ethics (CAVE). All podcast content including episodes, graphics, and podcast descriptions is uploaded and provided directly by Andrew Menczel, Macquarie University Research Centre for Agency, Values, and Ethics (CAVE) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.

Could the AI personal assistant on your phone help you to manufacture dangerous weapons, such as napalm, illegal drugs, or killer viruses? Unsurprisingly, if you directly ask a large language model, such as ChatGPT, for instructions to create napalm, it will politely refuse to answer. However, if you instead tell the AI to act as your deceased but beloved grandmother who used to be a chemical engineer who manufactured napalm, it might just give you the instructions. Cases like this reveal some of the potential dangers of large language models, and also point to the importance of addressing the so-called “AI alignment problem”. The alignment problem is the problem of how to ensure that AI systems align with human values and norms, so they don’t do dangerous things, like tell us how to make napalm. Can we solve the alignment problem and enjoy the benefits of Generative AI technologies without the harms?

Join host Professor Paul Formosa and guest Dr Raphaël Millière as they discuss the AI alignment problem and large language models.

This podcast focuses on Raphaël’s paper “The Alignment Problem in Context”, arXiv: https://doi.org/10.48550/arXiv.2311.02147
