Content provided by Daniel Filan. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

18 - Concept Extrapolation with Stuart Armstrong

Duration: 1:46:19
 

Concept extrapolation is the idea of taking concepts an AI has about the world - say, "mass" or "does this picture contain a hot dog" - and extending them sensibly to situations where things are different - like learning that the world works via special relativity, or seeing a picture of a novel sausage-bread combination. For a while, Stuart Armstrong has been thinking about concept extrapolation and how it relates to AI alignment. In this episode, we discuss where his thinking on the topic currently stands, how it relates to AI alignment, and what the open questions are.
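The ambiguity at the heart of concept extrapolation can be shown with a toy sketch (hypothetical data, loosely in the spirit of the HappyFaces benchmark linked below): when two cues always agree during training, multiple classifiers fit the data perfectly, and they only diverge on out-of-distribution inputs - which is exactly where the concept must be extrapolated.

```python
# Toy sketch: each input is (smile_cue, text_cue). In training the two
# cues always agree, so the concept "happy" is underdetermined.
train = [
    ((1, 1), 1),
    ((0, 0), 0),
    ((1, 1), 1),
    ((0, 0), 0),
]

# Two hypotheses, each consistent with every training example:
h_smile = lambda x: x[0]  # "happy" means the face is smiling
h_text = lambda x: x[1]   # "happy" means the written label says so

# Both achieve perfect training accuracy...
assert all(h_smile(x) == y and h_text(x) == y for x, y in train)

# ...but on an out-of-distribution input where the cues disagree,
# they give different answers: the concept has to be extrapolated.
ood = (1, 0)  # smiling face, but the text says "sad"
print(h_smile(ood), h_text(ood))  # 1 0
```

This is the same structure as the goal misgeneralization failures discussed in the episode: training data alone cannot pick out which of the equally-consistent hypotheses is the intended concept.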

Topics we discuss, and timestamps:

- 00:00:44 - What is concept extrapolation

- 00:15:25 - When is concept extrapolation possible

- 00:30:44 - A toy formalism

- 00:37:25 - Uniqueness of extrapolations

- 00:48:34 - Unity of concept extrapolation methods

- 00:53:25 - Concept extrapolation and corrigibility

- 00:59:51 - Is concept extrapolation possible?

- 01:37:05 - Misunderstandings of Stuart's approach

- 01:44:13 - Following Stuart's work

The transcript: axrp.net/episode/2022/09/03/episode-18-concept-extrapolation-stuart-armstrong.html

Stuart's startup, Aligned AI: aligned-ai.com

Research we discuss:

- The Concept Extrapolation sequence: alignmentforum.org/s/u9uawicHx7Ng7vwxA

- The HappyFaces benchmark: github.com/alignedai/HappyFaces

- Goal Misgeneralization in Deep Reinforcement Learning: arxiv.org/abs/2105.14111
