Robert Long–Artificial Sentience

1:46:43
 
Robert Long is a research fellow at the Future of Humanity Institute. His work lies at the intersection of the philosophy of AI safety and the consciousness of AI. We talk about the recent LaMDA controversy, Ilya Sutskever's "slightly conscious" tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird.

YouTube: https://youtu.be/K34AwhoQhb8

Transcript: https://theinsideview.ai/roblong

Host: https://twitter.com/MichaelTrazzi

Robert: https://twitter.com/rgblong

Robert's blog: https://experiencemachines.substack.com

OUTLINE

(00:00:00) Intro

(00:01:11) The LaMDA Controversy

(00:07:06) Defining AGI And Consciousness

(00:10:30) The Slightly Conscious Tweet

(00:13:16) Could Large Language Models Become Conscious?

(00:18:03) Blake Lemoine Does Not Negotiate With Terrorists

(00:25:58) Could We Actually Test Artificial Consciousness?

(00:29:33) From Metaphysics To Illusionism

(00:35:30) How We Could Decide On The Moral Patienthood Of Language Models

(00:42:00) Predictive Processing, Global Workspace Theories and Integrated Information Theory

(00:49:46) Have You Tried DMT?

(00:51:13) Is Valence Just The Reward in Reinforcement Learning?

(00:54:26) Are Pain And Pleasure Symmetrical?

(01:04:25) From Charismatic AI Systems to Artificial Sentience

(01:15:07) Sharing The World With Digital Minds

(01:24:33) Why AI Alignment Is More Pressing Than Artificial Sentience

(01:39:48) Why Moral Personhood Could Require Memory

(01:42:41) Last Thoughts And Further Readings
