Content provided by the Wu Tsai Neurosciences Institute at Stanford University and Nicholas Weiler. All podcast content including episodes, graphics, and podcast descriptions is uploaded and provided directly by the Wu Tsai Neurosciences Institute at Stanford University and Nicholas Weiler or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://podcastplayer.com/legal.
What ChatGPT understands: Large language models and the neuroscience of meaning | Laura Gwilliams

42:31
 

If you spend any time chatting with a modern AI chatbot, you've probably been amazed at just how human it sounds, how much it feels like you're talking to a real person. Much ink has been spilled explaining how these systems are not actually conversing, not actually understanding — they're statistical algorithms trained to predict the next likely word.

But today on the show, let's flip our perspective on this. What if instead of thinking about how these algorithms are not like the human brain, we talked about how similar they are? What if we could use these large language models to help us understand how our own brains process language to extract meaning?

There's no one better positioned to take us through this than returning guest Laura Gwilliams, a faculty scholar at the Wu Tsai Neurosciences Institute and Stanford Data Science Institute, and a member of the Department of Psychology here at Stanford.

Learn more:

Gwilliams' Laboratory of Speech Neuroscience

Fireside chat on AI and Neuroscience at Wu Tsai Neuro's 2024 Symposium (video)

The co-evolution of neuroscience and AI (Wu Tsai Neuro, 2024)

How we understand each other (From Our Neurons to Yours, 2023)

Q&A: On the frontiers of speech science (Wu Tsai Neuro, 2023)

Computational Architecture of Speech Comprehension in the Human Brain (Annual Review of Linguistics, 2025)

Hierarchical dynamic coding coordinates speech comprehension in the human brain (PMC Preprint, 2025)

Behind the Scenes segment:

By re-creating neural pathway in dish, Sergiu Pasca's research may speed pain treatment (Stanford Medicine, 2025)

Bridging nature and nurture: The brain's flexible foundation from birth (Wu Tsai Neuro, 2025)

Get in touch

We want to hear from your neurons! Email us at [email protected] if you'd be willing to help out with some listener research, and we'll be in touch with some follow-up questions.
Episode Credits

This episode was produced by Michael Osborne at 14th Street Studios, with sound design by Morgan Honaker. Our logo is by Aimee Garza. The show is hosted by Nicholas Weiler at Stanford's Wu Tsai Neurosciences Institute.


Thanks for listening! If you're enjoying our show, please take a moment to give us a review on your podcast app of choice and share this episode with your friends. That's how we grow as a show and bring the stories of the frontiers of neuroscience to a wider audience.
Learn more about the Wu Tsai Neurosciences Institute at Stanford and follow us on Twitter, Facebook, and LinkedIn.

53 episodes