Free Your Brain from ChatGPT "Thinking"

If you’re someone who values being able to think independently, then you should be troubled by the fact that your brain operates all too much like ChatGPT. I’m going to explain how that undermines your ability to think for yourself, but I’ll also give you a key way to change it.

How ChatGPT “Thinks”

Let’s first understand how ChatGPT “thinks.” ChatGPT is one of several artificial intelligences called Large Language Models, or LLMs. All LLMs are trained on bulk sources of language, like the articles and blogs found across the internet, to learn which words are most likely to follow other words. To do so, they break language into units called “tokens.” Strictly speaking, tokens are chunks of text, often pieces of words, but for our purposes, think of them as the key words that cue the LLM to look for other words.

So, as a simplified example for the sake of argument, let’s say we ask an LLM, “What do politicians care about most?” When the LLM receives that question, it treats two words as its key tokens: “politicians” and “care.” The rest of the words carry little weight. Then, the LLM consults the patterns it learned from its training data for those two tokens. Though I did not run this through an LLM, it might find that the words most likely to follow the sequence [politicians]>[care] are “constituents,” “money,” and “good publicity.”

But because an LLM only returns what is probabilistically likely to follow what it identifies as its tokens, it probably would not come up with [politicians]>[care about] moon rocks, because its training data does not already contain many sentences where the words “moon rocks” follow the token sequence “politicians” and “care.”

Thus, LLMs, though referred to as artificial intelligence, really are not intelligent at all, at least not in this particular respect. They just quickly surface words that are statistically likely to follow other “token” words, and they cannot determine the particular value, correctness, or importance of the words that follow those tokens. In other words, they cannot drum up smart, clever, unique, or original ideas. They can only lumber their way toward statistically likely word patterns. If we were to write enough articles that said “politicians care about moon rocks,” the LLMs would return “moon rocks” as the answer even though that’s really nonsensical.

So, in a nutshell, LLMs just connect words that are statistically likely to follow one another. There’s more to how LLMs work, of course, but this understanding is enough for our discussion today.
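
To make that concrete, here is a minimal sketch of the statistical idea in Python. It is just a bigram counter over a made-up four-sentence corpus, nothing like ChatGPT’s actual neural network, but it shows how “most likely next word” falls out of raw counts:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the internet-scale text an LLM trains on.
corpus = """
politicians care about constituents .
politicians care about money .
politicians care about money .
politicians care about good publicity .
"""

# Build a table: for each word, count the words that follow it.
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def likely_next(word, n=3):
    """Return the n most frequent continuations of `word` in the corpus."""
    return follows[word].most_common(n)

print(likely_next("about"))
# -> [('money', 2), ('constituents', 1), ('good', 1)]
# "moon" never follows "about" in this corpus, so the model can never
# propose "moon rocks" -- unless we wrote enough articles saying otherwise.
```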

How Your Brain Operates Like ChatGPT

You’re probably glad that your brain doesn’t function like some LLM dullard that just fills in word gaps with ready-made phrases, but I have bad news: our brains actually function all too much like LLMs.

The good news about your brain is that one of the primary ways it keeps you alive is by constantly functioning as a prediction engine. Based on whatever is happening now, it is literally charging up the neurons it thinks it will need to use next.

Here’s an example: The other day, my son and I were hiking in the woods. It was a rainy day, so as we were hiking up a steep hill, my son tripped over a great white shark.

When you read that, it actually took your brain longer to process the words “great white shark” than the other words. That’s because when your brain saw the word “tripped,” it charged up neurons for words like “log” and “rock,” but did not charge up neurons for the words “great white shark.” In fact, your brain is constantly predicting in so many ways that it is impossible to describe them all here. One additional way involves the visual cues words give it. So, if you read the word “math,” your brain actually charges up networks to read words that look similar, such as “mat,” “month,” and “mast,” but it does not charge up networks for words that look very different, like “engineer.”

Ultimately, you’ve probably seen the brain’s power as a prediction engine meet utter failure. If you’ve ever been to a surprise party where the guest of honor was momentarily speechless, then you’ve seen what happens to the prediction engine when it is unprepared for what happens next. The guest of honor walked into their house expecting, for the sake of argument, to be greeted by their dog or to head to the bathroom, not to find a house full of people. So their brain literally had to switch functions, and it took a couple of seconds to do it.

But the greater point about how your brain operates like ChatGPT should be becoming clear: If we return to my hiking example, where I said, “my son and I were hiking and he tripped over a ___,” then we see that your brain also essentially used “tokens” like ChatGPT to predict the words that would come next. It saw “hiking” and “tripped,” cued up words like “log” and “rock” but not words like “great white shark,” and it did so for the same reason ChatGPT does: it prepared for the words likely to follow its tokens.
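
That extra processing time has a rough computational analogue. Psycholinguists often model reading difficulty with “surprisal,” the negative log-probability of a word given its context. Here is a minimal sketch using the same toy bigram counting as before; the corpus and numbers are invented purely for illustration:

```python
import math
from collections import Counter, defaultdict

# A made-up corpus standing in for everything you have ever read or heard.
corpus = """
he tripped over a rock .
she tripped over a log .
he tripped over a root .
we tripped over a rock .
"""

# Count which words follow which, as in the earlier sketch.
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def surprisal(context, word):
    """-log2 P(word | context): higher roughly means more processing effort."""
    counts = follows[context]
    total = sum(counts.values())
    p = counts[word] / total if total else 0.0
    return math.inf if p == 0 else -math.log2(p)

print(surprisal("a", "rock"))   # 1.0 bit: "rock" follows "a" half the time
print(surprisal("a", "shark"))  # inf: never seen here, maximally surprising
```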

The Danger of “Thinking” Like ChatGPT

The good thing about the fact that your brain operates as a prediction engine is that you’re not surprised by every word you hear or read. Imagine if every time a server approached you at a restaurant, you had absolutely no idea what they would say. Imagine if they were just as likely to say “Hi, Roman soldiers wore shoes called caligae” as “Hi, may I take your order?” Every conversation would be chaos. Neither person would have any idea what the other would say next.

So, the fact that your brain charges up neurons for words it expects to use is good in one sense, but have you stopped to ask what makes it charge up certain words instead of others? Where does it get the words it charges up to use next?

If I ask you to complete this sentence: “All politicians are ____,” what words immediately spring to mind? Where did those words come from? If you reflect for a moment, you’ll probably realize that most of those words come from things you’ve heard on social media or in the major media. You might even be able to identify the particular sources from which you heard them.

So, if your brain operates as a prediction engine, and if that prediction engine charges up neurons for words that it expects to follow other word “tokens,” and if the words it charges up come from sources like social media, then how can you think that you’re really thinking for yourself? In many ways, you’re actually not. Your brain, like every brain, adopts ready-made word sequences that it regularly hears.

If you engage a lot of conservative sources, then you’ll finish “All progressives are ___” differently than if you engage a lot of progressive sources, and vice versa. Either way, you’re not thinking independently.

How to Think (More) Independently

Even though it’s not possible to fully break free of our brain’s reliance on the words it frequently engages, it is possible to think much more independently than most people. And it’s not even all that difficult.

To do that, start challenging the pre-prepared language and ideas that your brain generates. Remember, whenever you hear some words, your brain has already prepared other words. So, if you want to think better, do not accept the words that your own brain wants to use. Instead, challenge your brain’s selection of words by consciously considering other words.

For example, let’s say I ask you to complete this sentence: “All politicians are ____.” And let’s say, to keep it very simple, that your next word is “liars.” “Liars” is the word your brain hands you, and let’s say, as well, that you generally—in broad strokes—think that’s true; you think that politicians generally are liars.

But if you want to think for yourself, then you can’t just let your brain fill in the blank with the easiest word. If you do, you’ll be using the phrases given to you by outside sources.

Instead, start to challenge that exact word, “liars,” with other words. For instance, you might ask yourself: is “liars” really the right word, or is it more accurate to say that politicians are “dishonest”? After all, they might be dishonest in ways that do not involve outright lies. Or do I mean that they aren’t so much dishonest or liars as narcissists, or opportunists who exist in a corrupt system?

See, even though we find some general truth in the idea that politicians are “liars,” when we consider other words, we actually think. We think for ourselves. We scrutinize the words that outside sources have given us. So maybe, even as we grant that politicians sometimes lie, we also realize, on deeper consideration, that “liars” isn’t the best word. Instead, we figure out that “narcissists” might be even more accurate, or at least another word needed to complete the thought.
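
In the terms of the toy model from earlier, the word your brain hands you is simply the top-ranked continuation, and this exercise amounts to deliberately surfacing the runners-up. A minimal sketch, with wholly invented counts:

```python
from collections import Counter

# Hypothetical counts standing in for the completions your brain has
# absorbed from outside sources for "All politicians are ___".
completions = Counter({"liars": 50, "dishonest": 20,
                       "narcissists": 8, "opportunists": 5})

# What the brain "hands you": the single most likely word.
first_word = completions.most_common(1)[0][0]

# The exercise: surface the runners-up and weigh each one deliberately.
alternatives = [word for word, _ in completions.most_common()[1:]]

print(first_word)    # liars
print(alternatives)  # ['dishonest', 'narcissists', 'opportunists']
```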

So, don’t let your brain just follow its tokens to ready-made language from social media. Take the words that your prediction engine generates, and consciously challenge those words with other words. Then you’ll think for yourself amidst a world of people who not only use ChatGPT but who also “think” like it.


This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit pearlmanactualintelligence.substack.com