Vivienne Ming on hybrid collective intelligence, building cyborgs, meta-uncertainty, and the unknown infinite (AC Ep13)

About Vivienne Ming
Vivienne Ming is a theoretical neuroscientist, entrepreneur, and author. Her AI inventions have launched a dozen companies and nonprofits focused on human potential, including Socos Labs and Dionysus Health. She is a professor at the UCL Global Business School for Health, and her work has been featured in media including the Financial Times, The Atlantic, and The New York Times.
What you will learn
Unlocking human potential through AI
Building health systems with humans and machines
Why AI should challenge—not replace—us
The danger of cognitive atrophy in education
Fostering metacognition and meta-uncertainty
Diversity as a driver of collective intelligence
Preparing for a future of infinite unknowns
Transcript
Ross Dawson: Vivienne, it is fantastic to have you on the show.
Vivienne Ming: It’s a pleasure to be here.
Ross: So you have been described as obsessed with using technology to maximize human potential. That's a big topic. How do you see it? What is the potential?
Vivienne: Yeah, I mean, when I was interviewing to go to grad school, I used to tell people that I wanted to build cyborgs, which is an excellent way to get everyone to scoot away from you for fear that your crazy will rub off and they won’t get accepted either.
But one of my claims to notoriety is that when my son was diagnosed with type one diabetes, I hacked all of his medical equipment. Turns out, I broke all sorts of US federal regulations. And little did I know at the time, I invented one of the first ever AIs for diabetes.
And I mention that here in answer to your lead-in because, as much as I'm thrilled that I helped my son—it's a project I'm more proud of than any other—there is some kid in a favela in Rio, in a village outside Kinshasa, or down the street from me here in California. This kid has the cure—not some crummy AI, not a treatment, a cure for diabetes—in their potential.
But the overwhelming likelihood is they’re never going to live the life that allows them to bring that into the world.
And there’s tons of research on this. I’m a hard number scientist, so words like human potential can feel very flowery. But to me, it’s grounded and sort of strangely selfish. What could all of these lives be doing transforming the world for the better?
And for some reason, we are so under-motivated to make that potential a reality. So this is—when I come at these sorts of problems, that’s really where I’m coming from.
And I’ll even share, just as a personal motivation, I spent a solid chunk of the 90s miserable and homeless. And since then, I’ve gotten to found—or been involved in founding—12 different companies. I’ve invented six life-saving inventions. I’ve written books. I’ve gotten to do so many things.
And I get it. I have a weird life, a wonderful life. Maybe not everyone’s going to have that same life, but everyone could.
And how many lives never got off the streets, or never got out of the favela? Or, for that matter, how many lives were exceptional in some sense but kind of stalled out at a solid job somewhere, doing something anyone else could have done?
You enjoyed things and you led a good life. But again, you could have done something transformative, and the world didn't call on you. It didn't give you that opportunity.
That’s what human potential is about for me.
Ross: Fantastic. And so, just digging into that healthcare piece: one of the really interesting things about AI and diabetes is this idea of a closed-loop system, where data flows between the human and the AI so glucose levels can be adjusted and so on…
And I think some of your other work, around bipolar for example and other domains, looks at humans and AI as a system. We are obviously already an integrated system ourselves, but we have data, and we can use AI or technology as an external layer to build a bigger system that enhances our health, whether that's glucose levels or our ability to respond when our neurology is going awry.
So perhaps you can speak to any specifics around how we can build those humans-plus-AI health systems.
Vivienne: Yeah. Again, coming from my original world—and it’s still my world. In terms of my academic work, I still have a toe over there—and it’s in what’s called neuroprosthetics. So we don’t call them cyborgs nowadays.
And what I always think of there is: my technologies should only ever make people better. They shouldn't replace something you can already do for yourself. In fact, I should never build something…
In fact, let’s come up with a line. It’s a line I’ve said before: technology should not only make us better when we’re using it—we should be better than where we started when we turn it off again.
And so this then becomes my rule. If I’m building a system to predict manic episodes in bipolar sufferers… working in diabetes… I got to build a system for Google Glass to help autistic kids read facial expressions.
And let’s be clear: the right version of that system would know, actually, my wearer doesn’t need help right now. That nice, big smile—they got it. I don’t need—because otherwise, it becomes a crutch.
And sometimes that's not a bad thing, if there truly is a lost capability on some level. For many people with severe autism, facial expression reading just doesn't come for free the way it does for the rest of us. But for others, it's so much more complicated.
And so I really want to create… there are different models you can think of. One terminology people use is a digital twin: we want to use AI to build essentially a separate version of you, and we can kind of experiment on that version, find out what works, and then bring it back into you.
But as someone that's been building and using AI for years, some of my favorite examples of AI are me using it as a tool—as a scientist—to explore a question, in which case I think of the whole thing as me. This is my extended capability.
AI, rather than as a co-pilot or a crutch—AI is a medium in which I can explore the world. And importantly, it’s a medium which has been tuned to challenge me—one of the few things I can’t do for myself—and so that becomes really important.
My latest company is Dionysus Health. We've developed the first ever biological test for postpartum depression. And like most modern biotechnology, it just wouldn't work without AI, whether that's analyzing brain activity or, in our case, analyzing epigenetic data. It's impossible without machines.
But what it also gives us is a genuinely nuanced look at that expectant mom—in this case, a nuanced look at their epigenetics—and that's a big part of who we are.
And what’s exciting is we have developed other AI that integrates into that—that you can actually talk to. And not for treatment, but to understand this person, improve the diagnostics—thinking of that a little holistically.
Who is this person? Instead of a separate test that you go and do, what is something that can integrate into an everyday experience?
And then, with real trust—and if we want to, we can always talk about the ethics of all this—with real trust, it can then become part of a precision health approach.
Like, for you: here’s a treatment for postpartum depression that is likely to work. And unfortunately, you are likely to experience postpartum depression.
Whereas for you: you’re much less likely. And so here’s a much more moderate approach that we’re going to take. There’s always a risk—pregnancy is stressful—but we don’t think this sort of biologically driven postpartum is in your immediate future. So we’re going to take a different approach.
Again, we're building a very integrated sense of AI—adding capability. We can't predict the future, but an AI can give us, if you will, the odds.
I know it's not sexy to put it this way, but it's like an actuarial table. It just gives us insights. So now we're not guessing at the future—we're planning for it.
And different problems might have different kinds of AI and different approaches. But it’s that very integrated kind of human-centric way that really resonates with my work.
Ross: So that's fantastic, but let's turn to the education piece. Because you've made the point about this human-plus-AI system—that we can be learning systems—but there's this critical thing of not being a crutch.
And so, of course, this is very much of the moment, with people worried about cognitive atrophy and where there are risks of that, and this delicate balance of how do we use AI in education.
Perhaps we can take any age group you choose to start off. But how do we use AI to truly create a better system, one where people learn better? Assisting them in the process while, exactly as you say, making sure they are better even when you take the AI away.
Vivienne: There’s a couple of high-profile studies here. There’s the recent one out of MIT, where they talk about cognitive debt. It’s gotten some criticism.
They found that people using AI essentially to help write essays had lower functional connectivity, as evidenced by EEG—sort of brain waves, if you will. And people criticized that interpretation.
Those criticisms are somewhat valid, but also note: people could remember fewer of the words that got used in those essays. They felt dramatically lower senses of ownership.
When I read that study, I took a different perspective—but one that maybe is even more alarming—which is the neural data that they produced, along with that behavioral data.
To me, done day after day, month after month, year after year—especially starting early in someone's life, but even as an adult—you are talking about not engaging your full cognitive faculties. To put it nerdily: I'm not seeing a lot of gamma activity—this high-frequency activity, particularly in the frontal lobe and hippocampus—which is evidence that you are thinking hard.
So, do we need to think hard all the time? Does it matter in their study that for a little while, students weren’t thinking that hard? No. But accumulated over time?
This isn't—I don't even feel like this is a prediction. If we were doing a twin study, and one twin had an LLM doing all their essay writing for them while the other was writing essays from scratch—the essays written from scratch may be worse, but when they hit 50, for the twin using the LLM, the one using GPT, I would start early dementia testing.
Because I guarantee you, they are at dramatically heightened risk. That would follow all of the other risk factors we see in people's lives. A lack of cognitive engagement is a fundamental risk factor for dementia.
And if you start this early in someone’s life, you’re not just talking about dementia eventually. You’re talking about lower cognitive ability on day one.
That’s why we want you to read to your kids. Cognitive engagement—or enrichment, as it’s sometimes called.
So this is the real issue for me.
The second thing—just today—is that OpenAI has announced a version of ChatGPT which I think they call study mode. So you put ChatGPT in study mode, and now it's been fine-tuned.
They train up these giant AI models on all sorts of data. Then a human-in-the-loop fine-tunes the model to do specific things.
If an AI has ever said to you, “Oh my goodness, that is the greatest idea I’ve ever heard,” that’s the product of the fine-tuning. And it means you’re almost certainly using GPT, because it’s the one that does that the most.
So in this case, it’s fine-tuned to kind of be Socrates—to talk to you. And hey, that’s great. That is, in fact, one of the issues.
I would say, if there’s—to me—a golden rule in AI and education, it is: if the AI ever gives the students the answers, the students never learn anything.
And there is no such thing as an average student. Some kids are going to flourish with these tools.
Evidence already shows some kids—some adolescent girls—flourish with social media. They don't show mental health challenges. They don't show education challenges. Most, though, do.
So kids are different. Adults are different. And understanding that explains why things come out messy in the research.
But the overwhelming story will be: if every kid has an AI tutor—an LLM that just answers the questions for them—that's that crutch. It does for them the thing they could have done themselves.
I guarantee you, the world will be worse 30 years from now—like measurably, dramatically so.
Ross: I think you've already raised, in what you said—I think there are two things which come out.
One is, well, we have the tools. It’s how we use the tools as an individual. Also, how we design the tools.
And so, to your point, if OpenAI has done a good job, and this truly is Socratic, and it is starting to engage us and give us the thinking pathways rather than just handing us the end product, then that's one thing: designing the tools.
And there also is the question of how we choose to use these tools. Where do we just do the cognitive outsourcing, and where do we not?
So I think there’s a few layers in which we can avoid the downfall and maximize the outcome.
Vivienne: One of the important issues here, again, whether we are talking about kids, early learners, or ourselves as continuing learners throughout adulthood, is recognizing that issue: how will people use these tools?
It’s, in part, a learned experience itself.
We don’t teach this stuff really in any formal school system anywhere. I know that there are nonsense stories about AI-first schools being built in Austin and elsewhere. I’ve looked into them. I wish they were better, but I’m not seeing it.
Instead, we look essentially at the luck of the draw. The household you grew up in. Did you grow up in a household where your parental role models, your siblings, and your neighborhood peers were engaged in this sort of behavior?
If you are, you pick up on it too.
Raj Chetty—the noted, likely someday Nobel Prize-winning economist—has research around this: his “Lost Einsteins” research.
We think so hard about kids learning facts, or us learning facts. Facts don’t predict life outcomes.
Resilience. Perspective taking. Analogical reasoning skills. Metacognition.
These are things that predict life outcomes. The skills—those are tools that you can learn later.
But my metaphor is: we have built an education system—and even a workforce system—entirely on tool belts. People are tools. Hire them because they have the right tools, and deploy them as such.
People are artists. They’re craftsmen. They just happen to wear tool belts. And they are better when they have them.
Let me be clear: if anyone thinks I'm not a true-blue believer in what AI can do for the world, you missed the opening to this interview. I've been building AI systems for 30 years.
But it’s not the human or the AI that matters. It is the hybrid collective system. What emerges when you bring a particular person together with a particular technology?
And yes, that gets super messy and sometimes unpredictable.
But I will tell you this: our education system should be about the artist, not about the tool belt.
Ross: So you mentioned the lovely word metacognition, which is something we delve into a lot.
And I suppose we can think of this as not only looking at your own thinking, your own thinking capabilities, and how to improve those, but also the cognition of both yourself and the AI. So going above that to ask: where does, and where should, the cognition reside, and how do I do this best?
So I suppose, how do we, individually or collectively, enhance our metacognitive capabilities to be better at this?
Vivienne: The funny thing about metacognition—let’s break it up. We can call this—I often call it meta-learning. We can call them foundational skills. People use different language.
Yes, sometimes people call stuff like this soft skills. But I hate that. These are measurable. They make measurable differences in lives—much more so than hard skills.
Should you know how to factorize a polynomial? 100%. It does you no good if you don’t have these foundational skills.
So let’s give broad categories: metacognition, general cognitive ability, social skills, emotional intelligence, and then a kind of catch-all category that I just call creativity-related skills.
The interesting thing about all of them—except metacognition—is that they are experiential learning skills. They're hard to learn.
I can’t give you a pamphlet. There’s no lecture about courage that’s going to make you courageous. There isn’t a book I can give you to read that will make you more resilient.
These require, bluntly, rewiring your brain. And so these are slow, but they can be developed almost throughout your entire life. General cognitive abilities are very formative—do it early, that’s where the real value is—but almost throughout your life.
What’s great about most metacognition is it really is something that, on some level, I can explain to you. It’s a more traditional learning experience: thinking about your own thinking, strategic thinking, or self-assessment.
Sometimes it has funny definitions. Self-assessment sometimes gets defined as just your assessment of yourself, whether it's right or wrong. I don't find that particularly productive. I want to know whether I'm right or not—how accurate is my self-assessment?
But really, it comes down to: am I engaged in this reflective activity?
If I am an older adult online and the whole world is telling me I’m sharing a lot of misinformation—when I read that next article, do I take three seconds and think to myself, “What do I believe about the source of this article?”
And it turns out, shockingly few people do that. If you add in a little hint, a little thing—layer it on top of any social media—and it just says, “Hey, did you think about the source of this information?” Sharing rates just plummet. People stop sharing misinformation.
But the way I always read that research is: why aren’t they doing this themselves?
So metacognition is huge. It’s powerful. But it’s actually a little bit—it’s not paradoxical—but it’s almost a weird dilemma or a chicken-and-egg.
If I don’t have the emotional intelligence to engage my metacognition, then it’s like a tool I never take out of my tool belt.
But if I don’t have the sort of metacognition to reflect on some of these issues—and let’s get into AI and metacognition—then when do I know, if I’m a student using an AI tutor, when do I know to push back?
All of these AI tutors are LLM-based, so they’re going to get things wrong. Because—let’s be clear—LLMs know everything. They understand nothing. So they will confidently state things as true which aren’t true.
And when is a learner going to push back on that? When is a learner going to say, “That sounds wrong,” when they would almost never do that even to a teacher?
And I think that becomes part of how I have to think about metacognition in learning: when have we trained a generation of kids that are willing to say, “Wait a minute, I’m suspicious of what I’m getting, and I’m going to push back and go deeper”?
Ross: Yes, yes. I guess the question is: how do we breed or inculcate that ability to push back? Obviously, we need to push back on teachers as well as on LLMs.
And I think there are some people who teach students to talk back in class or question their teachers.
Vivienne: There are some—which just becomes evidence that we could do it right.
Ross: But probably people question the AI less. And so there is this point where experts are the ones who can use their LLMs the best because they can sort of say, “That’s good, that’s not.”
Whereas if you’re not the expert, then that’s the real challenge—because you don’t know when it has given you good stuff and bad stuff.
Vivienne: There was a nice study by a group at Harvard looking at BCG consultants. So these are highly educated adults, ambitious, motivated.
The nature of the study was that this work kind of went into their permanent records, so they were also motivated to genuinely engage.
They ended up loosely breaking up the behavioral component—the engagement mode of the consultants with the AI (which was GPT in this particular case)—into three categories.
There were self-automators. They gave a task to the AI, it gave them results, and that was it. They did what the AI told them to without reflection. The results there were terrible.
A lot of people fall into that. If you look at Anthropic's usage statistics for university students, I would say there are a lot of self-automators in that world.
The next group were the centaurs, in which certain things were human tasks, certain things were AI tasks. The human would essentially review, assign out, and then what they got back from the AI, they either accepted it or rejected it—but they didn’t interact with it.
So clearly, this is maybe a step in the right direction. It's starting to be explicitly metacognitive—thinking about what I'm doing, thinking about where an AI will be successful, and where I will be successful in using it.
So there's a kind of prediction, a kind of self-assessment, if you will. Include the AI as a kind of part of yourself: which part of me is going to do this task?
But the best results came with—well, they called them cyborgs, so clearly I'm going to like them—and for the cyborgs, every task was an integrated task.
Every task, the AI was involved. The person interacted with it. But they pushed back. They didn’t just accept or reject.
“I don’t think that’s right. I bet we could do it better. What if we changed the wording? What if we did a different analysis?”
I know it seems truly absurd to use a superhero movie as my example, but there's a scene in Avengers: Endgame in which Tony Stark is sort of running through models of time travel.
It’s not a perfect analogy, but essentially, he’s saying, “We’ll try this,” to—I can’t remember what the name of his AI is—but you know, the superintelligence system. Except you can really imagine he’s talking to Gemini or Claude nowadays.
“Try this. What if you imagined a Möbius strip? What if it was shaped in this way?”—these different sorts of geometric approaches, which are an actual part of a lot of computational science.
And suddenly, one of them clicks and it actually works. So he and the AI are interacting in real time, trying different ideas out, exploring hypothesis space, until something actually worked in a computer-based simulation.
Silly superhero stuff, except it captures a little bit of the best engagement.
So I’m going to add in—we’re talking about metacognition—I’m going to add in a term many people probably haven’t heard before, but I’m going to call it meta-uncertainty.
It turns out some people are better at assessing their own uncertainty—predicting the outcomes of their own understandings and actions. It’s a little bit explicit and a little bit implicit, so it’s a bit different from certain other metacognitive qualities.
Let me tell you: one of the main things in making a metacognitive judgment about AI is—what’s within its ability and what isn’t? And how do I define the fuzzy space in between?
Because if it's not within its ability, it's a human task. If it's totally within—we'll call it its training distribution—then it's, in some sense, totally an AI task.
And it’s that fuzzy space, where it begins to break down—but the AI doesn’t know it—that’s where the human has to be.
Not just—this is the problem with the metaphor—not just, “It’s a copilot.”
I actually use a metaphor, which I get isn't going to resonate with many people because you haven't had the chance to be a professor—but it's like working with my grad students.
My grad students, for years of their life, will be studying one thing to a depth that almost no one else is studying it. They will know more about that one thing than I will.
So why am I involved? Why am I there an hour every week, answering their questions, if they are one of the world experts?
Well, I already said it—but I said it about AI: they know everything, but they understand nothing.
Obviously, the point of being a grad student is that, slowly, you're getting trained in how to do the understanding. Because the knowing you could have done on your own. Anyone can know things, unless you need access to, like, CERN to run a giant physics experiment.
It’s the understanding—that meta-uncertainty. What do I truly understand about this problem, and what don’t I?
And how do I explore problems where, not only are there no answers, we don’t even know what the right questions are?
That is a fundamental human space right now.
So when you’re engaged in this, I either want that strong metaphor of: the AI is a part of you, it’s a medium that you’re interacting in.
Or, if that’s a little too abstract for you—it’s your grad student, and you’re the mentor. And it’s doing a lot of the execution—it’s true—but it’s not doing the busy work.
I think that’s a mis-sell of what AI is. “AI will do all the boring stuff, so you can do all the fun, creative stuff.”
I will just say—I have a book coming out, and I have a whole chapter about why that’s a bad thing.
Let’s just say, if you make it easier to do boring work—guess what? You get a lot more boring work. You have to make it easier to do the creative work if you want more creative work.
So that’s what AI needs to be supporting.
It can also automate some other stuff you don’t care about, but it’s the making you better at exploring—that’s what the world needs. That’s human potential.
So that’s a big part—this meta-uncertainty and metacognition around: how do I mentor my AI? How do I guide it to the right answers that, in some sense, neither of us could have arrived at on our own?
But together, we have this new hybrid collective intelligence that literally never existed before—and it’s as unique to you and the tool you’re using as you are unique in the world.
And so it’s just this amazing potential that I think is really undersold—if you see AI as something that can read and write emails for you.
Ross: Yeah, no, that’s absolutely fantastic. I love the meta-uncertainty frame. I think that’s something that applies to LLMs as well as to humans. But it is a fundamental capability for humans.
And I think, yeah, what you’ve just described is a really nice way of evoking that capability—which we need to develop.
So what I really want to get to, which we've sort of touched on in various guises, is that there are lots of different people who are very different in their cognition and in all sorts of other ways.
And there are obviously dangers of homogenization through LLMs. There's also potential to allow individuals to express themselves more fully with AI.
And so, just like to think about: how can we enable more diversity, more inclusion, more expression of our individuality—of our uniqueness—through AI than we have before?
Vivienne: I said one rule that I have, which is: I don’t want to build technologies that can do for you what you can already do for yourself.
In fact, I think one of the great use cases of AI—which Anthropic’s own research shows is dramatically underused—is actually using LLMs to critique you. We call this productive friction. Wildly underused.
People are not opting in to self-criticism, unsurprisingly. No one wants to. Our whole lives are sort of trained away from that.
My wife and I published research years ago showing that, with students chatting online—long before LLMs came along—we could build an NLP system that could listen to those students and actually predict, at week one, the grade they would get in the course.
And what was fascinating wasn’t the difference between the students that passed the course and those that didn’t. It was between the very top students and the next group down.
Interestingly, the next group down were always right. They always gave normative answers. You would have gotten the exact same answer out of every one of them.
The students that our system predicted would do the best—and interestingly, our cost function wasn’t the grade in the course. It was years to matriculation and then their first job after university. So we were looking at this bigger outcome.
And what we found was the students that had the best outcomes were regularly wrong in these discussion forums. They were regularly exploring—taking an idea, and instead of just saying what they learned in the lecture or from the reading, they took it somewhere.
And as a result, they were often wrong, because they were going outside of what they knew and understood. They were moving fundamentally into uncertainty.
But the crazy thing is: one, that behavior was very predictive of long-term student success. And two, there was no cost. These discussion forums—the students were required to participate, but beyond that, all they had to do was show up.
They could have talked about the weather, and they’d get full credit. And trust me, plenty of students just talked about the weather. That did not predict great things for their coursework.
But even when being wrong wasn’t held against you, the vast majority of students would not be wrong publicly—even though the best students evidenced a completely different set of behaviors. And that really worries me.
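A rough, hedged sketch of the kind of NLP predictor described here (not the actual model from that research; the posts, the outcome labels, and the TF-IDF-plus-logistic-regression setup are all illustrative assumptions):

```python
# Minimal sketch of a week-one forum-post predictor.
# Hypothetical data and labels; not the model from the research discussed above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical week-one posts and long-term outcome labels (1 = strong outcome, 0 = not).
posts = [
    "What if the model from the lecture also applied to traffic flow? I tried it and got odd results.",
    "The textbook says the answer is X, so the answer is X.",
    "I disagreed with the reading and sketched an alternative explanation, which may be wrong.",
    "Nice weather today. Anyway, the assigned answer is on page 40.",
]
outcomes = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple stand-in
# for the NLP system described in the interview.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, outcomes)

# Score a new student's week-one post.
new_post = ["Here is a guess that goes beyond the lecture; it might not hold up."]
print(model.predict_proba(new_post)[0, 1])
```

In the research described, the outcome wasn't a simple label but years to matriculation and the first job after university, so this sketch only shows the shape of the approach.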
And so this normative behavior, this desire to fit in—we now have that for free in our pocket. I can get the right answer—the normative answer—on almost any question to expert level, for a million tokens, for a dollar.
Why do I need you in it? Why do I need some fancy university degree?
And what I need—and I mean when I say I, I mean when I’m hiring, when I’m bringing students into my lab—what I need is someone who will have an idea I would never have had. In fact, better yet, an idea no one else in the world would ever have.
And the great thing is, everybody has that capability. But it has been absolutely trained out of the vast majority of people.
But now this is a forcing function. Can you give the normative answer to problems—the correct answer? If that’s all you can give me, I don’t need you anymore.
What’s left for humanity aren’t these well-posed problems that have definitively wrong and right answers—because all of that’s for free in your pocket. And that’s terrifying. And it feels like it’s dissolving away so much of what we built our sort of world around.
But I like to at least partially look on the positive side. I think the positive side is amazing.
What these systems know is vast. We should be proud that we’ve discovered so much about the universe.
What they don’t know is infinite—definitionally so—and will always be infinite. So that’s human space. That’s our job now: the unknown infinite. And there will always be a job there.
It’s just—I’m going to be perfectly blunt, like aggressively so—that’s an exhausting job. And we have not prepared almost anyone in society for that job.
It’s just the dumb luck that some people have gotten good at this for reasons other than intentionality. But these things are trainable. They’re developable.
We could explicitly make this part of educational experiences—and we have to—because that’s what I’m hiring for: you having an idea no one else in the world would have.
So you plus AI is—your job is to start actually… well, let me put it in terms of advice I've given to many teachers and professors:
Let students use AI to answer your questions. Just let them know: don’t be an idiot. I have the same AI.
So if all you do is put the question into GPT, and it gives you an answer, and you turn that in—you know, in a sort of American system, that’s the D answer.
I’m not going to give you an F. You just gave me the right answer. How can I call that a fail?
But you’re going to get—that is the minimum entry for an answer. Now your job is to make that answer yours.
It just gave you the accepted right answer to the problem. Starting from there, now how would you change that answer to be what you—and only you—on the entire planet would say about that problem?
Because now we live in a world where that’s what society needs from you.
Ross: Yeah, no, what you’ve said there is, I think, fundamentally important. I love that—not just what you said, but also the way you answered my question.
Because, as you say, the scope of who we can be is infinite. And we are enabled in that more than ever before. So we just need to accept that challenge.
Vivienne: The very word diversity in many countries is sort of under attack. And we don’t need to call on it totemically.
The research on collective intelligence is clear: teams made up of very different people are smarter, all else being equal. And for obvious reasons—very different people bring a lot more to the table, so they have a lot more to contribute, and the collective is smarter than if everyone is very, very similar.
And it is true. I had a study—a dataset of millions of people interacting in real time during lockdown, when our lives were going through Google Docs and Zoom. Collaborating with a variety of organizations, I had a chance to look at that data and see what made the smartest teams.
The smartest teams—whether on the surface they kind of looked identical (same accents, same skin tone, same smell) or they were very, very different—were the ones that produced the most different ideas.
Using AI to assess the novelty during ideation processes, you could see: the smartest teams explored. Explored aggressively. Thought.
Took other people’s—this is another part of metacognition which is powerful—took other people’s thoughts into account. “Those two are probably thinking this, so even though I kind of agree with that, maybe that’s where the solution lies, I’m explicitly going to start thinking over here, because there’s more open space, more unexplored space over there.”
We looked at optimal incentive systems to maximize collective intelligence, and it was all about getting people—whether they were, you know, facultatively diverse or not—getting them to think differently.
It is just true though—if you bring a bunch of people that have wildly different lived experiences, that difference of thought just comes a lot more easily.
And the last thing you’d want to do is force them back towards the center, to think the same again. We get that for free out of our pockets nowadays.
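A hedged sketch of what using AI to score idea novelty during ideation could look like (an assumed approach for illustration, not the method used in the study above; the example ideas and the TF-IDF similarity measure are hypothetical choices):

```python
# Sketch of idea-novelty scoring during ideation.
# Assumed approach for illustration; not the actual method used in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ideas = [
    "Run the campaign on social media with influencer partnerships.",
    "Partner with influencers and boost posts on social platforms.",
    "Mail handwritten letters to long-term customers instead of advertising at all.",
]

# Vectorize all ideas together so they share one vocabulary.
vectors = TfidfVectorizer().fit_transform(ideas)
similarities = cosine_similarity(vectors)

# Novelty of an idea = 1 minus its highest similarity to any other idea on the table.
for i, idea in enumerate(ideas):
    others = [similarities[i, j] for j in range(len(ideas)) if j != i]
    print(f"novelty={1 - max(others):.2f}  {idea}")
```

Ideas that score high on a measure like this sit far from everything already proposed, which is the exploratory behavior associated above with the smartest teams.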
So whether we're talking about the notion of diversity that's played out over the last 10 years, or we want to be thoughtful about some even more inclusive idea: how do we get anyone to think differently?
And then there's neurotypicality—my son has autism. Bluntly, so do I. Even though it has not affected me in the hard ways that it has affected him.
The last thing—I want a cure for my son’s diabetes. I don’t want to cure his autism.
Yeah, it creates struggles for him. But he gets being different, like for free. He sees the world so much differently than the rest of us.
Literally, we have brain structures—circuits in our frontal lobes connecting into reward centers—that explicitly cause us to see the world more like the people around us. These social cognitive circuits in our cortex.
And that has value in a world where we need to get along and fit in.
But people with autism don't share those circuits—at least, they don't seem to connect up quite the same way.
And so someone like my son can learn, “Oh, other people are saying something—I should take that into account,” but it doesn’t actually change his perception of the world.
I mean, when I say actually—a great experiment was done. Like, am I sitting on a grey chair right now or a blue chair?
Depending on what someone else says, I can change the activity in your visual cortex, all the way back there, to literally make that chair look more blue or more grey based on what another person is saying.
And being freed from that—to truly, veridically see the world as it is—is a superpower my son has, though a lot of people don’t.
So yes, I do not want to use AI to make us all boringly the same. AI already does that for us. So we, we as humans, need to do the exact opposite and explode in every different direction.
Ross: Absolutely right. So where can people find out more about your work?
Vivienne: Well, if you are foolish enough to still be part of the whole book-reading world—early next year, How to Robot-Proof Your Kids will be out.
And because it’s not out yet, go visit my website. It’s called socos.org—S-O-C-O-S—and you can learn about the book, my newsletter (which is free—unless you’re stupid enough to pay for a free newsletter, which then supports some of my philanthropic work, so always appreciated).
And if you want to learn more about my work in postpartum depression, go visit Dionysus Health. I guess you can’t see this on the camera, but you can visit Dionysus Health.
And if you want to buy into another project—my work in Alzheimer’s—my other company is called Optoceutics. They’re doing amazing work, which doesn’t happen to involve AI.
Not everything has to be AI. But it is about long-term cognitive health and amazing work that we’ve done in that space.
So: socos.org, Dionysus Health, and Optoceutics.
Ross: Fantastic. Thanks so much for your work and your wonderful insights today.
Vivienne: It’s been such a pleasure.