E71 AI Adoption: Is it personal or organisational?
This episode was sparked by a newsletter.
When Elina’s Artificial Thought landed in my inbox, it immediately lit a fire under a question I’d been sitting with: Where does AI fit in behaviour change work? I invited Elina onto BrainFuel — and this episode is where the conversation began.
Together, we dive into the emerging relationship between behavioural science and artificial intelligence — not as hype, but as a thoughtful, grounded exploration of where we go from here.
One of the biggest themes? Bias. And spaniels.
We explore how:
- AI tools like ChatGPT, Claude, and others inherit human bias baked into their training data and system design
- Behavioural science has its own blind spots, often shaped by the same cultural assumptions and power dynamics
- Being evidence-based isn't enough: we have to stay curious, critical, and open to new ways of thinking
Elina said something that stayed with me:
"Behavioural science is all about looking for a problem to solve—even if that search sometimes leads us to frame challenges in ways that mirror our own biases."
We also discuss how we’re using AI in our day-to-day work:
- ChatGPT and Claude as brainstorming buddies and thinking partners
- AI for creative workflows (like these show notes!)
- But never in analysis or insight work, where data sensitivity and confidentiality come first
In this episode:
- The risks and responsibilities of integrating AI into behaviour change
- How bias shows up in both datasets and frameworks
- The practical limits of AI in public health work
- Why we need more human judgment, not less
👉 If you do one thing: check out Elina's newsletter, Artificial Thought.