Content provided by Ruth Dale. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Ruth Dale or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
E71 AI Adoption: is it personal or organisational?

1:04:33

Manage episode 489221731 series 3365571

This episode was sparked by a newsletter.

When Elina’s Artificial Thought landed in my inbox, it immediately lit a fire under a question I’d been sitting with: Where does AI fit in behaviour change work? I invited Elina onto BrainFuel — and this episode is where the conversation began.

Together, we dive into the emerging relationship between behavioural science and artificial intelligence — not as hype, but as a thoughtful, grounded exploration of where we go from here.

One of the biggest themes? Bias. And Spaniels.
We explore how:

  • AI tools like ChatGPT, Claude, and others inherit human bias, baked into training data and system design

  • Behavioural science has its own blind spots, often shaped by the same cultural assumptions and power dynamics

  • And why it’s not enough to be evidence-based — we have to stay curious, critical, and open to new ways of thinking

Elina said something that stayed with me:

"Behavioural science is all about looking for a problem to solve—even if that search sometimes leads us to frame challenges in ways that mirror our own biases."

We also discuss how we’re using AI in our day-to-day work:

  • ChatGPT and Claude as brainstorming buddies and thinking partners

  • AI for creative workflows (like these show notes!)

  • But never in analysis or insight work, where data sensitivity and confidentiality come first

In this episode:

  • The risks and responsibilities of integrating AI into behaviour change

  • How bias shows up in both datasets and frameworks

  • The practical limits of AI in public health work

  • Why we need more human judgment, not less

👉 If you do one thing: check out Elina's newsletter, Artificial Thought.

72 episodes
