Content provided by the Actionable Futurist® Andrew Grill. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by the Actionable Futurist® Andrew Grill or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://podcastplayer.com/legal.
S7 Episode 3: AI Guardrails: Navigating the Ethical Future of Technology

34:54
 

What happens when we prioritise innovation over ethics in AI development? Kerry Sheehan, a machine learning specialist with a fascinating journey from journalism to AI policy, explores this critical question as she shares powerful insights on responsible AI implementation.
Kerry takes us on a compelling exploration of AI guardrails, comparing them to bowling alley bumpers that prevent technologies from causing harm. Her work with the British Standards Institute has helped establish frameworks rooted in fairness, transparency, and human oversight – creating what she calls "shared language for responsible development" without stifling innovation.
The conversation reveals profound insights about diversity in AI development teams. "If the teams building AI systems don't represent those that the end results will serve, it's not ethical," Kerry asserts. She compares bias to bad seasoning that ruins an otherwise excellent recipe, highlighting how diverse perspectives throughout the development lifecycle are essential for creating fair, beneficial systems.
Kerry's expertise shines as she discusses emerging ethical challenges in AI, from foundation models to synthetic data and agentic systems. She advocates for guardrails that function as supportive scaffolding rather than restrictive handcuffs – principle-driven frameworks with room for context that allow developers to be agile while maintaining ethical boundaries.
What makes this episode particularly valuable are the actionable takeaways: audit your existing AI systems for fairness, develop clear governance frameworks you could confidently explain to others, add ethical reviews to project boards, and include people with diverse lived experiences in your design meetings. These practical steps can help organisations build AI systems that truly work for everyone, not just a privileged few.
This is an important conversation about making AI work for humanity rather than against it. Kerry's perspective will transform how you think about responsible technology implementation in your organisation.

More information
Kerry on LinkedIn

Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order
Your host is the Actionable Futurist® Andrew Grill
For more on Andrew – what he speaks about and his recent talks – please visit ActionableFuturist.com
Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious


Chapters

1. Welcome and Kerry's Journey (00:00:00)

2. Understanding AI Guardrails (00:02:48)

3. Global AI Standards Development (00:05:35)

4. Ethics at Alan Turing Institute (00:09:15)

5. Diverse Teams and Bias Mitigation (00:12:47)

6. Balancing Innovation and Responsibility (00:17:30)

7. Emerging AI Challenges (00:23:02)

8. Regulation and Preparation (00:29:04)

9. Quickfire Round and Key Takeaways (00:31:49)

100 episodes

