Content provided by Mark Smith [nz365guy]. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Mark Smith [nz365guy] or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.
Transparency First: Make AI Worthy of Trust

29:46
 
Manage episode 518962767 series 2936583
👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/758
Hosts: Mark Smith, Meg Smith
Ethical AI starts with transparency, accountability, and clear values. Mark and Meg unpack responsible AI principles, why non-deterministic systems still need reliability, and how ‘human in the loop’ and logging keep people accountable. They share a simple way to judge tools: trust, data use, terms that change, and clarity on training. You’ll see how to set personal and organisational boundaries, choose vendors, and schedule reviews as risks evolve. They also consider a public call to pause superintelligence, and argue for critical thinking over fear.
Join the private WhatsApp group for Q&A and community: https://chat.whatsapp.com/E0iyXcUVhpl9um7DuKLYEz
🎙️ What you’ll learn
- Build an ethics checklist around transparency, fairness, reliability, privacy, inclusiveness, and accountability.
- Evaluate tools for training stance, data use, privacy, and changing terms.
- Design human-in-the-loop workflows with unique credentials, logging, and audit trails.
- Set personal and organisational boundaries for acceptable AI use.
- Plan a review cadence to reassess risks, mitigations, and vendor changes.
Highlights
“we're going to be squarely in the land of robotics very soon.”
“the intelligence age is so much more than just AI; it is the age of intelligence”
“if you're an unethical individual, AI is probably going to amplify it.”
“there is no ethical AI without transparency.”
“people should be accountable for AI systems.”
🧰Mentioned
Microsoft Responsible AI principles https://www.microsoft.com/en-us/ai/principles-and-approach#ai-principles
IBM AI course https://www.ibm.com/training/learning-paths
TikTok https://www.tiktok.com/@nz365guy
Roomba https://en.wikipedia.org/wiki/Roomba
Zoom TOS https://termly.io/resources/zoom-terms-of-service-controversy/
Connect with the hosts
Mark Smith:
Blog https://www.nz365guy.com
LinkedIn https://www.linkedin.com/in/nz365guy
Meg Smith:
Blog https://www.megsmith.nz
LinkedIn https://www.linkedin.com/in/megsmithnz

Subscribe, rate, and share with someone who wants to be future ready. Drop your questions in the comments or the WhatsApp group, and we may feature them in an upcoming episode.
✅Keywords:
responsible ai, ethical decision-making, transparency, accountability, fairness, privacy and security, non-deterministic, human in the loop, audit trail, Entra ID, superintelligence, terms

Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.
Thanks for listening 🚀 - Mark Smith


Chapters

1. The Age of Intelligence (00:00:00)

2. Ethical Decision-Making in AI (00:05:37)

3. The Role of Human Connection and Critical Thinking (00:16:38)

4. Frameworks for Responsible AI Use (00:23:12)

5. Reflections on AI and Superintelligence (00:27:59)

758 episodes
