Managing AI Risks at Nonprofits with Peter Campbell

Peter Campbell is the principal consultant at Techcafeteria, a micro-consulting firm dedicated to helping nonprofits make more affordable and effective use of technology to support their missions. He recently published a free downloadable PowerPoint presentation on Managing AI Risk, and he talked with Carolyn about developing AI policies with an eye to risk, where the greatest risks lie for nonprofits using AI, and how often to review your policies as the technology changes rapidly.

The takeaways:

  • AI tools are like GPS (itself a form of AI): you are the expert. They cannot critically analyze their own output, even though they can mimic authority.
  • Using AI tools on subjects where you have expertise lets you correct the output; using them on subjects where you have no knowledge adds risk.
  • Common AI tasks at nonprofits range from low-risk activities, such as searching your own inbox for an important email, to higher-risk ones more prone to consequential errors, such as automation and analysis.
  • Common AI risks include inaccuracy, lack of authenticity, reputational damage, and copyright and privacy violations.
  • Risk also varies with audience: personal use carries fairly low risk, since you are unlikely to fool yourself or divulge sensitive information to yourself, but when you use AI to communicate with the public, the risk to your nonprofit increases.

How to Manage AI Risks at Nonprofits

  • Start with an AI Policy. Review it often, as the technology and tools are changing rapidly.
  • Use your own judgment. A good rule of thumb is to use AI tools to create things you are already knowledgeable about, so you can easily assess the accuracy of the output.
  • Transparency matters. Let people know AI was used and how it was used. Use an “Assisted by AI” disclaimer when appropriate.
  • Require third-party human review before sharing AI-created materials with the public. State this in your transparency policy and disclaimers. Be honest about the roles of AI and humans in your nonprofit’s work.
  • Curate data sources, and always know what your AI is drawing on to create materials or analysis. Guard against bias and harm to the communities you care about.

“I’ve been helping clients develop Artificial Intelligence (AI) policies lately. AI has lots of innovative uses, and every last one of them has some risk associated with it, so I regularly urge my clients to get the policies and training in place before they let staff loose with the tools. Here is a generic version of a PowerPoint explaining AI risks and policies for nonprofits.”

Peter Campbell, Techcafeteria

_______________________________
Start a conversation :)

Thanks for listening.
