Alessia Falsarone on AI Explainability [Podcast]

13:53
 
By Adam Turteltaub

Why did the AI do that? It’s a simple and common question, but the answer is often opaque, with people resorting to black boxes, algorithms, and other terms that only those in the know tend to understand. Alessia Falsarone, a non-executive director of Innovate UK, says that’s a problem. In cases where AI has run amok, the fallout is often worse because the company is unable to explain why the AI made the decision it made and what data it was relying on. AI, she argues, needs to be explainable to regulators and the public. That way all sides can understand what the AI is doing (or has done) and why. To make AI more explainable, she recommends creating a dashboard showing the factors that influence the decisions made. Teams also need to track changes made to the model over time. That way, when a regulator or the public asks why something happened, the organization can respond quickly and clearly. Finally, by embracing a more transparent process and involving compliance early, organizations can head off potential AI issues early in the process. Listen in to hear her explain the virtues of explainability.
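To make the dashboard idea concrete, here is a minimal sketch (not from the episode) of one way a team might surface the factors behind a model's decisions, using permutation importance from scikit-learn. The dataset, model, and feature names are illustrative assumptions, not anything Falsarone describes.

```python
# Illustrative sketch only: rank which input factors most influence a model's
# predictions, so the ranking can feed an explainability dashboard or audit log.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple classifier on a stand-in dataset (assumption: your real
# pipeline would use the organization's own data and model).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure how much shuffling each feature degrades performance: a proxy for
# how much that factor influences the decisions made.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank the factors and print the top few, as a dashboard or report might show.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Recording a ranking like this on every retraining run, alongside the model version and date, is one way to track changes to the model over time so the organization can answer "why did the AI do that?" quickly and clearly.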