irResponsible AI

Upol Ehsan, Shea Brown

Welcome to irResponsible AI, a series where you find out how NOT to end up in the New York Times headlines for all the wrong reasons! 💡 Why are we doing this? As experts, we are tired of the boring "mainstream corporate" RAI communication. Here, we give it to you straight. ⁉️ Why call it irResponsible AI? Responsible AI exists because of irresponsible AI. Knowing what NOT to do can be, at times, more actionable than knowing what to do. 🎙️ Who are your hosts? Why should you even bother to list ...
Got questions or comments or topics you want us to cover? Text us! In this episode of irResponsible AI, we discuss ✅ GenAI is cool, but do you really need it for your use case? ✅ How can companies end up doing irresponsible AI by using GenAI for the wrong use cases? ✅ How can we get out of this problem? What can you do? 🎯 Two simple things: like an…
Got questions or comments or topics you want us to cover? Text us! It gets spicy in this episode of irResponsible AI: ✅ Cutting through the Responsible AI hype to separate experts from "AI influencers" (grifters) ✅ How you can break into Responsible AI consulting ✅ How the EU AI Act discourages irresponsible AI ✅ How we can nurture a "cohesivel…
Got questions or comments or topics you want us to cover? Text us! As they say, don't mess with Swifties. This episode of irResponsible AI is about the Taylor Swift Factor in Responsible AI: ✅ Taylor Swift's deepfake scandal and what it did for RAI ✅ Do famous people need to be harmed before we do anything about it? ✅ How to address the deepfake prob…
Got questions or comments or topics you want us to cover? Text us! In this episode filled with hot takes, Upol and Shea discuss three things: ✅ How the Gemini Scandal unfolded ✅ Is Responsible AI too woke? Or is there a hidden agenda? ✅ What companies can do to address such scandals What can you do? 🎯 Two simple things: like and subscribe. You h…
Got questions or comments or topics you want us to cover? Text us! In this episode we discuss AI Risk Management Frameworks (RMFs), focusing on NIST's Generative AI profile: ✅ Demystify misunderstandings about AI RMFs: what they are for, what they are not for ✅ Unpack challenges of evaluating AI frameworks ✅ Inert knowledge in frameworks needs to be …
Got questions or comments or topics you want us to cover? Text us! In this episode of irResponsible AI, Upol & Shea bring the heat to three topics: 🚨 Algorithmic Imprints: harms from zombie algorithms, with an example from the LAION dataset 🚨 The FTC vs. Rite Aid scandal and how it could have been avoided 🚨 NIST's Trustworthy AI Institute and the fut…