Content provided by information labs and Information labs. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by information labs and Information labs or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

AI lab TL;DR | Joan Barata - Transparency Obligations for All AI Systems

17:05
 

🔍 In this TL;DR episode, Joan explains how Article 50 of the EU AI Act sets out high-level transparency obligations for AI developers and deployers—requiring users to be informed when they interact with AI or access AI-generated content—while noting that excessive labeling can itself be misleading. He highlights why the forthcoming Code of Practice must focus on clear principles rather than fixed technical solutions, ensuring transparency helps prevent deception without creating confusion in a rapidly evolving technological environment.

📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:33] Q1 - What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

⏲️[02:31] Q2 - What’s the difference between disclosing a chatbot and technically marking AI-generated media?

⏲️[06:27] Q3 - What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

⏲️[10:00] Q4 - If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

⏲️[13:11] Q5 - Did you consult other stakeholders when developing your whitepaper analysis?

⏲️[16:45] Wrap-up & Outro

💭 Q1 - What’s the core purpose of Article 50, and why is this 10-month drafting window so critical for the industry?

🗣️ “Article 50 sets only broad transparency rules—so a strong Code of Practice is essential.”

💭 Q2 - What’s the difference between disclosing a chatbot and technically marking AI-generated media?

🗣️ “If there’s a risk of confusion, users must be clearly told they’re interacting with AI.”

💭 Q3 - What is the inherent danger of "too much transparency" or over-labeling content? How do we prevent the "liar's dividend" and "label fatigue" while still fighting deception?

🗣️ “Too much transparency can mislead just as much as too little.”

💭 Q4 - If drafters should avoid one rigid technical fix, what’s your top advice for building flexibility into the Code of Practice?

🗣️ “We should focus on principles, not chase technical solutions that will be outdated in months.”

💭 Q5 - What is the one core idea you want policymakers to take away from your research?

🗣️ “Transparency raises legal, technical, psychological, and even philosophical questions—information alone doesn’t guarantee real agency."

📌 About Our Guest

🎙️ Joan Barata | Faculdade de Direito - Católica no Porto

🌐 linkedin.com/in/joan-barata-a649876

Joan Barata works on freedom of expression, media regulation, and intermediary liability issues. He is a Visiting Professor at Faculdade de Direito - Católica no Porto and a Senior Legal Fellow at The Future of Free Speech project at Vanderbilt University. He is also a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center.

#AI #artificialintelligence #generativeAI
