Why It's Hard to Design Fair Machine Learning Models
In this episode of the Data Show, I spoke with Sharad Goel, assistant professor at Stanford, and his student Sam Corbett-Davies. They recently wrote a survey paper, “A Critical Review of Fair Machine Learning,” where they carefully examined the standard statistical tools used to check for fairness in machine learning models. It turns out that each of the standard approaches (anti-classification, classification parity, and calibration) has limitations, and their paper is a must-read tour through recent research in designing fair algorithms. We talked about their key findings, and, most importantly, I pressed them to list a few best practices that analysts and industrial data scientists might want to consider.