Episode 11.2 - Separating fact from fiction in Deepfakes
I got a pitch from Reality Defender (deepfake video detection) about a partnership with ValidSoft (deepfake voice) last week. We don’t generally cover partnership agreements because, well, we get a handful every week and they just aren’t news. But the pitch threw out a few statistics that seemed a bit off. After some research, I found out how off they were.
See, fraud can be divided into two types: criminal fraud, which companies like these are dedicated to stopping, and legally protected fraud like advertising and political speech (First Amendment and all that). As far as impacts go, the latter is far more prevalent and dangerous, but security companies can't really do anything about it. And that is what I discussed with Reality Defender CEO Ben Colman.
Key Takeaways and Links
Deepfake fraud attempts are low in percentage but high in potential impact, especially for high-value clients in regulated industries.
There's a critical need for national regulation to address AI-generated content on consumer platforms, as current measures are insufficient.
Reality Defender and ValidSoft claim to lead in deepfake detection, focusing on inference-based and provenance-based approaches respectively.
The "David Act" (Deepfake Audio Video Image Detection Act) has been proposed to require platforms to flag AI-generated content.