The Thin Line Between Real and Deep Fake
In a new study from the Counter Extremism Project, Drs. Hany Farid and Hans-Jakob Schindler tackle the issue of deep fakes, defined in the report as “videos manipulated by artificial intelligence (AI) misused for political manipulation.”
The alteration of photos and videos can range from simplistic to sophisticated, but either can damage open democracies once it hits the social media market. Compounding the problem, the more sophisticated technology is becoming more widely accessible and less expensive, creating a breeding ground for misinformation.
Farid and Schindler argue that the deep fakes themselves are less impactful than the attitudes they foster in a confused public. Social media consumers who understand the concept of deep fakes but are overwhelmed by misinformation can simply dismiss anything they disagree with as “fake news,” and despite legitimate attempts to sort out the truth, the damage is likely done as soon as a deep fake reaches peer-sharing platforms.
The problems are compounded at this point because legislating the sharing of deep fakes is nearly impossible: any such law must navigate free speech protections as well as laws shielding platform providers from liability.
The authors propose a multi-part solution: slowing the “technological arms race” by keeping advanced detection technologies off the market, using hashing or blockchain technology to prevent deep fake videos from gaining traction, and, most importantly, increasing cyberliteracy, which they suggest should be addressed by the public education system.
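The report does not specify an implementation, but the hashing idea can be illustrated in miniature: once a video has been identified as a deep fake, its cryptographic fingerprint can be recorded, and platforms can check new uploads against that registry before they spread. The function names and sample byte strings below are hypothetical, and a minimal sketch like this only catches exact copies; production systems would need perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw video bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry of digests for videos already flagged as deep fakes.
known_fake_hashes = {file_hash(b"previously flagged deep fake video bytes")}

def is_known_fake(data: bytes) -> bool:
    """Check an upload against the registry before allowing it to be shared."""
    return file_hash(data) in known_fake_hashes

# An exact copy of a flagged video matches; any other content does not.
print(is_known_fake(b"previously flagged deep fake video bytes"))  # True
print(is_known_fake(b"an original, unflagged video"))              # False
```

The same digests could be anchored in a blockchain ledger so that no single platform controls, or can quietly alter, the list of flagged videos.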
For more information on topics related to this piece, visit the HSDL Featured Topics on Cyber Crime and National Security, Cyber Infrastructure Protection, Cyber Policy, and Global Terrorism. Please note: An HSDL login is required to view some of these resources.
Need help finding something? Ask one of our librarians for assistance!