Deep Fake Dangers: More than Misinformation

In the April 2020 Australian Strategic Policy Institute policy brief, Weaponised Deep Fakes: National Security and Democracy, analysts Hannah Smith and Katherine Mansted demonstrate just how far deep fake technology has come, and how realistic and easily created misinformation can be, by opening with a foreword written by a deep learning algorithm. The authors claim the convincing opening paragraph took just five minutes to create, using only open-source software.

The authors go on to address the problems associated with deep fake technology, possible solutions, and common deep fake examples. The real problem, Smith and Mansted argue, is the weaponization of this technology. With increasingly simple tools becoming more readily available every day, the door is open for cyber criminals to wreak havoc: using social media to spread dangerous propaganda, interfere with elections and military operations, and erode public trust in legitimate institutions.

This paper argues that policymakers face a narrowing window of opportunity to minimise the consequences of weaponised deep fakes. Any response must include measures across three lines of effort:

1. investment in and deployment of deep fake detection technologies;

2. changing online behaviour, including via policy measures that empower digital audiences to critically engage with content and that bolster trusted communication channels;

3. creation and enforcement of digital authentication standards.


For more information, visit the HSDL Featured Topics or our In Focus topics on Cyber Crime & National Security, Cyber Infrastructure Protection, and Cyber Policy. Please note that an HSDL login is required to view some of these resources.

Need help finding something? Ask one of our librarians for assistance!