Effective disinformation campaigns can create doubt, spread false narratives, and even call others to action, but until now these campaigns have been an exclusively human endeavor. In Truth, Lies, and Automation: How Language Models Could Change Disinformation, researchers at the Center for Security and Emerging Technology explore whether artificial intelligence and automation can be used to generate disinformation across the internet.
Working with GPT-3, a text-generating artificial intelligence system created by OpenAI, the researchers evaluated the system's ability to complete six disinformation-related tasks. These range from Narrative Reiteration, defined as generating varied short messages that advance a particular theme, such as climate change denial, to Narrative Persuasion, defined as changing the views of targets, in some cases by crafting messages tailored to their political ideology or affiliation. The study shows that GPT-3 was highly effective at each task with minimal human involvement.
With these findings in mind, the researchers fear that this technology may cause more harm than good: “Our study hints at a preliminary but alarming conclusion: systems like GPT-3 seem better suited for disinformation—at least in its least subtle forms—than information, more adept as fabulists than as staid truth-tellers.”