ChatGPT, an artificial intelligence chatbot developed by OpenAI, has gained popularity for its ability to quickly generate detailed answers across a wide variety of topics. As a Large Language Model (LLM), ChatGPT offers businesses and the public many opportunities through ready-to-use information. However, it also carries a high risk of exploitation by criminals and other bad actors, presenting a new challenge for law enforcement as it works to anticipate and prevent abuse.
“While all of the information ChatGPT provides is freely available on the internet, the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime.” (Europol)
Europol’s recent report, ChatGPT: The Impact of Large Language Models on Law Enforcement, draws on a series of workshops in which subject matter experts examined both the positive and negative potential of ChatGPT. The report emphasizes that law enforcement must be prepared for the negative impacts of this artificial intelligence system. Although OpenAI has built many safeguards into ChatGPT to prevent malicious use, these safeguards can be bypassed through prompt engineering.
Criminals can draw on an unprecedented amount of readily available information to learn about topics such as potential crime targets, breaking into a house, terrorism, and cybercrime. In addition, ChatGPT’s ability to produce human-like text makes it a highly useful tool for phishing attempts. Whereas phishing scams were previously detectable by their grammatical and spelling errors, artificial intelligence now makes it possible to impersonate an individual or organization in a far more convincing manner.
As available data grows, LLMs will only continue to tackle more sophisticated problems. ChatGPT holds enormous potential to assist the public, as well as law enforcement in its daily work. However, there is still much to learn as the technology rapidly progresses.