Europol Raises Concerns Over ChatGPT’s Use In Crime

According to a report released on Monday by Europol, criminals are already using ChatGPT to commit crimes, including cybercrime, fraud, and terrorism. The report details how people are leveraging ChatGPT to carry out illicit activities. As the law enforcement agency of the European Union, Europol has raised concerns about the potential dangers of such misuse.

Europol’s report states that law enforcement agencies can already anticipate the impact of these models. According to the report, criminals are known to exploit new technologies quickly, and practical examples of such criminal exploitation of ChatGPT emerged only weeks after its public release.

While ChatGPT is designed to refuse potentially harmful requests, users have found ways to bypass OpenAI's content filters. Some users have managed to make ChatGPT generate instructions for creating dangerous items like pipe bombs or crack cocaine. Additionally, individuals can ask ChatGPT for guidance on committing crimes, and the AI language model can provide step-by-step instructions.

Europol warned that ChatGPT could significantly expedite research for potential criminals seeking information on a particular crime area. Even with no prior knowledge, the AI language model can provide key information that can be explored further in subsequent steps. This feature lets individuals quickly learn about various potential crime areas, from breaking into homes to cybercrime, terrorism, and child sexual abuse.

Although all of this information is already publicly available on the internet, Europol cautioned that the ease with which ChatGPT can find and explain how to carry out specific crimes is a cause for concern. The agency further warned that the AI language model could be exploited to impersonate targets, facilitate fraud and phishing, or produce propaganda and disinformation to support terrorism.

According to Europol’s report, ChatGPT’s ability to generate code, including malicious code, significantly increases the risk of cybercrime. The AI language model lowers the technical skills required to create malware, making it an invaluable resource for potential criminals with little technical knowledge. The report further states that advanced users can exploit these capabilities to refine or automate sophisticated cybercriminal operations.

While large language models (LLMs) are still in their infancy, they rapidly improve as technology companies invest resources in their development. OpenAI has already released GPT-4, a more powerful system, and these models are increasingly integrated into products. Since the launch of ChatGPT, Microsoft and Google have introduced AI-powered web search chatbots into their search engines, demonstrating the growing integration of AI language models into everyday products and services.

As more companies introduce AI features and services, Europol warns that new opportunities to use the technology for illegal activities will arise. The report suggests that multimodal AI systems, which combine conversational chatbots with the ability to produce highly convincing deepfakes or include sensory abilities such as seeing and hearing, may facilitate even more sophisticated illegal activities.

In addition, the report raises concerns that clandestine versions of language models with no content filters could be trained on harmful data and hosted on the dark web. Such models would pose significant risks as they would be unregulated and designed to enable criminal activities.

Furthermore, the report highlights uncertainties around how LLM services may process user data in the future. For instance, conversations with AI language models could potentially expose sensitive personal information to unauthorized third parties. Additionally, if users generate harmful content, Europol questions whether this should be reported to law enforcement authorities.
