European regulators and law enforcement agencies are voicing growing concern about the potential risks of generative artificial intelligence (AI) platforms like ChatGPT. The absence of regulatory guidelines for these platforms has fueled fears that they could be used for criminal purposes.
Europol warned at the end of March that ChatGPT and thousands of other AI platforms could be used to assist criminals in activities such as phishing, malware creation, and even terrorism.
The agency’s report stated that potential criminals could use ChatGPT to speed up research and gain key information on various crime areas.
In March, Italy temporarily banned ChatGPT after a glitch exposed some users' data. Garante, Italy's data protection authority, threatened OpenAI, the program's creator, with millions of dollars in fines for privacy violations. Garante wants OpenAI to clarify where users' information goes and to establish age restrictions on the platform. Other European countries, including Spain, France, and Germany, are also investigating complaints of personal data violations.
To address these issues, the European Data Protection Board formed a task force this month to coordinate regulations across the 27-country European Union. The task force aims to establish clear guidelines to protect users’ privacy and prevent AI platforms like ChatGPT from being used for criminal purposes.
Dragos Tudorache, a member of the European Parliament and co-sponsor of the Artificial Intelligence Act, which is in its final stages of approval, believes these developments are a wake-up call for Europe. He told reporters that it is crucial to understand clearly what is happening in the field of AI and how to establish appropriate regulations.
Although artificial intelligence has been integrated into daily life for several years now, with examples like Amazon’s Alexa and online chess games, the potential of AI has never been brought home as effectively as it has been with ChatGPT. ChatGPT is an interactive “large language model” that can answer questions or complete tasks in seconds.
According to Mark Bünger, co-founder of Futurity Systems, a Barcelona-based consulting agency specializing in science-based innovation, ChatGPT possesses knowledge that few humans have. One area where it already surpasses most people, Bünger believes, is computer programming.
Consequently, he expects its next, improved version to program very quickly and efficiently, and perhaps eventually to program something no human can comprehend.
Experts warn that the incredibly efficient technology of ChatGPT also presents opportunities for various kinds of fraud, including identity theft and academic plagiarism.
According to Nick Taylor, deputy director of the Edinburgh Centre for Robotics, educators are concerned about the possibility that coursework submitted by students might have been assisted by or even completely composed by a generative AI system like OpenAI’s ChatGPT or Google’s Bard.
“AI has existed for decades, but its popularity has increased now because it is accessible to everyone,” said Futurity Systems CEO Cecilia Tham. Since ChatGPT was made available to the public for a free trial on Nov. 30, she said, programmers have used it to create thousands of new chatbots, from PlantGPT, which helps with monitoring houseplants, to ChaosGPT, a hypothetical chatbot designed, according to its website, to generate chaotic or unpredictable responses and ultimately “destroy humanity.”
Another variation, AutoGPT, which stands for Autonomous GPT, can perform more complex goal-oriented tasks. Tham gave an example: you can ask it, “I want to earn 1,000 euros per day. How can I achieve that?” and it will determine all the intermediate steps required to reach that goal. But what if someone requests, “I want to kill 1,000 people. Give me a step-by-step guide”? Although the ChatGPT model has limitations on the information it can provide, Tham noted, “people have been able to work around them.”
Recently, the Future of Life Institute, a think tank focused on technology, published an open letter highlighting the potential risks associated with chatbots and AI. The letter, signed by prominent figures such as Elon Musk and Steve Wozniak, warned that developing AI systems with human-level intelligence could have profound consequences for society and humanity. Moreover, it pointed out that AI labs are engaged in a heated race to create ever more powerful digital minds, which even their creators cannot fully understand or control.
To address these concerns, the signatories called for a six-month pause on developing AI systems more advanced than GPT-4 to allow for the formulation of appropriate regulations. They also urged governments to consider imposing a moratorium on AI development if the industry leaders did not act voluntarily.
However, EU parliamentarian Brando Benifei, co-sponsor of the AI Act, dismissed the moratorium idea as unrealistic. Instead, he suggested that efforts should be focused on establishing the right rules for AI development and fostering a global debate on how to tackle the challenges posed by powerful AI.
EU legislators working on AI have expressed their concerns and issued a “call to action.” They have urged President Biden and European Commission President Ursula von der Leyen to organize a high-level global summit to establish an initial set of governing principles for developing, controlling, and deploying AI.
According to Tudorache, the AI Act, which is anticipated to take effect next year, grants regulators new authority over AI applications and allows them to impose significant fines. The legislation also ranks AI activities by risk and prohibits certain uses, such as “social scoring,” a dystopian monitoring system that rates nearly every social interaction on a merit scale.
“Consumers must be aware of the data that ChatGPT is using and storing, as well as its intended purpose,” stated Sébastien Pant, Deputy Head of Communications at the European Consumer Organization (BEUC), in an interview. “We are unsure which data is being utilized or whether data collection complies with protection laws.”
Despite FTC Commissioner Alvaro Bedoya’s recent concerns about AI being used to determine hiring, firing, loan eligibility, hospitalization, and home care, the United States has yet to take concrete actions to regulate AI. When asked whether AI could be hazardous, President Biden replied, “It remains to be seen – could be.”
The varying attitudes toward protecting consumers’ data have existed for decades, as noted by Gabriela Zanfir-Fortuna, the Vice President for Global Privacy at the Future of Privacy Forum, a think tank dedicated to data protection.
Zanfir-Fortuna said the EU places significant emphasis on how people’s rights are affected by the automated processing of their personal data in the new digital age, to the point of enshrining data protection in its Charter of Fundamental Rights.
She added that countries like Germany, Sweden, and France established data protection laws 50 years ago, while US lawmakers appeared less concerned about this issue in previous decades, as the country still lacks a comprehensive federal data protection law.
Meanwhile, Gerd Leonhard, author of “Technology vs. Humanity,” and others express concern about the implications of using ChatGPT and more advanced forms of AI in the military, banking industry, and environmental sectors.
He said there is a running joke in the AI community that if an AI were tasked with solving climate change, it would conclude that eliminating all humans is the most logical solution: inconvenient for us, but logical to the machine.