OpenAI Bans Accounts Linked to Chinese Surveillance

OpenAI announced on Friday that it had suspended several accounts that were using ChatGPT to develop a suspected AI-powered surveillance tool.

The social media monitoring tool is believed to have ties to China and to run on one of Meta’s Llama models.


The banned accounts reportedly used AI models to gather detailed information and analyze documents in order to build a system capable of collecting real-time data about anti-China protests in Western countries and feeding that information to Chinese authorities.

OpenAI researchers, including Ben Nimmo, noted that the operation, dubbed Peer Review, was focused on promoting and reviewing surveillance tools.

The tool is designed to analyze posts and comments from various platforms, such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.

In one case highlighted by OpenAI, the accounts used ChatGPT to fix and improve source code believed to underpin the monitoring software, which is called the “Qianyue Overseas Public Opinion AI Assistant.”

Besides using the model to research publicly available information about U.S. think tanks and government officials in countries such as Australia and Cambodia, the group also used ChatGPT to read, translate, and analyze screenshots of English-language documents.

Some of the images they shared were related to Uyghur rights protests in various Western cities, and it remains unclear whether these images were genuine.

OpenAI also reported that it disrupted several other networks that were misusing ChatGPT for various harmful activities, including:

  • A Deceptive Employment Scheme linked to North Korea, where a network created fake job applications, including resumes and cover letters, and even devised plausible explanations for unusual behaviors like avoiding video calls.
  • A Sponsored Discontent network, likely from China, that generated social media content in English and long-form articles in Spanish, both critical of the United States; the articles were then published on Latin American news websites.
  • A Romance-baiting Scam involving accounts that translated and generated comments in Japanese, Chinese, and English for social media platforms, tied to suspected scams originating in Cambodia.
  • An Iranian Influence Nexus that used AI to generate posts and articles supportive of Palestinian and Iranian causes and critical of Israel and the U.S., sharing the content on websites linked to Iranian influence operations.
  • The Kimsuky and BlueNoroff network, operated by North Korean actors, focused on gathering information about cyber intrusion tools and cryptocurrency topics while also debugging code for brute-force attacks.
  • A Youth Initiative Covert Influence Operation that created English-language articles for a website called “Empowering Ghana” and posted social media comments targeting Ghana’s presidential election.
  • A Task Scam network from Cambodia that translated comments between Urdu and English, luring people into fake jobs performing simple tasks in exchange for nonexistent commissions.

This development highlights the growing trend of bad actors using AI tools to facilitate cyber-enabled disinformation campaigns and other malicious activities.

Recently, the Google Threat Intelligence Group revealed that more than 57 distinct threat actors tied to China, Iran, North Korea, and Russia have used Google’s Gemini AI chatbot to enhance various stages of the attack cycle and to conduct research or create content.

Another report, from Check First in collaboration with Reset.tech and AI Forensics, found that the Russia-linked Social Design Agency was behind a covert influence campaign that placed more than 8,000 Kremlin-aligned political ads on Facebook with the aim of amplifying divisive narratives.

The Finnish company pointed out that these ads were tailored to current political events and used provocative narratives and eye-catching visuals to maximize their reach and effectiveness, highlighting significant shortcomings in Meta’s moderation of paid content.

OpenAI emphasized that the unique insights AI companies gain from observing threat actors are especially valuable when shared with upstream providers, such as hosting and software companies, as well as with downstream distribution platforms, such as social media companies, and with open-source researchers.

Equally important is the information that upstream and downstream providers and researchers have about threat actors, which can open new pathways for detection and enforcement for AI companies.
