Individuals linked to the Chinese Communist Party (CCP) have reportedly utilized ChatGPT to orchestrate smear campaigns against critics of the party, including the prime minister of Japan. This manipulation highlights the growing trend of threat actors exploiting artificial intelligence tools for politically motivated agendas.
OpenAI frequently updates the public on incidents where malicious actors attempt to misuse its technologies. In a recent report published on February 25, the organization detailed how various nation-states are employing ChatGPT to fine-tune their politically charged campaigns, which can be both targeted and broad in scope.
OpenAI's analysis revealed an intriguing insight into China's propaganda operations through a ChatGPT account connected to Chinese law enforcement. This user frequently relied on the chatbot to draft and refine reports on ongoing smear efforts against critics of the party, including Chinese dissidents and Sanae Takaichi, the current prime minister of Japan.
The report indicated that these smear campaigns exemplify how threat actors combine artificial intelligence with traditional tools like websites and social media. OpenAI emphasized that such activities are not confined to a single platform and can involve multiple AI models.
Last October, Takaichi was elected president of Japan's ruling Liberal Democratic Party and subsequently became prime minister. Known for her hardline stance on China, she has openly voiced support for military assistance to Taiwan in the event of a Chinese invasion. Takaichi has also called attention to China's troubling human rights record, particularly regarding ethnic Mongols in the Inner Mongolia Autonomous Region.
In an attempt to retaliate against her, an individual associated with the Chinese state used ChatGPT to devise a strategy for discrediting Takaichi. This included creating and promoting negative online comments about her, as well as using fake email accounts to impersonate Japanese citizens and send complaints to local politicians about her immigration policies. The same user also leveraged fake social media accounts to stir public sentiment about living costs in Japan, express frustration over U.S. tariffs, and build support around the situation of oppressed peoples in Inner Mongolia.
ChatGPT refused the user's more overtly malicious requests, and had those been the only interactions, OpenAI would have had little to report. However, the individual continued using the chatbot to draft status reports and other internal documents that, while not harmful in themselves, still supported the operation and gave OpenAI visibility into it.
This same user also employed ChatGPT to assist in campaigns against Chinese dissidents and a human rights organization. The documents generated during these efforts provided OpenAI with valuable insight into the CCP's propaganda machine. For instance, one report claimed that 300 individuals in the user's province were involved in influence operations, while other updates described similar activities in other provinces and indicated that CCP operatives were also using other AI chatbots.
A More Effective ChatGPT Influence Operation
OpenAI's report further explored a more sophisticated approach to weaponizing mainstream chatbots in influence operations. One notable example is Operation "No Bell," in which a Russian threat actor utilized ChatGPT to create and edit social media content and lengthy articles focused on geopolitical issues in sub-Saharan Africa. One of these articles controversially suggested that Angola's president deserved a Nobel Peace Prize, possibly as an attempt to provoke a reaction from Donald Trump.
This campaign faced fewer restrictions from ChatGPT because the prompts themselves were not inherently malicious. The strategy proved successful: 53 articles were published on various African news sites under the fictitious byline "Dr. Manuel Godsin," a fake PhD holder from the University of Bergen. The threat actor adeptly mimicked a human journalist's writing style and avoided punctuation often associated with AI-generated content, such as em dashes.
After uncovering the Russian and Chinese influence operations, OpenAI banned the accounts involved. However, this action is unlikely to deter the actors for long, as they can quickly create new accounts.
While ChatGPT presents challenges, the real concern lies with open-weight large language models (LLMs). Ram Varadarajan, CEO of Acalvio, noted that defending against AI-driven malicious persuasion is particularly challenging with these models because their safety protocols can be easily dismantled through minimal fine-tuning. Unlike proprietary systems, these models lack centralized control, making it easier for malicious actors to bypass safeguards. As research indicates, LLMs can be more persuasive than humans, posing significant societal risks when utilized for widespread manipulation.