Interview
Assessing the Impact of AI on Indo-Pacific Elections
The past year has seen a surge in the use of artificial intelligence (AI) during elections. This has ranged from innocuous activities, such as creating campaign materials, to more malign and disruptive activities involving disinformation. In this Q&A, Cynthia Lu asks Margot Fulde-Hardy about these and other issues to better understand the impact of AI on Indo-Pacific elections.
Broadly speaking, how has AI influenced Asian elections this year, and how large a threat does it represent?
When it comes to the uses of AI in election disinformation, we can begin with the example of South Korea. In 2023, President Yoon Suk Yeol was targeted by deepfake videos attempting to discredit him by alleging corruption. Similarly, in Indonesia and India, AI has been used to produce deepfake videos that discredit one political party or another and manipulate their narratives. The disinformation observed in these countries was not necessarily due to foreign influence; rather, it stemmed primarily from domestic parties and relied on local operators. What strikes me most is a phenomenon shared by Indonesia and India: the use of AI to “resurrect” deceased leaders who were once influential figures in these countries. Videos of these individuals are generated and then used to advance the narrative of the party promulgating the AI-generated content. These videos make the figures say things they never would have said and trade on their charisma to influence voters.
Another interesting case occurred in Pakistan, where Imran Khan, a candidate for the opposition party, was in jail. Unable to campaign, he wrote his speeches from prison and gave them to his party. AI was then used to produce audio versions of the speeches in Khan’s voice, which were then spread online.
All these cases raise ethical questions: What does AI mean for political participation? To what extent can it be used to work around existing constraints in order to disseminate a message?
Chinese misinformation campaigns have previously relied on astroturfing techniques—where actors are paid to influence public opinion—with the most well-known example being China’s “50 cent” army. How has generative AI affected these operations? Have the tactics changed, particularly in the context of misinformation surrounding elections?
The “50 cent” army likely first appeared in 2004 and consists of Chinese citizens paid by the Chinese Communist Party (CCP) to post online comments supporting its narratives. Over time, these efforts to influence public opinion have become more advanced. Spamouflage (also known as Dragonbridge and Storm-1376) is one example. This campaign, first revealed in 2019 by Graphika, is a vast network of accounts operating on multiple platforms, working to promote the CCP’s narrative while attacking the party’s opponents and dissidents overseas. Since then, based on the reporting of several investigators, it has been attributed to the Ministry of Public Security and connected, though with low confidence, to a 2023 indictment of Chinese individuals by the U.S. Department of Justice.
When it comes to these forms of misinformation and election interference, Chinese threat actors have always appeared more limited by their linguistic abilities than other threat actors, particularly when targeting European audiences and the French context with which I am most familiar. We see from the I-Soon leaks that China is collecting data on new audiences and creating new narratives. But there is still a barrier between the Chinese threat actors and the audiences they are trying to target.
We saw from the election in Taiwan that China is trying to change that. In the summer of 2022, Chinese threat actors tried to develop more content in local dialects, using traditional characters to better target the local audience. They also tried to grasp local issues, as Russian threat actors attempt to do. In 2024, generative AI likely provides new resources for Chinese threat actors to scale up by quickly translating content. For example, a recent Foundation for the Defense of Democracies report I co-authored shows that, in content aimed at U.S. audiences, likely Spamouflage-affiliated accounts consistently left odd, “hanging” quotation marks alone on the line below the final sentence of their Facebook comments. AI systems such as ChatGPT enclose translated output in quotation marks, so this is a hint that these threat actors are using generative AI. A report from Microsoft presented the latest cases of AI use by China in the Taiwan election, showing that China is now leveraging AI-generated audio, news anchors, and memes to influence an election. All this evidence indicates that the rise of generative AI could allow China to better target and engage with non-Chinese-speaking audiences.
When we consider how Chinese threat actors might use generative AI in the future, particularly in the context of the United States, Russia’s use of generative AI likely points to potential avenues of Chinese engagement. For instance, Russia-affiliated threat actors have targeted the United States with fake news websites such as the DC Weekly and the Chicago Chronicle. A study from Clemson University found that these actors used generative AI to plagiarize articles from other news sites, such as Fox News and Gateway Pundit, remixing them together to avoid detection. They also suppressed references to domestic influencers and to the original authors of the pieces. This modus operandi could be replicated by Chinese threat actors in the future, as Spamouflage actors currently plagiarize content wholesale rather than developing new content. Specifically, Chinese actors could imitate the “remix” method and begin using generative AI to create more text content at a larger scale to better engage with U.S. audiences.
To date, this Spamouflage network has not been able to reach a very broad audience. How do you suspect malign actors will expand their influence to achieve a greater impact? What can be done to combat this threat?
First, it must be understood that Spamouflage has many offshoots and networks worldwide, and it appears to operate distinct branches. A recent Institute for Strategic Dialogue report has demonstrated how sophisticated the network is. Some branches use AI-generated images and hyper-realistic content, which users are likely to engage with if they do not know who is behind it. The branch on Facebook, on the other hand, appears to primarily produce cheapfakes and other low-quality content. The complexity and variety of the networks make it difficult to give an overall assessment of their reach.
We can, however, observe where Spamouflage has engaged in the past. The Graphika report “Spamouflage Breakout” discusses the network’s ability to break out of its “echo chamber.” In this case, the authors observed that Spamouflage managed to engage with the U.S. audience, especially in the wake of the January 6, 2021, attack on the U.S. Capitol. It is likely that this type of engagement will continue; we just cannot be sure which Spamouflage branch it will stem from. Will it be the branch that operates on Twitter, which seems to develop sophisticated content? Will it come from the current echo chambers eventually engaging with domestic users? It is difficult to say.
While there are numerous important reports from civil society organizations, we do not have enough public exposure from governments. So far, only a few countries have sent a clear public diplomatic signal to China about malign interference. For instance, Canada released a helpful public report on Spamouflage in October 2023. France, through its agency Viginum, publicly exposed the Russian Doppelgänger campaign. Although the campaign had already been exposed by civil society actors and platforms such as Meta, by having a government agency make the attribution, France gave momentum to other European actors. Germany, for instance, soon followed France’s path.
Thus, the French government’s actions started a trend and opened the door to punitive measures. The European Union banned four media outlets accused of spreading Russian propaganda after France asked Brussels to impose sanctions ahead of the EU parliamentary elections. When it comes to Spamouflage, these kinds of government actions are exactly what we need, and they are precisely what is currently missing in Europe and the United States.
Margot Fulde-Hardy is a researcher focused on online manipulation who has experience in both industry and government, including at the French government agency Viginum. This interview was conducted independently of, and prior to, her former and current affiliations.
This interview was conducted by Cynthia Lu while an intern with the Technology and Geoeconomic Affairs group at NBR.