A recent Microsoft report claimed to have uncovered evidence of interference in Indian elections by a “Chinese cyber actor” named Flax Typhoon. But the focus on the ‘foreign’ source of this misinformation distracts from the true danger to our democracy.
Written by: Noel Therattil, a lawyer and Schwarzman Scholar.
Foreign interference in elections is a cause for concern, even more so if it involves a neighbour with whom you share an ill-defined 3,440 km-long border. China has long been fodder for political grandstanding between the government and opposition parties, and that trend shows no signs of letting up. In 2019, a Pew survey found that 61% of Indians surveyed believed China’s growing economy was a “bad thing”, more so than citizens of any other country. Among all nations surveyed, Indians also had the second most unfavourable view of China, after Japan. That opinion has deteriorated significantly since the Galwan Valley clash of 2020, the first fatal confrontation between the two sides since 1975.
So it is hardly surprising that Microsoft’s Threat Intelligence Report, published in April, received such eager and widespread coverage in the Indian press. Released just two weeks before polls opened, the report claims evidence of Chinese interference in the Lok Sabha elections. It alleges that a “Chinese cyber actor” named Flax Typhoon targeted India in the latter half of 2023, and predicts that “Chinese cyber and influence actors” will target the upcoming Lok Sabha elections. But the extent and effect remain unclear:
“While the impact of such content in swaying audiences remains low, China’s increasing experimentation in augmenting memes, videos, and audio will continue—and may prove effective down the line.”
The report claims the actors are “CCP-linked” and “China-based” without offering evidence. “While it is understandable for states to withhold evidence given their desire to protect intelligence networks, private enterprises like Microsoft do not have similar limitations,” says Madeline Carr, Professor of Global Politics and Cyber Security at University College London. “The report fails to stand the test of good reporting.” Indian media reports, like this one from India Today, take the allegations even further by using “Chinese state-backed cyber groups” as a synonym for the already problematic descriptors used by the report.
Does the ‘foreign’ bit matter?
This is not to say that China never has interfered, or never will interfere, in Indian elections. Alleged foreign interference in a country’s elections comes in many forms, from covertly funding organizations to inciting domestic unrest. India too has been accused of meddling in elections, not just by Canada but also by its neighbours Bangladesh and Nepal.
But the most pervasive and impactful form of interference leverages social media and its ability to manufacture viral consent, ‘create’ facts, and penetrate everyday conversation. The last decade has seen the emergence of what Chomsky called an “era of alternative facts”, one unhindered by logic, fact, and empiricism. The World Economic Forum’s 2024 Global Risks Report ranks India as the country most vulnerable to misinformation and disinformation.
Over the past five years alone, election spending by political parties on digital advertising has increased six-fold. The media, politicians, influencers: practically everyone online in India is looking to share the most impactful and conversation-worthy piece of information with their followers. Voters, in turn, are subjected to this content at an unprecedented rate.
At no point in this ‘supply chain of information’ is any actor necessarily aware of the role they may be playing in the operations of a foreign entity. Freedom of speech, or foreign propaganda? The distinction is hard to make for any government or citizen.
In our collective memory, Russia’s alleged interference in the US elections has stolen the spotlight. But interfering in a country’s elections is no longer the domain of states alone: we must acknowledge the role of non-state and private actors. Rewind to 2016 and we find that the most significant case of digital election tampering, in the Brexit referendum, involved a private firm, AIQ, that leveraged personal Facebook data to target ads with “very high conversion rates.” The ‘Leave’ side won with a slim majority of 51.9%. Whether targeted ads had anything to do with this is subject to debate, but even the most inefficient means of spreading misinformation can have decisive impacts in closely contested elections.
But it is a mistake to focus on the “foreign” source of this kind of manipulation. When the misinformation-driven interests of domestic and foreign players align, it is very difficult, and often futile, to define what is “foreign” in nature. One way of dealing with this is to focus instead on identifying the “interests the message serves”, as a Carnegie report suggests. While this is a tempting way to tackle foreign influence, it is also a slippery slope towards arbitrarily censoring free speech. As the Carnegie authors acknowledge, “[I]t fails to distinguish between individuals expressing their personal opinions and influence operations designed to undermine domestic democratic institutions.”
But what is the actual impact of foreign cyber interference? Hate speech, misinformation, deepfakes, and divisive, polarising content are already being produced and disseminated by domestic actors. Such activity is at least as detrimental to a polity as any foreign operation.
Take the case of Taiwan, where Chinese electoral influence in cyberspace took a variety of forms, including fake polls, false news reports, and AI-generated content. One false news report from December 2023 claimed that Hsiao Bi-khim, the Democratic Progressive Party’s vice-presidential candidate, was a US citizen. A deepfake video claimed that William Lai, the president-elect, had three mistresses. According to Taiwanese prosecutors, these pieces of misinformation were generated and propagated by China.
But these pieces of disinformation could just as well have been produced by domestic actors in Taiwan, had they wished. This is not to give China the benefit of the doubt, but to point out that domestic actors have both the incentive and the capability to produce and propagate the same content. Identifying and preventing the spread of misinformation is the elephant in the room, not its ‘foreignness’.
A Freedom House report found that domestic digital interference affected at least 88 percent of selected countries that held elections or referendums between June 2018 and May 2020. And the primary actor in most cases was the government—or government-friendly groups:
In a majority of countries evaluated, however, it was domestic actors who abused information technology to subvert the electoral process. In the 30 countries that held elections or referendums during the coverage period, Freedom House found three distinct forms of digital election interference: informational measures, in which online discussions are surreptitiously manipulated in favour of the government or particular parties; technical measures, which are used to restrict access to news sources, communication tools, and in some cases the entire internet; and legal measures, which authorities apply to punish regime opponents and chill political expression.
Hence, focusing on the foreignness of interference in elections via social media may not be the best way to deal with the issue. Rather, it is the impact of misinformation on democratic processes that needs attention.
Even in the eventuality that the Indian government does attribute misinformation campaigns around the elections to Chinese influence, it would be difficult to provide evidence to the public. We saw this with the Canadian government’s report accusing the Indian government of interfering in Canadian elections; the evidence, if it existed, was redacted, all three pages of it. One way of dealing with this in the long term is to “follow the money”, says Professor Carr; it may help identify the intent, if not always the source, of misinformation.
Building institutional resilience and capitalising on technological advancements can help counter the tide of misinformation. Big data and AI allow regulators to spot the syntax, grammar, and language patterns that mark AI-generated content and bots. Advanced AI models can detect plagiarism and splicing, and perform spectrogram analysis to flag doctored audio and video. These technologies, of course, come at a cost, most obviously to freedom of speech. Rampant internet takedowns will deal a self-inflicted blow to India’s democracy.
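To make the detection idea above concrete, here is a minimal sketch in Python, using only the standard library, of the kind of shallow ‘language pattern’ signals such tools build on: sentence-length variance (sometimes called ‘burstiness’) and vocabulary diversity. The feature choices, function names, and cutoff values here are illustrative assumptions for the sketch; real detectors rely on large, trained models rather than fixed thresholds.

```python
import re
import statistics

def pattern_features(text: str) -> dict:
    """Compute two shallow stylometric features often cited in
    AI-text detection: human prose tends to vary sentence length
    more ("burstiness") and to repeat words less."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        # Standard deviation of sentence length: a low value is
        # one (weak) signal of machine-generated text.
        "burstiness": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
        # Type-token ratio: unique words divided by total words.
        "lexical_diversity": len(set(words)) / len(words) if words else 0.0,
    }

def looks_generated(text: str, burst_cutoff: float = 3.0,
                    diversity_cutoff: float = 0.5) -> bool:
    """Toy classifier: flag text whose features fall below the
    cutoffs. The cutoffs are assumptions for this sketch, not
    calibrated values from any real detection system."""
    f = pattern_features(text)
    return (f["burstiness"] < burst_cutoff
            and f["lexical_diversity"] < diversity_cutoff)

if __name__ == "__main__":
    sample = ("The election is important. The election is near. "
              "The voters are ready. The voters are informed.")
    print(pattern_features(sample))
    print("flagged:", looks_generated(sample))
```

Even well-trained versions of such classifiers produce false positives, which is precisely why their use to justify takedowns carries the free-speech cost noted above.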
More importantly, building a democratic discourse that can counter divisive narratives, regardless of their origin, remains our best defence. The past decade, and the pandemic especially, has shown that an educated population is just as susceptible to misinformation as an uneducated one. India must cultivate a digitally literate citizenry and foster democratic discussion that does not undermine core democratic interests, starting with its national elections.