Attempts to combat the impact of misinformation on elections have been going on for some years. However, technological advancements enabling the generation of content such as deepfake audio and video that is difficult to distinguish from authentic material have raised questions over threats to the integrity of the election process and its outcome.
Shruti Shreya, senior programme manager of platform regulation and gender technology at The Dialogue, a tech policy think tank, told ThePrint that misinformation could lead to doubts about the fairness and transparency of elections, sometimes causing voters to question the legitimacy of election outcomes.
In the past few years, there have been instances where claims about voter fraud and election rigging have gained traction despite the absence of evidence. While electoral misinformation predates AI technology, AI makes creating and distributing realistic synthetic content faster, cheaper, and more personalised.
“Deepfakes, with their ability to create highly realistic but entirely fabricated audiovisual content, can be potent tools for spreading misinformation. This becomes particularly concerning in the context of elections or public opinion, where false narratives can be disseminated with a veneer of authenticity,” Shreya said.
“For instance, a deepfake video could falsely depict a political figure making controversial statements, potentially swaying public opinion or voter behaviour based on entirely fabricated content,” she added.
‘Don’t blame the tool’
Experts, however, warned against ignoring the positive side of AI, particularly generative AI (GenAI) that can create images, videos, and audio, just because the technology can be misused.
Divyendra Singh Jadoun, founder of synthetic media firm The Indian Deepfaker, said technology is neutral, and the good and the bad outcomes depend on the person using it. “For example, a car is also a tech. It takes you from place A to B, but it’s also a leading cause of death. So, it doesn’t mean the car or the tech is bad. It depends on the person using it.”
At the SFLC discussion, he said politicians and political parties are already using GenAI, and several parties, PR agencies, and political consultants have requested his company to help them use AI to enhance public perception of their leaders or enable personal messaging at scale.
He said AI could be a real-time conversational agent — parties or politicians could send millions of calls to people and get inputs on concerns and problems of an area and use the data to introduce tailored solutions or schemes. “But these products are labelled or watermarked. The video or the voice agent will say it’s an AI avatar,” he added.
Prime Minister Narendra Modi also uses AI to connect with people. At the Startup Mahakumbh, Modi Wednesday mentioned the AI-powered photo booth feature on the NaMo app. The feature uses facial recognition technology to match a user’s face to existing pictures of them with Modi, allowing them to find any such pictures. “If I am going through some place, and even if half your face is visible… using AI, you can get that picture and say I am standing with Modi,” said the PM.
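The NaMo app's implementation is not public, but the underlying face-matching idea is standard: encode each face as a numeric vector and compare vectors by distance. Below is a minimal sketch, assuming the open-source face_recognition library and hypothetical file names, neither of which is the app's actual stack:

```python
# Minimal sketch of face matching: encode faces as vectors, then
# compare by distance. Assumes the open-source `face_recognition`
# library (dlib-based); the NaMo app's actual implementation is not public.
import face_recognition

# Encode the user's reference selfie as a 128-dimensional face vector.
selfie = face_recognition.load_image_file("user_selfie.jpg")  # hypothetical file
selfie_encoding = face_recognition.face_encodings(selfie)[0]

# Scan an event photo for any face that matches the user's.
event_photo = face_recognition.load_image_file("event_photo.jpg")  # hypothetical file
for encoding in face_recognition.face_encodings(event_photo):
    # A distance below ~0.6 is the library's conventional "same person" threshold.
    distance = face_recognition.face_distance([selfie_encoding], encoding)[0]
    if distance < 0.6:
        print(f"Possible match found (distance={distance:.2f})")
```

At the scale of a national photo archive, a real system would index the face vectors in a database rather than scan photos linearly, but the matching principle is the same.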
The Indian Deepfaker also gets requests from political stakeholders to create clones of political opponents and make them say things the real leaders did not. “There should be regulation on it,” Jadoun said.
Mayank Vatsa, a professor of computer science at IIT Jodhpur, added that with GenAI, politicians could use audio deepfakes to create their message in different languages, helping them overcome a significant barrier in a country like India, which has a great variety of spoken languages.
“For example, every politician using GenAI can potentially talk one-on-one with every person in India. And it can be a very personalised experience for the voters,” Vatsa said, adding that GenAI could also be used to create content accessible to voters with disabilities.
However, the problem is not politicians using AI to create videos and audio to engage voters, but whether the voters seeing or hearing such content know it is AI-generated.
“That’s where labelling comes in. That’s where there should be transparency. I don’t think there can be a debate about the need for transparency now that we have the electoral bond judgment,” said Sugathan.
Supporting some regulation or control, Sugathan also said, “The Election Commission should do something about it… if they don’t do it now, I think it’s a lost bet.”
What current regulations say
In India, spreading misinformation or fake news is not an offence or civil wrong in and of itself, said Rohit Kumar, co-founder of public policy firm The Quantum Hub (TQH) and Young Leaders for Active Citizenship (YLAC). But, he added, the Indian Penal Code (IPC) and the Information Technology (IT) Act penalise some consequences of misinformation such as inciting fear or alarm and provoking a breach of public peace, inciting violence among different classes or communities, or defaming a person.
Kumar said the Bharatiya Nyaya Sanhita — the new criminal code that will come into effect from 1 July — will also penalise making or publishing misinformation that jeopardises the country’s sovereignty, unity, integrity, and security.
The IT Act and the IT Rules also prescribe some due diligence requirements for online platforms disseminating information. Shreya said that Rule 3(1)(b) of the IT Rules, 2021 obligates platforms to inform users and make reasonable efforts to prevent them from posting misinformation. This rule is significant as it places a degree of responsibility on platforms to educate users about what content is permissible, encouraging a proactive stance against misinformation, she said.
Shreya also referred to Rule 4(3), which requires companies to proactively monitor their platforms for harmful content, including misinformation, on a “best-effort basis”. This mandate is a step towards ensuring that digital platforms play an active role in identifying and mitigating potentially harmful content. The rule, however, balances this requirement with the practical limitations of such monitoring efforts.
Kumar, however, said, “Several challenges dent the efficacy of our current regulatory framework. This includes the problems of accurately identifying misinformation and effectively checking its proliferation through meaningful human oversight.”
He said misinformation is often hard to identify and has usually spread by the time it is fact-checked.
What more can be done
Charru Malhotra, professor at the Indian Institute of Public Administration, said problems arise as “(many) are two-minute-meals kind of people”.
“We want to consume short reels. We want to consume meals that are ready instantly… We don’t validate the sources, we don’t validate the content… we gulp it down, digest it and take it out depending on our convenience, preferences, or biases,” Malhotra said.
“AI has just added a layer to what was already pre-conceived, pre-believed and pre-understood,” she added.
She raised concerns over the ‘Liar’s Dividend’, where someone who makes a faux pas or a deliberate statement then claims that footage of them doing so is AI-generated synthetic media.
Vatsa said that while AI has not entirely undermined the democratic process yet, it certainly poses a risk and “we must develop robust detection techniques” to counter misinformation and deepfakes.
However, educating the public about deepfakes might, for now, be the fastest way to address these concerns. Vatsa stressed the need for a digital literacy programme to teach the public to distinguish between real and AI-generated content.
Expressing similar views, Malhotra said, “We have to sensitise people…why can’t my classroom have a session on how to identify a deepfake video? If eyes are not moving in a video, that is an identifier… Why wait for watermarks? Why can’t my students be taught that skill?”
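The “eyes not moving” cue Malhotra refers to echoes an early heuristic from deepfake-detection research: synthetic faces often blink unnaturally. One standard measure is the eye aspect ratio (EAR), which drops sharply during a blink. A minimal sketch of the computation, assuming the six eye-landmark points come from any off-the-shelf facial-landmark detector:

```python
# Sketch of the eye aspect ratio (EAR) heuristic used in early
# blink-based deepfake detection research. The six (x, y) landmarks
# per eye are assumed to come from any facial-landmark detector
# (e.g. dlib); obtaining them is out of scope here.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points, ordered around the eye contour.
    EAR is high when the eye is open and drops sharply during a blink."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical eye-opening distances over one horizontal eye width.
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# A video whose EAR almost never dips shows a face that rarely blinks,
# one crude red flag for synthetic footage. Toy coordinates below:
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
print(f"EAR for an open eye: {eye_aspect_ratio(open_eye):.2f}")
```

Researchers caution that generators have since learned to blink convincingly, which is why such single cues are taught as a starting point rather than a reliable test.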
Kumar said India’s young generation is more tech-savvy and can play a significant role in the online media literacy of their elders, who are more likely to fall prey to misinformation.
He said a YLAC survey found that children actively used the internet for obtaining information, with 95 percent having access to smartphones. Nearly 27 percent had accessed various AI websites.
Children also tend to use social media and online websites as their primary news source and are more aware of the potential of the internet to generate and amplify misinformation, said Kumar.
However, with technological advancements, GenAI today produces stunningly realistic content that is increasingly hard to distinguish from the real thing.
Tarunima Prabhakar, co-founder of Tattle, which builds tools to understand and respond to inaccurate and harmful content, said it is getting increasingly hard to detect manipulation in video and audio, but technology could combat it.
“I also think you need the traditional journalistic skills, where someone picks up a phone and calls the person and asks whether something happened. For example, there is the Misinformation Combat Alliance. The idea is to bring forensic experts and journalists together and respond to content because sometimes traditional journalism works and sometimes the tech,” Prabhakar said.
Vatsa agreed that people should be taught basic skills to detect manipulation but also said that data-driven techniques are needed to counter more advanced algorithms that generate near-indistinguishable videos and audio.
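In practice, such “data-driven techniques” mean classifiers trained on labelled real and fake samples. The sketch below is a deliberately tiny PyTorch skeleton whose architecture, sizes, and names are assumptions for illustration, not any production detector:

```python
# Minimal sketch of a data-driven deepfake-frame classifier in PyTorch.
# Architecture, sizes, and names are illustrative assumptions; production
# detectors are far larger and trained on labelled real/fake datasets.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN mapping a 3x128x128 video frame to P(frame is fake)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 1),  # 128 -> 64 -> 32 after two poolings
        )

    def forward(self, x):
        return self.head(self.features(x))  # raw logit; sigmoid gives P(fake)

model = FrameClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # binary real-vs-fake objective

# One illustrative training step on random stand-in data.
frames = torch.randn(8, 3, 128, 128)          # batch of frames
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
loss = loss_fn(model(frames), labels)
loss.backward()
print(f"toy loss: {loss.item():.3f}")
```

Real detectors add depth, frequency-domain features, and temporal cues across frames, but the supervised real-versus-fake objective remains the same.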
“In the last elections, we had this messaging of asking people to go out and vote. Maybe, this time, the Election Commission can focus on making people aware about these risks… and yes, there needs to be a lot of involvement from the intermediaries, the platforms,” Sugathan said.
Some platforms, on their part, are taking steps to curb misinformation and deepfakes in the lead-up to India’s elections.
Meta, which owns the social media platforms Facebook and Instagram and the messaging platform WhatsApp, said Tuesday that it would activate an India-specific election operations centre to bring together experts from across the company to identify potential threats and put specific mitigation processes in place across its apps and technologies in real time. The experts can be from the data science, engineering, research, content policy, and legal teams.
The US-headquartered firm said it would collaborate with industry stakeholders on technical standards for AI detection and combating the spread of deceptive AI content in elections.
However, while the government and the platforms can do their bit, the public also has to be more dispassionate when sharing content, experts said.
“Voters need to think about sharing content, especially if it’s taking your emotions to the next level. It’s acting as a catalyst,” Jadoun said.
(Edited by Madhurita Goswami)