Can Artificial Intelligence (AI) Influence Elections?
2024 is a landmark election year, with over 60 countries—encompassing nearly half of the global population—heading to the polls. Technology has long been used in electoral processes, such as e-voting, and it is a valuable tool in making this process efficient and secure. However, recent advancements in artificial intelligence, particularly generative AI such as ChatGPT (OpenAI) and Copilot (Microsoft), could have an unprecedented impact on the electoral process. These digital innovations offer opportunities to improve electoral efficiency and voter engagement, but also raise concerns about potential misuse. AI can be used to harness big data to influence voter decision-making. Its capacity for launching cyberattacks, producing deepfakes, and spreading disinformation could destabilize democratic processes, threaten the integrity of political discourse, and erode public trust.
UN Secretary-General António Guterres highlighted AI’s dual nature in his address to the Security Council, noting that while AI can accelerate human development, it also poses significant risks if used maliciously. He stated, “The advent of generative AI could be a defining moment for disinformation and hate speech—undermining truth, facts, and safety, adding a new dimension to the manipulation of human behaviour and contributing to...”

In this article, we will briefly explore the benefits and challenges that AI is bringing to the electoral process. According to UNESCO’s Guide for Electoral Practitioners, “Elections in Digital Times,” AI has the potential to improve the efficiency and accuracy of elections. It can reach voters and engage with them more directly through personalised communication tailored to individual preferences and behaviour. AI-powered chatbots can provide real-time information about polling locations, candidate platforms, and voting procedures, making the electoral process more accessible and transparent.
A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points... The LLMs’ persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates’ policy positions. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers.
“But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously, “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science. In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed... They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S.
presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

AI Chatbots Are Shockingly Good at Political Persuasion
Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections. By Deni Ellis Béchard, edited by Claire Cameron. Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns.

Nature, volume 648, pages 394–401 (2025). There is great public concern about the potential use of generative artificial intelligence (AI) for political persuasion and the resulting impacts on elections and democracy [1–6]. We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes. In the context of the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election, we assigned participants randomly to have a conversation with an AI model that advocated...
We observed significant treatment effects on candidate preference that are larger than those typically observed from traditional video advertisements [7–9]. We also document large persuasion effects on Massachusetts residents’ support for a ballot measure legalizing psychedelics. Examining the persuasion strategies [9] used by the models indicates that they persuade with relevant facts and evidence, rather than using sophisticated psychological persuasion techniques. Not all facts and evidence presented, however, were accurate; across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims. Together, these findings highlight the potential for AI to influence voters and the important role it might play in future elections.
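The experimental logic described above—random assignment to a persuasive chatbot conversation or a control condition, then comparing how opinions shift—amounts to a difference-in-means estimate of a treatment effect. As a rough, self-contained sketch with invented numbers (not the studies’ actual data, scales, or sample sizes):

```python
# Hedged sketch: estimating a persuasion "treatment effect" from a randomized
# chatbot experiment. All data below is simulated for illustration only; the
# actual studies used large pre-registered samples and richer analyses.

import random
import statistics


def treatment_effect(treated_pairs, control_pairs):
    """Difference-in-means of opinion shift (post - pre), treated vs. control.

    Each argument is a list of (pre, post) opinion scores, e.g. 0-100
    favorability toward a candidate or policy.
    """
    def mean_shift(pairs):
        return statistics.mean(post - pre for pre, post in pairs)

    return mean_shift(treated_pairs) - mean_shift(control_pairs)


# Simulate: treated participants chat with a persuasive bot (about a +10-point
# average shift), controls get a neutral placebo (no systematic shift).
random.seed(0)
treated = [(p := random.uniform(30, 70), p + random.gauss(10, 5)) for _ in range(500)]
control = [(p := random.uniform(30, 70), p + random.gauss(0, 5)) for _ in range(500)]

effect = treatment_effect(treated, control)
print(round(effect, 1))  # roughly 10 points, matching the simulated shift
```

Because assignment is random, the control group’s shift nets out survey effects and drift, so the difference isolates the conversation’s causal impact.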
Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement from connection to action.
Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful. Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical “how-to’s” of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we’ll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively.

Emory experts weigh in on how chatbots, algorithmic targeting, deepfakes and a sea of misinformation — and the tools designed to counter them — might sway how we vote in November and beyond.

The voice on the other end of the line sounded just like President Joe Biden. He even used his signature catchphrase: “What a bunch of malarkey!” Or so it seemed. But strangely, he was telling these would-be voters to stay away from the polls, falsely warning them that voting in the primary would... The robocalls didn’t necessarily impact the voting results; Biden still handily won the New Hampshire Democratic primary. Nevertheless, the stunt sent shockwaves through the worlds of politics, media and technology because the misleading message didn’t come from the president — it came from a machine. The call was what’s known as a deepfake, a recording generated by artificial intelligence (AI), made by a political consultant to sound exactly like Biden and, in this case, apparently intended to suppress voter turnout. It was one of the most high-profile examples of how generative AI is being used in the realm of politics. These deepfakes are affecting both sides of the political aisle.
In summer 2023, the early days of the Republican race for the presidency, would-be candidate and Florida Gov. Ron DeSantis shared deepfakes of former President Donald Trump hugging Anthony Fauci, one of the leaders and lightning rods of the U.S.’s COVID-19 response. And, despite being a victim of deepfake tactics like this, Trump has not been afraid to turn around and use them himself. Famously, this included his recent Truth Social post of AI-manipulated photos that showed pop star Taylor Swift, decked out as Uncle Sam, endorsing him for president. The run-up to the 2024 election was marked by predictions that artificial intelligence could trigger dramatic disruptions. The worst-case scenarios — such as AI-assisted large-scale disinformation campaigns and attacks on election infrastructure — did not come to pass.
However, the rise of AI-generated deepfake videos, images, and audio misrepresenting political candidates and events is already influencing the information ecosystem. Over time, the misuse of these tools is eroding public trust in elections by making it harder to distinguish fact from fiction, intensifying polarization, and undermining confidence in democratic institutions. Understanding and addressing the threats that AI poses requires us to consider both its immediate effects on U.S. elections and its broader, long-term implications. Incidents such as robocalls to primary voters in New Hampshire that featured an AI-generated impersonation of President Biden urging them not to vote captured widespread attention, as did misinformation campaigns orchestrated by chatbots like... Russian operatives created AI-generated deepfakes of Vice President Kamala Harris, including a widely circulated video that falsely portrayed her as making inflammatory remarks, which was shared by tech billionaire Elon Musk on X.
Separately, a former Palm Beach County deputy sheriff, now operating from Russia, collaborated in producing and disseminating fabricated videos, including one falsely accusing vice-presidential nominee Minnesota Gov. Tim Walz of assault. Similar stories emerged around elections worldwide. In India’s 2024 general elections, AI-generated deepfakes that showed celebrities criticizing Prime Minister Narendra Modi and endorsing opposition parties went viral on platforms such as WhatsApp and YouTube. During Brazil’s 2022 presidential election, deepfakes and bots were used to spread false political narratives on platforms including WhatsApp. While no direct, quantifiable impact on election outcomes has been identified, these incidents highlight the growing role of AI in shaping political discourse.
The spread of deepfakes and automated disinformation can erode trust, reinforce political divisions, and influence voter perceptions. These dynamics, while difficult to measure, could have significant implications for democracy as AI-generated content becomes more sophisticated and pervasive. The long-term consequences of AI-driven disinformation go beyond eroding trust — they create a landscape where truth itself becomes contested. As deepfakes and manipulated content grow more sophisticated, bad actors can exploit the confusion, dismissing real evidence as fake and muddying public discourse. This phenomenon, sometimes called the liar’s dividend, enables anyone — politicians, corporations, or other influential figures — to evade accountability by casting doubt on authentic evidence. Over time, this uncertainty weakens democratic institutions, fuels disengagement, and makes societies more vulnerable to manipulation from both domestic actors and foreign adversaries.
As artificial intelligence (AI) has become more mainstream, there is growing concern about how it will influence elections. Potential targets include election processes, election offices, election officials and election vendors.[2] Generative AI enables the creation of misleading content in many forms: text-to-video and deepfake videos, text-to-image and AI-altered images, text-to-speech and voice cloning, and text-to-text generation. In the context of an election, a deepfake video may propagate positions that the candidate does not endorse,[3] while chatbots could spread misinformation about polling locations, times or voting methods. Unlike the influence operations of past malicious actors, these techniques require little technical skill and can spread rapidly.[4]
During the 2023 Argentine primary elections, Javier Milei's team distributed AI-generated images, including a fabricated image of his rival Sergio Massa that drew 3 million views.[5] The team also created an unofficial Instagram... In the run-up to the 2024 Bangladeshi general election, deepfake videos of female opposition politicians appeared.[8] Rumin Farhana was pictured in a bikini while Nipun Ray was shown in a swimming pool.[8] In the run-up to the 2025 Canadian federal election, the use of AI tools is likely to figure prominently.[9] India, Pakistan and Iran are all expected to make efforts to subvert the national...