The year 2024 marked an unprecedented moment in democratic history. With 3.7 billion eligible voters across 72 countries heading to the polls, it became the largest election year humanity has ever witnessed. Simultaneously, this super-cycle year coincided with the mainstream emergence of sophisticated artificial intelligence tools capable of creating hyper-realistic synthetic media. The collision of these two forces sparked global concern about the integrity of democratic processes, prompting nations worldwide to implement new regulations, deploy detection technologies, and educate voters about AI-generated misinformation.
The Reality Check: Deepfakes in 2024 Elections
When the year began, anxiety about AI-driven electoral manipulation dominated headlines. Researchers, policymakers, and cybersecurity experts warned of an impending “deepfake apocalypse” that could fundamentally undermine trust in democratic institutions. A Pew Research survey from fall 2024 revealed that nearly eight times as many Americans expected AI to be used for mostly bad purposes in elections compared to those who anticipated positive applications.
However, the actual impact of deepfakes on 2024 elections painted a more nuanced picture than the dire predictions suggested.
Measuring the Real Impact
According to analysis from the Knight First Amendment Institute, researchers examined 78 instances of AI use in global elections during 2024. Surprisingly, 39 of these cases involved no deceptive intent, with AI being used for legitimate campaign purposes such as translating speeches into multiple languages or creating clearly labeled satirical content.
The News Literacy Project documented election-related misinformation and discovered that traditional “cheap fakes” (content manipulated using basic editing software) were used seven times more frequently than AI-generated deepfakes. This finding was echoed across multiple countries. In India, fact-checking organization Boom Live conducted 258 election-related fact-checks and found only 12 involved AI-generated misinformation. Similarly, research analyzing approximately 2,000 viral WhatsApp messages in India revealed that merely 1% were generated by AI.
Meta reported in late 2024 that less than 1% of all fact-checked misinformation during election cycles consisted of AI content. These statistics challenge the narrative that generative AI fundamentally transformed the misinformation landscape during the 2024 elections.
Why Deepfakes Were Less Effective Than Expected
Several factors contributed to the limited electoral impact of deepfakes. First, reach matters more than technical sophistication, and reach takes time to build: by the time a high-quality deepfake gains traction, fact-checkers, journalists, and campaigns have usually had time to respond and debunk it.
Second, research consistently demonstrates that changing voters’ minds through any form of persuasion proves remarkably difficult. People’s political views tend to be deeply entrenched, and even viscerally powerful manipulated images struggle to overcome existing beliefs and partisan loyalties.
Third, public awareness of deepfake technology increased substantially throughout 2024. Extensive media coverage, educational campaigns, and social media platform warnings helped many voters approach suspicious content with healthy skepticism.
India: Pioneering AI Labeling and Enforcement Mechanisms
India’s 2024 general election, involving approximately one billion voters across seven phases from March to June, served as a critical testing ground for AI regulations in democratic processes. The sheer scale of the election, combined with India’s diverse linguistic landscape and widespread social media usage, created unique challenges for managing AI-generated content.
Regulatory Evolution
India’s approach to AI regulation evolved rapidly in response to emerging challenges. In November 2023, Information Technology Minister Ashwini Vaishnaw announced plans to regulate the spread of deepfakes following several high-profile incidents. By March 2024, the Ministry of Electronics and Information Technology (MeitY) issued an advisory mandating that unreliable or under-tested generative AI models be clearly labeled as such.
The ministry warned major technology companies against creating tools that could threaten electoral integrity. In a particularly significant move, MeitY informed tech giants that AI models, large language models, and any software using generative AI or algorithms being tested must seek explicit government permission before deployment in India.
The Draft IT Rules of 2025
Building on lessons learned from the 2024 election, India proposed comprehensive draft amendments in September 2025 to its Information Technology Rules. These amendments require social media platforms and technology companies to clearly label all AI-generated content, including text, images, audio, and video.
The proposed regulations establish clear accountability frameworks. Under these rules, intermediaries must ensure visible labeling, metadata traceability, and transparency for all public-facing AI-generated media. The amendments also provide legal protection for platforms acting in good faith while addressing user grievances related to deepfakes or synthetic content.
Enforcement mechanisms include criminal penalties for violating labeling requirements. Political parties and campaigns face particular scrutiny. In October 2025, the Election Commission of India issued a fresh advisory mandating all national and state-recognized political parties to disclose and label synthetically generated or AI-altered content used during elections.
India’s Practical Implementation
Political parties in India reportedly invested approximately $50 million in AI-generated content ahead of the 2024 polls. This investment produced diverse applications, from the innocuous to the ethically questionable.
Some uses were relatively benign: the Bharatiya Janata Party and Indian National Congress both created AI-translated speeches allowing candidates to address voters in languages they did not personally speak. Political consultant Divyendra Singh Jadoun, who runs The Indian Deepfaker from Pushkar, Rajasthan, cloned the voice of Chief Ministerial candidate Ashok Gehlot to send personalized WhatsApp messages addressing each voter by name.
However, more concerning applications also emerged. Political parties used AI to resurrect deceased political figures in deepfake videos, while the Communist Party of India (Marxist) circulated AI-generated footage of its ailing veteran Buddhadeb Bhattacharjee. Some parties even attempted to create deepfakes of rival candidates making inflammatory statements or admissions that never occurred.
The Election Commission responded by issuing guidelines in May 2024 requiring prompt removal of deepfake videos and false or misleading information within three hours of notification. This rapid-response approach aimed to prevent viral spread of manipulated content during critical campaign periods.
The Deepfakes Analysis Unit
To combat AI-generated misinformation, the Misinformation Combat Alliance established the Deepfakes Analysis Unit (DAU) in March 2024. This unique initiative engaged directly with the public through a WhatsApp tipline, allowing citizens to submit suspicious audio and video files for verification.
Since its launch, the DAU has reviewed hundreds of unique submissions. The unit’s work revealed that videos manipulated with synthetic audio tracks were more common than complete deepfakes. Many submissions classified as “cheapfakes” used AI-generated voices but relied on traditional editing techniques for visual elements.
The DAU’s classification system distinguishes between “deepfakes” (created using AI), “cheapfakes” (created using basic editing software), “manipulated” content, and “AI-generated” content. This nuanced approach recognizes that not all AI-generated content is necessarily harmful or misleading, as some may be produced with the subject’s consent for legitimate purposes.
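To make these distinctions concrete, a verification pipeline could attach the taxonomy to each tipline submission as a simple enumeration. The sketch below is a minimal illustration in Python; the labels mirror the four categories described above, but the field names and schema are assumptions, not the DAU’s actual system.

```python
from dataclasses import dataclass
from enum import Enum


class ContentClass(Enum):
    """Labels mirroring the DAU's published taxonomy."""
    DEEPFAKE = "deepfake"          # synthetic media created using AI
    CHEAPFAKE = "cheapfake"        # altered with basic editing software
    MANIPULATED = "manipulated"    # otherwise misleadingly edited
    AI_GENERATED = "ai-generated"  # AI-made, not necessarily deceptive


@dataclass
class TiplineSubmission:
    """One file submitted through a WhatsApp-style verification tipline."""
    file_hash: str                 # lets reviewers deduplicate resubmissions
    media_type: str                # "audio" or "video"
    classification: ContentClass
    subject_consented: bool        # consensual AI content may be legitimate
    deceptive_intent: bool         # harm depends on intent, not just tooling


sample = TiplineSubmission(
    file_hash="sha256:ab12...",    # placeholder digest, not a real file
    media_type="video",
    classification=ContentClass.CHEAPFAKE,
    subject_consented=False,
    deceptive_intent=True,
)
print(sample.classification.value)  # -> "cheapfake"
```

Keeping the classification separate from the intent and consent flags captures the unit’s key insight: an “AI-generated” label alone does not establish that content is harmful.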
European Union: Comprehensive AI Act Provisions
The European Union took a groundbreaking step in AI regulation with the enactment of the Artificial Intelligence Act (Regulation EU 2024/1689) in August 2024. As the world’s first comprehensive legal framework specifically addressing AI systems, the EU AI Act establishes harmonized rules designed to foster trustworthy AI while protecting fundamental rights, safety, and ethical principles.
Article 50: Transparency Obligations
Article 50 of the AI Act contains the most relevant provisions for addressing deepfakes and synthetic media in electoral contexts. The regulation imposes dual requirements on both providers and deployers of AI systems.
Providers of AI systems that generate synthetic audio, image, video, or text content must ensure their outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. These technical solutions must be effective, interoperable, robust, and reliable, taking into account technical feasibility, implementation costs, and generally acknowledged state-of-the-art standards.
Deployers of AI systems that generate or manipulate image, audio, or video content constituting deepfakes must disclose that the content has been artificially generated or manipulated. This disclosure must be made clearly and distinguishably, typically through visible labeling that indicates the artificial origin of the content.
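As a minimal sketch of what the provider-side “machine-readable marking” obligation can look like in practice, the snippet below writes a provenance flag into a PNG text chunk using Pillow. Real compliance would rely on standardized, robust schemes such as C2PA manifests or pixel-level watermarks; the key names and values here are assumptions chosen for illustration.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Generate (or load) an image; a blank canvas stands in for model output.
img = Image.new("RGB", (512, 512), color="gray")

# Attach a machine-readable marker as a PNG text chunk.
# "ai_generated" and its fields are illustrative, not any standard.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name
img.save("output.png", pnginfo=meta)

# A downstream platform can then check for the marker before labeling.
reloaded = Image.open("output.png")
print(reloaded.text.get("ai_generated"))  # -> "true"
```

Metadata of this kind survives only cooperative handling: it vanishes on re-encoding or screenshotting, which is one reason Article 50 stresses robustness and interoperability rather than prescribing any single marking technique.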
Exceptions and Special Provisions
The EU AI Act includes carefully crafted exceptions to transparency obligations. Disclosure requirements do not apply when AI use is authorized by law to detect, prevent, investigate, or prosecute criminal offenses. This exception ensures legitimate law enforcement and security applications remain viable.
For content forming part of evidently artistic, creative, satirical, fictional, or analogous works, transparency obligations are limited to disclosure in an appropriate manner that does not hamper the display or enjoyment of the work. This provision attempts to balance transparency with creative expression and free speech.
Deployers of AI systems generating or manipulating text published to inform the public on matters of public interest must disclose the artificial generation or manipulation. This requirement specifically targets potential electoral misinformation spread through text-based synthetic content.
Implementation Timeline and Enforcement
The AI Act entered into force on August 1, 2024, but its provisions have staggered implementation dates. The transparency requirements under Article 50 will apply two years after entry into force, meaning full enforcement begins in August 2026. However, the European Commission launched the AI Pact, a voluntary initiative encouraging AI providers and deployers to anticipate the Act’s requirements and begin implementation ahead of legal deadlines.
National competent authorities bear responsibility for ensuring compliance with transparency requirements. Noncompliance can result in administrative fines up to 15 million euros or 3% of the operator’s total worldwide annual turnover for the preceding financial year, whichever is higher.
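A quick worked example clarifies the “whichever is higher” rule: the 15 million euro figure acts as a floor on the maximum penalty, not a cap, once an operator’s turnover is large enough. A minimal sketch:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    # Administrative fine ceiling: EUR 15M or 3% of total worldwide
    # annual turnover for the preceding year, whichever is higher.
    return max(15_000_000.0, 0.03 * annual_turnover_eur)

print(max_fine_eur(100_000_000))    # small operator -> 15,000,000.0
print(max_fine_eur(5_000_000_000))  # large operator -> 150,000,000.0
```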
Pre-Election Measures
Recognizing that the AI Act would not be fully enforceable before the 2024 European Parliament elections, the European Commission took proactive measures. In April 2024, European political parties pledged not to use deepfakes in their campaigns. The Commission also issued guidelines under the Digital Services Act for Very Large Online Platforms to mitigate election-related risks, including those posed by deepfakes.
Major technology platforms responded with their own initiatives. TikTok and Meta created fact-checking hubs and election centers. Meta announced it would mandate disclosure of AI-generated content in political advertisements. Google developed SynthID, a tool that discreetly integrates digital watermarks into image pixels.
Challenges in European Implementation
Despite the comprehensive nature of the EU AI Act, implementation challenges have emerged. Experience during the 2024 European Parliament elections demonstrated that AI-generated videos circulated despite disclosure requirements, suggesting that regulations alone may not be sufficient to prevent deepfake distribution.
The effectiveness of technical marking and labeling systems depends heavily on platforms’ ability and willingness to detect and enforce compliance. Watermarks can be tampered with or duplicated, confusing detection software. Additionally, the varying regulatory frameworks across EU member states create complexity in establishing uniform enforcement mechanisms.
Critics also note potential conflicts between the AI Act and the Digital Services Act, particularly regarding how generative AI models fit into DSA categories of intermediary services. Services like Google’s AI Overviews and AI chatbots blur traditional regulatory boundaries, potentially falling outside existing oversight frameworks.
United States: The State-by-State Patchwork
While the European Union pursued comprehensive bloc-wide regulation, the United States took a markedly different approach. In the absence of federal legislation specifically targeting election deepfakes, individual states became laboratories of democracy, enacting their own regulations to address AI-generated misinformation.
The State Legislative Wave
As of July 2025, 47 states had enacted laws addressing deepfakes since 2019, with only Alaska, Missouri, and Ohio lacking such legislation. The acceleration of legislative action was remarkable: 82% of deepfake laws enacted since 2019 were passed in 2024 or 2025.
States that enacted the most deepfake laws include California (18 laws), Texas (10 laws), New York (8 laws), and Utah (8 laws). This legislative activity reflects widespread bipartisan concern about AI’s potential to disrupt democratic processes.
Definitional Challenges
States have employed varying terminology to describe manipulated content. Six states (New Mexico, Alabama, Hawaii, Michigan, New York, and California) use the term “materially deceptive media.” California, for instance, defines it as “audio or visual media that is digitally created such that it would falsely appear to a reasonable person to be an authentic record of the content depicted in the media.”
Only three states (Minnesota, Texas, and Colorado) specifically use the term “deepfake.” Colorado’s definition characterizes deepfakes as “image, video, audio, or multimedia AI-generated content that falsely appears to be authentic or truthful and which features a depiction of an individual appearing to say or do something the individual did not say or do.”
Two Regulatory Approaches: Prohibition vs. Disclosure
State legislation generally follows two distinct regulatory philosophies: outright prohibition or mandatory disclosure.
Texas and Minnesota adopted prohibition approaches. Texas law makes it illegal to create and distribute deepfake videos within 30 days of an election. The statute defines deepfakes as videos created “to depict a real person performing an action that did not occur in reality” with intent to injure a candidate or influence an election. Minnesota’s prohibition extends to 90 days before an election and covers video, audio, and images disseminated without the depicted individual’s consent and with intent to injure a candidate or influence election results.
Most other states opted for disclosure-based approaches. These laws require political advertisements and communications containing AI-generated or manipulated content to include clear disclaimers indicating artificial manipulation. The timeframes for disclosure requirements vary significantly, ranging from 30 to 120 days before elections.
California’s Ambitious 2024 Legislation
California emerged as the most active state in deepfake regulation. Governor Gavin Newsom signed three significant bills in September 2024, including AB 2655, titled the “Defending Democracy from Deepfake Deception Act of 2024.”
AB 2655 imposes removal obligations on large online platforms (those with at least one million California users) during the 120 days leading up to an election. Platforms must implement state-of-the-art procedures to identify and remove materially deceptive content and provide disclaimers regarding content inauthenticity. The law also creates procedures for state residents to report noncompliant content and authorizes affected parties to seek injunctive relief against platforms.
AB 2839 more broadly prohibits the distribution of election communications containing materially deceptive content within 120 days before an election and, in specified cases, 60 days after. This law makes creators and distributors of deceptive election deepfakes legally accountable.
AB 2355 mandates that all political advertisements using AI-generated or substantially altered content include disclosures that the material has been manipulated using AI. This requirement took effect January 1, 2025.
Legal Challenges and Constitutional Concerns
California’s aggressive regulatory stance faced immediate legal challenges on First Amendment grounds. Within hours of the laws being signed, Christopher Kohls, a creator who produced a clearly labeled parody video of Kamala Harris, filed suit against California Attorney General Rob Bonta and Secretary of State Shirley Weber.
In October 2024, U.S. District Judge John A. Mendez granted a preliminary injunction against AB 2839, finding it unconstitutional. Judge Mendez ruled that the law serves as “a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”
The court found California’s law was not sufficiently narrowly tailored to withstand constitutional scrutiny. Particularly problematic was the broad temporal scope: applying 120 days before and 60 days after elections meant the law functionally covered nearly the entire election year when considering overlapping primary, general, and special elections.
Similarly, in August 2025, a federal judge struck down AB 2655 in a challenge brought by Elon Musk’s platform X, which argued that the Act violates First Amendment free speech protections and imposes unnecessary burdens on online platforms.
Federal Inaction
Despite multiple bills introduced in Congress specifically targeting deepfakes in federal elections, lawmakers failed to pass federal legislation before the 2024 elections. In September 2024, the Federal Election Commission declined to proceed with rulemaking on AI in campaign advertisements, instead issuing an interpretive rule affirming that existing prohibitions on fraudulent misrepresentation of campaign authority apply regardless of technology used.
The Federal Communications Commission took more concrete action. In February 2024, it unanimously voted to outlaw AI-generated voices in robocalls. By late July 2024, the FCC proposed requiring AI disclosures in political television and radio advertisements, though these rules were not finalized before the November election.
Enforcement Reality
Despite the proliferation of state laws, enforcement proved minimal during the 2024 election cycle. According to legal scholars and election officials, there were no reported criminal prosecutions for violating state deepfake laws during 2024. The few high-profile cases involved civil penalties or administrative actions rather than criminal charges.
The most notable enforcement action involved the New Hampshire deepfake robocall incident. In January 2024, thousands of New Hampshire voters received calls featuring an AI-generated voice impersonating President Biden, urging Democrats not to vote in the state’s primary. Prosecutors charged the individual responsible, political consultant Steve Kramer, with 26 crimes including voter suppression and intimidation. The FCC separately fined him $6 million for violating call-spoofing regulations.
Detection Technologies: The Arms Race
As deepfake creation technology advanced, so too did efforts to develop reliable detection tools. However, the detection landscape proved far more challenging than many anticipated.
The Detection Accuracy Problem
An April 2024 study by the Reuters Institute for the Study of Journalism found that many deepfake detector tools can be easily fooled with simple software manipulations. Research from universities and companies across the United States, Australia, and India analyzed various detection techniques and discovered accuracy rates ranging from 82% to a mere 25%. This means detectors often misidentify fake content as real and flag authentic content as fake.
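The two failure modes are easier to see side by side in a confusion matrix. The counts below are hypothetical, chosen only to show how a single headline “accuracy” number can conceal both missed fakes and false alarms on authentic content:

```python
# Hypothetical evaluation of a detector on 1,000 labeled clips.
true_fake, true_real = 200, 800

caught_fakes = 150                               # fakes correctly flagged
missed_fakes = true_fake - caught_fakes          # fakes passed off as real
false_alarms = 120                               # authentic clips flagged
correct_real = true_real - false_alarms

accuracy = (caught_fakes + correct_real) / (true_fake + true_real)
false_negative_rate = missed_fakes / true_fake   # fake content called real
false_positive_rate = false_alarms / true_real   # real content called fake

print(f"accuracy: {accuracy:.0%}")                 # 83%
print(f"missed fakes: {false_negative_rate:.0%}")  # 25%
print(f"false alarms: {false_positive_rate:.0%}")  # 15%
```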
The Washington Post conducted extensive testing of eight popular deepfake detectors: TrueMedia, AI or Not, AI Image Detector, Was It AI, Illuminarty, Hive Moderation, Content at Scale, and Sight Engine. The publication found significant inconsistencies in detection capabilities, with some tools performing better than others but none proving foolproof.
Hany Farid, a computer science professor at the University of California at Berkeley who studies manipulated media, noted that detection algorithms are only as good as their training data. As deepfake creation technology evolves, detection methods can quickly become obsolete unless continuously updated.
Industry Detection Initiatives
Major technology companies invested substantial resources in developing detection capabilities. Intel developed FakeCatcher, billed as the world’s first real-time deepfake detector. The platform utilizes an algorithm that analyzes blood flow patterns in video pixels, achieving 96% accuracy by looking for authentic clues in real videos rather than searching for signs of manipulation.
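Intel’s exact pipeline is proprietary, but the underlying idea, remote photoplethysmography (rPPG), can be sketched briefly: skin in authentic video brightens and darkens faintly with the subject’s pulse, so the green-channel signal from a face region should carry a dominant frequency in the human heart-rate band. Everything below (the synthetic traces, band limits, and threshold) is an assumption for illustration, not FakeCatcher’s actual method:

```python
import numpy as np

FPS = 30                           # assumed video frame rate
HEART_BAND = (0.7, 4.0)            # plausible pulse range, ~42-240 bpm

def has_pulse_signal(green_means: np.ndarray, fps: int = FPS) -> bool:
    """Crude rPPG check: does the per-frame mean green intensity of a
    face region show a dominant frequency in the heart-rate band?"""
    signal = green_means - green_means.mean()      # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    in_band = (freqs >= HEART_BAND[0]) & (freqs <= HEART_BAND[1])
    # A real pulse concentrates energy in one bin; require the band's
    # peak to stand well above the average spectral magnitude.
    return spectrum[in_band].max() > 5.0 * spectrum.mean()

rng = np.random.default_rng(0)
t = np.arange(10 * FPS) / FPS      # ten seconds of video
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)          # 72 bpm heartbeat
real_trace = pulse + rng.normal(0, 0.2, t.size)    # authentic skin signal
fake_trace = rng.normal(0, 0.2, t.size)            # no physiological pulse

print(has_pulse_signal(real_trace))   # True  -> consistent with real video
print(has_pulse_signal(fake_trace))   # False -> flagged for review
```

The design choice worth noting is that the check looks for evidence of authenticity rather than artifacts of manipulation, which is why such methods age more gracefully as generation quality improves.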
Microsoft’s AI for Good Lab and Microsoft Threat Analysis Center worked to better detect deepfakes on the internet. The company launched a dedicated webpage, Microsoft-2024 Elections, where political candidates could report suspected deepfakes. Microsoft’s Digital Crimes Unit invested in threat intelligence work for early detection of AI-powered criminal activity.
Adobe’s Content Credentials initiative acts like digital nutrition labels, showing information such as the creator’s name, the date an image was created, tools used for creation, and any edits made. Built on an open standard through the Coalition for Content Provenance and Authenticity, Content Credentials allow anyone to implement the technology in their own tools and platforms.
Google’s SynthID, noted earlier, embeds imperceptible digital watermarks directly into image pixels. Meta pledged to use watermarking and labeling systems for AI-generated content. However, technology executives expressed skepticism about these approaches. Věra Jourová, a top European Union official, reported receiving “mixed answers from Big Tech” about detection feasibility, with some platforms saying comprehensive detection was “impossible.”
Limitations of Watermarking
Watermarking emerged as the industry’s biggest hope for tracking AI-generated content, but significant limitations became apparent during 2024. Watermarks can be easily tampered with, duplicated, or removed entirely using readily available software tools. Even when watermarks remain intact, they only identify content as AI-generated without providing information about intent, context, or authenticity.
Furthermore, watermarking systems only work when AI providers voluntarily implement them. Bad actors can use tools without watermarking capabilities or simply strip watermarks from content before distribution. This creates a fundamental asymmetry: responsible AI developers face restrictions while malicious users operate without constraints.
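That fragility is easy to demonstrate with the simplest possible scheme, a least-significant-bit watermark. Production systems such as SynthID are far more robust than this toy, but the asymmetry is the same in kind: embedding requires the generator’s cooperation, while removal requires only routine image processing. A sketch, not any vendor’s actual method:

```python
import numpy as np

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image

# Embed one watermark bit per pixel in the least significant bit.
watermark = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
marked = (image & 0xFE) | watermark        # overwrite the LSB plane

def recovered_fraction(img: np.ndarray) -> float:
    """Fraction of watermark bits still readable from the LSB plane."""
    return float(np.mean((img & 1) == watermark))

print(recovered_fraction(marked))          # 1.0 -> watermark fully intact

# Re-quantizing the image (as lossy compression routinely does) discards
# the low-order bits and, with them, the entire watermark.
laundered = (marked // 4) * 4              # keep only the top six bits
print(recovered_fraction(laundered))       # ~0.5 -> no better than chance
```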
Detection Tools Used by Election Officials
Election officials employed various strategies beyond automated detection tools. Arizona Secretary of State Adrian Fontes identified AI as the number one threat facing election officials in 2024. His office conducted tabletop exercises over six months, training election officials to handle hypothetical scenarios involving AI-driven disruptions on Election Day.
These exercises simulated deepfake videos and voice-cloning technology deployed across social media to dissuade voting, disrupt polling places, or confuse poll workers. The training emphasized recognizing common deepfake characteristics: unnatural facial movements, inconsistent lighting, audio-visual synchronization issues, and contextual inconsistencies.
Real-time monitoring systems acted as 24/7 watchdogs, detecting suspicious activity immediately so corrective action could be taken before impacting vote counts. Many states partnered with private cybersecurity firms specializing in threat detection and response, ensuring voting systems remained secure.
CivAI, a nonprofit group tracking AI use in elections, helped election officials understand where threats were most likely to emerge. Co-founder Lucas Hanson noted that “primary targets of interest are going to be in swing states, and they’re going to be swing voters.”
The Human Element in Detection
Despite technological advances, human judgment remained critical in identifying deepfakes. Thomas Scanlon, a principal researcher at Carnegie Mellon University’s Software Engineering Institute, advised that voters should trust their intuition when viewing suspicious content. Warning signs include jump cuts in editing, unnatural facial movements or expressions, inconsistent lighting or shadows, audio-visual desynchronization, and contextual implausibility.
Experts emphasized that voters should attempt to verify suspicious videos through multiple sources. Checking official candidate websites, campaign social media accounts, and established news organizations provides crucial context. If a video shows a candidate making surprising or out-of-character statements, seeking verification before sharing becomes essential.
Voter Education Campaigns and Digital Literacy
Recognizing that technology alone could not solve the deepfake challenge, governments, civil society organizations, and technology platforms invested heavily in voter education and digital literacy initiatives.
Pre-bunking Strategies
Pre-bunking emerged as a proactive approach to combat disinformation before it spreads. Unlike traditional fact-checking that responds to false information after dissemination, pre-bunking educates voters to recognize misleading or AI-generated content in real-time.
A TNS survey revealed that 77% of Americans supported educating voters about the risks of AI-driven disinformation. In battleground states, 71% of respondents believed voters were more likely to be targeted by deepfake disinformation campaigns. This widespread concern drove demand for preventive education.
Pre-bunking campaigns taught voters to look for common deepfake indicators, understand how AI-generated content works, verify information through multiple sources, and pause before sharing suspicious content. By building cognitive immunity to manipulation tactics, pre-bunking helped voters approach election-related content with appropriate skepticism.
Platform-Based Education Initiatives
Social media platforms implemented various educational measures. TikTok and Meta created election centers providing resources about identifying manipulated content. YouTube required creators to disclose when videos contained AI-generated or manipulated content, with specific requirements for political content.
These platforms also deployed contextual warnings. When users encountered content flagged as potentially manipulated, platforms displayed information about the content’s origins and provided links to fact-checking resources. The effectiveness of these measures varied, as users often bypassed warnings or failed to read contextual information.
Government and Civil Society Efforts
The UK’s Cabinet Office and National Cyber Security Centre published mitigation plans for political candidates and election officials facing influence operations. The NCSC explored pilot schemes similar to one implemented in Utah, which gave political candidates the ability to authenticate their digital identity for free, protecting against deepfake impersonation.
International IDEA, a global organization supporting democracy, conducted workshops with election management bodies worldwide. These sessions raised AI literacy among electoral authorities, helping them understand AI’s practical capabilities and limitations. The workshops also addressed the risks of outsourcing digital infrastructure to private vendors offering AI-powered solutions.
The Misinformation Combat Alliance in India, working alongside Project Shakti, deployed legions of fact-checkers during the 2024 election. These organizations worked overtime to identify and debunk false narratives, whether AI-generated or produced through traditional manipulation.
Media Literacy Initiatives
The European Union invested in comprehensive media literacy programs to educate citizens about identifying deepfakes and becoming critical consumers of online content. These initiatives promoted fact-checking practices and encouraged citizens to evaluate source credibility before accepting information as accurate.
Educational campaigns emphasized several key principles: verify content through official sources, check multiple reliable news outlets, be skeptical of emotionally charged content, look for technical anomalies in suspicious media, and understand that seeing is no longer believing in the age of synthetic media.
Research from Carnegie Mellon University’s Block Center for Technology and Society emphasized that constant information bombardment makes it difficult for individuals to assess both the value and reliability of content. Randall Trzeciak, director of the Heinz College master’s program in information security policy and management, noted this challenge in determining what information to trust.
Case Studies: Lessons from the Frontlines
Examining specific incidents provides crucial insights into how deepfakes actually impacted elections and how various stakeholders responded.
Slovakia: The Precursor Case
The Slovak parliamentary election in September 2023 became the first widely cited case of potential deepfake election interference. Just hours before voting, an audio recording went viral on social media, allegedly featuring Progressive Slovakia Party leader Michal Šimečka discussing plans to rig the election and raise beer taxes (particularly inflammatory in a country ranking sixteenth globally in per capita beer consumption).
The audio was quickly identified as a deepfake, but the 48-hour media blackout before Slovak elections prevented effective rebuttal. Šimečka’s party ultimately lost to Robert Fico’s pro-Russian coalition, sparking international debate about whether the deepfake swung the election.
However, detailed analysis by researchers revealed a more complex picture. The Slovak electorate was particularly susceptible to pro-Russian disinformation due to several factors: historically low trust in mainstream media (only 18% trusted the government before elections), widespread conspiracy theory beliefs (Slovaks rank among Europe’s most conspiracy-minded populations), long-standing cultural and historical ties to Russia, and a decade of pro-Kremlin narrative building through alternative media.
The deepfake did not create this environment but rather exploited existing divisions. Research showed that Fico’s victory likely resulted from multiple factors, including polling methodology issues that underestimated his support. The incident demonstrated that deepfakes pose the greatest threat in contexts where societal trust has already eroded and polarization runs deep.
Critically, Facebook’s policies restricted video deepfakes but not audio deepfakes at the time. Since the manipulated audio was spliced into a static image rather than manipulating video directly, it technically did not violate platform policy. Slovakia also lacked specific regulations addressing deepfakes. The post remained on Facebook as of April 2024, with police limited to informing the public about the manipulation through their social media profile.
New Hampshire: The Biden Robocall
In January 2024, thousands of New Hampshire Democrats received robocalls featuring an AI-generated voice convincingly impersonating President Joe Biden. The voice urged voters to “save your vote for the November election” rather than participate in the state’s primary.
The incident generated significant media attention and prompted immediate action. Political consultant Steve Kramer, who commissioned the deepfake, claimed he did so to raise public awareness about AI dangers. Prosecutors were unimpressed, charging him with 26 crimes including voter suppression, intimidation, and impersonating a candidate.
The Federal Communications Commission fined Kramer $6 million for violating call-spoofing and caller ID regulations. This enforcement action sent a strong signal that election-related deepfakes would face serious consequences, even when perpetrators claimed educational motives.
For New Hampshire, the incident served as a catalyst for legislative action. The state subsequently passed laws specifically addressing AI-generated content in elections. While the deepfake did not appear to significantly impact primary turnout or results, it demonstrated the potential for bad actors to rapidly deploy convincing audio deepfakes at scale.
The incident also highlighted technological attribution challenges. Initial suspicion focused on ElevenLabs, an AI voice-cloning company, which quickly banned the account involved. However, investigations revealed broader involvement by a Texas telemarketing firm, illustrating the complex ecosystem enabling deepfake distribution.
India: Resurrection Politics
India’s 2024 general election showcased both innovative and ethically questionable applications of AI technology. Among the most striking examples was the use of deepfakes to “resurrect” deceased political figures.
The Dravida Munnetra Kazhagam party in Tamil Nadu created lifelike videos featuring former Chief Minister M. Karunanidhi, who had died in 2018, appearing to endorse current candidates. Similarly, deepfakes depicted former Chief Minister J. Jayalalithaa supporting political allies. These resurrections aimed to invoke nostalgia and transfer loyalty from beloved deceased leaders to their political successors.
The practice raised profound ethical concerns about consent and the manipulation of deceased individuals’ likenesses. However, Indian law provided limited recourse, as personality rights typically terminate upon death in most jurisdictions.
The Indian elections also demonstrated how AI lowered costs for campaign outreach. Traditional in-person rallies could cost 5 crore rupees ($595,218), while AI-generated personalized calls or videos reaching 10 million people cost only 50 lakh rupees ($59,521). This economic advantage drove adoption even among campaigns with ethical reservations.
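For readers unfamiliar with Indian numbering units, the arithmetic behind those figures works out as follows, using the roughly 84-rupees-per-dollar rate implied by the article’s conversions:

```latex
\begin{align*}
\text{1 lakh} &= 10^{5}\ \text{rupees}, \qquad \text{1 crore} = 10^{7}\ \text{rupees} \\
\text{Rally: } 5\ \text{crore} &= 5 \times 10^{7}\ \text{INR} \approx \$595{,}218 \\
\text{AI outreach: } 50\ \text{lakh} &= 5 \times 10^{6}\ \text{INR} \approx \$59{,}521 \\
\text{Cost per voter reached} &\approx \frac{\$59{,}521}{10^{7}\ \text{voters}} \approx \$0.006
\end{align*}
```

At well under a cent per voter reached, and a tenfold saving over a single rally, the economics alone explain much of the rapid adoption.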
More than 50 million AI-generated voice clone calls were made in the two months leading up to April 2024 voting, representing a $60 million business opportunity. The scale of AI deployment created enforcement challenges for election officials attempting to monitor and regulate synthetic content.
Despite widespread AI use, the actual impact on electoral outcomes remained unclear. Analysis suggested that AI primarily amplified existing campaign strategies rather than fundamentally changing voter persuasion dynamics.
Indonesia: Softening the General’s Image
Indonesia’s February 2024 elections illustrated how AI could serve propagandistic purposes without necessarily spreading false information. Then-candidate Prabowo Subianto, a former general accused of human rights abuses, deployed AI-generated digital cartoon avatars depicting him as approachable and likable.
These cartoons transformed Subianto’s stern military persona into a cuddly, grandfatherly figure. The campaign successfully softened his public image without making specific false claims about his record. This case demonstrated that AI’s impact extends beyond outright disinformation to include sophisticated image management and propaganda.
Notably, this tactic represented incremental evolution rather than revolutionary change. Creating cartoon avatars for presidential campaigns would be inexpensive with or without AI technology. The AI element primarily improved efficiency rather than enabling entirely new capabilities.
United States: The Taylor Swift Controversy
In August 2024, former President Donald Trump’s Truth Social account shared AI-generated images appearing to show pop star Taylor Swift and her fans endorsing his campaign. The images, which featured people wearing “Swifties for Trump” t-shirts, were clearly fabricated.
Swift responded by publicly endorsing Vice President Kamala Harris, explicitly citing the fake endorsement as motivation. On Instagram, Swift revealed she went public to refute the AI-generated deepfake and combat AI-generated misinformation.
This incident demonstrated several important dynamics. First, celebrity deepfakes could prompt targets to take public positions they might otherwise avoid. Second, the incident sparked broader conversation about AI’s role in elections. Third, it illustrated that cheapfakes and traditional photo manipulation remained at least as prevalent as sophisticated deepfakes.
The incident also raised questions about platform responsibility. While major platforms pledged to address election-related deepfakes, enforcement proved inconsistent. Trump’s post remained online despite clearly violating policies against impersonation and misleading electoral content.
Successful Interventions and Best Practices
Despite challenges, several interventions demonstrated effectiveness in managing AI-generated election content.
Rapid Response Protocols
Brazil’s Superior Electoral Court and Mexico’s Instituto Nacional Electoral established direct communication channels with digital platforms, enabling real-time flagging of manipulated content. This approach proved more effective than relying on platforms’ general content moderation systems.
The key success factor was speed. By establishing pre-negotiated protocols before election seasons began, election officials could respond within hours rather than days when problematic content emerged. Platforms, in turn, prioritized flagged content for rapid review and action.
Multi-Stakeholder Collaboration
The Tech Accord to Combat Deceptive Use of AI in 2024 Elections brought together 20 major technology companies, including Adobe, Amazon, Anthropic, Google, Meta, Microsoft, OpenAI, and others. This collaborative approach recognized that no single entity could address the challenge alone.
The accord focused on three critical areas: preventing bad actors from using legitimate tools to create deepfakes, developing detection capabilities to identify manipulated content, and establishing response protocols for addressing harmful content when detected.
While voluntary agreements have limitations, the accord created shared standards and encouraged companies to invest resources in election security even absent regulatory mandates.
Transparent Synthetic Content
Paradoxically, some of the most effective uses of AI in elections involved openly synthetic content. When candidates transparently used AI to translate speeches, create campaign materials, or generate content clearly labeled as artificial, voters generally accepted these applications.
Japan’s Tokyo gubernatorial race demonstrated this approach effectively. Independent candidate Takahiro Anno used an AI avatar to respond to 8,600 voter questions, managing to place fifth in a field of 56 candidates. The AI was clearly presented as a tool for scaling candidate-voter interaction rather than attempting to deceive.
Similarly, when India’s political parties used AI to create multilingual content, transparent disclosure helped voters understand they were receiving translated rather than original messages. This transparency maintained trust while leveraging AI’s beneficial capabilities.
Public Education Combined with Technology
The most successful approaches combined technological detection with robust public education. When voters understood how deepfakes worked and what warning signs to look for, they became partners in identifying problematic content rather than passive victims of manipulation.
Arizona’s tabletop exercises exemplified this approach. By training election officials to recognize AI-driven threats before they materialized, the state created human capacity to complement technological tools. When technology failed to catch manipulated content, trained humans served as a crucial backup.
Looking Forward: The Evolution of Election Integrity
As 2024 demonstrated, the deepfake apocalypse did not materialize as predicted. However, this outcome should not breed complacency. Several factors suggest the challenge will intensify in future elections.
Improving Technology
Deepfake creation technology continues advancing rapidly. What required significant technical expertise and computing power in 2020 can now be accomplished with free online tools and minimal training. As accessibility increases, the barrier to entry for creating convincing manipulated content continues falling.
Similarly, deepfake quality steadily improves. The telltale artifacts that helped identify early deepfakes (unnatural eye movements, inconsistent lighting, audio desynchronization) become less pronounced with each technological generation. Detection will grow more difficult as creation quality increases.
Eroding Trust
Perhaps more concerning than individual deepfakes is their cumulative effect on societal trust. Even when specific deepfakes are debunked, their existence contributes to what researchers call the “liar’s dividend”: authentic content can be dismissed as fake, while fabricated content gains plausibility.
This erosion of trust poses existential challenges for democracy. When voters cannot confidently distinguish authentic from manipulated content, evidence-based political discourse becomes nearly impossible. The solution requires not just technical tools but fundamental rebuilding of societal institutions that establish and maintain trust.
The Global Challenge
Elections in 2025 and beyond will continue testing democratic resilience. With important elections scheduled in Argentina, Chile, Iraq, Moldova, the Netherlands, and numerous other countries, the global democracy stress test continues.
Each election cycle provides both learning opportunities and potential vulnerabilities. Countries must balance learning from others’ experiences while adapting solutions to local contexts, legal frameworks, and political cultures. No one-size-fits-all approach will work globally, requiring continuous innovation and adaptation.
Conclusion: Democracy’s Digital Frontier
Elections in the deepfake era present unprecedented challenges requiring multi-faceted responses. Technology, regulation, education, and institutional resilience must work in concert to protect democratic integrity.
The experience of 2024 provides grounds for cautious optimism. Despite widespread fears, democracies largely withstood the test of AI-generated misinformation. Deepfakes did not swing major elections, and voter turnout remained robust. However, this resilience resulted from sustained effort by election officials, technology platforms, civil society organizations, and informed citizens.
Moving forward, continued vigilance remains essential. As AI capabilities expand and democratize, the potential for misuse grows. Regulations must balance protecting election integrity with preserving free expression. Detection technologies must keep pace with creation tools. Education must empower voters to navigate an increasingly complex information environment.
Most critically, democracies must strengthen the underlying trust and institutional legitimacy that make them resilient to manipulation. Technical solutions matter, but they cannot substitute for the social fabric that binds democratic societies together. In the deepfake era, protecting elections requires protecting democracy itself in all its dimensions.
The 2024 super-election year demonstrated that while technology can threaten democratic processes, human judgment, institutional resilience, and collective vigilance provide powerful countermeasures. As nations continue preparing for AI-driven misinformation, the lessons learned from this historic electoral cycle will shape democratic practices for generations to come.
Sources
ABC News. (2024). “AI deepfakes a top concern for election officials with voting underway.” Retrieved from https://abcnews.go.com/Politics/ai-deepfakes-top-concern-election-officials-voting-underway/story?id=114202574
All About AI. (2025). “Pre-bunking Disinformation in 2024 Elections: AI & Voter Education.” Retrieved from https://www.allaboutai.com/resources/disinformation-in-election-2024/
Al Jazeera. (2024). “Deepfake democracy: Behind the AI trickery shaping India’s 2024 election.” Retrieved from https://www.aljazeera.com/news/2024/2/20/deepfake-democracy-behind-the-ai-trickery-shaping-indias-2024-elections
Future of Life Institute. (2024). “Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems | EU Artificial Intelligence Act.” Retrieved from https://artificialintelligenceact.eu/article/50/
Ash Center for Democratic Governance and Innovation, Harvard Kennedy School. (2024). “The apocalypse that wasn’t: AI was everywhere in 2024’s elections, but deepfakes and misinformation were only part of the picture.” Retrieved from https://ash.harvard.edu/articles/the-apocalypse-that-wasnt-ai-was-everywhere-in-2024s-elections-but-deepfakes-and-misinformation-were-only-part-of-the-picture/
Ash Center. (2024). “The Role of AI in the 2024 Elections.” Retrieved from https://ash.harvard.edu/resources/the-role-of-ai-in-the-2024-elections/
Ballotpedia News. (2025). “Forty-seven states have enacted deepfake legislation since 2019.” Retrieved from https://news.ballotpedia.org/2025/07/22/forty-seven-states-have-enacted-deepfake-legislation-since-2019/
BioID. (2024). “EU AI Act 2024 | Regulations and Handling of Deepfakes.” Retrieved from https://www.bioid.com/2024/06/03/eu-ai-act-deepfake-regulations/
Bloomberg. (2024). “India Elections 2024: Could Deep Fakes Undermine Trust in Electoral System?” Retrieved from https://www.bloomberg.com/features/2024-ai-election-security-deepfakes/
Brennan Center for Justice. (2023). “Regulating AI Deepfakes and Synthetic Media in the Political Arena.” Retrieved from https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
Carnegie Mellon University’s Heinz College. (2024). “Voters: Here’s how to spot AI ‘deepfakes’ that spread election-related misinformation.” Retrieved from https://www.heinz.cmu.edu/media/2024/October/voters-heres-how-to-spot-ai-deepfakes-that-spread-election-related-misinformation1
Centre for Emerging Technology and Security. (2024). “AI-Enabled Influence Operations: Safeguarding Future Elections.” Retrieved from https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-safeguarding-future-elections
Centre for International Governance Innovation. (2025). “Then and Now: How Does AI Electoral Interference Compare in 2025?” Retrieved from https://www.cigionline.org/articles/then-and-now-how-does-ai-electoral-interference-compare-in-2025/
CNN. (2024). “States target AI and deepfakes as election interference threat looms.” Retrieved from https://edition.cnn.com/2024/07/31/politics/state-laws-election-ai-deepfakes/
Coingeek. (2025). “India proposes law to label AI-generated social media content.” Retrieved from https://coingeek.com/india-proposes-law-to-label-ai-generated-social-media-content/
Columbia Journal of European Law. (2024). “Deepfake, Deep Trouble: The European AI Act and the Fight Against AI-Generated Misinformation.” Retrieved from https://cjel.law.columbia.edu/preliminary-reference/2024/deepfake-deep-trouble-the-european-ai-act-and-the-fight-against-ai-generated-misinformation/
Conference Board. (2025). “Federal Judge Strikes Down California Deepfake Law.” Retrieved from https://www.conference-board.org/research/CED-Newsletters-Alerts/federal-judge-strikes-down-california-deepfake-law
Council on Foreign Relations. (2024). “Election 2024: The Deepfake Threat to the 2024 Election.” Retrieved from https://www.cfr.org/blog/campaign-roundup-deepfake-threat-2024-election
Dialogo Politico. (2025). “Artificial Intelligence and elections: premature threats?” Retrieved from https://dialogopolitico.org/special-edition-2025-artificial-democracy/artificial-intelligence-elections-premature-threats
European Parliament. (2024). “EU AI Act: first regulation on artificial intelligence.” Retrieved from https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
European Commission. (2024). “AI Act | Shaping Europe’s digital future.” Retrieved from https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
First Amendment Encyclopedia, MTSU. (2025). “Political Deepfakes and Elections.” Retrieved from https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
Freshfields Technology Quotient. (2024). “EU AI Act unpacked #8: New rules on deepfakes.” Retrieved from https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
Frontiers in Political Science. (2024). “AI-generated misinformation in the election year 2024: measures of European Union.” Retrieved from https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2024.1451601/full
GIJN (Global Investigative Journalism Network). (2024). “How to Identify and Investigate AI Audio Deepfakes, a Major 2024 Election Threat.” Retrieved from https://gijn.org/resource/tipsheet-investigating-ai-audio-deepfakes/
GNET. (2024). “Deep Fakes, Deeper Impacts: AI’s Role in the 2024 Indian General Election and Beyond.” Retrieved from https://gnet-research.org/2024/09/11/deep-fakes-deeper-impacts-ais-role-in-the-2024-indian-general-election-and-beyond/
Governing. (2025). “Concerns About AI Election Impacts Are Overblown (So Far).” Retrieved from https://www.governing.com/politics/concerns-about-ai-election-impacts-are-overblown-so-far
Harvard Kennedy School Misinformation Review. (2025). “Beyond the deepfake hype: AI, democracy, and ‘the Slovak case’.” Retrieved from https://misinforeview.hks.harvard.edu/article/beyond-the-deepfake-hype-ai-democracy-and-the-slovak-case/
Harvard Political Review. (2024). “Shaping Robust AI Regulation: Lessons from India’s ‘Deepfake’ Election.” Retrieved from https://theharvardpoliticalreview.com/ai-deepfakes-india-election/
International IDEA. (2024). “What Have we Learned About AI in Elections?” Retrieved from https://www.idea.int/news/what-have-we-learned-about-ai-elections
ISPI (Italian Institute for International Political Studies). (2024). “An Overview of the Impact of GenAI and Deepfakes on Global Electoral Processes.” Retrieved from https://www.ispionline.it/en/publication/an-overview-of-the-impact-of-genai-and-deepfakes-on-global-electoral-processes-167584
Jacobacci & Partners. (2024). “AI and Deepfakes: EU and Italian Regulations.” Retrieved from https://www.jacobacci.com/en/publications/ai-and-deepfakes
Konrad Adenauer Stiftung. (2024). “The Influence of Deep Fakes on Elections.” Retrieved from https://www.kas.de/documents/d/guest/the-influence-of-deep-fakes-on-elections
Knight First Amendment Institute, Columbia University. (2024). “We Looked at 78 Election Deepfakes. Political Misinformation Is Not an AI Problem.” Retrieved from https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
Microsoft On the Issues. (2024). “Meeting the moment: combating AI deepfakes in elections through today’s new tech accord.” Retrieved from https://blogs.microsoft.com/on-the-issues/2024/02/16/ai-deepfakes-elections-munich-tech-accord/
Morrison Foerster. (2024). “2024 Year in Review: Navigating California’s Landmark Deepfake Legislation.” Retrieved from https://www.mofo.com/resources/insights/241211-2024-year-in-review-navigating-california-s-landmark-deepfake-legislation
NPR. (2024). “AI fakes raise election risks as lawmakers and tech companies scramble to catch up.” Retrieved from https://www.npr.org/2024/02/08/1229641751/ai-deepfakes-election-risks-lawmakers-tech-companies-artificial-intelligence
NPR. (2024). “How deepfakes and AI memes affected global elections in 2024.” Retrieved from https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
Partnership on AI. (2025). “How the risk of synthetic media that affects global election information is growing.” Retrieved from https://partnershiponai.org/pai-framework-case-study/
PR Newswire. (2024). “Election Integrity in the Age of AI Deep Fakes Reasons for Confidence in the 2024 U.S. Voting Systems.” Retrieved from https://www.prnewswire.com/news-releases/election-integrity-in-the-age-of-ai-deep-fakes-reasons-for-confidence-in-the-2024-us-voting-systems-302271509.html
Public Citizen. (2025). “Tracker: State Legislation on Deepfakes in Elections.” Retrieved from https://www.citizen.org/article/tracker-legislation-on-deepfakes-in-elections/
Recorded Future. (2024). “2024 Deepfakes and Election Disinformation Report: Key Findings & Mitigation Strategies.” Retrieved from https://www.recordedfuture.com/research/targets-objectives-emerging-tactics-political-deepfakes
Reuters Institute for the Study of Journalism. (2024). “AI deepfakes, bad laws – and a big fat Indian election.” Retrieved from https://reutersinstitute.politics.ox.ac.uk/news/ai-deepfakes-bad-laws-and-big-fat-indian-election
Ronin Legal Consulting. (2025). “India’s Draft Deepfake Rules: MeitY’s First Regulation Move.” Retrieved from https://roninlegalconsulting.com/india-finally-stands-tentatively-up-to-ai-deepfakes/
S&P Global Market Intelligence. (2024). “AI deepfake detection gains urgency ahead of US election.” Retrieved from https://www.spglobal.com/marketintelligence/en/news-insights/latest-news-headlines/ai-deepfake-detection-gains-urgency-ahead-of-us-election-83609504
Skadden, Arps, Slate, Meagher & Flom LLP. (2024). “California Enacts New Laws to Combat AI-Generated Deceptive Election Content.” Retrieved from https://www.skadden.com/insights/publications/2024/09/california-enacts-new-laws
Storyboard18. (2025). “ECI cracks down on Deepfakes, mandates labeling of AI generated election campaign.” Retrieved from https://www.storyboard18.com/advertising/eci-cracks-down-on-deepfakes-mandates-labeling-of-ai-generated-election-campaign-83130.htm
Tandfonline (International Review of Law, Computers & Technology). (2024). “Generative AI and deepfakes: a human rights approach to tackling harmful content.” Retrieved from https://www.tandfonline.com/doi/full/10.1080/13600869.2024.2324540
TechInformed. (2024). “AI vs Democracy: Disinformation, Deepfakes & 2024 US Election.” Retrieved from https://techinformed.com/ai-disinformation-2024-us-election-deepfakes-voter-manipulation/
TechPolicy.Press. (2024). “India’s Experiments With AI in the 2024 Elections: The Good, The Bad & The In-between.” Retrieved from https://www.techpolicy.press/indias-experiments-with-ai-in-the-2024-elections-the-good-the-bad-the-inbetween/
TechPolicy.Press. (2025). “Regulating Election Deepfakes: A Comparison of State Laws.” Retrieved from https://www.techpolicy.press/regulating-election-deepfakes-a-comparison-of-state-laws/
TechPolicy.Press. (2025). “To Craft Effective State Laws on Deepfakes and Elections, Mind the Details.” Retrieved from https://www.techpolicy.press/to-craft-effective-state-laws-on-deepfakes-and-elections-mind-the-details/
TechUK. (2024). “Deepfakes and Disinformation: What impact could this have on elections in 2024?” Retrieved from https://www.techuk.org/resource/deepfakes-and-disinformation-what-impact-could-this-have-on-elections-in-2024.html
The Conversation. (2025). “The apocalypse that wasn’t: AI was everywhere in 2024’s elections, but deepfakes and misinformation were only part of the picture.” Retrieved from https://theconversation.com/the-apocalypse-that-wasnt-ai-was-everywhere-in-2024s-elections-but-deepfakes-and-misinformation-were-only-part-of-the-picture-244225
TIME Magazine. (2024). “AI’s Underwhelming Impact On the 2024 Elections.” Retrieved from https://time.com/7131271/ai-2024-elections/
VikingCloud. (2024). “U.S. Election Security and Deepfake Audio Fraud: Heightened Risk for November 2024.” Retrieved from https://www.vikingcloud.com/blog/u-s-election-security-and-deepfake-audio-fraud-heightened-risk-for-november-2024
Washington Post. (2024). “See why AI detection tools can fail to catch election deepfakes.” Retrieved from https://www.washingtonpost.com/technology/interactive/2024/ai-detection-tools-accuracy-deepfakes-election-2024/
WilmerHale. (2024). “Limited-Risk AI—A Deep Dive Into Article 50 of the European Union’s AI Act.” Retrieved from https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240528-limited-risk-ai-a-deep-dive-into-article-50-of-the-european-unions-ai-act
Wiley Online Library (Policy & Internet). (2025). “A Teleological Interpretation of the Definition of DeepFakes in the EU Artificial Intelligence Act.” Retrieved from https://onlinelibrary.wiley.com/doi/full/10.1002/poi3.435
World Economic Forum. (2024). “Deepfakes: How India is tackling misinformation during elections.” Retrieved from https://www.weforum.org/stories/2024/08/deepfakes-india-tackling-ai-generated-misinformation-elections/
World Economic Forum. (2025). “Deepfakes are here to stay and we should remain vigilant.” Retrieved from https://www.weforum.org/stories/2025/01/deepfakes-different-threat-than-expected/
Workshop Proceedings ICWSM. (2024). “Slovakia as the Precursor to Deepfake-Enabled Election.” Retrieved from https://workshop-proceedings.icwsm.org/pdf/2024_67.pdf
