AI-Powered Disinformation Threatens 2024 U.S. Presidential Election

National security experts warn that AI-powered disinformation could disrupt the 2024 US presidential election, making it difficult to distinguish truth from deception. Lawmakers are introducing bills to regulate AI-generated materials, with over 100 bills proposed in at least 39 states.

With the 2024 U.S. presidential election approaching, national security experts are sounding the alarm about the potential for AI to supercharge election disinformation campaigns and disrupt the democratic process. AI-generated content is being weaponized to target voters with personalized disinformation and microtargeting, making it increasingly difficult to distinguish truth from deception.

Why this matters: The ability of AI to spread disinformation on a massive scale poses a significant threat to the integrity of democratic elections and the trust of citizens in the political process. If left unchecked, AI-powered disinformation could lead to the manipulation of public opinion, erosion of trust in institutions, and destabilization of the political system.

A recent threat analysis by Microsoft revealed a network of Chinese-sponsored operatives using AI-generated content and social media accounts to gather intelligence and precision-target key voting demographics ahead of the election. Drew Liebert, director of the California Initiative for Technology and Democracy (CITED), warns, "Truth itself will be hard to decipher. Powerful, easy-to-access new tools will be available to candidates, conspiracy theorists, foreign states, and online trolls who want to deceive voters and undermine trust in our elections."

AI programs can produce and scale disinformation with remarkable speed and reach, making them potent tools for lone-wolf provocateurs, intelligence agencies, and foreign states seeking to deceive voters. A survey by the Polarization Research Lab found that 65% of Americans worry about personal privacy violations, 49.8% expect AI to negatively affect election safety, and 40% believe AI could harm national security.

Lawmakers are taking notice of the growing threat. More than 100 bills have been introduced in at least 39 states to limit and regulate AI-generated materials, including four measures being proposed in California. Republican state lawmaker Adam Neylon of Wisconsin emphasizes the need for action, stating, "As lawmakers, we need to understand and protect the public." He adds, "So many people are distrustful of institutions. That has eroded along with the fragmentation of the media and social media. You put AI into that mix, and that could be a real problem."

The use of AI to target individuals and small groups with personalized disinformation extends beyond elections. In a recent criminal case in Baltimore County, Maryland, a high school principal was framed as racist by a fake AI-generated recording of his voice. The case highlights the vulnerability of anyone to AI-powered attacks and the ease with which bad actors can carry them out.

Hany Farid, a professor at the University of California, Berkeley, warns that the technology will only become more powerful and accessible, including for video manipulation. He stresses the need for better regulation, calling the Maryland case a "canary in the coal mine." Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, emphasizes the importance of law enforcement action against criminal AI use, consumer education, and responsible conduct among AI companies and social media platforms.

A recent report by the RAND Corporation, titled "The 2024 U.S. Election: Trust and Technology - Preparing for a Perfect Storm of Threats to Democracy," outlines a scenario where a seemingly innocuous hack combined with a carefully timed AI-powered disinformation campaign could compromise voting machine security and bring down the entire election. Marek N. Posard, the military sociologist who led the RAND report, warns, "Our adversaries have gotten quite good at injecting themselves into our politics and basically recycling our partisanship at scale."

The third annual Summit on Modern Conflict and Emerging Threats at Vanderbilt University brought together national security experts to discuss these pressing issues. FBI Director Christopher Wray stated, "We fully expect in this election cycle to see more foreign adversaries using modern technology at a faster clip than we have in prior elections." Sheetal Patel, Assistant Director of the CIA's Transnational and Technology Mission Center, emphasized the high stakes, declaring, "The country that puts generative AI on their data first will win."

The threat of AI-powered disinformation poses a significant danger, and urgent action is needed to safeguard the integrity of elections and protect individuals from targeted attacks. Collaboration among governments, academia, and the private sector will be crucial in developing effective strategies to counter this emerging threat. The future of democracy may well depend on our ability to navigate this new terrain of truth and deception in the age of artificial intelligence.

Key Takeaways

  • AI-powered disinformation threatens the integrity of democratic elections and trust in institutions.
  • AI-generated content can be used to target voters with personalized disinformation and microtargeting.
  • 65% of Americans worry about personal privacy violations, and 49.8% expect AI to negatively affect election safety.
  • Lawmakers are introducing bills to limit and regulate AI-generated materials, with over 100 bills proposed in at least 39 states.
  • Experts warn that urgent action is needed to safeguard elections and protect individuals from targeted AI-powered attacks.