In a video showing the aftermath of President Biden’s re-election, China invades Taiwan and migrants pour across the US-Mexico border. Former President Donald Trump is pursued on foot and caught by uniformed police officers in a series of photographs. Another image shows the Pentagon engulfed in flames as a result of an explosion.
What do these scenes have in common? None of them is real. Rapid advances in artificial intelligence are making it easier to create convincing videos and images that can mislead viewers and spread misinformation, posing a serious threat to political campaigns as the 2024 election cycle gets underway.
Campaigns have long used phony imagery. During the 2020 presidential campaign, Trump posted a doctored animation of Biden repeatedly sticking his tongue out with the tagline “Sloppy Joe.” In 2019, millions of people watched a slowed-down video that falsely made then-House Speaker Nancy Pelosi appear impaired.
What has changed is that the development of so-called generative AI systems, which can swiftly turn simple prompts into sophisticated-looking videos, images, music, and text, has made synthetic media far easier to produce. With millions of users now able to access such tools, campaign professionals expect 2024 to usher in a level of digital production and proliferation unlike any previous election season.
“It’s not going to create brand-new realms or types of disinformation that we haven’t previously imagined, but it will make it easier, faster, and cheaper to produce,” Teddy Goff, digital director for former President Barack Obama’s re-election campaign, said. “And the ramifications of that will be quite profound.”
The Republican National Committee was behind the video depicting a dystopian America if Biden is re-elected. Trump shared a doctored video of CNN anchor Anderson Cooper reacting to the former president’s appearance at a CNN town hall.
“That was President Donald J. Trump ripping us a new a—— right here on CNN’s live presidential town hall,” Cooper says in the manipulated video.
Trump also posted a parody of Florida Gov. Ron DeSantis’s glitch-filled presidential campaign launch on Twitter Spaces. The fake video features DeSantis, Twitter CEO Elon Musk, Democratic donor George Soros, former Vice President Dick Cheney, Adolf Hitler, and the devil, and appears to use AI-generated voice clones, including one of Trump himself, who interjects to say, “Hold your horses, Elon, the real president is going to say a few words.”
The Trump campaign and the Republican National Committee did not respond to calls for comment.
The speed at which AI can generate content could be a game changer. Rather than relying on consultants and digital specialists, campaigns can use AI to respond to events in real time at a much lower cost.
Democratic and Republican advisers are also experimenting with artificial intelligence and the viral chatbot ChatGPT as digital organizing tools, using them to help draft speeches, fundraising emails and text messages, and build voter files. Although campaigns still have to review and edit AI-generated content, the technology could drastically cut the time spent on day-to-day voter outreach.
Online watchdogs are concerned that the technology may be used for more sinister goals, such as disseminating misleading information about polling hours and locations, voter-registration deadlines, or how people can vote.
On the eve of the first round of Chicago’s mayoral election in February, staff for candidate Paul Vallas discovered a video circulating on Twitter. According to Brian Towne, Vallas’ campaign manager, it displayed Vallas’ photo and played a voice that sounded like his, appearing to condone police brutality.
The video did not go viral and so had no impact on the election, Towne said; Vallas won the February round but lost in a runoff. Still, Towne called the incident a dangerous precedent. He said he does not know who made the video, and that the campaign determined it was most likely created using AI.
“For an informed voter, the video eventually comes across as fabricated,” Towne explained. “However, there are a lot of uninformed or disengaged voters who may watch just a snippet of the video and become more inclined to vote against a candidate.”
Social media platforms generally have policies stating that they will remove misleading or manipulated content. But they sometimes carve out exceptions for misleading posts by candidates in the interest of allowing free political discourse, and enforcement of those policies can be uneven or delayed.
The emergence of generative AI systems has prompted tech leaders to call for a new labeling system that would let people determine whether a piece of content was generated by AI. The RNC’s video imagining Biden’s re-election included a disclaimer in small white text that read, “built entirely with AI imagery.”
“It’s quite obvious that users should not be subjected to random disinformation without some knowledge of who did it and where it came from,” said Eric Schmidt, former CEO of Google and chairman of a congressionally created commission on artificial intelligence.
Google and Microsoft, which has backed ChatGPT creator OpenAI, have both announced tools that would label AI-generated material with information about its provenance.
The White House has solicited public opinion on AI problems, including how to address risks to the election process, as a first step toward regulation.
Advocates say existing regulations may apply in some cases to the use of AI-generated content in elections. Under federal election law, candidates are not permitted to impersonate other candidates. Public Citizen recently petitioned the Federal Election Commission, which enforces campaign laws, to issue guidance stating that the provision would apply if one candidate used an AI-generated depiction of a rival in a campaign ad. The agency has not responded to the petition.
The FEC has no jurisdiction over ordinary social media users who might post a fake video that goes viral.
Robert Weissman, Public Citizen’s president, said the use of fake media should be declared “out of bounds.” “We are not actually prepared for the challenge,” he added.