Kyle Kondik and Carah Ong Whaley have a warning, “How generative AI tools can make campaign messages even more deceptive,” up at Sabato’s Crystal Ball. The messaging tools they describe are available to any political party. But it’s the Democrats who should worry about them the most, given the moral decay of a GOP that was not all that reluctant to deceive voters in the first place. Some of Kondik and Whaley’s observations:
Winter is coming. We are rapidly moving from “alternative facts” to artificial ones in politics, campaigns, and elections.
In July, a campaign ad from Never Back Down, a group that supports Gov. Ron DeSantis (R-FL) in the 2024 presidential race, attacked former President Trump. The ad featured a soundbite of what sounds like former President Trump’s voice. But it wasn’t. Generative Artificial Intelligence (Gen AI) is a tool that is used by humans, but it poses several dangers to elections and to democracy. Leading into the 2024 election, we are already seeing the use of “deepfakes,” computer-created manipulation of a person’s voice or likeness using machine learning to create content that appears real. We spoke with UVA Today about the challenges deepfakes pose to free elections and democracy, and we are sharing some key points that we made in the piece:
Candidate comments out of context, and doctored photos and video footage, have already been used for decades in campaigns. What Gen AI tools do is dramatically increase the ability and scale to spread false information and propaganda, leaving us numb and questioning everything we see and hear at a time when elections are already facing a crisis of public confidence. Such tools also open up the ability to spread mis-, dis-, and malinformation to any person in the world with a digital device. On top of that, depending on how Gen AI tools have been trained, they can amplify, reinforce, and perpetuate existing biases, with impacts on decision-making and outcomes.
Most of the public is, at best, only dimly aware of what is coming, and even those who deploy these weapons may not be well-informed about the potential for harm they bring to our politics. As the authors note further:
For some voters, exposure to certain messages might suppress turnout. For others, even worse, it could stoke anger and political violence. It’s worth noting here it’s not just the United States having elections in 2024 — there are some 65 elections across 54 countries slated for 2024. So, the potential harms extend globally. I am especially concerned about the use of AI for voter manipulation, not just through deepfakes, but through the ability of Gen AI to be microtargeting on steroids through text message and email campaigns. Indeed, Sam Altman, the CEO of OpenAI, stated in testimony before the Senate Judiciary Committee that spreading one-on-one interactive disinformation was one of his greatest concerns about the technology.
With significant changeover in leadership at social media companies, especially at X (formerly Twitter), policy and technical teams may not be fully prepared to detect, assess, and prevent the proliferation of mis-, dis-, and malinformation across platforms. This is particularly troubling given that malinformation online and organizing online can spill over into political violence in the real world. Think Charlottesville 2017 or Jan. 6, 2021 at the U.S. Capitol, but much, much worse.
Democratic researchers are looking into the threat, and it is encouraging, as Kondik and Whaley note, that “Congress and the White House are deliberating how to balance the harms and advantages of Gen AI” and that “Seven leading tech companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — recently signed a voluntary commitment with the Biden administration to manage risks created by AI.” These companies have promised “robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system,” but it would be folly for Democrats to bet their survival on such agreements being honored.
There are ways Democrats could use these AI tools under integrity guidelines, and Dems should explore the possibilities. Budget-strapped campaigns, however, may not have the resources to hire the needed talent, a pool likely to be limited and in high demand in the months ahead.
Kondik and Whaley urge that “candidates, campaigns and PACs, issue groups, etc.” be required “to report the use of Gen AI, in the same way they are required to report campaign expenditures or lobbying activities. Candidates and campaigns should also be required to clearly label not just videos, but also emails and text messages microtargeting different demographic groups.”
The Federal Election Commission is also chewing on a range of such proposals and is taking public comment through October 16th of this year. The University of Virginia Center for Politics is also soliciting ideas from the public, which can be emailed to clo3s@virginia.edu. Kondik and Whaley flag several episodes of their “Politics Is Everything” podcast: “A Regulatory Regime for AI? ft. Congresswoman Yvette Clarke”; “Neverending Cat and Mouse: Are Online Companies Prepared for 2024 Elections? ft. Katie Harbath”; “Saving Democracy from & with AI ft. Nathan Sanders”; and “How Congress Is Addressing the Harmful Effects of AI ft. Anna Lenhart.”
They also direct interested readers to Bryan McKenzie’s “Is That Real? Deepfakes Could Pose Danger to Free Elections” at UVA Today.
These are good resources. Democrats, however, would do well to remember that Republicans are unlikely to have deeply felt internal debates about the morality of using the new AI tools. They are probably already busy planning how to deploy them in forthcoming campaigns at the federal, state, and local levels. Of particular concern will be roll-outs of outrageous fakes and audiovisual distortions in the final days of campaigns in swing states and districts, timed so that an effective response comes too late. To have no plan for dealing with such an onslaught would be political negligence with potentially dire consequences.