By: Samuel Scott
Technological innovation has made its way into the headlines of this election season. Bentley Hensel, an independent Congressional candidate, is challenging Don Beyer, the Democrat who has held the seat for a decade, in Virginia’s 8th District race for the United States House of Representatives. The four candidates running for the seat met in September to debate their policy positions, but Beyer declined to participate in another debate before election day. In response, Hensel, a 32-year-old challenger and software engineer, created an AI chatbot of his opponent (nicknamed “DonBot”) to debate in Beyer’s place. The chatbot pulls information directly from Beyer’s official online documents to avoid hallucinations, otherwise known as false information invented by AI programs.
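Hensel has not published DonBot’s internals, but the approach described here, answering only from a candidate’s official documents, resembles what engineers call retrieval-augmented generation. A minimal Python sketch of the idea follows; the document text, function names, and the crude string-similarity measure are purely illustrative, not Hensel’s actual design:

```python
# Illustrative sketch of retrieval-grounded prompting: the model may only
# answer from excerpts retrieved out of official documents, reducing the
# risk of hallucinated positions. A real system would use embedding-based
# search instead of this crude string similarity.
from difflib import SequenceMatcher

# Stand-ins for a candidate's official press releases and position papers.
OFFICIAL_DOCUMENTS = [
    "Press release: The Congressman supports federal AI transparency rules.",
    "Position paper: Campaign ads must disclose any use of generative AI.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by rough textual similarity to the question."""
    ranked = sorted(
        docs,
        key=lambda d: SequenceMatcher(None, question.lower(), d.lower()).ratio(),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(question: str) -> str:
    """Constrain the language model to the retrieved excerpts."""
    context = "\n".join(retrieve(question, OFFICIAL_DOCUMENTS))
    return (
        "Answer ONLY from the excerpts below. If they do not address the "
        "question, say the position is not on record.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_prompt("What is your stance on AI disclosure in campaign ads?"))
```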
Although Congressman Beyer has not directly responded to the creation of an AI chatbot in his name, his campaign asserts that he is a leading voice in Congress on the regulation of artificial intelligence and that he is determined to pursue legislation that would prevent the use of AI to spread misinformation in United States elections. This debate strategy is prompting a discussion about the ethics and regulatory environment surrounding the use of AI in elections across the United States, a new area of concern that is drawing increased attention from lawmakers and citizens alike.
There is not a great deal of regulation surrounding generative artificial intelligence in elections, as the technology has only recently become accessible to the general population. Senate Bill 487, signed into Virginia law in 2024, directs the Joint Commission on Technology and Science to examine the use of artificial intelligence by public bodies in the Commonwealth and permits the creation of a Commission on Artificial Intelligence. However, there is no specific state legislation concerning the use of generative AI in Virginia elections. States like Alabama, Arizona, Idaho, Florida, New Mexico, Oregon, Utah, and Wisconsin are among the first to act, passing laws focused on regulating artificial intelligence in elections. These laws typically restrict artificial intelligence in campaign materials by mandating oversight, requiring disclaimers, and, in some cases, banning its use outright.
On the federal level, there is an effort to legislate around the constantly evolving world of artificial intelligence, but much of the proposed legislation has not made it out of Congress. Enacted in 1971, the Federal Election Campaign Act (“FECA”) regulates the raising and spending of money in United States elections. FECA has been amended over the years to impose limits on contributions to federal candidates and mandate disclosures in campaigns for federal office. Senator Amy Klobuchar of Minnesota is spearheading legislative proposals to regulate AI-generated election content. Although the proposed legislation enjoys some bipartisan support, it has stalled in Congress and will not pass before the November election.
Opponents of proposed AI regulations argue that broad rules may infringe on First Amendment rights to freedom of speech while stunting innovation in the artificial intelligence sector. Supporters counter that unchecked advancement of the technology presents unique risks to future elections. They cite a recent incident in which deepfake robocalls impersonating President Joe Biden targeted New Hampshire voters and urged them to refrain from voting in the state primary. Many fear this is just the beginning of malicious uses of generative AI in elections. The speculation is that if Congress fails to pass proactive legislation during the current session, it will be left with reactive measures that many consider “too little, too late.”
One proposed regulation is the Protect Elections from Deceptive AI Act, which would prohibit “the distribution of materially deceptive AI-generated audio, images, or video relating to federal candidates in political ads or certain issue ads to influence a federal election or fundraise.” The bill would allow candidates targeted by deceptive content to have it taken down and to seek damages. Exceptions include parody, satire, and the use of AI-generated material in news broadcasts. Another proposal, the AI Transparency in Elections Act of 2024, would amend FECA to provide greater transparency around AI-generated political advertisements by requiring applicable materials to disclose whether generative AI was used to create the images, audio, or video in an advertisement.
The current regulatory environment is largely driven by individual states. Congress continues to disagree over the appropriate legislative response to AI in elections, leaving the future of the landscape uncertain. States are regulating as they see fit, with approximately 151 state-level bills already in effect addressing deepfakes and deceptive media in elections. Not all of these regulations, however, specifically target generative AI.
As it relates to the first-ever AI debate between candidates in the United States, this means there was little action incumbent Congressman Beyer could take to prevent Hensel from debating the DonBot. As long as Hensel clearly stated that the chatbot was artificial intelligence and not actually Don Beyer, there was no legitimate legal recourse in Virginia. In preliminary testing, the DonBot proved reliable, conveying only positions Congressman Beyer has actually taken. Hensel acknowledges the challenge of defeating a popular incumbent who secured nearly three-fourths of the vote in the last election, but asserts that he built the model as an act of transparency. He hopes the debate will encourage a more open dialogue and let prospective voters see where each candidate stands.
Concern grew that the chatbot would fail to provide accurate responses and paint Beyer in a misleadingly negative light when OpenAI banned Hensel’s account upon learning of his political creation. OpenAI does not permit the use of its artificial intelligence models for political purposes. Hensel was forced to shift his design to Cloudflare’s AI platform, which hosts open-source models developed by Meta.
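Hensel has not detailed the new setup, but Cloudflare’s Workers AI service exposes Meta’s open Llama models over a simple REST endpoint. A minimal, hypothetical sketch of such a call in Python; the account ID, API token, and model slug are placeholders, and the response shape assumes Cloudflare’s publicly documented Workers AI API:

```python
import requests

ACCOUNT_ID = "YOUR_ACCOUNT_ID"            # placeholder
API_TOKEN = "YOUR_API_TOKEN"              # placeholder
MODEL = "@cf/meta/llama-3.1-8b-instruct"  # example Meta model slug on Workers AI

# Workers AI text-generation endpoint (per Cloudflare's public docs).
url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"

payload = {
    "messages": [
        {"role": "system",
         "content": "Answer only from the candidate's official documents."},
        {"role": "user",
         "content": "What is the candidate's stance on AI regulation?"},
    ]
}

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
# Workers AI wraps generated text in result.response for chat models.
print(resp.json()["result"]["response"])
```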
Despite the change in platforms, the chatbot performed rather well during the debate on October 17th. The only notable pitfalls were technical errors, primarily prolonged mid-sentence pauses and dropped portions of answers. The DonBot did not impersonate Beyer’s voice, and viewers were made well aware that artificial intelligence was in use. Although the debate concluded without significant issues, many still express concerns over the future of elections should chatbot technology advance exponentially. It is easy to discern a bot from a real candidate in November of 2024, but how quickly the two become indistinguishable remains to be seen.
The prevalence of artificial intelligence in elections marks a shift away from longstanding norms in campaign circles. States will continue to regulate while the federal government seeks a compromise that appeases both sides of the debate. Speed is a primary concern: lawmakers must avoid overregulating a developing technology sector that could produce extremely useful innovations across many facets of life, while still imposing meaningful safeguards against tainted elections at the state and federal levels. Lawmakers will observe the role AI plays this election season and tailor their legislation accordingly.
Student Bio: Samuel Scott is a second-year law student at Suffolk University Law School and staff writer on the Journal of High Technology Law. Samuel received a Bachelor of Science in Marketing with a minor in Political Science from Providence College in 2022.
Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.