Georgia political campaigns start to deploy AI but humans still needed to press the flesh

Arena, a group that trains Democratic-aligned campaign staff, held a summit in Atlanta in April that included a panel on the use of AI. The experts featured were, from left, Leah Bae, Sonya Reynolds, Ben Resnik and Betsy Hoover. (Ross Williams/Georgia Recorder)

Glenn Cook’s blog is a little different.

Every three or four days, a new essay appears on the blog of the Republican candidate for Georgia House District 180, covering topics of interest to voters like public safety, education and the environment.

A recent post advocates for community-based policing. It’s short on specifics about the coastal Georgia district he wants to represent but long on broad platitudes and noncontroversial solutions:

“Did you know that communities with strong social ties and trust in law enforcement tend to have lower crime rates? It’s true! Working hand in hand and nurturing good vibes among us is our secret to crafting spaces where safety shines for all,” the April 22 post reads.

A closer look at some of the images within the posts reveals more oddities, including police officers whose badges contain words in an unknown, concocted language.

Cook’s podcast, the Coastal Georgia Listener, is also a bit uncanny. There’s no opening song, no guests, just Cook’s voice reading a script similar to the blog posts.

Early adopter

Cook’s campaign is an early adopter of artificial intelligence. His blog posts are generated with the help of AI, says Robert Lee, founder of Lesix Media and advisor to Cook’s campaign, and the podcast is created with an AI-powered service allowing users to clone their voices.

“We have an editorial and a content creation process, where we provide parameters to our AI platform, which is called Content at Scale, and we put those parameters in, and it produces the content, it essentially drafts the content for us,” he said. “And then we have an editorial process internally with my staff, and then ultimately with Glenn as our client and the candidate, to review that content, make changes, make sure our voice is added to it.”

“We have an ethical responsibility to make sure that we help our clients build deeper human relationships with voters,” he added. “Because at the end of the day, the most important person in any election is the voter. It is their community, it is their government. So following that ethical principle, that responsibility we have, our goal is to make sure that our clients are willing to sign off on anything that truly reflects their view, their voice, and gives them a better ability to build a deeper relationship with voters. So we don’t just create things and say, ‘Hey, here it is, take it or leave it.’ It’s very much ‘All right, we’ve drafted this, now let’s put our human touch on it.’”

AI has been embraced in the business, computing and marketing worlds, but this year’s election is the first to see campaigns latching on to the new technology, Lee said.

“This is the first cycle where you’re finding people specifically apply it to political campaigns, but it’s still not widely adopted,” he said. “People are still a little afraid of what they don’t understand. And so you have agencies like us that are just full on using it in every way we possibly can following that principle we talked about. And I think in two years’ time, you will find these technologies being not just a valuable part of a campaign, but an integral and very necessary part of a political campaign, because campaigns will just have to use it to be able to keep up with the demands of news cycles and growing neighborhoods and changing communities.”

Lee said there is a risk of alienating voters with an approach that could be seen as impersonal, but using AI to create an online presence can free up time for talking to voters in the real world.

“There’s always that risk. I mean, I think there’s that risk with using mail over door knocking. There’s that risk with using more television over phone calls,” Lee said. “At the end of the day, you have to be able to use this technology to help you build deeper relationships with voters. I keep coming back to that, but that is the core of what it is that we do on political campaigns, is connect with people in their lives. And what we have seen is that our clients are knocking on more doors. They are making more phone calls. They are visiting with more people because they are not having to spend time replacing interns that just simply don’t exist.”

A bipartisan tool

Lesix’s website advertises that it will allow candidates to “dominate your political opponents with AI-powered Republican strategies,” but Democrats are hoping to capitalize on the fancy new tech as well.

Last week, Arena, a group that trains Democratic-aligned campaign staff, held a summit in Atlanta offering training ahead of the November election, and one of the first events was a panel on the use of AI.

Betsy Hoover, founder and managing partner at California-based Higher Ground Labs, which invests in political tech projects and supports Democratic causes, said AI could have a more immediate impact on state and local races rather than federal.

“When you think about content generation for a presidential campaign, you have a team of 40 people producing content,” she said.

“When you’re talking about a campaign that has three or four staffers on a local level, the option is not, like, staffer or AI, it’s like, AI or don’t have a digital plan,” she added. “Like, don’t have a digital program or have a very, very scaled-back digital program. And as we enter a cycle where our voters and our volunteers are communicating more online and more used to digital environments, everything’s happening on their phones even more than the last cycle, that’s where we have to reach them. And so how do we make that as efficient and accessible as possible for the candidates that can’t afford a big staff, maybe are challenging an incumbent in a much better-funded campaign environment and actually can play because they have these efficiencies at work?”

The panelists were less bullish than Lee in their opinion of AI’s importance in the next election, but they predicted it will have a greater impact in years to come and touted examples of ways it can already ease some of the more tedious political grunt work.

“As many of you are probably very familiar with, a great deal of work on entry-level comms stuff is just pulling press clips,” said Ben Resnik with Pittsburgh-based Zinc Labs. “You get up really early, you find the headlines, you format them, you send them into an email inbox and the senior leadership reads it. There’s a tool right now within the Higher Ground Labs’ portfolio called Chorus, which, among other things, promises to automate press clips. It can find, not just based on the keyword, but in terms of subject, what are things that your campaign is interested in, and automatically put that together and send it out.”

Resnik said campaigns can also use AI to take complicated legislation and put it into plain English and even pull out bullet points that could be of particular importance to different demographics, but he said it’s not yet time to let the robots off their leashes.

“Especially this cycle, and really for the foreseeable future, there is no application of generative AI where you can take a human fully out of the loop,” he said. “There needs to be a person editing, improving, quality checking every piece of content. There needs to be a person validating that the analysis that it’s doing, the things that it’s pulling out of that piece of legislation, is actually real, by, for example, asking for quotes, and then validate that those quotes actually exist.”

Limitations

Resnik was describing a phenomenon known as hallucination, in which an AI model confidently presents misleading or outright fabricated information as fact.

Arun Rai, a professor at Georgia State University, an expert on generative AI and a member of Georgia’s AI Advisory Council, said he is optimistic about AI’s potential for campaigns when it comes to tasks like collecting and analyzing data.

“For example, you could have alerts on events that are of interest to voters, that are of interest to communities that may not be on the radar otherwise, so the whole just information sourcing aspect, and you can think about opinion polls, the way they’re trending, issues that might have happened, or events that are happening in communities that candidates might want to be present at to understand how voters are feeling.”

But he said early adopters should beware of hallucinations and other potential drawbacks.

An AI-generated image made by Midjourney. It depicts an AI consultant advising a candidate.

A big drawback could be data privacy. AI products, especially free ones, often train on previous conversations, so whatever you type into one may come back as the answer to another user’s question. That could be a problem if you typed in sensitive information like donors’ phone numbers and home addresses.

Users could also wind up publishing material that is copyright protected or that is based on bigoted precepts or language if an AI was exposed to that kind of data in its training.

“And therefore, all of this is leading to one key point: it’s important that humans don’t fall asleep at the wheel,” Rai said.

Some companies offer either free or paid tools that purport to detect AI-generated text. Some of these are better than others, and some appear to create false positives to advertise further products that claim to make AI text undetectable. ElevenLabs, the company that produces the voice cloning service used by Cook, also offers a free AI speech classifier that says it can detect whether an audio clip was created using ElevenLabs.

“While these tools are there, there are no perfect tools,” Rai said. “Policies and regulation tend to be a catch-up game because the technology is moving so fast, and you don’t want to over-regulate to a point where the technology cannot be used constructively. Because some of these technologies, as we talked about, can also have a very productive value, so regulations are not going to be a complete solution, and it’s also going to be a slow solution. The platforms are going to do what they can. But I think it’s both on the disclosure and the detection side.”

The Georgia legislature considered a bill that would have banned AI-generated deepfakes of candidates intended to deceive voters within 90 days of an election, but the measure did not pass the Senate.

Despite the challenges, Rai said he’s hopeful people will use AI to increase human potential rather than diminish it.

“I’m not going to try to project what’s going to happen in 10 to 20 years other than say I don’t know,” Rai said. “Right now, I can tell you whatever I envision is going to be vastly different than what’s likely going to happen. But I do see this technology with unbelievable potential because of what it does to realizing human and organizational potential. The reason I see it most powerful is that it can help individuals realize their potential in ways that we haven’t been able to because of socioeconomic inequality, because of other constraints, I’m not good at X and therefore I’m held back on doing something. It can become a real partner and push us to be the best versions of ourselves constantly, as organizations and individuals.”

This story was provided by WABE content partner The Georgia Recorder.