I confess to Hutchinson that if I were a politician, I would be afraid to use BattlegroundAI. Generative AI tools are notorious for “hallucinating,” a polite way of saying that they sometimes make things up out of thin air, or bullshit, to use the academic term. I ask him how he ensures that the political content BattlegroundAI generates is accurate.
“Nothing is automated,” he says. Hutchinson notes that BattlegroundAI’s copy is a starting point and that humans on campaigns must review and approve it before it goes live. “You may not have a lot of time or a huge team, but you’re definitely reviewing it.”
Of course, there is a growing movement opposed to AI companies training their products on art, writing, and other creative work without asking permission. I ask Hutchinson what he would say to people who might object to how tools like ChatGPT are trained. “Those are incredibly valid concerns,” he says. “We need to talk to Congress. We need to talk to our elected officials.”
I ask if BattlegroundAI is considering offering language models that are trained only on publicly available or licensed data. “I’m always open to that,” he says. “We also need to give people, especially those who are time and resource constrained, the best tools available. We want to have consistent results for users and high-quality data, so I think the more models that are available, the better for everyone.”
And how would Hutchinson respond to people in the progressive movement, who generally align themselves with the labor movement, who oppose the automation of ad writing? “Obviously, these are valid concerns,” he says. “These are the fears that come with the advent of any new technology: We were afraid of the computer, we were afraid of the light bulb.”
Hutchinson presses his point: He doesn’t see this as a replacement for human labor, but rather as a way to cut out the drudgery. “I worked in advertising for a long time, and there are a lot of elements that are repetitive and really drain creativity,” he says. “AI takes away the boring stuff.” He sees BattlegroundAI as a boon for overstretched and underfunded teams.
Taylor Coots, a Kentucky political strategist who recently began using the service, describes it as “very sophisticated” and says it helps identify target voter groups and tailor messages to reach them in ways that would otherwise be difficult for small campaigns. In contested races in gerrymandered districts, where progressive candidates are underdogs, budgets are tight. “We don’t have millions of dollars,” he says. “We’re looking for any opportunity we have to find efficiencies.”
Will voters care if the text of the digital political ads they see is generated with the help of AI? “I’m not sure there’s anything more unethical about AI generating content than about anonymous staff or interns generating content,” says Peter Loge, an associate professor and program director at George Washington University who founded a project on ethics in political communication.
“If you were to mandate the disclosure of all political writing produced with the help of AI, then you would logically have to mandate the disclosure of all political writing” (such as emails, advertisements, and op-eds) “that was not produced by the candidate,” he adds.
Still, Loge worries about AI’s effects on public trust at a macro level and how it could affect how people respond to political messages in the future. “One of the risks of AI is not so much what the technology does, but how people feel about it,” he says. “People have been faking images and making things up for as long as politics has existed. The recent attention to generative AI has increased already incredibly high levels of people’s cynicism and distrust. If everything can be fake, then maybe nothing is true.”
In the meantime, Hutchinson is focused on his company’s short-term impact. “We really want to help people now,” he says. “We’re trying to act as quickly as we can.”