
AI-powered deepfake detection is failing voters in the Global South


But it’s not just that the models can’t recognize accents, languages, syntax, or faces that are less common in Western countries. “A lot of the initial deepfake detection tools were trained on high-quality media,” Gregory says. But in much of the world, including Africa, the market is dominated by cheap Chinese smartphone brands that offer stripped-down features. The photos and videos these phones produce are of much lower quality, further confusing detection models, Ngamita says.

Gregory says some models are so sensitive that even background noise in a piece of audio, or compressing a video for social media, can result in a false positive or a false negative. “But those are exactly the circumstances you encounter in the real world: spotty detection,” he says. The free, public tools that most journalists, fact-checkers, and members of civil society are likely to have access to are also “the ones that are extremely inaccurate, in terms of dealing with both the inequity of who is represented in the training data and the challenges of dealing with this lower-quality material.”
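One way a researcher can sanity-check a fragile detector is to test whether its verdict survives exactly the kind of degradation Gregory describes. The sketch below is a minimal illustration of that idea, not any particular tool’s method: it assumes a hypothetical score_clip() function wrapping whatever detector is in use, plus ffmpeg on the system path, and simply checks whether one round of heavy recompression flips the answer.

```python
import subprocess
import tempfile

# Hypothetical detector interface: score_clip() stands in for whatever
# tool returns a probability that a clip is AI-generated.
def score_clip(path: str) -> float:
    raise NotImplementedError("plug a real detector in here")

def recompress(src: str, crf: int = 35) -> str:
    """Re-encode a video at low quality, roughly simulating what cheap
    phones and social-media pipelines do to footage before it spreads."""
    dst = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False).name
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst],
        check=True,
    )
    return dst

def verdict_is_stable(path: str, threshold: float = 0.5) -> bool:
    """Score a clip before and after one round of compression. A verdict
    that flips here reflects the encoding, not the content."""
    before = score_clip(path) >= threshold
    after = score_clip(recompress(path)) >= threshold
    return before == after
```

If the verdict flips under this check, the detector is reporting on the file’s encoding history rather than on whether the content was synthesized.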

Generative AI is not the only way to create manipulated media. So-called cheap fakes, or media manipulated by adding misleading labels or by simply slowing down or editing audio and video, are also very common in the Global South, but they can be wrongly flagged as AI-manipulated by faulty models or untrained researchers.
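To make concrete how little technology a cheap fake requires, the sketch below uses ffmpeg’s standard setpts and atempo filters to slow a clip to 75 percent of its original speed (the file name speech.mp4 and the speed factor are arbitrary stand-ins). No generative model appears anywhere in the pipeline, which is exactly why a score that measures “AI-generated or not” is the wrong instrument for this class of manipulation.

```python
import subprocess

# A "cheap fake" in one command: slow both video and audio to 75% speed.
# setpts stretches the video timestamps; atempo adjusts the audio tempo.
subprocess.run(
    [
        "ffmpeg", "-y", "-i", "speech.mp4",
        "-filter_complex", "[0:v]setpts=PTS/0.75[v];[0:a]atempo=0.75[a]",
        "-map", "[v]", "-map", "[a]",
        "slowed.mp4",
    ],
    check=True,
)
```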

Diya worries that the use of tools that are more likely to flag content from outside the US and Europe as AI-generated could have serious policy repercussions, encouraging lawmakers to crack down on imaginary problems. “There’s a huge risk in terms of inflating those kinds of numbers,” she says. And developing new tools isn’t a matter of flipping a switch.

As with any other form of AI, building, testing, and running a detection model requires access to power and data centers that simply aren’t available in much of the world. “If we’re talking about AI and local solutions here, it’s almost impossible without the computational side of things for us to be able to run any of the models we’re thinking of developing,” says Ngamita, who lives in Ghana. Without local alternatives, researchers like Ngamita are left with few options: pay for access to a commercial tool like the one offered by Reality Defender, the costs of which can be prohibitive; use inaccurate free tools; or try to gain access through an academic institution.

For now, Ngamita says his team has had to partner with a European university where they can send snippets of content for verification. Ngamita’s team has been compiling a dataset of potential deepfake instances from across the continent, which he says is valuable for academics and researchers who are trying to diversify their models’ datasets.
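The article doesn’t describe how that dataset is structured, but a minimal record for one suspected instance might look like the sketch below; every field name here is an illustrative assumption, not Ngamita’s actual schema. The metadata fields reflect the article’s own point: language, country, and device quality are precisely the dimensions underrepresented in existing training data.

```python
from dataclasses import dataclass, field

@dataclass
class SuspectedDeepfake:
    """One entry in a hypothetical catalog of suspected AI-manipulated
    media. All field names are illustrative, not an actual schema."""
    media_url: str              # where the clip was observed
    media_type: str             # "audio", "video", or "image"
    language: str               # e.g. "sw" for Swahili
    country: str                # ISO 3166-1 alpha-2 code, e.g. "GH"
    device_quality: str         # "low", "medium", or "high", if known
    claimed_manipulation: str   # what the clip is suspected of being
    verified: bool = False      # set once an external lab confirms
    notes: list[str] = field(default_factory=list)
```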

But sending data to someone else has its drawbacks, too. “The wait time is quite significant,” Diya says. “It takes at least a few weeks for someone to be able to confidently say that this is AI-generated, and by then, the damage is already done.”

Gregory says Witness, which has its own rapid-response detection program, is receiving a “massive number” of cases. “It’s already a challenge to manage them in the time that frontline journalists need and in the volume that they are starting to encounter,” he says.

But Diya says that focusing so much on detection could divert funding and support from organizations and institutions that contribute to a more resilient information ecosystem overall. Instead, she says, funding should go to media outlets and civil society organizations that can build a sense of public trust. “I don’t think the money is going to that,” she says. “I think it’s going more to detection.”
