Experts have warned that generative AI is poised to dramatically change the information landscape, and that the problems that have long plagued tech platforms (such as misinformation and disinformation, scams, and hateful content) are likely to be amplified, despite the guardrails companies say they have put in place.
There are a few ways to tell if something was created or manipulated using AI: people or campaigns may have confirmed its use; fact-checkers may have analyzed and debunked something circulating in the world; or maybe the AI content is clearly being used for something like satire. Sometimes, if we’re lucky, it’s watermarked, meaning there’s something to indicate it was generated or modified by AI. But the reality is that this probably represents only part of what already exists. Even our own data set is almost certainly an undercount.
And that brings us to another problem. As the British journalist Peter Pomerantsev put it: “When nothing is true, everything is possible.” In an information ecosystem where anything could be generated by AI, it is easy for politicians or public figures to claim that something real is fake, a phenomenon known as the “liar’s dividend.” That means people are less likely to believe information even when it’s true. As for fact-checkers and journalists, many don’t have the tools to assess whether something has been created or manipulated by AI. Whatever this year brings, it will likely be just the tip of the iceberg.
But just because something is fake doesn’t mean it’s bad. Deepfakes have found a home in satire, chatbots can (sometimes) provide good information, and personalized outreach campaigns can make people feel seen by their political representatives.
It’s a brave new world, but that’s why we’re following it.
The chat room
As part of our AI project, we’re asking readers to submit any instances of generative AI they find in the wild this election year.
To get a better idea of how we will evaluate submissions (or even things we find ourselves), and to send one our way, check out this link. If you’re not sure whether something was created with generative AI or is just a run-of-the-mill fake, send it anyway and we’ll look into it.
💬 Leave a comment below this article.
WIRED reads
Want more? Subscribe now for unlimited access to WIRED.
What else are we reading?
🔗 TikTok says it removed influence campaigns originating in China: TikTok said last week it had removed thousands of accounts linked to 15 Chinese influence campaigns on its platform. (The Washington Post)
🔗 Ramaswamy urges BuzzFeed to cut jobs, add more conservative voices: Vivek Ramaswamy, the former Republican presidential candidate, is now an activist investor in BuzzFeed. He wants the publication to court conservative readers and to admit it “lied” in its reporting on Donald Trump and Covid, among other topics. (Bloomberg)
🔗 OpenAI creates oversight board, including Sam Altman, after dissolving safety team: The new board will make recommendations on safety and security, and will have 90 days to “further develop OpenAI’s processes and safeguards,” according to the company’s blog. (Bloomberg)
The download
One last thing! This week on the podcast, I spoke with our editor and host Leah Feiger about the AI Elections project. Give it a listen!
In addition to talking about the new project (can you tell I’m excited?), Leah and I were joined by Nilesh Christopher, who reported on the role of deepfakes in India’s elections for WIRED. The biggest takeaway: The Indian elections are wrapping up soon, and many of the country’s burgeoning generative AI companies are looking for new markets that might be interested in their tools, possibly coming to an election near you.
That’s all for today. Thanks again for subscribing. You can reach me by email and on X.