
Could AI-generated content be dangerous for our health?


Let’s talk science fiction.

Neal Stephenson’s 1992 novel Snow Crash is the book that launched a thousand startups. It was the first book to use the Hindu term avatar to describe a virtual representation of a person, it coined the term “metaverse”, and it was one of Mark Zuckerberg’s required readings for new executives at Facebook, a decade before he shifted the focus of the entire company to trying to turn Stephenson’s fictional world into reality.

The plot revolves around an image that, when viewed in the metaverse, hijacks, maims, or kills the viewer. Within the fiction of the world, the image crashes the brain, feeding it an input that simply cannot be processed safely.

It’s a recurring idea in science fiction. Perhaps the first clear example came four years earlier, in the British SF writer David Langford’s short story BLIT, which imagines a terrorist attack using “basilisks”: images that contain “implicit programs that human equipment cannot safely execute”. In a sequel to that story, published in Nature in 1999, Langford draws parallels with earlier works, even citing Monty Python’s Flying Circus, “with its famous sketch about the funniest joke in the world, which makes everyone in the audience laugh themselves to death”.

The collaborative fiction project SCP coined a name for such ideas: a cognitohazard. An idea that can be harmful merely to think about.

And one question that deserves to be taken increasingly seriously is: are cognitohazards real?

What you know can hurt you

What if labeling isn’t enough? An AI-generated image of Donald Trump with black voters. Photo: Mark Kaye/Twitter

I started thinking about that question this week, as part of our coverage of efforts to automatically identify deepfakes in a year of elections around the world. Ever since I first heard the term in 2017, in the context of face-swapped porn, it has been possible to identify AI-generated images through careful inspection. But that task has become steadily harder, and we are now at the point where it is beyond even experts in the field. So it’s a race against time to build systems that can automatically detect and label such material before it crosses that threshold.

But what if labeling isn’t enough? From my story:

Seeing a watermark doesn’t necessarily have the desired effect, says Henry Parker, head of government affairs at fact-checking group Logically. The company uses both manual and automatic methods to check content, Parker says, but labelling can only go so far. “If you tell someone they are watching a deepfake before they even watch it, the social psychology of watching that video is so powerful that they will still refer to it as if it were fact. So all you can do is ask: how can we shorten the time this content is in circulation?”

Can we call such a video a cognitohazard? Something so convincingly realistic that you can’t help but take it as reality, even when you’re told otherwise, seems to fit.

Of course, that also describes a lot of fiction. A horror story that sticks with you and leaves you unable to sleep at night, or a viscerally unpleasant scene of graphic violence that makes you feel physically unwell, could be a cognitohazard if the definition is stretched that far.

The dominoes are falling

Pong Wars, an automatically playing ‘game’ of Breakout that you will watch for much longer than is wise. Photo: Koen van Gilst

Perhaps closer to the examples from fiction are techniques that hijack not our emotions but our attention. After all, we rarely have our emotions under control at the best of times; feeling something you don’t want to feel is almost the definition of a negative emotion.

Attention should be different. It is something we have conscious control over. We sometimes talk about being ‘distracted’, but more serious attention deficits justify increasingly medicalized language: ‘obsession’, ‘compulsion’, ‘addiction’.

The idea of technology attacking our attention isn’t new, and there’s a whole concept of the “attention economy” underlying that barrage. In a world of ad-supported media, where companies increasingly compete not directly for our money but for our time, which is inherently limited to just 24 hours a day, there is enormous commercial motivation to attract and hold attention. Some of the tools of the trade developed to achieve that goal certainly feel like they’re tapping into something primal. The bold red dots of new notifications, the tactility of a pull-to-refresh feed, and the constant push for gamification have all been discussed at length.


And some, I think, have crossed the line into becoming real cognitohazards. While they may only be dangerous to people prone to having their attention hijacked, the compulsion feels real.

One of these is a type of game: “clicker” or “idle” games, such as the critically acclaimed Universal Paperclips, condense a game’s reward mechanisms into their simplest structures. So named because they almost literally play themselves, idle games present a dizzying array of timers, countdowns, and upgrades, constantly dangling a breakthrough, improvement, or efficiency just a few seconds away. I’ve lost entire days of productivity to them, as have many others.
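The reward loop those games rely on is simple enough to sketch in a few lines. The following is a minimal, illustrative model (not the mechanics of Universal Paperclips or any specific game, and all names here are invented): resources accrue automatically each tick, and exponential upgrade pricing keeps the next milestone perpetually a short wait away.

```python
class IdleGame:
    """A toy idle/clicker reward loop: resources accumulate on their own,
    and each upgrade is priced just out of reach so a 'breakthrough'
    always feels a few seconds away."""

    def __init__(self):
        self.resources = 0.0
        self.rate = 1.0        # resources generated per tick
        self.upgrade_level = 0

    def upgrade_cost(self) -> float:
        # Exponential pricing: each upgrade costs 1.5x the previous one,
        # the classic treadmill that keeps the player waiting and watching.
        return 10 * (1.5 ** self.upgrade_level)

    def tick(self):
        self.resources += self.rate
        # The game almost plays itself: buy upgrades as soon as affordable.
        while self.resources >= self.upgrade_cost():
            self.resources -= self.upgrade_cost()
            self.upgrade_level += 1
            self.rate *= 1.25  # each upgrade compounds the income rate


game = IdleGame()
for _ in range(100):
    game.tick()
print(f"level {game.upgrade_level}, rate {game.rate:.2f}/tick")
```

Run it for more ticks and the numbers balloon, which is the whole trick: progress never stops, so neither does the urge to check on it.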

Another is a type of content, what I’ve come to think of as “domino videos”: the non-interactive equivalent of an idle game. A video of a process that unfolds in an orderly yet not entirely predictable way, drawing you in and creating an inexorable urge to watch to the end. Sometimes it’s literally a domino run; other times it might be someone methodically cleaning a carpet or depilling a sweater. Sometimes the process can never be completed: Pong Wars is an automatically playing “game” of Breakout in which two balls each threaten to invade the other’s space. It never ends, but you keep watching it for longer than it’s worth.
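The mechanic itself is tiny. Here is a toy re-implementation of the idea (the real Pong Wars is Koen van Gilst’s project; this sketch, with invented grid sizes and ball positions, only mirrors the core rule): two balls bounce around a grid, each converting the other side’s cells to its own colour, so territory swings back and forth and the game never resolves.

```python
# Toy Pong Wars: a grid split between two teams (0 and 1), and two balls
# that each flip enemy cells to their own team as they bounce.
W, H = 16, 16
grid = [[0 if x < W // 2 else 1 for x in range(W)] for y in range(H)]

balls = [
    {"x": 4, "y": 8, "dx": 1, "dy": 1, "team": 1},    # invades team 0's half
    {"x": 11, "y": 8, "dx": -1, "dy": -1, "team": 0},  # invades team 1's half
]

def step():
    for b in balls:
        nx, ny = b["x"] + b["dx"], b["y"] + b["dy"]
        # Bounce off the walls.
        if not 0 <= nx < W:
            b["dx"] *= -1
            nx = b["x"] + b["dx"]
        if not 0 <= ny < H:
            b["dy"] *= -1
            ny = b["y"] + b["dy"]
        if grid[ny][nx] != b["team"]:
            # Hitting enemy territory converts the cell and bounces the ball.
            grid[ny][nx] = b["team"]
            b["dx"] *= -1
            b["dy"] *= -1
        else:
            b["x"], b["y"] = nx, ny

for _ in range(500):
    step()
score = sum(cell for row in grid for cell in row)
print(f"team 1 holds {score} of {W * H} cells")
```

Because every conversion also reverses the ball, neither team can ever hold the whole board for long: the “score” oscillates indefinitely, which is exactly what makes it impossible to look away from.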

There’s a chance this is as bad as it gets. It may be that there is something inherently off-putting about genuine attention traps, meaning the urge to stare at them as progress unfolds will always be countered by shame or disgust at wasting time.

But what if that’s not the case? What does it look like when generative AI is unleashed on social media to capture attention on a truly industrial scale? When the advice parents give to young children is not only to be careful who they speak to on the internet, but also to be wary of what they even look at?

Everything is science fiction until it becomes reality.

The broader TechScape

An image from Google’s blog post called Inceptionism. Photo: Google

Join Alex Hern for a Guardian Live online event on AI, deepfakes and elections, on Wednesday 24 April at 8pm BST. Book tickets here.
