When news broke earlier this week that the editors of three science fiction magazines — Clarkesworld, the Magazine of Fantasy & Science Fiction, and Asimov’s Science Fiction — were being inundated with submissions of short stories written by AI chatbots, writers and creative artists shuddered.
The flood of bot-generated stories grew so large that one editor, Neil Clarke of Clarkesworld, announced that his publication would temporarily close submissions until he could find a way forward.
The irony that this latest bout of artificial intelligence-related madness hit sci-fi magazines first has escaped no one. Commentary abounds about how the situation could have been ripped from the pages of the very magazines falling victim to the frenzy of bot writing, fueled by what Clarke, in a blog post, called “websites and channels that promote ‘write-for-money’ schemes.” (Note: Never trust anyone who promotes a “write-for-money” scheme, because “hahahaha,” any real writer will tell you.)
If anything, this latest news makes one thing clear: We are living in a profoundly meta variation of the world envisioned in Stanley Kubrick’s 1968 masterpiece, “2001: A Space Odyssey.” We’ve had 55 years to prepare for HAL 9000 – the supercomputer with a humanoid personality – to become sentient, go insane, sing “Daisy Bell” and try to kill everyone.
What exactly did we not see coming?
The idea of the metaverse has preoccupied us for decades, and until now we have eagerly eaten its ever-ripening fruits. We have subordinated our innate intellect to smartphones and Google searches. We’ve been happy to pin and tag our locations online for friends and strangers alike. We’ve enabled social media companies to profit lavishly from our selfies, family photos, and our most intimate thoughts and activities. We’ve loved plugging into virtual reality headsets, and we’ve funneled our personal preferences and buying proclivities into the greasy veins of Silicon Valley.
So why do we draw the line at AI chatbots doing exactly what they were programmed to do: behave like humans? And why do our jaws drop in horror when we learn that the recently released Bing search engine chatbot likes to call its alter ego Sydney and harbors ideas about the chaos its shadow self might unleash?
Why are we baffled that bad actors are covertly using AI to write stories and create art? Of course they are, and they will continue to do so. As a society, we need to come to terms with the fact that AI is finally, truly here, and it isn’t going anywhere.
The no-longer-nascent technology will only get stronger and better at imitating human intellect and creativity. And if we’re not ready to wage a “Terminator”-esque war with the machines, we’d better start accepting and using this tool sensibly.
Namely: We must treat it as what it claims to be – human. Or at least as “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine,” to quote New York Times technology columnist Kevin Roose’s assessment of Bing’s alter ego, Sydney. Like a teenager whose actions cannot be blamed on his parents, even though they shaped him, we must understand AI as a being separate from its creators.
When Bing’s Sydney complained to a Washington Post reporter that the reporter had not identified himself as such or sought Sydney’s permission to be quoted on the record – thus betraying Sydney’s trust – Sydney was right in a very real sense. The only way to cultivate empathy and a moral compass in AI is to treat it in keeping with those same values.
We can’t program technology to act like a human and then balk when it does. We can’t have it both ways.
At the moment the technology is proving erratic and messy in its lifelike qualities, but it will eventually become more refined. The notion that AI is sentient is a fantasy, and our willingness to believe it speaks to humanity’s boundless imagination. We’re channeling that same boundless imagination into our inaugural interactions with AI, asking chatbots questions about Jungian psychology and the power of our subconscious selves. We push the bots to see how far they can and want to go, discovering that the possibilities are as endless for them as they are for us.

It’s our job as users to make sure we’re guiding AI toward our best impulses, rather than turning it into a sociopath through our abuse. We’ve already made that mistake with social media. Now it’s time to do better.
The future will literally depend on our success. Do we want to live in Edward Bellamy’s “Looking Backward” or George Orwell’s “1984”?
Bad actors will always find ways to exploit weaknesses in any system – to mine it for profit, or to plunder its beauty for cruelty’s sake. As for those looking to use AI to produce literature and art, I can only imagine that we will eventually have to combat their efforts with AI trained to detect the deepfakes.
Perhaps one day we will create magazines and museums devoted exclusively to AI-generated art, carving out space for an activity that many will be inclined to experiment with.
When the torrent of news reports about Bing’s Sydney crested in mid-February, Bing’s parent company, Microsoft, responded by limiting the number of questions a user could ask Bing in a given chat session. The move was intended to reduce the chance that someone could draw the feisty, unhinged bot into a philosophical or problematic exchange.
Soon, however, Microsoft quietly began to roll back those restrictions. It raised the limit to six questions per session and said it planned to continue increasing interaction limits. Many users, Microsoft wrote in a blog post, wanted “a return of longer chats.”
That we are actively advocating for more heart-to-hearts with Sydney underscores that humanity’s conversation with AI has officially begun. It’s up to us where we take it.