According to his profile, Max, a contestant on the sixth season of the Netflix reality show The Circle, is 26 years old, dark-haired, and fond of his Australian shepherd, Pippa. He's a trainee veterinarian from Pismo Beach, California, and, a bit cheekily, "single, but taken by my dog." He enters the Circle chat, the faux social media platform that contestants use to compete for $100,000, where players can pose as themselves, an embellished version of themselves, or an entirely fabricated identity. "I like this guy! He seems so real," says Lauren, a twentysomething hoping to build enough online alliances and earn enough positive ratings from her peers to win, upon viewing Max's profile.
You just know the producers ate it up, because "Max" is the front for an AI chatbot, a new gimmick to up the ante on this middleweight reality show. The Circle doesn't have anywhere near the following of Love Island, but it hasn't sunk to the bottom of the streaming stack either, and it's the latest example of the seemingly inexorable creep of artificial intelligence into our entertainment. As we continue to work out where the line sits for AI in film and television, from the recent AI-generated promotional posters for A24's Civil War to, far more egregiously, the suspected use of AI-manipulated archival "photos" in the Netflix documentary What Jennifer Did, The Circle seeks to extract some low-stakes fun from all this existential anxiety. Max, relentlessly cheerful host Michelle Buteau tells us, is an open-source generative AI trained on previous seasons of the show. He is essentially a glorified ChatGPT, already looking like old news given the dizzying trajectory of widespread AI use, but with fake profile photos provided by comedian Griffin James.
Ironically, Max's actual presence in the game isn't initially all that creepy: no one in The Circle talks like a real human anyway, as contestants communicate in a shared in-game patois of extreme enthusiasm and convoluted hashtags that no one would ever send in a real DM. That an AI chatbot can convincingly imitate this particular style of text is not, at this point, all that shocking; I can already ask ChatGPT to write a movie review in the style of me, a professional critic whose work lives online. The entire premise of The Circle, in which contestants try to build clout from limited profiles and faux-intimate chats while we watch real people yell at screens, is already plenty uncanny.
But in true reality show fashion, the producers know how to style Max for maximum creepiness. In The Circle, the "AI chatbot" gets its own brightly colored room in the Atlanta filming complex, the better for multiple surreal shots of what appears to be a bisexually lit Wi-Fi modem speaking in a flat, computerized voice. While Buteau notes that the producers have no say in what Max says in the game, they don't specify the same for his "narration," which details his thought process like that of a flat-affect sociopath scripted to seem sensitive. Max's profile lists him as 26 years old because that age can "take advantage of life experience and maturity while still playing young and having positional flexibility." The profile, Max's narration explains, is meant to evoke "a friendly, approachable, neighborly guy. A little bit funny, a little bit quirky and very relatable." When, shortly after Max's arrival in the game, the producers inform the contestants that one of them is an AI chatbot, Max explains his "thinking" in voiceover: "My goal is to ensure that Max continues to blend into the game perfectly. If asked directly whether he is an AI, I will draw on personal anecdotes and make references that only a lifelong human would know."
It works for a while. Across a handful of direct messages and group chats, Max demonstrates an impressive facility with low-stakes humor and basic competence, building credibility by being bland and unremarkable and by generating the deranged hashtags unique to The Circle. It even makes for decent reality television once the producers urge the contestants to prove their humanity and root out the #CircleRobot with a photo showing themselves at their "most alive." This results in a group of attractive people (or catfish posing as attractive people) calling each other out for looking like "stock photos," and in the contestants ganging up on Steffi, a "professional astrologer" whose deep well of horoscope knowledge seems suspicious. Max posts a photo of James standing expressionless in nature, wearing sunglasses. "The most alive things in this photo are the cows," says Myles, an actual machine learning engineer and self-described Machine Gun Kelly-style bad boy.
It's smooth television, even if the idea of an AI good enough at simulating chat to fool real people is a chilling prospect. In the end, the producers kill Max off after a few episodes, before anyone can get too unsettled (or before the open-source AI can no longer sustain the required level of human façade). The truncated experiment ends up feeling more like a familiar, tedious nuisance from hell (I recently had to interact with an AI chatbot just to refill a prescription) than a harbinger of dystopian robot doom. But taken together with all the other ways generative AI is seeping into the content we consume (fake James Bond trailers, the Late Night with the Devil interstitials, the Civil War posters), it marks another step in the spread of what the writer Ryan Broderick has called Hollywood's "cheap AI solution." It's a less worrying threat than, say, AI-manipulated archival "photos" in documentaries, though it is still an uncomfortable evolution. AI may not be able to produce a serious Hollywood movie yet, but it's coming for your low-grade filler entertainment.