Where do we draw the line in the use of AI in television and film?

While last year’s writers’ and actors’ strikes in Hollywood were driven by myriad factors, including fair compensation and residual pay, one concern outweighed the rest: the encroachment of generative AI (the kind that can produce text, images and videos) on people’s livelihoods. The use of generative AI in the content we watch, from movies to television to masses of internet garbage, was a foregone conclusion; Pandora’s box had been opened. But the rallying cry, at the time, was that any protection secured against companies using AI to cut corners was a victory, even if only for a three-year contract, because the development, implementation and adoption of this technology would move very fast.

That wasn’t hyperbole. In the few months since the writers’ and actors’ guilds reached historic agreements with the Alliance of Motion Picture and Television Producers (AMPTP), the average social media user has almost certainly come across AI-generated material, whether they realized it or not. Efforts to curb pornographic AI deepfakes of celebrities have reached the notoriously recalcitrant and obtuse US Congress. The internet is now rife with misinformation and conspiracies, and the existence of generative AI has so eroded what was left of shared reality that an AI deepfake video of Kate Middleton seemed, to many, a not far-fetched conclusion. (For the record, it was real.) Hollywood executives have already tested OpenAI’s upcoming text-to-video program, Sora, prompting producer Tyler Perry to halt an $800 million expansion of his studio in Atlanta because “jobs are going to be lost.”

In short, many people are scared, or at best cautious, and rightly so. All the more reason to pay attention to the small battles around AI, and not through an apocalyptic lens. Because amid all the big stories about Taylor Swift deepfakes and a potential workplace apocalypse, generative AI has infiltrated film and television in smaller ways: some potentially creative, some potentially sinister. Even in recent weeks, numerous instances of AI used legally in and around creative projects have tested what the public will notice or accept, probing what is ethically acceptable.

There was a minor uproar on social media over AI-generated band posters in the new season of True Detective, following some viewers’ concern over similarly small AI-generated interstitial ads in the independent horror film Late Night With the Devil. (“The idea is that it’s so sad up there that a guy made the posters for a loser-metal festival for boomers with AI,” True Detective showrunner Issa López said on X. “It was discussed. At length.”) Both instances have that weird lacquered AI look, as in the AI-generated credits of Marvel’s 2023 show Secret Invasion. The same goes for promotional posters for the new A24 movie, Civil War, depicting American landmarks destroyed by a fictional internal conflict, such as a bombed Sphere in Las Vegas or the Marina Towers in Chicago, with characteristic AI inaccuracies (cars with three doors, etc).

How much fucking money does HBO have and this is the quality of the AI poster we are getting in the new season of True Detective? I can’t wait to see META\L on their third tour. pic.twitter.com/vRhmU5tT4l

– Joe Camel Enthusiast (@BroElector) January 22, 2024

There has been a negative reaction from moviegoers to the use of AI enhancement (different from generative AI) to sharpen – or, depending on how you look at it, oversaturate and ruin – existing films like James Cameron’s True Lies for new releases on DVD and Blu-ray. An obvious and openly labeled AI trailer for a fake James Bond movie starring Henry Cavill and Margot Robbie, neither of whom is part of the franchise, has, as of this writing, more than 2.6 million views on YouTube.

And, perhaps most worrying, the website Futurism reported on what appear to be enhanced or AI-generated “photos” of Jennifer Pan, a woman convicted in the 2010 murder-for-hire of her parents, in the new Netflix true-crime documentary What Jennifer Did. The photos, which appear around 28 minutes into the film, are used to illustrate Pan’s high school friend Nam Nguyen’s description of her “bubbly, happy, confident and very genuine” personality. Pan laughs, flashes a peace sign and smiles widely, with a noticeably overlong front tooth, oddly spaced fingers, misshapen objects and, again, that strange, overly bright sheen. Filmmaker Jeremy Grimaldi neither confirmed nor denied it in an interview with the Toronto Star: “Any filmmaker will use different tools, like Photoshop, in their films,” he said. “Jennifer’s photos are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source.” Netflix did not respond to a request for comment.

Grimaldi does not explain what tools were used to “anonymize” the background, nor why certain features of Pan appear distorted (her teeth, her fingers). But even if generative AI was not used, it is still a worrying revelation, in that it suggests a muddling of the truth: that these are old photographs of Pan, that a visual archive exists as such when it does not. If this is generative AI, that would amount to an outright archival lie. Such use would run directly against a set of best-practice guidelines just introduced by a group of documentary producers called the Archival Producers Alliance, which comes down in favor of using AI to lightly retouch or restore an image but advises against re-creating it, altering a primary source, or anything that could “change its meaning in ways that could mislead the audience.”

I think it is this last point (misleading the audience) that constitutes the growing consensus on which applications of AI are or are not acceptable in television and film. The “photos” from What Jennifer Did (in the absence of a clear answer, it is unclear what tools they were altered with) recall the controversy over snippets of Anthony Bourdain’s AI-generated voice in the 2021 documentary Roadrunner, which overshadowed a nuanced exploration of a complicated figure over a question of disclosure, or the lack thereof. The actual use of AI in that film was startling, but it revived evidence rather than creating it; the problem was how we found out, after the fact.

And here we are again, litigating small details whose creation feels deeply important to consider, because it is. An overtly AI-generated trailer for a fake James Bond movie is strange and, in my opinion, a waste of time, but at least its intent is clear. AI-generated posters in shows where an artist could have been hired feel like a corner cut, an inch given away, depressingly expected. AI used to generate a false historical record would be ethically dubious at best and downright manipulative at worst. Individually, these are all small examples of the line we are all trying to identify, in real time. Taken together, they make finding it feel more urgent than ever.
