
Kids’ Cartoons Get a Free Pass From YouTube’s Deepfake Disclosure Rules

by Elijah

YouTube has updated its rulebook for the age of deepfakes. Starting today, anyone uploading video to the platform must disclose certain uses of synthetic media, including generative AI, so viewers know what they’re seeing isn’t real. YouTube says the requirement applies to “realistic” altered media, such as “making it look like a real building is catching fire” or “swapping one person’s face with another person’s.”

The new policy shows that YouTube is taking steps that could help combat the spread of AI-generated misinformation as the US presidential election approaches. It’s also notable for what it allows: AI-generated animations aimed at children are not subject to the new synthetic content disclosure rules.

YouTube’s new policy completely excludes animated content from the disclosure requirement. That means the emerging scene of get-rich-quick AI content creators can keep churning out videos aimed at children without having to reveal their methods. Parents concerned about the quality of hastily assembled nursery rhymes will be left to identify AI-generated cartoons on their own.

YouTube’s new policy also says that creators don’t have to flag the use of AI for “minor” edits that are “primarily aesthetic,” such as beauty filters or video and audio cleanup. Using AI to “generate or enhance” a script or subtitles is also permitted without disclosure.

There’s no shortage of low-quality content on YouTube made without AI, but generative AI tools lower the bar for producing video at greater speed and volume. YouTube’s parent company, Google, recently said it was tweaking its search algorithms to curb a flood of AI-generated clickbait made with tools like ChatGPT. Video generation technology is less mature, but it is improving rapidly.

A long-standing problem

YouTube is a children’s entertainment giant, eclipsing competitors like Netflix and Disney. The platform has struggled in the past to moderate the vast amount of content aimed at children. It has come under fire for hosting content that superficially looks appropriate or appealing to children, but upon closer inspection contains unsavory themes.

WIRED recently reported on the rise of YouTube channels targeting children that appear to be using AI video generation tools to produce sloppy videos with generic 3D animations and off-kilter renditions of popular nursery rhymes.

The exception for animation in YouTube’s new policy could mean parents won’t be able to easily filter such videos out of search results, or stop YouTube’s recommendation algorithm from autoplaying AI-generated cartoons after they set up their child with popular, thoroughly vetted channels such as PBS Kids or Ms. Rachel.

Some problematic AI-generated content aimed at children will have to be flagged under the new rules. In 2023, the BBC investigated a wave of videos targeting older children that used AI tools to promote pseudoscience and conspiracy theories, including climate change denial. These videos imitated conventional live-action educational videos, showing, for example, the real pyramids of Giza, so that unsuspecting viewers might mistake them for factually accurate educational content. (The pyramid videos went on to suggest that the structures could generate electricity.) The new policy’s disclosure requirement would apply to videos like these.

“We require children’s content creators to disclose content that has been meaningfully modified or synthetically generated if it appears realistic,” said YouTube spokesperson Elena Hernandez. “We do not require disclosure of content that is clearly unrealistic and does not mislead the viewer into thinking it is real.”

The dedicated children’s app YouTube Kids is curated using a combination of automated filters, human review, and user feedback to find well-crafted children’s content. But many parents simply use the main YouTube app to find content for their children, relying on video titles, listings, and thumbnails to judge what’s appropriate.

So far, most of the ostensibly AI-generated children’s content WIRED has found on YouTube is shoddy in ways similar to more conventional, low-effort children’s animation: ugly visuals, incoherent plots, and no educational value to speak of. But these videos are not uniquely ugly, incoherent, or pedagogically worthless.

AI tools make it easier to produce such content, and in greater volume. Some of the channels WIRED found upload lengthy videos, some well over an hour long. Requiring labels on AI-generated children’s content could help parents filter out cartoons that may have been published with minimal human review, or none at all.
