As AI tech continues to advance into filmmaking and video editing, Netflix has become the first big player I’m aware of to try to impose some order. It’s published a fairly detailed set of rules for generative AI use in content production, which studios will need to follow when they contribute to films or series for the streaming platform.
On the face of it, any attempt to impose some clarity on what use of generative AI is acceptable and what isn’t is welcome. But Netflix’s norms also highlight the complexity and sometimes hypocritical contradictions that plague the development of AI in the industry – as Disney’s AI setbacks also showed.
It makes sense that Netflix would be the first to draw up rules since it was also the first to use AI in VFX for a big series (or at least to admit to it). Last month, it confirmed that it, or rather Argentina’s K&S Films and CONTROL Studio, used AI for a scene in The Eternaut.
The use of AI was for a single two-second scene – a building collapse in the last episode that would have otherwise been impossible on the series’s budget. As shown in the reel above, CONTROL delivered around 200 well-crafted VFX shots for the series, so we’re hardly talking AI slop.
But the case quickly raised concerns about a slippery slope down towards a pit of AI-generated content. As with the debate we’ve seen about the use of AI in video games, the questions become: how much AI is OK, and what type of AI is OK?
According to Netflix’s AI guidelines, use of the tech will generally be fine for research, storyboarding and other non-final assets, provided that outputs don’t substantially recreate identifiable characteristics of unowned or copyrighted material or otherwise infringe on copyright-protected works; that generative tools do not store, reuse or train on production data inputs or outputs; and that GenAI is not used to replace or generate new talent performances without consent.
In cases that meet those criteria, the platform says that “socializing the intended use with your Netflix contact may be sufficient.” In other cases, it doesn’t rule out the use of AI, but says collaborators should “escalate to your Netflix contact for more guidance before proceeding, as written approval may be required.”
Scenarios that could require written approval include using Netflix’s own data, personal information or third-party material; generating key creative elements such as main characters or settings; referencing copyrighted materials or likenesses of public figures or deceased individuals; or making substantial changes to performances.
The guidelines give one specific example of a use of AI that would need permission: generating a second killer doll to play the red light/green light game with Young-hee in Squid Game. As for changing performance, as well as permission from Netflix, creators would be expected to follow acting guild norms.
The guidelines also urge caution when making changes that affect a performance’s emotional tone, delivery, or intent, as “even subtle edits may have legal or reputational implications” – so someone may have noticed all those times when Netflix’s AI use has led to roastings on social media.
The guidelines highlight some of the hypocrisy that major players face when they accept some AI use that may be based on models trained on copyrighted material, while strictly forbidding their own copyrighted material from being used for training. In a nutshell, it sounds like Netflix is basically saying ‘ask us for permission and it will generally be OK, as long as you’re only stealing other people’s work, not ours’.
Companies like Netflix and Disney are caught between a rock and a hard place. They fear getting rightly criticised when they turn out AI slop like Lucasfilm’s AI Star Wars movie, but they’re more terrified that they could eventually be annihilated by new competition that couldn’t care less about the optics – like the so-called Netflix of AI, Showrunner. Netflix might find it hard to walk a line down the middle.