As the artificial intelligence (AI) world evolves, YouTube has found itself wrestling with the surge of AI-generated content.
In a quiet policy update in June, YouTube began allowing individuals to request the removal of AI-generated or synthetic content that simulates their face or voice.
- YouTube policy now allows removal requests for AI-generated content.
- Removal decisions depend on several factors, including whether the content uniquely identifies the person.
- AI-generated content could still face removal for violating Community Guidelines.
YouTube Tackles AI-Generated Content with Policy Change
This move marks an extension of YouTube’s responsible AI initiative, first unveiled in November.
New Policy Favors Privacy Over Misinformation
YouTube’s revised policy shifts the focus from flagging misleading content, such as deepfakes, to addressing privacy violations directly. The updated guidance generally requires first-party claims, with a few exceptions: when the affected individual is a minor, lacks access to a computer, or is deceased.
However, submitting a takedown request does not guarantee removal. YouTube cautions that it will make its own determination on each complaint, weighing a range of factors.
YouTube’s Decision-Making Factors
YouTube will consider whether the content is indicated as synthetic or AI-generated, if it uniquely identifies a person, and if it could be classified as parody, satire, or other content of public importance.
Another consideration is whether the AI content features a celebrity or other prominent individual.
For example, does it depict them engaging in “sensitive behavior,” such as criminal actions, violence, or endorsing a product or political candidate?
This last point is particularly critical during election years when AI-generated endorsements could sway voter sentiment.
The Path to Content Removal
Upon receiving a complaint, YouTube gives the content’s uploader 48 hours to respond. If the disputed content is removed within this period, the complaint is considered settled.
However, if the content remains, YouTube will initiate a review. The platform warns uploaders that removal means the video is deleted entirely and that any personal information must also be scrubbed from its title, description, and tags.
Blurring faces in the video is an option, but merely setting the video to private will not suffice, as it could be made public again at any time.
YouTube didn’t widely publicize this policy shift. However, it launched a tool in March within Creator Studio that allows creators to indicate when content was created with altered or synthetic media, including generative AI.
More recently, it began testing a feature enabling users to append crowdsourced notes to videos, providing further context.
YouTube’s Perspective on AI and Privacy Complaints
YouTube is not opposed to AI and has even experimented with generative AI features of its own, such as a comment summarizer and a conversational tool. Still, it has cautioned that merely labeling content as AI-generated does not shield it from removal if it breaches YouTube’s Community Guidelines.
In the context of privacy complaints related to AI content, YouTube does not immediately penalize the original content creator.
Privacy violations are viewed separately from Community Guidelines. Hence, a privacy complaint does not automatically result in a strike.
Nonetheless, YouTube may act against accounts with a history of repeated violations, even if the content does not breach the Community Guidelines.