YouTube Cracks Down on AI-Generated Content
Meta is not the only company grappling with the rise of AI-generated content. In June, YouTube quietly introduced a policy that allows individuals to request the removal of AI-generated or other synthetic content that simulates their face or voice. The change builds on the responsible AI agenda the platform first announced in November.
Rather than requiring affected parties to frame such content as misleading, as with traditional deepfake takedowns, YouTube asks them to request removal as a privacy violation. The platform's updated Help documentation outlines the process, which generally requires first-party claims, with a few exceptions.
However, submitting a request does not guarantee removal. YouTube assesses each complaint against several factors: whether the content is disclosed as synthetic, whether it uniquely identifies a person, and whether it serves a valuable purpose such as parody or satire. The platform also considers whether the content depicts a public figure engaging in sensitive behavior or endorsing a product or political candidate.
YouTube gives the uploader 48 hours to act on the complaint. If the content is removed within that window, the complaint is closed; otherwise, YouTube initiates a review. Removal means fully deleting the video from the site and stripping the affected person's personal information from the title, description, and tags. Uploaders cannot simply make the video private to comply, since a private video could be set back to public at any time.
While YouTube did not extensively promote this policy change, the platform has been making efforts to address AI-related issues. In Creator Studio, creators can now disclose when content is made using altered or synthetic media, including generative AI. YouTube is also testing a feature that allows users to add crowdsourced notes to provide context on videos.
YouTube is not opposed to AI-generated content outright, but it emphasizes that labeling a video as AI-generated does not shield it from removal if it violates the Community Guidelines. Conversely, privacy complaints about AI material do not automatically result in penalties for the uploader, since privacy violations are handled separately from Community Guidelines strikes.
Overall, YouTube’s new policy on AI-generated content demonstrates the platform’s commitment to maintaining a safe and responsible environment for creators and users alike.