Meta is not the only company grappling with the rise of AI-generated content on its platform. In June, YouTube quietly implemented a policy change that allows individuals to request the removal of AI-generated or other synthetic content that mimics their face or voice. The change folds such requests into YouTube's privacy request process, expanding on the responsible AI agenda the company announced in November.
Rather than requiring such content to be flagged as misleading, as with deepfakes, YouTube now encourages affected parties to request its removal directly as a privacy violation. According to updated Help documentation, YouTube requires claims to be made by the affected person themselves, with exceptions for specific cases such as minors, individuals without computer access, or deceased persons.
Submitting a takedown request does not guarantee removal, as YouTube evaluates each complaint based on several criteria. Factors considered include whether the content discloses its synthetic or AI-generated nature, its potential to uniquely identify an individual, and whether it serves a purpose such as parody, satire, or public interest. The platform also weighs whether the content involves public figures or well-known individuals, particularly if it depicts sensitive activities like criminal behavior, violence, or endorsements during election periods.
Upon receiving a complaint, YouTube gives the uploader 48 hours to respond. If the content is removed within this timeframe, the complaint is closed; otherwise, YouTube conducts a review. The platform emphasizes that removal entails completely deleting the video from its site and, if applicable, removing personal details from the video’s title, description, and tags. While users can blur faces in their videos, they cannot simply make the video private to comply with removal requests, as the video could be reverted to public status at any time.
YouTube did not widely publicize this policy change, although it introduced a tool in March within Creator Studio for creators to disclose when content contains realistic-looking AI-generated media. Additionally, the platform is testing a feature allowing users to add crowdsourced notes providing context on videos, such as identifying parodies or flagging misleading content.
While YouTube continues to explore AI technologies, including generative AI for features like comment summarization and video recommendations, it stresses that labeling content as AI-generated does not exempt it from compliance with Community Guidelines. YouTube also clarifies that receiving a privacy complaint about AI content does not automatically result in Community Guidelines penalties for the original content creator.