Google-owned YouTube has announced a crackdown on artificial intelligence (AI)-generated content that “realistically simulates” deceased minors, or victims of deadly or well-documented major violent events, describing their death or the violence they experienced.
The company has updated its harassment and cyberbullying policies to clamp down on such content, and said it will begin issuing strikes against it starting January 16, IANS reported.
The policy change comes as some content creators have been using AI to recreate the likeness of deceased or missing children, giving the child victims of high-profile cases a childlike “voice” to describe their deaths.
A Washington Post report recently revealed that content creators have used AI to narrate the abductions and deaths of children, including British two-year-old James Bulger.
“If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link,” said YouTube.
“If you get three strikes within 90 days, your channel will be terminated,” the company added.
In September last year, the Chinese-owned short-video platform TikTok introduced a feature allowing creators to label their AI-generated content, disclosing when they post synthetic or manipulated media that depicts realistic scenes.
