ARTIFICIAL Intelligence (AI) deepfakes have emerged as a significant concern, particularly affecting celebrities, as highlighted by a recent incident involving the unauthorised use of Tom Hanks’ AI-generated face in a dental advertising video.
In response to the growing threat of AI manipulation, YouTube has implemented a policy mandating video creators to disclose the use of AI tools to produce altered or synthetic content that convincingly mimics reality.
YouTube announced the new policy on Nov 14 in an official blog post. YouTube’s Product Management Vice-Presidents Jennifer Flannery O’Connor and Emily Moxley explained that the requirement specifically pertains to AI-generated videos that realistically depict events that never occurred or feature individuals saying or doing things they never did.
YouTube underscores the significance of flagging such content, especially considering its potential impact on sensitive topics like elections, ongoing conflicts and public health crises.
The policy’s implementation will involve two key elements to inform viewers. Firstly, a new label will be added to the description panel of videos, indicating that the content has been altered or is synthetic.
Secondly, for particularly pertinent content, a more visible label will be directly incorporated into the video player.
Acknowledging the limitations of labels in certain cases, the YouTube executives emphasise their commitment to taking direct action when necessary. In instances where the label may not sufficiently mitigate harm, YouTube will remove the content, regardless of whether it has already been flagged.
The scope of the policy extends beyond videos, with YouTube indicating its intention to collaborate with music partners in addressing AI-generated music content. This includes music that imitates an artiste’s distinctive singing or rapping voice.
By broadening its focus to various forms of content, YouTube aims to proactively combat the misuse of AI technology and enhance transparency for its user base.
In India, a fake video of Bollywood actress Rashmika Mandanna purporting to show her wearing a low-cut top has triggered calls for AI regulation in the country.
Mandanna wrote on X, formerly Twitter, saying that she was “really hurt” after a manipulated video showing her face on the body of British-Indian Instagram influencer Zara Patel was widely circulated on social media.
“We need to address this as a community and with urgency before more of us are affected by such identity theft,” Mandanna wrote, calling it “extremely scary” how vulnerable we all are to technology being misused.
Mandanna, who has 4.7 million followers on X, added that she was thankful for “my family, friends and well-wishers who are my protection and support system”.
“But if this happened to me when I was in school or college, I genuinely can’t imagine how I could ever tackle this.”
Patel said she was not involved in the video’s creation and was also “deeply disturbed and upset” by it.
“I worry about the future of women and girls who now have to fear even more about putting themselves on social media,” Patel said in a post to her 450,000 fans.
Social media is hugely popular in India, the world’s largest democracy, but inflammatory posts peddling lies have stoked political divides and have been accused of inciting deadly religious riots.
India’s information technology minister Rajeev Chandrasekhar wrote on X on Monday that such deepfake videos were “dangerous and damaging” forms of misinformation, but warned that they must “be dealt with by platforms”.
Bollywood superstar Amitabh Bachchan called it a “strong case” for action.