Washington DC-headquartered software industry group BSA The Software Alliance said that business-to-business and enterprise software services may not pose the same risk to user safety and public order as consumer-facing services, and that the government should consider content authenticity solutions.
Public policy solutions to the problem of deepfakes continue to elude policymakers, BSA said in a letter to the ministry of electronics and IT (MeitY) earlier this month.
The government plans to amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to include regulations for deepfakes.
MeitY had also sent an advisory to social media intermediaries in December last year mandating the identification and removal of misinformation and deepfakes within 36 hours.
Venkatesh Krishnamoorthy, country manager for India at BSA, said in the letter that MeitY should consider the differences in the role and function of intermediaries when prescribing obligations related to the spread of deepfakes. “All intermediaries do not have the same ability to address this issue and services provided by intermediaries may not pose the same kind of risk,” he said.
Business-to-business and enterprise software services pose limited risk to user safety and public order given the size of their user base and the fact that they do not provide services directly to consumers, Krishnamoorthy said.
Santosh Jinugu, partner in consulting firm Deloitte India, told ET that combating deepfakes needs a multifaceted approach with many mitigation strategies.
These include deploying digital watermarks, leveraging photoplethysmography (PPG) analysis to scrutinise blood flow in video pixels, harnessing convolutional neural networks (CNNs) for automated detection, and scrutinising facial characteristics for signs of fabrication.
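The digital watermarking strategy mentioned above can be illustrated with a minimal sketch. The following is not any vendor's actual implementation, only the classic least-significant-bit (LSB) technique applied to raw pixel values, showing how a hidden mark survives in content that looks unchanged to the eye:

```python
# Illustrative sketch (hypothetical, not from the article): embedding an
# invisible watermark in image pixel data via least-significant-bit (LSB)
# encoding. Real watermarking schemes are far more robust to editing.

def embed_watermark(pixels, mark_bits):
    """Hide a sequence of 0/1 bits in the lowest bit of each pixel value."""
    if len(mark_bits) > len(pixels):
        raise ValueError("watermark longer than cover data")
    out = list(pixels)
    for i, bit in enumerate(mark_bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least-significant bit
    return out

def extract_watermark(pixels, length):
    """Read back the first `length` hidden bits."""
    return [p & 1 for p in pixels[:length]]

# Example: hide the bit pattern 1,0,1,1 in four 8-bit pixel values.
cover = [200, 37, 118, 91]
marked = embed_watermark(cover, [1, 0, 1, 1])
print(extract_watermark(marked, 4))  # → [1, 0, 1, 1]
```

Each marked pixel differs from the original by at most 1, which is imperceptible in an 8-bit image, yet the bit pattern can be read back deterministically.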
Ashok Hariharan, cofounder of IDfy, a Mumbai-based identity verification, biometric and risk assessment company, said liveness solutions do a good job of detecting deepfakes using parameters like light reflections on the face, or by asking questions in real time in an agent-led journey.
“Unfortunately, these solutions are not an industry norm. Only a handful of companies have certifications like iBeta, which is the gold standard for liveness checks,” he said.
Regulators should encourage and mandate these checks and certifications to fight deepfakes, he said.
Krishnamoorthy suggested the use of watermarks for AI-generated content to help users differentiate between real and AI-generated content and prevent misinformation. An open-source standard developed by the Coalition for Content Provenance and Authenticity (C2PA) generates tamper-evident content credentials. This standard will help consumers decide if content is trustworthy and promote transparency around the use of AI, he said.
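The "tamper-evident" property of content credentials can be sketched in a few lines. This is not the actual C2PA format (which uses signed manifests and certificate chains); it is a simplified stand-in using an HMAC to bind hypothetical provenance metadata to a hash of the content, so that editing either one invalidates the credential:

```python
# Simplified sketch of a tamper-evident content credential (hypothetical,
# not the real C2PA manifest format). A signature binds provenance
# metadata to a hash of the content; changing either breaks verification.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # a real system would use asymmetric signatures

def issue_credential(content: bytes, metadata: dict) -> dict:
    claim = {"content_hash": hashlib.sha256(content).hexdigest(), **metadata}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {
        "claim": claim,
        "signature": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify_credential(content: bytes, credential: dict) -> bool:
    claim = credential["claim"]
    if claim["content_hash"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential(b"original image bytes", {"tool": "GenAI-Model-X"})
print(verify_credential(b"original image bytes", cred))  # → True
print(verify_credential(b"edited image bytes", cred))    # → False
```

The design point this illustrates is the one Krishnamoorthy raises next: the credential only protects consumers if platforms preserve it alongside the content rather than stripping it on upload.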
“It is important that content credentials, watermarks or metadata are preserved by platforms and not stripped. This will ensure that the public can see them whenever they are consuming online content,” he added.
On February 9, the C2PA announced that Google would join as a steering committee member and support content credentials, calling it a significant moment for bringing transparency to digital content everywhere.
Google will collaborate with other steering committee members Adobe, BBC, Intel, Microsoft, Publicis Groupe, Sony, and Truepic to develop the technical standard for content credentials.