After meetings with technology companies on November 22 and 23, Union IT Minister Ashwini Vaishnaw and Minister of State for IT Rajeev Chandrasekhar issued the advisory. The move comes in response to a series of deepfake incidents targeting prominent actors and politicians on social media platforms.
“Content not permitted under the IT Rules, especially the content listed under Rule 3(1)(b), must be clearly communicated to users in clear and precise language, including through the Terms of Service and user agreements; the same must be expressly communicated to the user upon initial registration, and also as a regular reminder, especially at each login and while uploading or sharing information on the platform,” the ministry said.
Intermediaries will also be required to inform users of the penalties that may apply to them if they knowingly create or share deepfake content. “Users should be made aware of various penal provisions of the Indian Penal Code 1860, the IT Act, 2000 and other laws which may be enforced in case of violation of Rule 3(1)(b). In addition, the terms of service and user agreements should clearly emphasize that intermediaries are required to report legal violations to law enforcement authorities under the relevant Indian laws applicable to the context,” it added.
Rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, states that intermediaries, including Meta's Instagram and WhatsApp, Google's YouTube, and global and domestic technology companies, including Amazon, Microsoft and Telegram, must prohibit users from “hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information that deceives or misleads the recipient as to the origin of a message, or knowingly and intentionally communicates false information, which is patently false and untrue or misleading in nature.”
On December 13, Chandrasekhar said in an interview with Mint that the Centre would issue an advisory, rather than new legislation, urging companies to comply with existing laws on deepfakes. “There are no separate regulations for deepfakes. The existing regulations already cover this under Rule 3(1)(b)(v) of the IT Rules, 2021. We are now aiming for 100% enforcement by the platforms, and for platforms to be more proactive – including aligning their terms of use, and educating users about the twelve no-go areas – which they should have done by now, but did not. As a result, we are issuing them an advisory,” he added.
The ministry will monitor compliance with the advisory for a period of time. “If they are still not following the rules, we will go back and amend the rules to make them even stricter to remove the ambiguity.”
While tech companies have internal policies that promote caution and discourage the spread of malicious content, intermediary platforms enjoy immunity from prosecution for such content, which experts have called a major problem.
“Because of the core nature of technology, it is nearly impossible to track cyber attackers generating malicious content – with endless ways to obfuscate the digital footprint. The regulations will be a deterrent for the masses, but the onus will be on tech companies to use their sophistication in AI to proactively monitor their platforms,” said a senior policy advisor working with several tech companies.
The issue of deepfakes entered prominent public debate after several altered videos of actors appeared on social media. Last month, Prime Minister Narendra Modi also highlighted the issue during a virtual G20 summit. “The world is concerned about the negative effects of AI. India believes we should work together on global regulation of AI. We must understand how dangerous deepfakes are to society and individuals, and act accordingly. If we want AI to reach people, it must be safe for society,” he said.
In this regard, India has pushed for regulating AI to curb such harms. After becoming a signatory to the Bletchley Declaration at the UK AI Safety Summit on November 1, India's New Delhi Declaration saw consensus among 28 participating countries, including the US and UK, as well as the European Union, on pursuing a global regulatory framework that promotes the use of AI in public services while limiting the harms that can be inflicted using AI.