
Summary: In a move to regulate deepfakes, fake news, and other misinformation, the Ministry of Electronics and Information Technology has proposed draft rules seeking to amend the existing intermediary guidelines. The draft rules adopt a wide definition of “synthetically generated information” without meaningful and practical standards on authenticity thresholds. They also prescribe labelling norms that may leave little to no flexibility for intermediaries, and seek to impose onerous user-verification obligations on significant social media intermediaries without any bright-line standards. While the objective of the proposed regulations, rooted in user safety, is laudable, the draft in its current form can have far-reaching implications for how the overall AI ecosystem develops in India. As the devil lies in the details, it would be prudent to revisit the proposed rules and account for the practical considerations, thresholds, and standards that will contribute to effective implementation. The draft rules are open for stakeholder consultation until November 6, 2025, and offer an opportunity to engage with the government on shaping the balance between innovation and user safety.
Introduction
India’s approach to regulating artificial intelligence (“AI”) has often been marked by volatility and uncertainty. The lack of a clear, predictable, long-term regulatory framework has hindered large-scale projects, especially given the long-term investments and global collaborations needed to drive innovation and compute capacity in India’s AI ecosystem.
While India’s digital infrastructure ecosystem has grown exponentially, the innovation layer needed to augment and strengthen it, through the establishment of compute capacity and the deployment of state-of-the-art models, has often been stymied by fears of mandatory algorithmic disclosures or licensing, as signalled in the past by the Ministry of Electronics and Information Technology’s (“MeitY”) advisories to intermediaries and platforms using AI algorithms and tools on their platforms (“AI Advisories”).[1] The same fears have weighed on AI development and deployment services offered to global clients.
Given this background, on October 22, 2025, MeitY proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the “Intermediary Rules”) concerning synthetic/AI-generated information (“Draft Rules”).
Need for the Draft Rules
At their core, the proposed amendments are directed at ensuring that “synthetically generated information”, including deepfakes and misinformation that appear real or accurate, is not passed off as genuine or used to deceive citizens. Consistent with the Indian government’s promise of an open, safe, trusted, and accountable internet, the Draft Rules add to the existing due diligence and grievance redressal obligations of intermediaries.
While the need for such regulation is clear, given the sharp increase in deepfake and misinformation incidents, it is crucial that any obligation imposed remains clear, coherent, reasonable, and practical, without inadvertently stifling the emerging creator ecosystem.
What is synthetically generated information?
The Draft Rules define “synthetically generated information” as information[2] that is artificially or algorithmically created, generated, modified or altered using a computer resource[3], in a manner that makes it reasonably appear authentic or true.[4] Further, they clarify that any category of information that an intermediary is prohibited from dealing with under the Intermediary Rules also covers synthetically generated versions of such information.[5]
The proposed definition is thus wide, permitting a varied, all-inclusive interpretation, which may result in unintended outcomes:
- First, there is little to no information on the internet that is not “modified or altered” using a computer resource. Even the addition of metadata or the compression of a digital file for upload onto a platform may render it synthetic under the current definition, as there may be a reasonable expectation about the authenticity of that file (the short sketch after this list illustrates the point). If all content “modified” using the computer resources of intermediaries must be labelled, a bulk of the data on the internet could be labelled synthetic, resulting in notice fatigue for users.
- Second, much “synthetic” content, such as computer-generated animation, is created for creative or artistic purposes and may appear authentic or true to some viewers. If all such content must carry a label that people associate with deepfakes, advertisers and publishers will simply stop using it.
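To make the first concern concrete, consider a purely illustrative sketch (our own, not drawn from the Draft Rules): a routine re-encode of the kind most upload pipelines perform changes a photograph’s bytes, so the file has literally been “modified or altered using a computer resource” even though its substance is untouched. The function names and the Pillow-based pipeline below are assumptions for illustration only.

```python
# Illustrative only: shows that an ordinary upload-time re-encode
# "modifies" a file without changing what it depicts.
import hashlib
import io

from PIL import Image  # pip install Pillow


def upload_recompress(image_bytes: bytes, quality: int = 85) -> bytes:
    """Re-encode an image at a given JPEG quality, as platforms
    commonly do when a user uploads a photo."""
    img = Image.open(io.BytesIO(image_bytes)).convert("RGB")
    out = io.BytesIO()
    img.save(out, format="JPEG", quality=quality)
    return out.getvalue()


# Build a stand-in "original photo" in memory so the sketch is
# self-contained; any real photograph would behave the same way.
buf = io.BytesIO()
Image.new("RGB", (64, 64), color=(200, 120, 40)).save(
    buf, format="JPEG", quality=95
)
original = buf.getvalue()
recompressed = upload_recompress(original)

# The bytes differ, so the file was "modified or altered using a
# computer resource"; yet the image is no less authentic than before.
print(hashlib.sha256(original).hexdigest()
      != hashlib.sha256(recompressed).hexdigest())  # True
```

On a literal reading, the recompressed file would sit within the definition even though nothing about its truthfulness has changed.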
A possible solution could be to restrict the requirement specifically to “deepfakes”, i.e., AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and could falsely appear authentic or truthful to a person, similar to the approach under the EU AI Act.[6]
What are the obligations in relation to synthetically generated information?
The Draft Rules propose two sets of obligations.
- Labelling: The Draft Rules state that intermediaries who make available computer resources (including databases and software) that enable, permit, or facilitate the creation, generation, modification or alteration of information resulting in synthetic data must ensure that such data is indelibly and prominently labelled in accordance with certain norms. This due diligence obligation proposes prominent labelling with a permanent unique identifier within the body of the synthetic data. While the requirement is well reasoned and in line with global approaches to labelling, a more problematic part of the Draft Rules is the prescription that visual labels cover at least 10% of the displayed content and that audio labels play during the initial 10% of the audio’s total duration (see the arithmetic sketch after this list). Apart from being potentially onerous, the requirement is somewhat unclear. For instance, would covering 10% of a visual display in a video mean 10% of a single image or 10% of every visual frame? Comparable norms in other jurisdictions offer flexibility to digital service providers. For example, under the EU AI Act, deployers of an AI system that creates deepfakes must disclose such content as artificially generated or manipulated in a clear and distinguishable manner at the first instance of a user’s interaction with the content, with further calibration to suit the underlying context of its creation.[7]
- User Verification: The second set of obligations is sought to be imposed on significant social media intermediaries (“SSMIs”).[8] SSMIs enabling the display, uploading, or publication of any information on their computer resource must (i) require their users to declare synthetic information; (ii) deploy “reasonable and appropriate technical measures”, including automated tools proportionate to verifying the accuracy of such user declarations, accounting for the nature, format and source of the information; (iii) identify undeclared synthetic information; and (iv) clearly and prominently label it. Pegging the verification obligation to proportionate measures provides flexibility to use different tools for different categories of synthetic data, although much will depend on how the government firms up the definition of synthetic data at the outset. For example, for fictional creative content that is patently false, the degree of verification could be substantially lighter than for fake news, misinformation, and deepfakes. While excluding (i) intermediaries that are not SSMIs and (ii) SSMIs that purely enable online interaction between users without offering functionality for displaying or uploading synthetic information is a good start, additional provisions allowing industry consensus, and deemed adequacy based on such consensus, may be useful. Failing this, SSMIs may lean too far towards labelling content, potentially mislabelling mildly modified or enhanced content as synthetic.
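In the simplest case, the 10% prescription reduces to straightforward arithmetic, though the Draft Rules do not prescribe any implementation. The sketch below reflects our own minimal reading; the function names are illustrative, and the open question of what counts as the “display” is flagged in the comments.

```python
# A minimal sketch of the draft's "10%" labelling norm, under our own
# reading of the text; illustrative only.


def audio_label_window(total_duration_s: float) -> tuple[float, float]:
    """The audible disclosure must be presented during the initial 10%
    of the audio's total duration."""
    return (0.0, 0.10 * total_duration_s)


def min_visual_label_area(width_px: int, height_px: int) -> int:
    """The visible label must cover at least 10% of the displayed
    content. Whether that means 10% of one image, of every frame, or
    of the overall display is exactly the ambiguity the draft leaves
    open."""
    return int(0.10 * width_px * height_px)


# For a 3-minute audio clip, the disclosure must play within the first
# 18 seconds; for a 1920x1080 frame, the label must occupy at least
# 207,360 pixels of whichever "display" the rule intends.
print(audio_label_window(180.0))          # (0.0, 18.0)
print(min_visual_label_area(1920, 1080))  # 207360
```

Even this trivial sketch surfaces the interpretive gap: the audio rule yields one unambiguous number, while the visual rule cannot be computed until “displayed content” is defined.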
What could be the impact of the Draft Rules on the AI supply chain?
The Draft Rules are proposed under Section 87 of the Information Technology Act, 2000 (“IT Act”), which empowers the government to issue guidelines that intermediaries must adhere to in order to seek safe harbour protection. An intermediary can avail safe harbour protection provided it (a) limits its operations to providing access to a communication system over which users transmit, store or host information; or (b) does not initiate the transmission, select the receiver, or select or modify the information contained in the transmission; and (c) observes due diligence in accordance with the Intermediary Rules.
Unlike the erstwhile AI Advisories, which sought to guide intermediaries and non-intermediary platforms on AI deployment, the Draft Rules position themselves as binding norms for intermediaries and do not purport to cover other platforms such as D2C platforms, online content publishers, and proprietary software solution providers.
However, the state of technology has advanced, and the conventional contours of when a service provider platform becomes an intermediary vis-a-vis a user continue to be pressure-tested.
For AI model developers, and the neural networks used to generate deepfakes and misinformation, a fact-specific evaluation against how intermediaries are defined under the IT Act may result in their inclusion: one could argue that they provide services that enable, permit, or facilitate users to create synthetic data based on the prompts fed into the system. This would, of course, be subject to further deliberation on the safe harbour principle that intermediaries must not modify information themselves, but may provide computer resources/tools that enable modification of such information at the user’s behest. In such instances, the unanswered question is whether such AI developers may be subject to compliance under the Draft Rules, which may require architectural changes to their AI models.
Where AI model developers function as software licensors to businesses, it may be an overreach to classify them as intermediaries that must comply with the Draft Rules. Nonetheless, in such a B2B scenario, where the business customer is itself an intermediary, it may become imperative for that customer to contractually require AI model developers to provide built-in features for identifying and labelling synthetic data, in order to demonstrate compliance with the Draft Rules.
Conclusion
The wide remit of what constitutes synthetic information may be stretched to cover much of the content on the internet. The prescriptive labelling norms are likely to negate any flexibility in how intermediaries adhere to their transparency obligations, which may increase compliance costs for all intermediaries. Additionally, the absence of bright-line standards may lead SSMIs to adopt blanket approaches to user verification for all kinds of synthetic content, irrespective of whether it is ostensibly real or accurate.
The Draft Rules are well intended and seek to address the exponential surge in user harms caused by synthetic information, but they come with grey areas that can result in interpretational ambiguities. A staggered implementation approach, along with the necessary definitional clarity, thresholds, and carve-outs to the constituent obligations, may help. The stakeholder consultation is a vital opportunity to advocate for norms that balance AI innovation and user protection.
[1] Advisory No. 2(4)/2023-CyberLaws-2 dated September 26, 2023, and subsequent Advisory No. 2(4)/2023-CyberLaws-3 dated March 15, 2024, accessible here (last accessed on October 28, 2025)
[2] Section 2(1)(v) of the Information Technology Act, 2000 defines “information” to include data, message, text, images, sound, voice, codes, computer programmes, software, and databases or micro film or computer generated micro fiche.
[3] Section 2(1)(k) of the IT Act defines “computer resource” widely to include computer, computer system, computer network, data, computer database, or software.
[4] Clause 2(i) of the Draft Rules
[5] Clause 2(ii) of the Draft Rules
[6] Article 3(60) of the Regulation (EU) 2024/1689 of the European Parliament and of the Council dated June 13, 2024
[7] Articles 50(4) and 50(5) of the Regulation (EU) 2024/1689 of the European Parliament and of the Council dated June 13, 2024
[8] Social media intermediaries having more than 5 million (50 lakh) registered users in India