India has proposed a new draft law that would require social media platforms and tech companies to clearly label all content created using artificial intelligence (AI), including text and images. This measure comes in response to a rise in deepfake content during last year’s elections and ongoing worries about the technology being used to harm women and minority groups.
- India’s draft law for AI labelling on social media
- Bollywood actors file lawsuit against Google, YouTube
- India eyes AI regulations
“The government of India remains committed to ensuring an open, safe, trusted and accountable internet for all users of internet-enabled services. With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (commonly known as deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly,” the Ministry of Electronics and Information Technology (MeitY) said in a statement.
“Recognising these risks, and following extensive public discussions and parliamentary deliberations, MeitY has prepared the present draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules… The draft aims to strengthen due diligence obligations for intermediaries, particularly social media intermediaries (SMIs) and significant social media intermediaries (SSMIs), as well as for platforms that enable the creation or modification of synthetically generated content,” the ministry said.
Key objectives of these amendments include defining clear accountability for intermediaries and SSMIs that host or distribute AI-generated or deepfake material, according to MeitY. The rules also aim to ensure that all publicly posted AI-created content carries visible labels, traceable metadata, and transparent disclosure. In practice, this places additional obligations on SSMIs: they must require users to declare whether their uploads are AI-generated, verify such declarations by technical means, and display appropriate labels.
However, these requirements would apply only to publicly shared content, not to private or unpublished material.
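As an illustration only, the declare-verify-label flow described above could be sketched as follows. Everything here is an assumption for the sketch: the field names (`declared_synthetic`, `visibility`, `label`), the label text, and the `tag_upload` helper are hypothetical and are not taken from the draft rules or any platform's API:

```python
# Hypothetical sketch of the declare-verify-label obligation in the draft
# amendments. Field names and logic are illustrative assumptions only.

def tag_upload(upload: dict, user_declared_synthetic: bool) -> dict:
    """Record the user's declaration and attach a visible label if required."""
    upload = dict(upload)  # avoid mutating the caller's object
    upload["declared_synthetic"] = user_declared_synthetic
    # A real platform would also verify the declaration by technical means
    # (e.g. watermark or provenance-metadata checks); that step is omitted here.
    if user_declared_synthetic and upload.get("visibility") == "public":
        upload["label"] = "AI-generated content"
    return upload

post = tag_upload({"id": 1, "visibility": "public"}, user_declared_synthetic=True)
private_note = tag_upload({"id": 2, "visibility": "private"}, user_declared_synthetic=True)
```

Note that the `visibility` check mirrors the draft's carve-out: private or unpublished material would record the declaration but carry no public-facing label.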
Bollywood actors sue Google, YouTube over AI deepfake videos
The new draft policy comes days after Bollywood stars Aishwarya Rai Bachchan and Abhishek Bachchan filed lawsuits in the Delhi High Court against Google (NASDAQ: GOOGL) and YouTube. They accused the platforms of hosting AI-generated deepfake videos that used their faces and voices without permission, often in misleading or explicit ways. The couple sought about ₹4 crore ($450,000) in damages and requested a permanent ban on such content, as well as measures to prevent it from being used to train AI systems.
Following their petition, the court directed YouTube to remove 518 flagged website links and posts, citing damage to the actors’ reputation and dignity. The case is viewed as a landmark step in India’s growing push to protect celebrity “personality rights” against AI misuse.
As of October 2025, India led the world in YouTube viewership with about 500 million users, according to Statista. The United States ranked second with 254 million viewers, followed by Indonesia with 151 million. The United Kingdom had roughly 55 million active users on the platform.
“Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods—depicting individuals in acts or statements they never made. Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud,” MeitY stated.

“Concerns have also been raised in both the Houses of Parliament in India regarding the regulation of deepfakes and synthetic content. MeitY has earlier issued multiple advisories to intermediaries including SMIs and SSMIs, to curb the proliferation of deepfake content and associated harms. These proposed amendments provide a clear legal basis for labelling, traceability, and accountability related to synthetically generated information,” the statement added.
India eyes AI rules after deepfakes shake 2024 general elections
In 2024, India held its general elections in seven phases to elect 543 members of the Lok Sabha, the lower house of Parliament. In the largest election in history, with approximately 970 million registered voters, campaigns used AI to reach voters with messages translated into dozens of languages. At the same time, bad actors used AI to create deepfake videos and conversational bots, raising concerns over misuse of the technology.
For instance, an AI-generated video falsely depicted Bollywood actors Aamir Khan and Ranveer Singh criticizing Prime Minister Narendra Modi and urging support for the opposition. Another fake clip featured Home Minister Amit Shah allegedly claiming the ruling party would end reservations for backward classes—an especially sensitive topic in India. In response, the Election Commission of India issued an advisory cautioning political parties against using AI-based deepfakes or misinformation, emphasizing the importance of upholding the integrity of elections.
“These amendments will establish clear accountability for intermediaries and SSMIs facilitating or hosting synthetically generated information i.e., deepfake or AI-generated content,” MeitY pointed out.
The new draft rules are also expected to “ensure visible labelling, metadata traceability, and transparency for all public-facing AI-generated media”, as well as “protect intermediaries acting in good faith while addressing user grievances related to deepfakes or synthetic content.”
MeitY stated that the new amendments will empower users to distinguish authentic from synthetic information, building public trust and supporting India’s broader vision of an open, safe, trusted, and accountable internet while balancing users’ rights to free expression and innovation.
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Artificial intelligence needs blockchain