India has sharply cut the deadline for social media platforms to remove illegal artificial intelligence (AI) content from 36 hours to three hours after notification, as part of broader amendments to the country's digital regulation framework.
The change aims to tighten controls on deepfakes and misleading AI-generated content online.
Stricter AI Content Rules and Labelling Requirements
Under the updated Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, social media intermediaries must now act within three hours of receiving a takedown order from authorities or courts.
The amended rules, notified by the Ministry of Electronics and Information Technology (MeitY), take effect on February 20, 2026.
The government has also mandated clear, prominent labelling of AI-generated and synthetically altered content. Platforms must attach identifiers or metadata to such content, and cannot allow these labels to be removed once applied. These measures are intended to address misinformation, deepfake impersonation and other online uses of generative AI tools.
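To illustrate what a tamper-evident label on synthetic content might look like in practice, here is a minimal sketch assuming a simple JSON content record. The field names (`ai_generated`, `provenance_id`) and the hashing scheme are illustrative assumptions, not part of the notified rules.

```python
import hashlib
import json


def label_ai_content(record: dict, tool_name: str) -> dict:
    """Attach a hypothetical AI-provenance label to a content record.

    Field names here are illustrative assumptions; the notified rules
    mandate identifiers/metadata but do not prescribe this format.
    """
    labeled = dict(record)
    labeled["ai_generated"] = True
    labeled["generator"] = tool_name
    # Tamper-evident identifier: a hash over the content plus the label,
    # so stripping or altering the label breaks verification.
    payload = json.dumps(labeled, sort_keys=True).encode()
    labeled["provenance_id"] = hashlib.sha256(payload).hexdigest()
    return labeled


def verify_label(labeled: dict) -> bool:
    """Check that the provenance identifier still matches the content."""
    claimed = labeled.get("provenance_id")
    body = {k: v for k, v in labeled.items() if k != "provenance_id"}
    payload = json.dumps(body, sort_keys=True).encode()
    return claimed == hashlib.sha256(payload).hexdigest()
```

Under a scheme like this, deleting the `ai_generated` flag or editing the content would invalidate the stored identifier, which is one way a platform could detect label removal.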
Compliance Timeline and Enforcement
Alongside the tightened three-hour takedown window, the amended rules shorten other response timelines: grievance acknowledgement periods have been reduced, and urgent removal orders must now be addressed more promptly.
Platforms must have automated detection tools in place and obtain user declarations for AI-generated content. Failure to comply could expose intermediaries to legal liabilities, while acting against unlawful content under these rules will not affect their safe harbour protections if due diligence is shown.
Industry groups have raised concerns about the feasibility of the deadlines and the potential impact on platform operations.