India to Tighten AI Labelling Rules for Social Media Amid Compliance
India may tighten AI labelling rules for social media platforms amid compliance concerns, focusing on deepfakes, misinformation, and platform accountability.

By Samarjit Kaur

on April 22, 2026

The Indian government is set to tighten rules on the labelling of artificial intelligence (AI)-generated content on social media. The move comes after experts expressed dissatisfaction with platforms for failing to comply with existing advisories.

The plan comes amid a sharper regulatory push and growing concerns over misinformation, deepfakes and undisclosed AI-generated content, ahead of key political and digital policy milestones.

Also Read: Can Artificial Intelligence Really Strengthen Indian Healthcare’s ‘Backbone’?

Stricter AI Disclosure Norms Under Consideration

Government officials said that revised guidelines will require clearer, more consistent labelling of AI-generated content across social media platforms. Creators must ensure users can identify synthetic media (text, images, audio and video) generated by AI tools.

Existing advisories require platforms to flag manipulated or AI-generated content, but authorities have found enforcement loopholes and inconsistent implementation. This has prompted harder conversations about making clearly visible disclosures mandatory under compliance frameworks.

The new regulation aligns with India’s broader digital governance strategy, which includes addressing misinformation and boosting accountability for large social media intermediaries. Officials are also examining whether penalties or legal provisions can be put in place for non-compliance.

Also Read: India’s Data Centre Market Set to Cross $22 Billion by 2030 on AI, Cloud Surge

Deepfakes and Platform Accountability Raise Concerns

A sharp surge in deepfake incidents and AI-driven misinformation has been observed in recent years, and growing concern among policymakers and regulators has made the move necessary in the government's view. Authorities believe that inadequate labelling increases the risk of misleading users and undermines trust in digital platforms.

Social media platforms have been asked to strengthen their content moderation systems and adopt disclosure mechanisms. The government has also emphasised the need for proactive action rather than reactive takedowns.

This development is part of India’s ongoing efforts to shape its AI governance framework. It reflects a shift toward stricter oversight as the use of generative AI tools expands rapidly across sectors.
