
The central government has notified new Information Technology (IT) rules to regulate artificial intelligence–generated content on digital platforms, imposing fresh compliance and due-diligence obligations on intermediaries and online services. The move is aimed at improving accountability, reducing misuse of AI tools, and ensuring safer digital spaces for users.
New Compliance Duties for Digital Platforms
Under the newly notified rules, digital platforms that host, publish, or distribute AI-generated content will be required to follow stricter due-diligence norms. Platforms must ensure that AI-generated material does not violate existing laws related to misinformation, impersonation, obscenity, or public harm.
The rules place explicit responsibility on intermediaries to take reasonable steps to prevent the circulation of harmful or misleading AI-generated content, including deepfakes and synthetic media.
Mandatory Disclosure and Transparency Measures
One of the key provisions requires platforms to introduce clear labelling or disclosure mechanisms for AI-generated or AI-altered content. Users must be able to identify when content has been created or significantly modified using artificial intelligence tools.
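In practice, such a disclosure mechanism could take the form of a tagging step in a platform's content pipeline. The sketch below is purely illustrative, not anything prescribed by the rules: the `ContentItem` fields, the label strings, and the `apply_ai_disclosure` helper are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A hypothetical content record on a platform."""
    content_id: str
    body: str
    ai_generated: bool = False   # created entirely by an AI tool
    ai_modified: bool = False    # significantly altered by an AI tool
    labels: list = field(default_factory=list)

def apply_ai_disclosure(item: ContentItem) -> ContentItem:
    """Attach a user-visible disclosure label when content is
    AI-generated or AI-altered, so users can identify it."""
    if item.ai_generated:
        item.labels.append("AI-generated")
    elif item.ai_modified:
        item.labels.append("AI-altered")
    return item
```

A platform might run a step like this before rendering content, so that the label travels with the item to the user interface; real implementations would also need to handle provenance signals from upstream AI tools, which this sketch omits.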
The government said transparency is essential as AI-generated text, images, audio, and video become increasingly realistic and harder to distinguish from human-created content.
Grievance Redressal and Timely Action
The new IT rules also strengthen grievance redressal obligations. Platforms will be required to act swiftly on user complaints about AI-generated content, particularly in cases involving impersonation, fraud, or misinformation.
Failure to comply with takedown requests or due-diligence requirements could result in loss of safe harbour protections under Indian law, exposing platforms to legal liability.
Government’s Rationale Behind the Rules
Officials said the notification reflects the growing influence of AI tools across social media, content platforms, and digital services. While recognising the benefits of AI innovation, the government emphasised the need for guardrails to prevent misuse that could harm individuals, institutions, or democratic processes.
The rules build on existing IT regulations while introducing AI-specific obligations in response to rapid technological advances.
Impact on Tech Platforms and AI Developers
The regulations are expected to impact social media companies, messaging services, content-sharing platforms, and AI tool providers operating in India. Companies will need to update internal policies, moderation systems, and user interfaces to comply with disclosure and monitoring requirements.
Industry observers note that while the rules may increase compliance costs, they also provide greater clarity on expectations for AI governance in India’s digital ecosystem.
Balancing Innovation and Safety
The Centre said the objective is not to restrict innovation but to ensure responsible deployment of AI technologies. By clearly defining platform duties, the government aims to strike a balance between technological progress and user protection.
What Comes Next
Digital platforms are expected to begin implementing the new requirements within the stipulated timelines. The government has indicated that enforcement will focus on compliance, transparency, and user safety rather than blanket restrictions.
As AI-generated content becomes more widespread, the new IT rules mark a significant step in India’s evolving approach to regulating emerging digital technologies.

