India's government this month tightened content takedown requirements for social media companies, reducing the compliance window from 36 hours to just three hours. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 take effect on February 20, according to a gazette notification published by the Ministry of Electronics and Information Technology.
According to the gazette notification, dated February 10, the amendment applies to social media intermediaries including Meta's Facebook and Instagram, Google's YouTube, and X. Industry observers describe the new rules as imposing one of the world's most aggressive content moderation timelines, forcing platforms to weigh compliance in a market of more than 1 billion internet users against mounting concerns over government censorship.
The directive provides no explanation for the dramatic reduction in compliance time. However, the changes arrive as India has emerged as one of the most aggressive regulators of online content, empowering scores of officers in recent years to order content removal. That approach has drawn criticism from digital rights advocates and prompted repeated clashes with companies including Elon Musk's X.
Technical feasibility questioned
"It's practically impossible for social media firms to remove content in three hours," according to Akash Karmakar, a partner at Indian law firm Panag & Babu who specializes in technology law. The lawyer added that the requirement "assumes no application of mind or real world ability to resist compliance."
The three-hour deadline applies to content deemed unlawful under India's extensive legal framework, including laws related to national security, public order, and various criminal statutes. According to the amended rules, intermediaries must take down or disable access to unlawful information within three hours of receiving notification from government authorities.
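For a rough sense of what the window means operationally, the sketch below computes the compliance deadline from the timestamp of a government notification. The function and field names are hypothetical; only the three-hour constant (and the previous 36-hour figure noted in the comment) comes from the amended rules.

```python
from datetime import datetime, timedelta, timezone

# Compliance window under the amended rules: three hours from receipt of an
# authorized takedown notification (previously 36 hours).
TAKEDOWN_WINDOW = timedelta(hours=3)

def takedown_deadline(received_at: datetime) -> datetime:
    """Timestamp by which the content must be removed or access disabled."""
    if received_at.tzinfo is None:
        raise ValueError("notification timestamp must be timezone-aware")
    return received_at + TAKEDOWN_WINDOW

def time_remaining(received_at: datetime, now: datetime) -> timedelta:
    """Time left before the deadline; negative once the window has lapsed."""
    return takedown_deadline(received_at) - now

if __name__ == "__main__":
    notified = datetime(2026, 2, 21, 9, 0, tzinfo=timezone.utc)
    checked = datetime(2026, 2, 21, 10, 30, tzinfo=timezone.utc)
    print(takedown_deadline(notified))        # 2026-02-21 12:00:00+00:00
    print(time_remaining(notified, checked))  # 1:30:00
```

Under the previous rules, the same calculation would have used a 36-hour window.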
The notification process itself has become increasingly formalized under the new rules. According to the gazette, authorities issuing takedown orders must now be specifically authorized officers, each not below the rank of Deputy Inspector General of Police. This represents a modification from earlier provisions that allowed broader categories of officials to issue removal demands.
India has issued thousands of takedown orders in recent years, according to platform transparency reports. Meta alone restricted more than 28,000 pieces of content in India in the first six months of 2025 following government requests, the company disclosed in transparency reports covering its operations.
Platform responses and industry concerns
Facebook owner Meta declined to comment on the changes. X and Google, which operates YouTube, did not immediately respond to requests for comment. The silence reflects the delicate position platforms occupy in India, where they must maintain operational compliance while navigating complex political and regulatory dynamics.
"This rule was never in consultation," according to a social media executive who spoke on condition of anonymity. The executive noted that "International standards provide a longer timeline," highlighting the disconnect between India's new requirements and content moderation practices in other major markets.
The amended rules introduce new definitions that expand regulatory scope. According to the gazette notification, "synthetically generated information" now includes audio, visual, or audio-visual content created through artificial intelligence or algorithmic processes. This definition covers content that depicts or portrays individuals or events "in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event."
The synthetic content provisions include specific carveouts. According to the notification, content arising from routine editing, formatting, technical correction, or good-faith document preparation does not qualify as synthetically generated information. Educational materials, research outputs, and content created solely for improving accessibility also receive exemptions from synthetic content labeling requirements.
New disclosure requirements for synthetic content
The rules establish mandatory disclosure frameworks for platforms offering synthetic content capabilities. According to the notification, intermediaries providing computer resources that enable creation, modification, or dissemination of synthetically generated information must deploy "reasonable and appropriate technical measures" including automated tools to prevent policy violations.
Significant social media intermediaries face additional requirements under the amended rules. According to the gazette, these platforms must require users to declare whether information is synthetically generated before display or publication. The intermediaries must then deploy technical measures to verify declaration accuracy and ensure prominent labeling indicating content is synthetically generated.
The labeling mandate applies specifically to content that could be confused with authentic media. According to the rules, every piece of synthetically generated information not covered by exemptions must be "prominently labelled in a manner that ensures prominent visibility in the visual display that is easily noticeable and adequately perceivable."
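The rules do not specify how platforms should operationalize the declaration-and-labeling workflow. The following is a minimal sketch under assumed names, combining the user declaration and verification steps described above with the exemption categories mentioned earlier; it does not reflect any platform's actual implementation.

```python
from dataclasses import dataclass

# Illustrative subset of the exempt purposes described in the notification.
EXEMPT_PURPOSES = {"routine_editing", "formatting", "technical_correction",
                   "education", "research", "accessibility"}

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool   # user's declaration before publication
    verified_synthetic: bool   # platform's own technical verification
    purpose: str               # e.g. "entertainment", "education"

def requires_prominent_label(upload: Upload) -> bool:
    """True if the item must carry a visible 'synthetically generated' label."""
    if upload.purpose in EXEMPT_PURPOSES:
        return False
    # Label when either the declaration or the verification flags the content.
    return upload.declared_synthetic or upload.verified_synthetic

clip = Upload("clip-001", declared_synthetic=True,
              verified_synthetic=False, purpose="entertainment")
print(requires_prominent_label(clip))  # True
```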
An earlier draft of the rules proposed requiring platforms to visibly label AI-generated content across 10 percent of its surface area or duration. The final version instead mandates only that content be "prominently labelled," without specifying precise technical requirements, giving platforms greater implementation flexibility.
Enforcement mechanisms and user protections
The amended rules maintain existing enforcement frameworks while expanding circumstances that trigger platform obligations. According to the notification, intermediaries must periodically inform users about rules, regulations, privacy policies, and user agreements at least once every three months through simple and effective means in English or languages specified in the Eighth Schedule to the Constitution.
Non-compliance triggers specific consequences. According to the rules, intermediaries have the right to terminate or suspend user access immediately when the non-compliance relates to creating, generating, modifying, altering, hosting, displaying, uploading, publishing, transmitting, storing, updating, sharing, or disseminating information in contravention of laws for the time being in force.
The rules establish reporting obligations for specific content categories. According to the notification, violations relating to commission of offenses under laws such as the Bharatiya Nagarik Suraksha Sanhita, 2023 or the Protection of Children from Sexual Offences Act, 2012 require mandatory reporting. Platforms must report such offenses to appropriate authorities in accordance with applicable law provisions.
User appeal mechanisms receive detailed specification in the amended framework. According to the rules, platforms must establish grievance redressal mechanisms that allow users to provide explanations or supporting evidence when challenging content moderation decisions. The notification emphasizes that platforms cannot suppress or remove labels, permanent metadata, or unique identifiers displayed or embedded in accordance with synthetic content disclosure requirements.
Comparative regulatory context
The three-hour requirement positions India among the world's strictest content regulators, though comparisons reveal important distinctions. The European Union's Digital Services Act provides longer timelines for most content categories, while establishing expedited removal for specific categories such as terrorist content and child sexual abuse material.
Platform responses to regulatory pressure vary significantly across jurisdictions. Meta assembled a cross-functional team of more than 1,000 people in 2023 to build Digital Services Act compliance systems for its European operations. Meeting the three-hour deadline in India would likely require comparable investment, though the technical challenges differ substantially.
The UK's Online Safety Act, which received Royal Assent on October 26, 2023, creates different obligations focused primarily on platform systems and processes rather than specific takedown timelines. However, that legislation has drawn criticism from platforms including X for what they characterize as regulatory overreach threatening free expression.
Market implications and operational challenges
India's digital advertising market represents substantial revenue for global platforms. According to research conducted by IPSOS for Meta, which surveyed more than 2,000 respondents aged 25 to 45 across major Indian cities in 2025, Meta's platforms play a decisive role in consumer decisions across financial product categories including loans (86 percent), investments (84 percent), insurance (78 percent), and savings (82 percent).
The compliance burden extends beyond simple content removal mechanics. Platforms must evaluate each takedown request against multiple legal frameworks, assess whether content actually violates specified laws, and make determinations about user rights and potential collateral damage from over-removal. The three-hour window compresses these evaluations into timeframes that industry experts describe as incompatible with thoughtful decision-making.
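To make that compression concrete, here is a hypothetical triage sketch: requests estimated to need lengthy expert or legal review are flagged as at risk whenever the fixed three-hour window leaves less time than that review typically takes. The request categories and review-time estimates are assumptions for illustration only.

```python
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(hours=3)

# Assumed review-time estimates per request category (illustrative only).
ESTIMATED_REVIEW = {
    "clear_policy_violation": timedelta(minutes=20),
    "translation_required":   timedelta(hours=2),
    "complex_legal_analysis": timedelta(hours=4),
}

def at_risk(category: str, received_at: datetime, now: datetime) -> bool:
    """Flag requests whose estimated review time exceeds the time left in the window."""
    remaining = (received_at + WINDOW) - now
    return ESTIMATED_REVIEW[category] > remaining

now = datetime(2026, 2, 21, 10, 0, tzinfo=timezone.utc)
received = now - timedelta(minutes=30)                    # 2.5 hours left
print(at_risk("clear_policy_violation", received, now))   # False
print(at_risk("complex_legal_analysis", received, now))   # True: cannot fit
```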
Content moderation infrastructure requires substantial human and technical resources. Meta's content moderation team published research on December 24, 2025, detailing how reinforcement learning techniques achieve data efficiency improvements of 10 to 100 times compared with supervised fine-tuning on policy-violation classification tasks. Even advanced automation, however, cannot address the fundamental challenge of evaluating complex legal questions within three hours.
The linguistic complexity of India's market compounds technical challenges. The country recognizes 22 official languages in the Eighth Schedule to the Constitution. According to the amended rules, platforms must communicate with users in English or any language specified in that schedule, requiring content moderation capabilities across multiple languages with distinct cultural contexts and legal interpretations.
Industry response and future outlook
Platform operators face difficult strategic decisions. Compliance with the three-hour mandate requires substantial operational investment in automated systems, human moderator teams, and legal expertise. However, over-compliance risks removing legitimate content and undermining user trust. Under-compliance exposes platforms to legal penalties and potential service disruptions in one of their largest markets.
The rules create particular challenges for smaller platforms lacking the resources that Meta, Google, and other major operators can deploy. According to the amended notification, the requirements apply to "intermediaries" broadly defined to include entities providing computer resources as intermediaries to enable information storage, transmission, or hosting. This sweeping definition potentially encompasses numerous platforms beyond major social media companies.
The synthetic content provisions introduce additional complexity. While major platforms have developed capabilities for detecting and labeling AI-generated content, the technology remains imperfect. False positives could result in legitimate content receiving misleading labels, while false negatives could enable harmful synthetic content to spread without appropriate disclosure.
There is mounting global pressure on social media companies to police content more aggressively, with governments from Brussels to Brasilia demanding faster takedowns and greater accountability. However, India's three-hour mandate represents an extreme position in this regulatory landscape. Most jurisdictions recognize that thoughtful content moderation requires time for evaluation, consideration of context, and assessment of user rights.
The notification makes no provision for content that requires expert evaluation or translation. Complex questions involving potential national security implications, religious sensitivities, or sophisticated legal analysis must receive resolution within the same three-hour window as straightforward policy violations. This uniformity disregards the substantial variation in difficulty across different content categories.
Platform transparency reports will provide crucial data for assessing the new rules' impact. The number of takedown requests, compliance rates, appeal outcomes, and content restoration statistics will reveal whether the three-hour mandate functions as intended or creates systematic over-removal. However, these assessments may require months or years of data collection before patterns become clear.
Timeline
- February 25, 2021: Original Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 published in the Gazette of India
- October 28, 2022: Government publishes first amendment to IT Rules via notification G.S.R. 794(E)
- April 6, 2023: Government publishes second amendment via notification G.S.R. 275(E)
- October 22, 2025: Government publishes third amendment via notification G.S.R. 775(E)
- February 10, 2026: Ministry of Electronics and Information Technology publishes fourth amendment reducing takedown timeline from 36 hours to three hours
- February 20, 2026: New three-hour takedown requirement takes effect
Summary
Who: India's Ministry of Electronics and Information Technology issued regulations affecting social media intermediaries including Meta's Facebook and Instagram, Alphabet's Google and YouTube, and X, with compliance requirements for platforms serving India's more than 1 billion internet users.
What: The government reduced the content takedown compliance timeline from 36 hours to three hours for unlawful content, while introducing new definitions for synthetically generated information, mandatory disclosure requirements, and stricter enforcement mechanisms through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
When: The Ministry published the amendment rules in the Gazette of India on February 10, 2026, with the new three-hour takedown requirement taking effect on February 20, 2026. The original rules were published on February 25, 2021, with earlier amendments following on October 28, 2022, April 6, 2023, and October 22, 2025.
Where: The regulations apply to social media intermediaries operating in India and serving Indian users, regardless of the platforms' physical location or country of incorporation, affecting one of the world's largest internet markets with substantial implications for global content moderation practices.
Why: The government provided no explicit rationale for the reduced timeline, though the changes reflect India's broader pattern of aggressive online content regulation aimed at controlling speech deemed threatening to national security, public order, or community standards. Industry experts criticize the three-hour mandate as technically impossible to implement while maintaining the quality of moderation decisions.