India's new IT rules give Instagram and X an unworkable deepfake deadline
India's latest amendment to its Information Technology Rules has put two of the world's biggest social apps in a tight spot. Instagram and X now face a February 20 compliance deadline to block or label synthetic audio and video the law deems illegal. The intention is clear enough: curbing the spread of fabricated media. But the timeline leaves little room for the kind of detection infrastructure many experts say takes months to develop.
The mandate asks platforms to implement "reasonable and appropriate technical measures" against deepfakes, yet the rule does not spell out what those measures must look like. Critics argue the deadline is unrealistic, especially for services that process billions of posts daily. As it approaches, both companies are scrambling to interpret what the government expects, and observers are watching to see whether the requirement will hold up in practice.
Regulators say the clause is meant to protect voters and public figures, but the lack of technical guidance raises questions about enforcement. The government hasn't clarified whether third-party tools will count toward compliance, leaving platforms to weigh building costly in-house solutions.
Under India's amended Information Technology Rules, digital platforms will be required to deploy "reasonable and appropriate technical measures" to prevent their users from making or sharing illegal synthetically generated audio and visual content, aka deepfakes. Any such generative AI content that isn't blocked must be embedded with "permanent metadata or other appropriate technical provenance mechanisms." The rules also spell out specific obligations for social media platforms, such as requiring users to disclose AI-generated or edited materials, deploying tools that verify those disclosures, and prominently labeling AI content so that people can immediately identify it as synthetic, for instance by adding verbal disclosures to AI audio. That's easier said than done, given how woefully underdeveloped AI detection and labeling systems currently are.
C2PA (also known as Content Credentials) is one of the best systems we currently have for both jobs. It works by attaching detailed metadata to images, videos, and audio at the point of creation or editing, invisibly describing how the content was made or altered. But here's the thing: Meta, Google, Microsoft, and many other tech giants are already using C2PA, and it clearly isn't working. Some platforms like Facebook, Instagram, YouTube, and LinkedIn add labels to content flagged by the C2PA system, but those labels are difficult to spot, and some synthetic content that should carry that metadata is slipping through the cracks.
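To make that concrete, here's a deliberately naive Python sketch of a provenance presence check. C2PA manifests travel in JUMBF boxes labeled "c2pa", so scanning a file for those markers tells you whether there is anything to verify at all. The filenames are placeholders of mine, and a real verifier (such as the open-source c2pa-rs tooling) would parse the container and validate the manifest's cryptographic signatures rather than grep for bytes.

```python
# Naive presence check for C2PA (Content Credentials) metadata.
# C2PA manifests are stored in JUMBF boxes whose description box is
# labeled "c2pa", so both markers appear as ASCII bytes in signed
# files. This is an illustrative heuristic only: it can false-positive
# on coincidental bytes, and a production verifier must parse the
# container format and validate the manifest's signatures.

from pathlib import Path

def has_c2pa_markers(path: str) -> bool:
    data = Path(path).read_bytes()
    return b"jumb" in data and b"c2pa" in data

for name in ["camera_photo.jpg", "ai_render.png"]:  # placeholder paths
    found = has_c2pa_markers(name)
    print(f"{name}: {'possible C2PA manifest' if found else 'no provenance markers'}")
```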
Social media platforms can't label anything that doesn't include provenance metadata to begin with, such as material produced by open-source AI models or so-called "nudify apps" that refuse to embrace the voluntary C2PA standard. And the scale is daunting: India has over 500 million social media users, according to DataReportal research cited by Reuters.
India's amended IT rules set a February 20 deadline for platforms to police synthetic media, a timetable that outpaces even the most advanced deepfake detection tools currently available. The legislation demands "reasonable and appropriate technical measures" to block illegal AI-generated audio and video, and insists that any such content be clearly labeled.
Yet the definition of "reasonable" remains vague, and the technical feasibility of meeting the deadline is uncertain. Platforms have long argued for self-directed solutions; the sudden legal pressure compresses years of detection work into a matter of weeks. If they cannot comply they risk penalties, but the rules also leave open how enforcement will be verified.
Critics point out that the requirement could push harmful content underground rather than eliminate it. Meanwhile, users may see more labels, but whether those labels will be accurate or merely perfunctory is unclear. The coming weeks will test whether the industry can translate policy mandates into effective, scalable safeguards against deepfakes.
Further Reading
- India orders social media platforms to take down deepfakes faster - TechCrunch
- Government's new IT rules make AI content labelling mandatory - Times of India
- India sets 3 hr deadline for social media platforms to take down AI ... - Economic Times
- Centre sets 3-hr deadline to remove flagged deepfakes - The Hans India
Common Questions Answered
What specific challenges do AI practitioners identify with MeitY's draft amendment on synthetic content?
AI practitioners argue that the draft amendment's core enforcement tools, such as visible labeling and platform-side verification, are technically unreliable. Routine actions such as editing, compression, re-encoding, screenshotting, and reposting can strip or degrade provenance signals, making content verification at scale practically impossible.
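To illustrate how easily those signals disappear, here's a small Python sketch using Pillow that simulates a repost round trip. EXIF stands in here for any embedded signal; C2PA manifests live in a different container, but tools that don't know about them drop them just as readily. The file paths are placeholders.

```python
# Demonstrates how a routine re-encode strips embedded metadata.
# Pillow only carries EXIF over to save() if you pass it explicitly,
# so a naive open-and-save round trip (roughly what reposting,
# screenshotting, and many upload pipelines do) silently discards it.

from PIL import Image

def repost_round_trip(src: str, dst: str) -> None:
    with Image.open(src) as im:
        print(f"before: {len(im.getexif())} EXIF tags")
        im.convert("RGB").save(dst, "JPEG", quality=85)  # no exif= kwarg

    with Image.open(dst) as out:
        print(f"after:  {len(out.getexif())} EXIF tags")  # typically 0

repost_round_trip("original.jpg", "reposted.jpg")  # placeholder paths
```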
How do the proposed IT Rules define synthetically generated information?
The amendments define synthetically generated information as content that is artificially or algorithmically created, modified, or altered using a computer resource, such that it appears reasonably authentic or true. This expansive definition encompasses deepfake audio, algorithmically altered photographs, fabricated metadata, and synthetically generated text.
What are the key mandatory labeling requirements in the new IT Rules draft amendment?
The amendment mandates comprehensive labeling of synthetic content, requiring intermediaries to embed permanent, unique metadata or identifiers. For visual content, labels must cover at least ten percent of the screen area and remain permanently visible, while audio content must carry a disclosure covering a set portion of its duration.
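For a sense of scale, here's a quick Python sketch that computes how tall a full-width banner would need to be to cover ten percent of a frame. The full-width-banner layout is an assumption for illustration; the rule as described specifies coverage, not placement.

```python
import math

# Height of a full-width label banner covering at least `coverage` of
# the frame area. Full-width placement is an assumption for
# illustration; the draft amendment specifies area coverage only.

def banner_height(frame_w: int, frame_h: int, coverage: float = 0.10) -> int:
    required_area = coverage * frame_w * frame_h
    return math.ceil(required_area / frame_w)

for w, h in [(1920, 1080), (1080, 1920), (1280, 720)]:
    print(f"{w}x{h}: banner >= {banner_height(w, h)} px tall")
```

On a 1080p frame that works out to a 108-pixel strip that never leaves the screen, which gives a sense of how conspicuous the mandated labels would be.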
Why do AI practitioners argue that the draft amendment focuses on the wrong approach to regulating synthetic content?
Practitioners argue that the draft amendment regulates how content is generated instead of addressing whether it causes harm. They warn that its technical design choices could over-regulate benign content while still failing to prevent the most prevalent forms of online deception.