Introduction

Artificial Intelligence–generated deepfakes have emerged as one of the most alarming technological threats in India. Highly realistic manipulated videos, images, and audio clips can now be created within minutes and are frequently used for fraud, political manipulation, cyberbullying, extortion, and harassment. While India’s legal system has provisions to address cybercrimes, deepfakes present unique challenges that demand specialised regulatory attention.

What Are Deepfakes?

Deepfakes are synthetic media created using AI techniques such as deep learning and generative adversarial networks (GANs). These tools can alter faces, voices, and movements in a way that appears completely real, making it difficult for the public — or even experts — to identify manipulation.

Recent Rise of Deepfake Cases in India

Over the last few years, India has witnessed a sharp surge in deepfake-related incidents, reflecting both the increasing accessibility of AI tools and the lack of widespread digital literacy among users. This rise is not limited to a particular demographic; instead, it spans celebrities, political leaders, corporate executives, and ordinary individuals, making deepfakes a serious legal and social concern.

1. Deepfake Harassment and Gender-Based Violence

A significant portion of reported cases involve non-consensual sexually explicit deepfakes of women. Such manipulated content is often used for harassment, extortion, and public shaming. Women — including students, professionals, influencers, and private individuals — have become primary targets due to the misuse of their social media photographs. This trend highlights the intersection between deepfake technology and cyber-gender-based violence, raising urgent questions regarding privacy, dignity, and personal autonomy.

2. Political Manipulation and Election-Related Deepfakes

India has also seen a troubling rise in political deepfakes, particularly around election periods. Altered speeches, fabricated audio messages, and manipulated videos of political leaders have been circulated with the intention of influencing public perception or disrupting electoral integrity. Such incidents threaten democratic processes by creating misinformation, misleading voters, and potentially inciting unrest. The Election Commission has publicly expressed concern over this issue, signalling the need for stronger regulatory oversight.

3. Corporate Fraud and Deepfake Impersonation

Instances of deepfake voice calls and video impersonation have been reported in corporate environments, where fraudsters pose as CEOs, senior managers, or financial officers to extract confidential information or initiate unauthorised fund transfers. These cases demonstrate how deepfakes are now being used as sophisticated tools for economic offences, challenging traditional notions of identity verification and corporate liability.

4. Financial Scams and Identity Theft

Deepfake-based voice cloning has become a major enabler of recent financial scams in India. Fraudsters have successfully used AI-generated voice messages to impersonate relatives, bank officials, or customer-care representatives. Victims often fail to differentiate between real and synthetic communication, leading to substantial financial losses. This growing trend underscores the need for stricter cyber fraud detection mechanisms and enhanced public awareness.

5. Social Media Misinformation and Public Disorder

Social media platforms have become breeding grounds for the rapid spread of deepfake content. Within minutes, manipulated videos can go viral, resulting in public confusion, reputational damage, or communal tension. Even after the content is debunked, the irreparable harm to reputation and public trust remains — demonstrating the unique challenge deepfakes pose in an era of instant digital communication.

Existing Legal Provisions Applicable to Deepfakes in India

Information Technology Act, 2000

  • Section 66D – Cheating by Personation Using Computer Resources

This provision penalises impersonation conducted through electronic means. Deepfakes, especially those involving cloned voices or fabricated video messages, often constitute digital impersonation. When an individual uses a deepfake to deceive, mislead, or obtain an advantage — such as in financial scams, political misinformation, or corporate fraud — Section 66D becomes directly applicable.
Punishment: Imprisonment up to 3 years and fine up to ₹1 lakh.

  • Section 66E – Violation of Privacy

Deepfakes created by altering or misusing images of individuals without consent amount to an invasion of privacy. Section 66E prohibits capturing, publishing, or transmitting the image of a private area of any person without their knowledge or consent. Courts have interpreted this section broadly to include digitally manipulated obscene or intimate deepfakes.
Punishment: Imprisonment up to 3 years, or fine up to ₹2 lakh, or both.

  • Section 67 and 67A – Obscene and Sexually Explicit Content

Section 67 penalises the publication or transmission of obscene material through electronic means, while Section 67A covers sexually explicit content. Deepfake pornography or morphed explicit videos fall squarely within these provisions. Even the circulation or forwarding of such content attracts liability, regardless of whether the accused created the deepfake.
Punishment: Section 67 – imprisonment up to 3 years and fine on first conviction; Section 67A – imprisonment up to 5 years and fine on first conviction.

  • Section 69A – Power of the Government to Block Public Access

This section empowers the Central Government to direct intermediaries to block harmful or misleading deepfake content that threatens public order, national security, or morality. The provision is frequently invoked to curb viral deepfakes that are capable of causing communal tension or political instability.

Indian Penal Code, 1860 (IPC) and Bharatiya Nyaya Sanhita, 2023 (BNS)

The BNS replaced the IPC with effect from 1 July 2024; however, the IPC continues to govern offences committed before that date, and the provisions discussed below have close equivalents in the BNS. For continuity, the familiar IPC section numbers are used here.

  • Sections 465 & 468 – Forgery and Forgery for the Purpose of Cheating

Deepfakes, being artificially fabricated representations, qualify as digital forgeries. If such manipulated content is used to deceive individuals, tarnish reputation, or commit fraud, Section 468 (forgery intended for cheating) becomes relevant.
These sections address the creation and usage of digitally altered media with fraudulent intent.

  • Sections 499, 500 & 501 – Defamation

When deepfake videos or audio clips are used to harm the reputation, character, or public image of an individual, the offence amounts to defamation. Section 501 further penalises the printing or engraving of matter known to be defamatory. Even forwarding or circulating defamatory deepfake content can attract liability.

  • Section 507 – Criminal Intimidation by Anonymous Communication

Many deepfake crimes involve threats, blackmail, or extortion through anonymous accounts. Section 507 applies when a person uses a deepfake to threaten or coerce the victim while concealing their identity.

  • Section 509 – Words, Gestures or Acts Intended to Insult the Modesty of a Woman

Deepfakes targeting women — particularly sexually suggestive or obscene manipulated media — fall under this section. The provision is frequently invoked in cases involving violation of dignity, harassment, and online gender-based violence.

  • Section 354C – Voyeurism

This section penalises capturing, publishing, or circulating images or videos of a woman engaged in a private act. Courts have extended this to include digitally morphed pornographic deepfakes, recognising that virtual manipulation can be as violative as physical recording.

Why Deepfakes Pose Unique Legal Challenges

Deepfakes are not merely another form of digital manipulation; they fundamentally disrupt traditional principles of evidence, identity, accountability, and privacy. They create legal complications that existing laws were never designed to address. Below are the major legal challenges posed by deepfake technology, explained in a detailed and structured manner:

1. Difficulty in Identifying the Creator (Attribution Problem)

One of the most significant challenges is determining who actually created a deepfake.
Deepfake tools are:

  • Easily accessible and often free,
  • Frequently hosted on foreign platforms,
  • Usable anonymously, and
  • Operated through VPNs, encrypted channels, and burner accounts.

This anonymity makes attribution extremely difficult for law enforcement agencies. Even if a deepfake goes viral, the primary offender may never be identified. Without proper identification, criminal liability becomes weak and prosecution faces a severe evidentiary gap.

2. Lack of a Dedicated Legal Definition and Statutory Framework

India currently lacks a specific statutory definition of deepfakes or “AI-generated manipulated media.”
As a result:

  • Existing laws (the IT Act, IPC, and DPDP Act) are applied only indirectly,
  • There is no offence titled “deepfake creation” or “digital impersonation using AI,” and
  • Provisions meant for forgery, obscenity, or cheating must be stretched to fit deepfake scenarios.

This leads to inconsistent enforcement and judicial uncertainty in determining liability, intent, and punishment.

3. Challenges in Proving Intent and Mens Rea

Deepfakes can be created for various motives — humour, satire, fraud, harassment, or political manipulation.
However, proving criminal intent (mens rea) is extremely difficult because:

  • The creator may deny malicious motive,
  • The deepfake may have been altered or re-uploaded by several users, and
  • The victim’s harm may not link directly to the first uploader.

This raises a complex legal question:
Who is liable – the creator, the uploader, the sharer, or the platform?

Need for Stronger Regulation in India

The rapid expansion of deepfake technology has exposed significant gaps in India’s existing legal and regulatory framework. Although certain provisions of the IT Act, IPC, and DPDP Act apply to specific instances of deepfake misuse, these laws are fundamentally reactive, fragmented, and insufficient for addressing the scale, speed, and complexity of AI-generated manipulated media. Therefore, a comprehensive regulatory approach is urgently required for effective prevention, accountability, and victim protection.

1. Need for a Dedicated Legal Definition of Deepfakes

Currently, Indian laws do not explicitly define “deepfake,” “synthetic media,” or “AI-manipulated content.”
A clear statutory definition is essential because:

  • It creates legal certainty,
  • Helps law enforcement agencies classify offences correctly,
  • Enables courts to interpret AI-related crimes consistently,
  • Establishes a distinct category of digital harm.

A dedicated definition will serve as the foundation for all regulatory interventions.

2. Introduction of Deepfake-Specific Offences and Punishments

Existing laws penalise associated harms (defamation, obscenity, fraud), but they do not criminalise the act of creating a deepfake itself. India needs:

  • A specific offence for malicious deepfake creation,
  • A separate offence for malicious distribution or amplification,
  • Enhanced punishments for sexually explicit deepfakes of women and minors,
  • Clear gradation of offences based on intent, harm caused, and scale of dissemination.

Such provisions would make enforcement more direct, effective, and deterrent in nature.

3. Mandatory Disclosure and Watermarking of AI-Generated Content

To ensure transparency, all AI-generated media — including deepfakes — must contain identifiable metadata or digital watermarks.
Mandatory watermarking would help:

  • Distinguish real content from manipulated media,
  • Trace back the source of creation,
  • Assist courts and investigators in verifying authenticity,
  • Prevent misuse of anonymous AI tools.

This measure is crucial for both evidence integrity and user safety.
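As a rough illustration of the traceability such disclosure rules aim for, the sketch below shows a hypothetical provenance log: a content hash of a media file is stored alongside a disclosure flag, so a court or investigator can later check whether a file matches its registered record. All names here (`register_media`, `demo-model`) are illustrative assumptions, not any mandated Indian scheme.

```python
import hashlib
import json

def register_media(media_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Create a provenance record: a content hash plus disclosure metadata."""
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,        # hypothetical tool name
        "ai_generated": ai_generated,  # the disclosure flag regulation would require
    }

def verify_media(media_bytes: bytes, record: dict) -> bool:
    """Check whether a file still matches its registered provenance record."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

# Example: register a (stand-in) media file, then verify it later.
clip = b"example-video-bytes"
rec = register_media(clip, generator="demo-model", ai_generated=True)
stored = json.dumps(rec)                           # e.g. kept by the platform
print(verify_media(clip, json.loads(stored)))      # True: file unchanged
print(verify_media(clip + b"x", json.loads(stored)))  # False: file was altered
```

Real watermarking standards (for instance, C2PA-style content credentials) go further by embedding signed metadata inside the media file itself; this sketch only conveys the basic hash-matching idea behind authenticity verification.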

4. Faster and More Efficient Takedown Mechanisms

Deepfakes cause instant and often irreversible harm, frequently going viral before victims even realise they exist.
Therefore, India needs:

  • A fast-track takedown system,
  • Time-bound removal requirements (e.g., within 24 hours),
  • Emergency response cells for deepfake complaints,
  • Simplified procedures for victims to file takedown requests.

Intermediaries should be legally obligated to act swiftly on harmful deepfake content, especially those involving sexual exploitation or political misinformation.

5. Clearer Intermediary Liability Standards

Social media platforms currently enjoy safe harbour protection under the IT Act, provided they act as “mere intermediaries.”
However, deepfakes require platforms to play a more proactive role. Stronger regulation must ensure:

  • Mandatory AI-based detection tools,
  • Automatic flagging of suspicious content,
  • Transparent reporting mechanisms,
  • Accountability for repeated or negligent failures to remove deepfake material.

A balance must be struck between user rights and platform responsibility.

Conclusion

Deepfakes represent a modern form of digital harm — faster, more deceptive, and more damaging than traditional cybercrimes. Although India has multiple laws that can be applied, the rapid evolution of AI demands a more focused and dedicated legal approach. A combination of strong regulation, technological safeguards, and public education is essential to protect individuals, democracy, and national security.

As deepfakes become more advanced, India’s legal response must become equally sophisticated. This is not just a technological challenge — it is a legal and societal one.

Contributed By – Krishnkant Sharma (Intern)