Introduction to Deepfakes

"Deepfake" refers to a form of artificial intelligence technology that creates synthetic media, including audio, video, and photos, using machine-learning techniques, most notably generative adversarial networks (GANs). The aim of deepfake technology is to produce extremely lifelike synthetic media that mimic real people while manipulating some feature of the content. Two methods underpin it: deep learning and generative adversarial networks. Deep learning is a branch of machine learning that uses artificial neural networks, algorithms inspired by the structure and operation of the brain, to process and analyse massive amounts of data. It has been applied successfully in numerous fields, including robotics, computer vision, natural language processing, and speech recognition.

A generative adversarial network (GAN) is a deep-learning architecture in which two neural networks, a generator and a discriminator, are trained on a dataset to produce new synthetic data that closely resembles the original. The generator produces fake samples, while the discriminator judges whether a given sample is genuine (drawn from the training dataset) or generated. Training is adversarial: the discriminator tries to tell the generated samples apart from the genuine ones, and the generator tries to produce samples that fool the discriminator. This process continues until the generator can produce synthetic data that is strikingly lifelike.
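As a toy illustration of this adversarial objective (a minimal sketch, not any particular GAN implementation; the function names and logit values are invented), the two losses can be written down directly: the discriminator is penalised for mis-scoring real and fake samples, while the generator is rewarded when its fakes are scored as real.

```python
import numpy as np

def sigmoid(x):
    """Map a raw score (logit) to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(real_logits, fake_logits):
    """Binary cross-entropy: real samples should score 1, fakes 0."""
    return -(np.mean(np.log(sigmoid(real_logits)))
             + np.mean(np.log(1.0 - sigmoid(fake_logits))))

def generator_loss(fake_logits):
    """Non-saturating generator loss: fakes should be scored as real."""
    return -np.mean(np.log(sigmoid(fake_logits)))

# A confident, correct discriminator (real scored high, fake scored low)
# incurs a low loss; a fooled one (fake scored high) incurs a higher loss.
sharp = discriminator_loss(np.array([4.0]), np.array([-4.0]))
fooled = discriminator_loss(np.array([4.0]), np.array([4.0]))

# The generator's loss falls as its fakes are scored closer to "real".
g_bad = generator_loss(np.array([-4.0]))
g_good = generator_loss(np.array([4.0]))
```

In training, each network's weights are updated by gradient descent on its own loss in alternation, which is what drives the arms race described above.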

Creation of Deepfakes

Deepfakes are constructed with generative adversarial networks (GANs), a machine-learning approach. A GAN consists of a generator network and a discriminator network, trained on a sizeable dataset of real photos, videos, or audio. The generator produces artificial data that mimics the real data in the training set, such as a synthetic image. The discriminator then judges whether that data is genuine and, in effect, gives the generator feedback on how to improve its output. This process is repeated many times, the two networks learning from each other, until the generator produces synthetic data that is remarkably lifelike and difficult to distinguish from the real thing. The resulting models can be applied in several ways to produce image and video deepfakes:

(a) face swap: replacing the face of the person in the video with that of another person;

(b) attribute editing: change characteristics of the person in the video, e.g., style or colour of the hair;

(c) face re-enactment: transferring one person's facial expressions onto the person in the target video; and

(d) fully synthetic material: the model is trained on real material to learn what people look like, but the resulting picture is entirely made up.
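Of these, the classic face-swap pipeline is often built not as a GAN but as a pair of autoencoders: one encoder shared between the two identities and a separate decoder per identity, so that encoding person A's face and decoding it with person B's decoder produces the swap. The sketch below uses random, untrained linear maps purely to show that plumbing; all sizes and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG, LATENT = 64, 16  # toy flattened-image and latent sizes

# One shared encoder, one decoder per identity: the classic
# face-swap autoencoder layout (weights random and untrained here).
W_enc = rng.normal(size=(LATENT, IMG))
W_dec_a = rng.normal(size=(IMG, LATENT))
W_dec_b = rng.normal(size=(IMG, LATENT))

def encode(face):
    """Map a flattened face image to a shared latent code."""
    return W_enc @ face

def decode(latent, W_dec):
    """Reconstruct a face from a latent code with one identity's decoder."""
    return W_dec @ latent

face_a = rng.normal(size=IMG)
# The swap: encode person A's face, decode with person B's decoder.
swapped = decode(encode(face_a), W_dec_b)
```

Because the encoder is shared, the latent code captures pose and expression common to both identities, and the choice of decoder determines whose face is rendered.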

Detection of Deepfakes

It is crucial to remember that deepfake technology is always developing, so detection algorithms must be updated regularly to keep pace with the most recent advances. At the moment, the most effective way to judge whether a piece of media is a deepfake is to combine several detection methods, such as analysing visual artefacts (unnatural blinking, lighting, or skin texture), checking audio-video synchronisation, inspecting file metadata, and applying trained forensic classifiers, and to be wary of anything that looks too good to be real.
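As one concrete (and deliberately simplified) example of artefact-based detection, some detectors examine an image's frequency spectrum, since certain GAN upsampling layers can leave characteristic high-frequency traces. The heuristic below, with invented window sizes, merely measures how much spectral energy lies outside the low-frequency centre of the image; a real detector would learn its decision rule from labelled data rather than use a hand-set threshold.

```python
import numpy as np

def high_freq_energy_ratio(img):
    """Toy forensic score: fraction of spectral energy outside the
    low-frequency centre of the 2-D FFT of a grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 4  # illustrative low-frequency window radius
    low = power[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / power.sum()

# A flat image has essentially all its energy at the DC component,
# so its score is near zero; pure noise spreads energy everywhere.
flat_score = high_freq_energy_ratio(np.ones((32, 32)))
noise_score = high_freq_energy_ratio(
    np.random.default_rng(0).normal(size=(32, 32)))
```

In practice such spectral features would be one input among many to a classifier, alongside temporal and physiological cues.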

Illustrations of various crimes committed using deepfakes

There are numerous ways in which deepfake technology could be used to commit crimes. The technology is not dangerous in and of itself, but it can become a weapon for offences against people and society.

Offences that can be committed using deepfakes include financial fraud and impersonation scams, identity theft, defamation, extortion and blackmail, the creation of non-consensual explicit imagery, and the spread of political misinformation.

Conclusion and suggestions

The existing legal framework in India does not adequately address cyber offences arising from the use of deepfakes. The absence of specific provisions on artificial intelligence, machine learning, and deepfakes in the IT Act, 2000 makes it challenging to govern their use effectively. It may be necessary to amend the IT Act, 2000 to include measures that expressly address the use of deepfakes and the penalties for their misuse. This could include stronger legal safeguards for people whose likenesses or photos are exploited without their permission, as well as harsher punishments for those who produce or disseminate deepfakes with malicious intent. It is equally noteworthy that deepfakes are a global problem, and effective regulation and enforcement of privacy laws will likely require international cooperation and coordination. Until such reforms arrive, people and organisations should be cautious in verifying the legitimacy of material they come across online and mindful of the potential threats posed by deepfakes.

In the meantime, governments can do the following:

(a) The first is the censorship strategy, under which publishers and intermediaries are ordered to block public access to false information.

(b) The second is the punitive strategy, which holds companies or individuals accountable for creating or spreading false information.

(c) The third strategy is known as the “intermediary regulation approach,” which requires internet intermediaries to promptly delete false information from their platforms. If they don’t, they risk penalties under Sections 69-A and 79 of the Information Technology Act of 2000. 

Adv. Khanak Sharma (D\1710\2023)


