Open Access Research Article

NAVIGATING THE LEGAL LANDSCAPE OF DEEPFAKES IN INDIA: PUNISHMENTS AND PROTECTIONS

Author(s):
SHYAMASIS SARANGI & NIDHI SHARMA
Journal IJLRA
ISSN 2582-6433
Published 2023/11/10
Access Open Access
Issue 7


AUTHORED BY - SHYAMASIS SARANGI,
Kirit P. Mehta School of Law, NMIMS Navi Mumbai
&
CO-AUTHOR - NIDHI SHARMA,
Kirit P. Mehta School of Law, NMIMS Navi Mumbai
 
 
Abstract:
Deepfake technology poses significant challenges to legal systems worldwide, raising concerns about privacy, misinformation, and digital manipulation. This research paper delves into the intricate legal landscape of deepfakes in India, focusing on punishments for offenders and the protections available to individuals and society. By analyzing existing laws, international collaborations, and technological advancements, the paper explores the effectiveness of legal frameworks in combating deepfake threats. The study emphasizes the importance of raising awareness, implementing stringent punishments, and leveraging advanced technologies to safeguard individuals from the malicious use of deepfakes.
 
Keywords: Deepfake Technology, Legislative Measures, Rashmika Mandanna
 
Introduction:
The rapid advancement of deepfake technology has revolutionized the digital landscape, enabling the creation of highly realistic fake videos and images. In India, as in many other countries, the proliferation of deepfakes has raised serious legal concerns. This research paper aims to navigate the legal complexities surrounding deepfakes in the Indian context, focusing on the punishments prescribed for perpetrators and the protections available to potential victims. Understanding the existing legal framework, international collaborations, and technological solutions is crucial in addressing the challenges posed by deepfake threats. The paper explores the evolution of deepfake technology, its implications for privacy and misinformation, and assesses the effectiveness of current legal measures. By examining punishments and protections, this study provides valuable insights into the strategies required to combat the malicious use of deepfakes and protect the integrity of individuals and institutions in the digital age.
 
Applications of Deepfakes:
Deepfake technology has a broad spectrum of applications, but it is predominantly used for nefarious purposes, posing significant risks to society, especially for individuals active on social media platforms. It is frequently misused to target celebrities and political leaders, with harmful consequences such as revenge, blackmail, and identity theft. Malicious intent is most evident in the creation of non-consensual adult content, particularly involving well-known actresses. Political figures, including world leaders such as Barack Obama and Donald Trump, have also been impersonated in fake videos, with potential consequences for global peace.
 
Despite these dangers, positive applications of deepfake technology are emerging across sectors as the technology becomes more accessible, enabling constructive use in art, film, and social media. The Dalí Museum, for instance, used deepfake technology to enhance visitor experiences, allowing interaction with Salvador Dalí’s life through artificial intelligence. In advertising and business, deepfakes power creative campaigns, such as replicating famous artwork, and allow videos to be edited efficiently in the film industry. The technology also aids educational initiatives: footballer David Beckham delivered a malaria-awareness message in multiple languages through a synthesized video. Other uses include virtual product experiences in retail, synthetic data generation in healthcare to safeguard patient privacy, personalized news presentations, and fundraising efforts that leverage synthesized videos of influential personalities to raise awareness and support for various causes. Even so, careful consideration and ethical use of this technology are essential to mitigate its negative impact on society.
 
Understanding the Legal Framework:
In India, the use of deepfakes raises complex legal questions, primarily falling under the purview of laws governing privacy, defamation, and cybercrimes. The Information Technology Act, 2000, specifically Section 66E, penalizes the capture, publication, or transmission of images of the private area of any person without their consent. This provision can be applied to deepfakes that infringe upon an individual’s privacy.
 
The current legal landscape in India does not specifically address deepfake videos. Existing laws, such as Section 67 of the Information Technology Act, 2000 (which criminalizes the publication of obscene material in electronic form) and Sections 499 and 500 of the Indian Penal Code (which define and punish defamation), reach only a narrow class of deepfakes, chiefly those involving sexually explicit or defamatory content. This limited scope fails to regulate the broader domains in which deepfakes can be misused.
 
In the context of the Right to Privacy, recognized as a fundamental right by the Supreme Court in K.S. Puttaswamy v. Union of India (2017), deepfake videos infringe upon individuals’ privacy rights. The court emphasized that privacy safeguards an individual’s autonomy and control over various aspects of their life. Deepfakes violate this right by manipulating personal information without consent, impacting both physical and mental privacy. Even without explicit legislation, legal claims against deepfake videos would likely succeed in court on the strength of these privacy violations.
 
Additionally, the Personal Data Protection Bill, 2019, which aimed to safeguard personal data (and was later withdrawn in 2022 in favour of the Digital Personal Data Protection Act, 2023), contains provisions that indirectly address deepfakes. The Bill defines personal data broadly, encompassing any characteristic of a natural person. Data fiduciaries, whether individuals or companies, must process data lawfully and obtain consent before processing individuals’ data. The Bill also introduces the “right to be forgotten,” allowing individuals to request the removal of unauthorized personal data from the public domain, backed by penalties for violations.
 
With the enactment of the Digital Personal Data Protection Act, 2023, which succeeded the 2019 Bill and retains consent-based processing and a right to erasure, the unauthorized creation and circulation of deepfake videos built on a person’s data may attract liability, offering legal recourse against this emerging threat.
 
Punishments and Penalties:
The punishments for creating and disseminating deepfakes in India can be severe. Under the Information Technology Act, 2000, Section 66E prescribes imprisonment for up to three years, or a fine which may extend to two lakh rupees, or both, for capturing, publishing, or transmitting images of the private area of an individual without their consent. Moreover, Section 67 of the Act deals with the publication or transmission of obscene material in electronic form, including obscene deepfakes, and can lead, on first conviction, to imprisonment for up to three years and a fine which may extend to five lakh rupees.
 
Additionally, deepfakes can fall within the ambit of defamation law. Sections 499 and 500 of the Indian Penal Code respectively define defamation and prescribe its punishment: simple imprisonment for up to two years, a fine, or both. If a deepfake is used to tarnish someone’s reputation, these provisions can be invoked.
 
Challenges in Enforcement:
While the legal framework exists, enforcing these laws poses challenges. Identifying the creators of deepfakes can be difficult due to the complex nature of digital technologies. Moreover, the rapid evolution of deepfake techniques makes it challenging for law enforcement agencies to keep pace with the technology.
 
Identification of Creators:
One of the primary challenges lies in identifying the creators of deepfakes. The intricate web of digital technologies allows malicious actors to conceal their identities effectively. Forensic techniques, although advancing, struggle to keep up with the complexity of deepfake creation methods, making it hard for law enforcement to pinpoint the culprits.
 
Technological Complexity:
Deepfake technology is constantly evolving, incorporating sophisticated algorithms and machine learning techniques. This rapid evolution means law enforcement agencies need to continuously update their knowledge and tools to detect and combat new variations of deepfakes effectively. Keeping up with these advancements demands significant resources and expertise.
 
Legal Framework Adaptation:
While a legal framework exists, it often struggles to keep pace with the fast-changing landscape of deepfake technology. Laws and regulations might not be specific enough to address all nuances of deepfake creation, distribution, and malicious intent. Consequently, there could be gaps in the legal system that allow deepfake creators to escape accountability.
 
International Jurisdiction:
Deepfakes can easily cross international borders through the internet, making it challenging to enforce laws effectively. Jurisdictional issues arise when perpetrators operate from one country, the victim resides in another, and the data servers are located elsewhere. Coordinating international efforts to tackle these cross-border challenges is a complex and time-consuming process.
 
Public Awareness and Education:
Another challenge lies in educating the public about the existence and potential threats of deepfake technology. Lack of awareness among the general populace hampers reporting efforts, making it difficult for law enforcement agencies to track down instances of deepfake manipulation. Public education campaigns are crucial to encourage vigilance and responsible online behavior.
 
Resource Allocation:
Enforcing laws against deepfake technology requires substantial resources, including skilled personnel, advanced technology, and financial investments. Law enforcement agencies might struggle with limited budgets and priorities, leading to resource constraints in investigating and prosecuting deepfake cases effectively.
 
Ethical and Privacy Concerns:
Addressing deepfake technology raises ethical dilemmas and privacy concerns. Striking a balance between combating malicious use of deepfakes and respecting individual privacy rights is a challenge. Crafting laws and policies that protect both privacy and security is a delicate task that requires careful consideration.
 
Protecting Against Deepfakes:
To combat the threat of deepfakes, raising awareness about the existence of such technology and its potential consequences is crucial. Media literacy programs and educational initiatives can empower people to identify and report deepfakes. Additionally, technological advancements, such as deepfake detection tools, can assist in identifying manipulated content and mitigating the risks associated with these malicious creations.[1]
 
Media Literacy and Awareness:
Raising awareness about deepfake technology and its potential consequences is fundamental in protecting individuals and society. Media literacy programs can educate people about the existence of deepfakes, how they are created, and the risks they pose. By understanding the technology, individuals are better equipped to identify suspicious content and discern between real and manipulated information.
 
Educational Initiatives:
Educational initiatives targeting various age groups, from school students to adults, play a crucial role. Integrating modules about digital literacy, critical thinking, and online safety into school curricula can empower the younger generation to navigate the digital landscape responsibly. Workshops, seminars, and online courses for adults can enhance their awareness of deepfake threats and teach them how to protect themselves from falling victim to misinformation.
 
Technological Advancements:
Developing and deploying advanced deepfake detection tools is essential for mitigating the risks associated with malicious creations. These tools leverage artificial intelligence and machine learning algorithms to analyze multimedia content, identifying inconsistencies and markers of manipulation. Continuous research and development in this field are necessary to stay ahead of evolving deepfake techniques.
 
User-Friendly Verification Tools:
Creating user-friendly verification tools can empower individuals to confirm the authenticity of media content. These tools could include browser extensions, mobile apps, or online platforms where users can upload images or videos to check for manipulations. User accessibility and ease of use are crucial factors in ensuring widespread adoption of such tools.
 
Collaboration with Tech Companies:
Collaboration between government agencies, law enforcement, and technology companies is essential. Tech companies can contribute by investing in research and development of deepfake detection algorithms, implementing stricter content policies, and enhancing their platforms' security features. Public-private partnerships can facilitate the sharing of expertise and resources in the fight against deepfakes.
 
Legal Framework and Policies:
Strengthening the legal framework and policies related to deepfakes is vital. Clear and specific laws addressing deepfake creation, distribution, and malicious intent can act as a deterrent. Penalties for offenders should be severe enough to discourage malicious actors. International cooperation is also necessary to address cross-border legal challenges related to deepfake activities.
 
Ethical AI Development:
Promoting ethical development and use of artificial intelligence is crucial. Ethical guidelines should be established to ensure that AI technologies, including those used in deepfake detection, are developed and deployed responsibly, respecting privacy and human rights. Transparency in AI algorithms and decision-making processes is essential for building trust among users.
 
Recent Cases:
1. Rise in Fake Political Campaigns:
In 2022, during a state election in India, deepfake videos of political candidates surfaced online, manipulating their speeches and stances on important issues. These videos aimed to mislead voters and damage the candidates’ reputations. Several individuals involved in creating and spreading these deepfakes were arrested under the Information Technology Act and IPC Sections related to defamation.
 
2. Fake Celebrity Scandals:
Deepfake videos involving celebrities engaged in explicit or controversial activities have been widely circulated on social media platforms. In 2023, a prominent Bollywood actress fell victim to such a deepfake scandal, leading to legal action against the perpetrators. The accused were charged under the Information Technology Act and IPC Sections pertaining to privacy violations and defamation.
 
3. Employment Fraud:
Deepfakes have also been used in employment fraud, where job applicants’ resumes and interview videos are manipulated to fabricate qualifications and work experience. Companies have suffered financial losses from hiring unqualified candidates. Law enforcement agencies intervened, and the culprits were prosecuted under the Information Technology Act, which penalizes identity theft (Section 66C) and cheating by personation using a computer resource (Section 66D).
 
In a recent alarming incident, renowned actress Rashmika Mandanna became embroiled in a controversy surrounding a viral deepfake video. The video, circulating widely on social media, depicts a woman entering an elevator, her face digitally manipulated to resemble Mandanna’s. This incident has stirred significant apprehension, prompting widespread demands for legal intervention. Notably, Bollywood icon Amitabh Bachchan, Mandanna’s co-star from the movie “Goodbye,” has expressed his distress over the rise of deepfake content and has actively advocated for legal measures to combat this disturbing trend.[2]
 
Proposed Legislative Amendments:
There have been instances where programs have been created to generate nonexistent human faces. This technology could be misused to produce deepfake videos, which can then be uploaded on social media through fake accounts. Currently, Indian laws do not cover this specific offense, as personation laws require the substitution of a real person, which is not the case with deepfake technology.
 
To address this issue, potential legislative amendments could be made. One approach would be to modify the intermediary obligations now contained in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which superseded the Intermediary Guidelines Rules, 2011. Intermediaries could be required to include clauses in their privacy policies prohibiting users from uploading any form of deepfake content. For such a prohibition to be effective, however, the government would also need to establish reliable deepfake detection mechanisms.
 
Additionally, there are no provisions in the current legislation to protect the data of deceased individuals. This gap is concerning, especially considering the possibility of deepfake videos being created to manipulate public opinion or disrupt criminal trials. To rectify this, the law should extend data protection to deceased individuals. The proposed amendments should mandate obtaining consent from the heirs of the deceased before using any of their data for public circulation. Similar provisions, such as those found in the Privacy Act of Hungary and the Spanish Data Protection Act, could serve as models to empower heirs and interested parties to file suits in cases involving the unauthorized use of data belonging to deceased individuals.
 
 
 
Conclusion:
As India grapples with the challenges posed by deepfakes, the existing legal framework offers punishments and protections against their malicious use. However, effective enforcement, coupled with awareness programs and technological solutions, is essential to combat the growing threat of deepfakes and safeguard individuals’ privacy, reputation, and security in the digital age.
 
 

[1] Chenxi Wang, ‘Deepfakes, Revenge Porn and the Impact on Women’ (Forbes, 1 November 2019), accessed 15 September 2020
[2] ‘Rashmika Mandanna deepfake controversy: What is a deepfake video and how can you spot one’ (Business Today, 6 November 2023) https://www.businesstoday.in/technology/news/story/rashmika-mandanna-deepfake-controversy-what-is-a-deepfake-video-and-how-can-you-spot-one-404734-2023-11-06
