Open Access Research Article

Author(s): SIDDHI JAISWAL
Journal: IJLRA
ISSN: 2582-6433
Published: 2025/03/12
Access: Open Access
Issue: 7

JUSTICE IN AI-GENERATED WORLD AND ITS LEGAL LANDSCAPES
 
AUTHORED BY: SIDDHI JAISWAL
Law Student
Narsee Monjee Institute of Management Studies, KPMSOL, Mumbai
 
Abstract
The rapid advancement of Artificial Intelligence (AI) has ushered in a new era of creativity in which machines can generate content that rivals human-produced works. This technological revolution presents complex legal challenges, such as the difficulty of distinguishing AI-generated evidence from evidence that is not AI-generated. As AI-powered tools have grown more sophisticated, they can produce deepfake videos, images, audio, and other forms of fabricated content that can easily be mistaken for authentic material and presented as evidence in court. This raises significant questions about the credibility and admissibility of AI-generated evidence in legal proceedings. The paper also examines how the emergence of AI-generated content challenges traditional notions of authorship, originality, and ownership. AI systems can generate creative works, such as text, images, and music, that may qualify for copyright protection under intellectual property law. However, the lack of human authorship raises complex legal questions about who owns the rights to such works and who should be held liable where a civil or criminal offense has been committed, which the author discusses through case studies. This paper thus aims to contribute to the ongoing discourse on AI-generated evidence and the challenges it poses to substantive IP law in the digital age. It concludes with recommendations for policymakers, legal practitioners, and AI developers on navigating the complexities of AI-generated content.
 
Keywords - Artificial Intelligence, evidence, admissibility, ownership, copyright, IPR.
 
INTRODUCTION
The rise of AI systems as creators and contributors across different spheres of life has severely challenged traditional legal frameworks and created an urgent need for legal reform. This paper covers two major areas of these challenges: the admissibility and probative weight of AI-generated evidence in courts, and the disruption of intellectual property law by AI-generated content. A combination of cases and analyses demonstrates how AI deepfakes, fabricated evidence, and AI-generated art raise nuanced questions about the principles of authenticity, authorship, ownership, and liability.
 
With respect to legal evidence, it is the very sophistication of AI technologies that complicates the courts' task of endorsing authenticity by distinguishing the genuine from the fabricated. Existing laws, originally developed for evidence produced by human beings, create hindrances for the identification of AI-generated evidence. Likewise, IP laws, derived from humanistic concepts of authorship and originality, conflict with a reality in which AI contributes to creation with less human involvement than the law ever anticipated. This gap between old doctrine and new methods is the prime argument for much-needed reform. To deal with these problems, a balanced approach combining legal, technological, and ethical solutions is imperative. Courts must adopt advanced mechanisms and expertise for evaluating AI-generated evidence, and IP laws must be restructured to make room for AI systems that now contribute substantially to the creative process. Stakeholders, including policymakers, legal practitioners, and AI developers, need to work together to ensure that guidelines are unambiguous, aligned, and equitable for all parties concerned. Adjusting the law in this way allows courts to handle AI-generated content while preserving the integrity of judicial proceedings and intellectual property protection. Approached progressively, such reform can enable AI's revolution to unfold without compromising the standards of justice, equity, and creativity.
 
An exploration of the interaction between AI, legal frameworks, and social expectations addresses these concerns in all their multifaceted dimensions. By closely examining actual situations such as the DABUS applications and disputes concerning deepfake evidence, this paper critically evaluates the quandaries presented by AI-generated content within both the judicial process and intellectual property law. The analysis illustrates how AI questions traditional legal concepts, points to inconsistencies across jurisdictions, and underscores the need for urgent reform. In summary, this study aims to clarify the strained relationship between AI technology and the law and to offer recommendations for policymakers, legal professionals, and AI developers. By addressing the credibility of AI-generated evidence, the authorship and ownership of AI-created works, and the accountability of responsible parties, it seeks to inform the conversation about how to regulate AI technologies in a morally, socially, and legally acceptable manner.
 
LITERATURE REVIEW
1.      Maura R. Grossman (2023): The paper discusses the evidentiary issues that the bench and bar must address to determine whether actual or asserted (i.e., deepfake) GenAI output should be admitted as evidence in civil and criminal trials, and offers practical recommendations for courts and attorneys in meeting the evidentiary challenges posed by GenAI.
2.      Akerman LLP (2024): This article delves into the multifaceted challenges presented by AI-generated evidence and explores the principles governing its admissibility. It suggests developing clear standards and regulations, educating legal professionals on the capabilities and limitations of AI, and promoting the ethical development of AI.
3.      V. K. Ahuja (2020): The article discusses the legal position in several countries and deals comprehensively with several models of authorship in AI-generated works. It also covers the discussions taking place at WIPO on this subject.
4.      Paul W. Grimm (2021): This paper explores the issues that govern the admissibility of Artificial Intelligence (AI) applications in civil and criminal cases. It provides a detailed yet intelligible discussion of what AI is and how it works, a history of its development, and a description of the wide variety of functions it is designed to accomplish.
5.      Eftychia Bampasika (2020): This paper highlights the interplay between criminal justice systems and AI technology, focusing on AI employed as an evidentiary tool and the threat this can pose to the justice system.
 
RESEARCH METHODOLOGY
The author has used secondary research, collecting information from case studies, articles, and research papers by other authors. The author has read and analyzed these sources and formed an independent analysis of the research topic. The research is qualitative and comparative in nature. Citations follow the OSCOLA 4th edition format.
 
OBJECTIVES
·         To explore the admissibility and credibility of AI-generated evidence in legal proceedings.
·         To determine the evidentiary principles required for addressing the admissibility of AI-generated evidence.
·         To address the challenges AI-generated evidence faces in being admitted in court.
·         To investigate the impact of AI-generated works on traditional notions of authorship, originality, and ownership under intellectual property law.
 
AI-generated content presents substantial challenges to legal systems, particularly in terms of the authentication and admissibility of evidence. Additionally, it disrupts conventional intellectual property frameworks by challenging the established notions of authorship and ownership.
 
Admissibility refers to the state or characteristic of being acceptable as evidence. In legal terms, "evidence" is anything the court accepts to affirm or deny the alleged facts in a trial. Consequently, any evidence in the form of a document or physical exhibit presented before a court of law can be termed admissible evidence; the court allows only relevant and credible evidence into the courtroom and excludes all else. The Bharatiya Sakshya Adhiniyam (BSA) defines the legal term "admission." Legally speaking, an admission, that is, agreement or concurrence with another person's statement, differs from a confession in that the former requires prior investigation by another party, whereas the latter does not. An admission made outside of court that is accepted as evidence can be a fact, point, or statement. Electronic evidence has been a crucial component of legal procedure since the invention of electronic devices and the internet. Electronic mail, SMS, security footage, and even social media posts are routinely used to ascertain facts and support legal arguments in court. As electronic evidence grows increasingly common, emerging AI-generated evidence presents a whole new challenge.
 
Electronic evidence has become a fundamental element of modern court cases. Emails, texts, social media posts, and even security footage are woven into the facts of an argument or case. While the judicial system has procedures in place to manage electronic evidence, AI-generated evidence adds a new level of complexity. The growing sophistication of AI-generated content poses serious problems for the legal system, particularly in determining reliability and admissibility. Traditionally, courts relied on human knowledge and well-established forensic instruments; now they face the task of distinguishing genuine content from AI-generated content. A capable litigator, before stating any matter, considers how the facts can be presented as good evidence. Although the parties give the lawyer the relevant documents and tell their story, the rules of evidence may in certain cases demand much more: additional witnesses are often required to avoid hearsay objections or to authenticate texts or written documents. With the advent of expert testimony, the considerations become increasingly complex. AI raises multiple questions about the legitimacy of the evidence it might present in court, and AI's inferences will not be acceptable unless their reliability is established. For instance, if AI is used to interpret social media posts, it could be configured to search for certain keywords or phrases referring to particular behaviors.
 
JUDGES' PERSPECTIVE
It is difficult for judges to decide whether such evidence can be admitted in court due to problems of bias, interpretability, transparency, and dependability. Generative AI has also amplified false information and incorrect data, making it hard for judges to trust that such content is true. Recently, as widely reported in the media, generative AI was used to create deepfake, sexually explicit images of singer-songwriter Taylor Swift. A picture of Pope Francis in a clean white jacket caused a buzz because it appeared to be an authentic photograph.
 
Imagine a picture depicting a public figure committing a crime or other illegal activity. How might the court or a lawyer demonstrate that such a picture is authentic? If it is an AI-generated image, how will the judge know? In addition, the opacity of AI algorithms undermines transparency, and bias in training data may lead to discriminatory findings, among the many dangers that compromise the validity and dependability of such evidence. There are no established protocols for confirming AI-generated evidence, which makes decision-making difficult. Self-driving automobiles offer another very practical example of the issues with electronic evidence: it remains unclear how results from a drowsiness detector would be applied in adversarial or inquisitorial legal systems to determine liability for an accident. We have to check the data of the AI system for correctness and limits, assign blame for mishaps or disagreements, and understand the logic behind its judgments.
 
SIGNIFICANT ISSUES FOR LAWYERS AND JUDGES
In most cases, photographic evidence speaks for itself, but it still needs a story, and an AI-generated picture creates problems in that scenario. For instance, a deepfake may depict a high-ranking official committing a crime that he never in fact committed. It will be very challenging for a judge or attorney who wants definitive proof of the scenario to establish the truth of the incident. How, then, can the courts possibly know whether a picture is AI-made rather than real? The opaque steps involved in creating AI systems and prejudice in their training data can produce inaccurate results, and numerous other risks call the reliability of such evidence into question. The lack of established standards for proving AI-generated evidence complicates the decision-making process. State governments are considering this problem and working to enact legislation addressing concerns related to generative AI. Evidence, in this context, means the leads that help identify the culprit beyond a reasonable doubt. However, if machines take over human tasks using their own "brain," it becomes difficult to determine who is responsible and to whom liability should be assigned. Further, in the case of an autonomous vehicle that can function with no human control, electronic and AI-related evidence may be difficult to assess.
 
For example, it is not clear how the findings of a drowsiness detector may be used in adversarial or inquisitorial legal systems to determine who is responsible for an accident. Could this information be used as evidence to determine the mastermind behind the accident? Since machines can now think like humans, how will the concept of mens rea change? Could machine data based on human-machine interaction be used as evidence? We have to scrutinize the correctness and limits of the dataset used by the AI system, attribute blame for mishaps or disagreements, and understand the rationale behind its decision-making.
 
THE ROLE PLAYED BY JUDGES
Judges will be expected to gain an adequate understanding of how AI works in order to make intelligent decisions about admitting evidence created using such systems. That understanding must cover the factors that make such evidence admissible or inadmissible, including the ways AI can misrepresent reality, as in the case of deepfakes.
 
The UNESCO article "The Admissibility Challenge: AI-Generated Evidence in the Courtroom" aims to familiarize legal practitioners with the role of AI-generated evidence in judicial processes and with the challenges likely to arise in determining whether such evidence may be admitted. Those challenges include questions about the reliability and authenticity of AI-generated evidence, as well as the capacity of judicial actors to develop the competencies needed to comprehend and evaluate its technical aspects. The article also provides valuable guidance on best practices for building judicial capacity on AI-generated evidence issues. As AI technology progresses, AI-generated evidence has become increasingly common in courts, yet the legal system still struggles with how to handle this kind of evidence.
 
AI AS A DATA ANALYSIS TOOL IN THE LEGAL FIELD
AI can be used to scrutinize data in ways that were previously impossible. For instance, AI can analyze large amounts of data and identify patterns or correlations relevant to a factual dispute. Lawyers have already seen this sort of tool used by e-discovery vendors to assist in the initial review of large discovery productions in complex cases. One area where proponents claim AI may be particularly useful is the analysis of audio and video recordings; similarly, AI could be used to analyze facial expressions and body language. However, such uses are objected to as potentially misleading.
 
The integration of AI into legal proceedings presents both an opportunity and a hindrance, particularly with respect to the admission of evidence. The picture is not bright: only a handful of court rulings have thus far addressed the admissibility of AI-generated evidence, and most of these have given the issue little thought. By contrast, the admissibility of evidence has always been determined by some important factors: fairness, authenticity, relevance, and reliability. AI-generated evidence adds yet another layer of complexity, because the AI's operation must also be explained to show how it produced the evidence. The decision-maker must understand the basic operations of the AI system, how it functions, and how it generates output in order to accord appropriate weight to the evidence.
Under the Federal Rules of Evidence (FRE), trial judges mostly act as gatekeepers in determining the admissibility of relevant evidence, including AI-generated content. The party offering admittedly AI-generated content must explain how it operates and how the content will aid, rather than confuse, the jury in reaching a just verdict. This will involve disclosing enough information about the training, development, and operational features of the AI model for both the opposing party and the judge to measure its reliability.

Multiple factors play a pivotal role in determining the admission and relevance of AI evidence. These include the accuracy and reliability of the AI system, the interpretability of its algorithms, and the privacy invasions entailed by massive amounts of data. Other considerations, such as bias in the AI system, timing, and AI's basic constraints, can likewise influence the relevance and admissibility of such evidence in court. The authenticity of AI-generated evidence depends on showing, under FRE 901(a), that the evidence is what it purports to be; this is the point at which the jury will be most influenced by it. Authentication may rest on witness testimony under FRE 901(b)(1) or on evidence describing a system or process under FRE 901(b)(9).[1] However, authenticating evidence that uses AI is not an easy task. The major difficulties involve the lack of transparency of the algorithms governing AI technologies, bias in training datasets, the provenance and quality of the data used, the technologies' compliance with the law, and the absence of any settled legal understanding of artificial intelligence as a technology. These pose enormous barriers to authentication and thereby inherently call the accuracy and reliability of the evidence into question.
 
RELIABILITY OF AI-GENERATED EVIDENCE
Programs leveraging artificial intelligence, including but not limited to Gemini, Copilot, and ChatGPT, have radically changed our approach to many facets of modern life. It stands to reason that, as AI develops further, it will likely be called upon to play a pivotal role in more and more court cases. How reliable will AI be in determining whether someone is telling the truth? How accurate will AI be in reading body language and facial expressions? From a purely legal angle, it becomes incumbent upon lawyers not only to analyze the data the AI examines but potentially also the AI's own programming. In reality, an expert, who is undeniably costly, conducts this analysis instead of the lawyer; so while an AI tool looks useful, it tends to aggravate the already high cost of litigation for the average citizen. The AI programs may themselves be error-prone: like all software, AI is fallible. This means evidence developed through AI may not be reliable and may face major hurdles to admissibility.
 
CONSIDERING AI AS EVIDENCE
Litigation in the present context involves complicated business and intellectual property issues; accordingly, worries about trade secrets, financial data, and other sensitive information continue to mount. AI tools could simplify the examination of this data in certain ways, but introducing such analysis as evidence adds another layer of complexity and cost. For instance, electronic signatures on certain classes of documents may help prevent counterfeiting, but proving the reliability of the electronic signatures themselves may require several levels of expert testimony. To address these challenges, courts will have to develop new standards and protocols for the admissibility of AI-generated evidence. This may involve setting guidelines on the use of AI in litigation, including allowing litigants to review the source code of the AI program, establishing protocols for the disclosure or certification of AI-generated evidence, or requiring independent third parties to confirm the evidence before it is used in court. Judges may be unduly influenced by the convincing nature of AI-generated content, especially if the technology is not understood; for example, the realistic appearance of deepfakes may deceive fact-finders and lead to miscarriages of justice. Thus, the court system's ability to investigate and understand the underlying technology is inextricably connected to the reliability of AI-generated evidence. The court's rejection of AI-enhanced video evidence in State of Washington v. Puloka[2] reveals the judiciary's reluctance toward AI-generated information, and the judgment raises issues regarding the inability of existing legal frameworks to justly assess the validity and reliability of such evidence.
 
CHALLENGES FACED BY AI-GENERATED EVIDENCE IN COURT
The infusion of artificial intelligence into the justice system presents a revolutionary change, with its share of innovations and dilemmas. Following a number of scandals in which lawyers put before a court legal briefs supported by fictitious case citations fabricated by an artificial intelligence mechanism, several judges have updated their standing orders to address specifically how AI-generated content in filings is to be handled. However, as AI-produced evidence becomes increasingly common and indistinguishable from the non-AI variety, courts will also have to deal with considerable and complex issues regarding the authenticity, reliability, and admissibility of evidence generated by AI. Of serious concern to the courts is the use of AI to create manipulated videos, images, or "deepfakes": artificially produced images, video clips, and audio recordings that are fake but can pass for the real thing and could taint an entire trial. In the area of intellectual property, for example, a deepfake could illegally exploit a person's image and likeness, trademarks, and branding. If a deepfake uses copyrighted material, it might give rise to copyright infringement claims and stir controversies about authorship or inventorship; these issues will certainly accompany the introduction of AI-generated evidence at trial.
1.       Evidence purportedly generated by AI includes a range of materials, from documents, photos, videos, and synthesized media to conclusions developed by machine learning algorithms. One primary concern is the veracity, or reliability, of the evidence AI generates. Traditional evidence can often be traced and verified through human agency, while AI-generated content and images may lack clear provenance or may be born of complicated processes that are difficult to audit. This raises problems in establishing that such evidence has not been tampered with and was not generated from mishandled data.
2.       In addition, authentication and the chain of custody of AI-generated proof remain major challenges. The legal system demands very high standards for the admissibility of evidence, and AI's opacity complicates those conditions, making it hard for parties to demonstrate the integrity of a piece of evidence generated by AI rather than by humans. Courts may require a party to demonstrate how AI-generated evidence was collected, relied upon, and introduced into evidence, thereby delaying or extending the discovery phase and any trial. This could further complicate matters, since the additional burden on litigants and the courts may ultimately affect the success of a party's claim.
3.       Moreover, due to the increasing intricacy of many AI systems, expert testimony may be required to clarify how the evidence was generated. That testimony involves extremely specialized information that will not always be within the grasp of jurors or judges, and it necessitates hiring highly experienced specialists, further raising the cost of litigation beyond the reach of many. It also places additional strain on an already burdened legal system, as the court must assess both the credentials of the experts and the soundness of their conclusions.
4.       Debates may concern the validity of the algorithms, the scope for biases or mistakes in inferred results, and the integrity of the empirical data offered to any AI system. Indeed, the accuracy of AI systems depends on the quality of their underlying data, and that data is drawn from history, which is full of biased instances; AI functions may reinforce, multiply, or perpetuate such biases. This raises serious questions of fairness in the appreciation of evidence, including the possibility of AI producing evidence that works adversely against certain groups.
5.       Since the majority of AI systems work with large datasets, especially those containing sensitive personal data, privacy and data protection will become an issue. If parties submit AI-generated evidence, issues may arise under personal data and privacy rules, especially where there is doubt over the legitimacy of the data collection and processing.
6.       The opacity of AI algorithms also complicates the acceptance of evidence in court, where the logic on which the evidence is based is just as essential as the evidence itself. Courts may therefore require both the presentation of the evidence and its justification, failing which AI-generated evidence might be rejected as lacking validity.
7.       In a nutshell, AI technology is changing so rapidly that it is hard for the legal system to catch up and for practitioners to stay abreast of the best ways of handling, interpreting, and challenging AI-generated evidence. These developments pose some of the biggest challenges for parties, their counsel, and the courts in determining whether evidence is authentic or fabricated.
8.       There are also concerns about considerably more expensive litigation, as parties would be obliged to bring in forensic experts to evaluate the admissibility of AI-generated evidence; about juries distinguishing fake from real evidence; and about whether courts will face an avalanche of litigation involving AI-generated evidence.
AI-GENERATED CONTENT AND INTELLECTUAL PROPERTY RIGHTS
The contribution of generative AI has radically changed how intellectual property rights are traditionally viewed, a development with far-reaching implications for technology, the economy, and society itself. Historically, intellectual property rights have played their own unique role in motivating creativity and safeguarding the creations of authors. Advancing AI technology creates problems and complexities for this traditional concept. Intellectual property remains relevant today and continues to attract a growing number of users. While new challenges are emerging, the approaches to them may involve adding IP protection in layers rather than abandoning attempts to modify IP altogether. This creates an interesting problem for the admissibility and reliability of AI-created content in a legal context, requiring new standards and methods to establish that admissibility and reliability.
 
As AI technology continues to develop, the legal framework must progress to ensure that law and equity keep pace with the challenges raised by AI-generated evidence. Today, AI algorithms and machine learning models can autonomously produce varied creative content, contesting our settled understanding of creativity as something that calls for human ingenuity. The principle of exclusivity grants the right holder an absolute power of exploitation, giving him an edge in fostering further creativity. But questions arise: who is the author? Who owns the work? What makes it unique? Finding a happy balance between appreciating AI as an exercise in creativity and keeping a human-centered framework is therefore crucial.
 
OWNERSHIP OF AI-GENERATED CONTENT
The issue of ownership of AI-generated content remains to be resolved, just as in countries like the USA and Canada. AI-generated products and tools may qualify for IP protection; however, there is no explicit provision within the law regarding who owns AI-generated content. A study conducted by Dentons on AI finds that 86% of respondents believe legislation should be enacted to clarify IP protection in the context of AI, with 45% believing it to be urgent. This indicates a dire necessity to address the grave questions of rightful ownership and creatorship of AI-generated works, as IP has become more interconnected with AI than ever before.[3] Laws conferring ownership of creative works sprang up to protect creators' privileges from exploitation by others. Copyright in algorithmically generated work raises ownership issues that sit uneasily with the overriding concept of copyright as protecting contributions made by the hand of an individual, and granting rights to non-human beings is a challenge. An alternative suggestion is to model AI work as "work made for hire," with programmers usually recognized as the creators or authors of works attributed to the AI.[4] Certain countries, like England and New Zealand, have national copyright legislation under which the person who made the arrangements necessary for the creation of a computer-generated work is endowed with copyright. Such provisions sit uneasily with older copyright doctrine, which never contemplated machine creators.
The emergence of generative AI (GenAI) systems has called traditional copyright doctrines into question and forever changed the nature of intellectual property (IP) law, as these systems are capable of producing remarkably human-like creative products.
 
Human authorship is paramount to the very foundation of copyright law, which recognizes the uniqueness of human creativity and intellectual ability as manifested in a work. AI systems, by contrast, are neither conscious nor motivated: they are sophisticated algorithmic models trained on vast quantities of data drawn from human creative experience. This forces us to ask whether the provisions of copyright legislation can meaningfully apply to them at all. Can a non-conscious algorithm be considered the author of a copyrightable work? If the answer is no, then who holds the copyright?
At the same time, originality is a contentious factor in copyright protection for AI works. Most AI systems are trained on vast repositories of existing copyrighted works before generating new outputs. The originality question is whether AI-generated works are in fact original, or merely derivative, lacking the requisite degree of human creativity and independent expression. Similarly, in patent law, the novelty requirement for patent protection may be hard to satisfy for inventions developed by AI systems trained on existing knowledge and prior art.
The surrounding issues of ownership and exploitation rights compound the legal situation still further. Who may exploit a copyright work born of an AI system, whether by reproducing it, distributing it, or granting licenses? The conundrum is whether those rights vest in the AI system's creator, in the user who supplied the input, or perhaps in the AI itself. Further, given the prevalence of AI-generated content, these issues carry serious potential for copyright violation: where AI systems trained on copyrighted data unwittingly generate pieces strikingly similar to existing works, claims of unauthorized use or infringement may follow. In Arijit Singh v. Codible Ventures LLP[5], the Bombay High Court upheld singer Arijit Singh's personality rights against the unauthorized use of his name, voice, and likeness in AI-generated content. The court underscored the potential misuse of AI systems to trade upon a celebrity's goodwill without permission and emphasized the need for legal safeguards.
 
The creative sectors might also be radically changed. The onslaught of AI-generated material, produced at high volume and low cost, threatens to lower the value of art made by human hands and disrupt the livelihoods of writers, musicians, artists, and other creators. This raises questions about whether AI will disrupt and ultimately displace human creativity from the economy.
 
INFLUENCE ON TRADE, PATENT, COPYRIGHT
According to WIPO's estimations, progress in AI has heavily impacted intellectual property. The World Intellectual Property Organization reported that between 2013 and 2016, the average growth rate of AI technology was 28%. From 1956 to 2017, approximately 340,000 patent applications on AI-related technologies were filed and more than 1.6 million academic publications on the topic were produced. AI patent applications made to WIPO reached 55,660 in 2017, a 300% increase from 2011. Such developments have posed several challenges to intellectual property law. In India and the USA, works created solely by AI systems with no human input are not protected by copyright or patents. Dr. Stephen Thaler, president and CEO of a Missouri-based AI company, has built a movement around questioning this status quo. Thaler is especially known for developing the DABUS system, and one of his AI systems produced an artwork titled "A Recent Entrance to Paradise" through the analysis of a gigantic collection of images.
The DABUS case presents a considerable legal challenge to traditional patent law by asking what constitutes an "inventor." This series of legal proceedings concerns patent applications for inventions allegedly conceived by an artificial intelligence system named DABUS ("Device for the Autonomous Bootstrapping of Unified Sentience"). The patents were sought in several jurisdictions, including the United States, Europe, and the United Kingdom, by Dr. Stephen Thaler, the developer of DABUS, who chose to name DABUS as the inventor. The courts were thus required to decide whether DABUS could be an inventor under the law. Thaler argued throughout that existing law does not explicitly exclude AI systems from inventorship. The courts, however, ruled otherwise, with the UK Supreme Court in Thaler v Comptroller General of Patents, Trade Marks and Designs[6] holding that only natural persons can be named as inventors in patent applications. The DABUS case raises deep questions about the nature of invention, the role of human creativity, and the relationship between humans and AI in innovation. It challenges the doctrine that only humans can be inventors and requires a critical review of existing patent law to determine how AI-driven innovation will meet legal standards. By testing the elemental basics of intellectual property law, it will remain a pivotal reference point for determining the ownership of AI-generated content.
 
Getty Images v. Stability AI[7] is a high-profile legal fight that illustrates how complex copyright law has become with the advent of AI. The case was brought by Getty Images, a prominent stock photo agency, against Stability AI, developer of the well-known text-to-image AI model Stable Diffusion. Getty Images claims that Stability AI trained Stable Diffusion on a massive dataset of copyrighted images without obtaining proper licenses or permissions from Getty Images or other copyright holders, and argues that this unauthorized use of copyrighted material constitutes copyright infringement. On Getty's argument, Stable Diffusion, by learning from and replicating patterns and styles found in copyrighted images, effectively "copies" those images in a way that infringes the rights of photographers and artists. The case raises crucial questions about the legality of training AI models on massive datasets of copyrighted material, particularly where such training is conducted without explicit consent or compensation to the copyright holders.
 
Though not directly related to AI, the "Monkey Selfie" case (Naruto v. Slater) provides insight into the changing concept of authorship. A photographer named David Slater claimed copyright in a selfie of a crested macaque monkey, taken using Slater's camera. With an animal as the alleged author, the case raised the issue of whether an animal could be an "author" for the purposes of copyright. Although the court found in favor of Slater, the decision showed that conventional copyright law was increasingly at a loss when the creator is not human, and it called for further legal and philosophical consideration of authorship in the digital age. While different in their particular contexts, these two cases go to the very questions of authorship, ownership, and the application of copyright law amid rapid technological change. Getty Images v. Stability AI deals specifically with AI-generated content and the possibility of copyright infringement; the "Monkey Selfie" case broadens the discussion by asking whether "authorship" can extend beyond the human world. Together, these precedents shape the new legal landscape of AI and intellectual property.
 
INTERSECTING AI EVIDENCE WITH IP LAWS
The intersection of AI-generated content and AI-generated evidence poses a myriad of legal and ethical challenges, especially with respect to intellectual property rights (IPR). Questions of authorship, originality, and ownership already surround AI-generated content; its use as evidence complicates the situation further. The capability of artificial intelligence to generate deepfakes, manipulated videos, and synthetic audio jeopardizes the authenticity and reliability of evidence. In particular, advanced forgeries can fabricate incriminating material, simulate testimony that was never given, or invent entire narratives. This damages the integrity of proceedings and raises grave concerns in cases of defamation, harassment, or political manipulation.
 
In addition, AI's employment in the analysis of evidence, such as forensic analysis or document review, raises serious issues about the ownership and control of the conclusions generated. When AI systems are applied to evidence analysis, the intellectual property rights in the results could create confusion: in legal discovery, for example, who owns the copyright in an AI-generated report that reviewed a large constellation of documents? Moreover, the possibility of biased AI algorithms poses a substantial threat. AI systems are trained on vast datasets that may carry intrinsic bias, which is likely to be reflected in their output, producing biased or inaccurate analysis of the evidence and undermining the fairness and reliability of assessment during trial. An AI trained on biased data that associates certain races with criminal suspicion, for instance, would be plainly discriminatory.
 
The presence of AI-generated evidence within legal and intellectual property frameworks therefore demands a multi-pronged approach. Strong mechanisms need to be introduced for the authentication and admissibility of AI-generated evidence in courts. These should include AI-oriented evidence validation protocols, such as forensic analysis tools to detect deepfakes and fabricated content. Courts should draw on AI expertise to assess the veracity of such evidence and establish training programs for judges and lawyers that facilitate a better understanding of the implications of AI technologies. Legislatures must also amend the laws of evidence to accommodate these new works created by AI, so that only content whose credibility has been verified is admitted before the courts.
 
As regards intellectual property (IP) laws, only a radically reformulated conception of authorship and ownership can accommodate the role of AI in content production. Creating legal categories such as "AI-assisted works" or "machine-authored works" can help bridge the conflict between existing IP frameworks and the autonomous nature of AI. Moreover, policy guidelines should ensure fair recognition of ownership among developers, users, and data providers, and should aim at an international framework harmonizing copyright and patent laws to prevent inconsistencies of the kind seen in DABUS and to foster clarity. Equally, greater accountability mechanisms must be imposed on AI developers and users. Developers ought to ensure that their AI systems are transparent and auditable, so that the origins and purposes of generated content can be traced. Users ought to be held responsible when they fabricate content that misleads others. Independent bodies should oversee AI applications in the legal and creative professions, institute redress mechanisms for problems, and encourage ethical AI practices. Responding to the challenges of AI-generated evidence and material will require a holistic approach involving legislative intervention, technological innovation, and worldwide cooperation.
 
The rise of AI systems as creators and contributors across different spheres of life has severely challenged traditional legal frameworks and created an urgent case for legal reform. This paper has covered two major areas of challenge: the admissibility and probative weight of AI-generated evidence in courts, and the disruption of intellectual property laws by AI-generated content. A combination of live cases and analyses demonstrates how AI deepfakes, fabricated evidence, and AI-generated art raise nuanced questions about the principles of authenticity, authorship, ownership, and liability. With respect to legal evidence, the very sophistication of AI technologies complicates the courts' task of endorsing authenticity by distinguishing the genuine from the fabricated. Existing laws, originally developed for evidence produced by human beings, create hindrances to identifying AI-generated evidence. Likewise, IP laws, derived from humanistic concepts of authorship and originality, conflict with a reality in which AI contributes more, and humans less, than those laws ever contemplated. This gap between anachronistic doctrine and new methodologies is the clearest case for much-needed reform.
 
To deal with these problems, a balanced approach combining legal, technological, and ethical solutions is imperative. Courts must adopt advanced mechanisms and expertise for evaluating AI-generated evidence, and IP laws must be restructured to make room for AI systems that contribute substantially to the creative process. Stakeholders, including policymakers, legal practitioners, and AI developers, need to work together to ensure that the resulting guidelines are unambiguous, aligned, and equitable for all parties concerned. In the end, adjusting the law is a question of evolving new approaches to handling AI content while preserving the integrity of the courts and of intellectual property. Approached progressively, such reform can enable AI to unleash its revolution without any compromise on the standards of justice, equity, and creativity.
 
BIBLIOGRAPHY
·         Eftychia Bampasika, 'Artificial Intelligence as Evidence in Criminal Trial' (2021) 2844 CEUR-WS, accessed 23 January 2025
·         Maura R. Grossman, 'The GPTJudge: Justice in a Generative AI World' (2023) 23(1) Duke Law & Technology Review, accessed 23 January 2025
·         AI Rapid Response Team, 'Digital Evidence and Deepfakes in the Age of AI' (2024) NCSC, accessed 23 January 2025
·         Paul W. Grimm, 'Artificial Intelligence as Evidence' (2021) 19(1) Northwestern Journal of Technology and Intellectual Property, accessed 23 January 2025
·         V. K. Ahuja, 'Artificial Intelligence and Copyright: Issues and Challenges' (2020) SSRN, accessed 23 January 2025
·         Abhishek Dalal, 'Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases' (2024) SSRN <https://dx.doi.org/10.2139/ssrn.4943841> accessed 23 January 2025
·         Sneha, 'The Transformative Influence of Artificial Intelligence on Intellectual Property Rights' (2023) 2(2) Symbiosis Law School, Nagpur Multidisciplinary Law Review, accessed 23 January 2025


[1] Mavrova Heinrich, Denitsa and Pont, Erika, Who You Gonna Call: The Role of Expert Witnesses in Authenticating AI-Generated Evidence (2024).
[2] State of Washington v. Joshua Puloka (2024) 21-1-04851-2 KNT
[3] Bradley Budden, On the Intersection of Artificial Intelligence and Copyright Law (2022), 47 CAN. L. LIBR. REV. 10
[4] Zack Naqvi, Artificial Intelligence, Copyright, and Copyright Infringement (2020), 24 MIPR. L. REV. 15
[5] Arijit Singh v. Codible Ventures LLP (2024)
[6] Thaler v Comptroller General of Patents, Trade Marks and Designs [2023] UKSC 49
[7] Getty Images v. Stability AI [2025] EWHC 38 (Ch)
