Open Access Research Article
IJLRA | ISSN 2582-6433 | Issue 7 | Published 3 July 2023

LAW AND ARTIFICIAL INTELLIGENCE: IMPLICATIONS OF THE PRESENT AND FUTURE

 
AUTHORED BY - YASH CHOUGALE
4th Year B.A.LL.B student
 

Abstract

This paper introduces the way in which the increasing use of artificially intelligent technologies affects the theory and practice of law. Throughout the paper, we showcase the potential benefits of using AI, and we also problematize the potential risks of using AI in legal contexts. With respect to the benefits, we explore how AI can be used in legal practice, how AI can be employed to optimize existing legal processes, and how it can even allow law to be used in previously impossible ways. We investigate these benefits both in the private sector, with respect to companies for example, and in the public sector, with respect to public healthcare authorities for instance. With respect to the potential risks of using AI in the legal context, the paper also aims to raise awareness of the wider social and ethical implications of using AI. In this regard, we ask: does it make a difference if a legally significant decision is taken by a digital entity or by a human person? Are the characteristics of AI such that some questions should not be delegated to AI? How should questions of legal responsibility and of legal attributability be conceptualized in the digital context? We have tried to explain these questions logically, with substantive examples, throughout the paper.
The first part of the paper introduces the field of AI in law in principle and explores the legal significance of AI software and AI hardware. Later, we cover the use of AI in the public sector, in particular in the context of criminal law, administrative law and legal theory. The focus on AI in the private sector is important for understanding its heavy implications for human life: we look at the use of AI in the financial services industry, for example, and in intellectual property law. Lastly, we consider the impact of the use of AI on a few selected legal areas, including health law, competition law and labour law.
We hope that this paper will equip you with a foundational understanding of the relationship between law and artificial intelligence, so that you are able to assess how you might individually benefit from the use of AI in the legal context.
 
Keywords: Artificial Intelligence, Public Healthcare, Criminal Law, Financial Services, Public and Private Sector, Labour law, Competition Law
 
Introduction
The term intelligence has a long and complicated history. The word itself derives from the Latin inter, which means between, and legere, which means to choose or, literally, to read. So, one could say that being intelligent means literally to be able to draw distinctions between different things, to understand or to comprehend oneself and the world around us. The term artificial intelligence assumes that this human ability to understand, to comprehend, to sort the important from the unimportant, can be replicated by constructing computer programs that are as good as, or sometimes even better than, humans at understanding, sorting and comprehending a given state of affairs. Just how powerful such artificially intelligent programs are differs quite a lot. Some programs can only perform very basic tasks, like adding numbers, and these programs might not really deserve to be called intelligent at all. But other programs can perform very complicated tasks, such as playing chess or simulating complex medical treatments.
The notion of machines being able to simulate human beings and their ability to do intelligent things is what the term artificial intelligence summarises. This kind of intelligence is created artificially, hence the name. It may work faster, stronger, maybe better than the human brain, but it cannot be the human brain, however one frames it, as there are intuitive differences between an application and a human being. In law, we use an interpretive mindset which offers the client multiple facets of a solution while being transparent about the consequences of each step we take. AI tools, by contrast, may focus on delivering a solution to the client in a way that compromises morality in the process: the task could be successful, but the manner in which it was conducted may not be in its true form. AI relates to the similar task of using computers to understand human intelligence, but it does not confine itself to methods that are biologically observable. In general understanding, "Artificial intelligence, a branch of computer science, is the recreation of human intelligence processes by machines, especially computer systems. It aims to create intelligent machines which can often act and react like humans, and makes it possible for computers to perform tasks involving human-like decision making, intelligence, learned skills or expertise"[1].
 
The Software aspect of Artificial Intelligence
AI is not necessarily more or less biased than its creators. But because AI programs can be applied easily on a much larger scale, AI amplifies the biases of its creators.
 
A second aspect concerns the difficulty of supervising AI programs. When a human being makes a mistake, it is easy to say that the human being should be held accountable for that mistake. When a machine makes a mistake, for example when a car breaks down, one can often argue that the person who built the machine should be held accountable. However, with more advanced AI programs, this is not as easy.[2] This is so because it is not always possible to predict how an artificially intelligent program operates in detail, and it is also not always possible to explain, in hindsight, how and why an artificially intelligent program came to the conclusion it did.[3]
In the healthcare context, for example, certain algorithms are able to predict the best way of treating a patient, but it is very difficult to understand on just which basis those algorithms make these recommendations. That is why, from a legal point of view, one must think very carefully about how the running of an AI program is supervised and who should be responsible when things do not go as planned.[4] The third and final aspect that I would like to problematise from a legal point of view concerns the why question: why, and for which aims, is a particular artificially intelligent program used? AI can be used to control people, for example when tracking how humans behave. And it can also be used to enable people, for example when enhancing people's mobility with the help of autonomous cars. In part, this why question, the question why we use AI, is a political and philosophical question. But it is also a legal question, since it determines if AI programs should just be treated like any other invention, like a photo camera for example, or if they should be treated in a much more cautious manner to reflect the high stakes that are involved when using AI. As so often in legal contexts, there are no clear answers to these questions, but a personal opinion should be formed on how you think AI should be regulated.[5]

The Hardware aspect of Artificial Intelligence

The hardware dimension of AI is the physical computer infrastructure on which artificially intelligent software runs. As with software, we will briefly look at two questions. First, what is the hardware dimension of AI? Second, which legal issues does AI hardware raise? Let us turn to the first question. Any artificially intelligent software requires a shell, a physical piece of computer equipment.[6] The more complex the AI, the more computational power is required to perform a given task. In technical terms, the computational power of a computer is determined in large part by the computer's central processing unit, or CPU. Phones, too, have a processor that determines how quickly tasks can be computed and how complex such tasks can be. For example, when I want to run an artificially intelligent text analysis program on my laptop that tries to recreate legal texts based on judgments that I have supplied to it, it can take a few hours for the analysis to complete. However, when I run the same operation on a more powerful stationary computer, which has a more powerful CPU, it takes only around 10 minutes to complete. More complicated AI programs, involving for example the modelling of medical treatments or facial recognition, I cannot run on my personal computers at all. Due to the limitations of ordinary personal computers, many lay users of AI programs, like myself, use programs that are hosted not on private computers but on servers that belong to companies.
At present, almost all of these AI programs rely on the so-called classical computer structure. Classical computers store and process information in binary units called bits. These bits can have the value of either one or zero. This means that any process and any information within this classical computer structure is ultimately represented either by a one or by a zero.
 
However, there is now a different type of computer emerging alongside this classical binary type of computer. This type of computer is called a quantum computer. Quantum computers do not use bits that can be either one or zero, but qubits that store and process information. Qubits can be set to one or zero, like a classical computer's bits, but importantly, they can also be set to one and zero at the same time. This technological difference is the reason that quantum computers are vastly more powerful than classical computers.
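A minimal numerical sketch of the bit/qubit difference may make this concrete. It assumes only the numpy library; the amplitudes and names below are illustrative, not the API of any real quantum device.

```python
import numpy as np

# A classical bit is always exactly one of two values.
classical_bit = 1  # or 0, never both

# A qubit is described by two complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Setting both amplitudes to 1/sqrt(2) puts the qubit in an equal
# superposition: "one and zero at the same time".
qubit = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)], dtype=complex)

# Measuring the qubit collapses it to 0 or 1 with these probabilities.
p_zero = abs(qubit[0]) ** 2
p_one = abs(qubit[1]) ** 2
print(f"P(0) = {p_zero:.2f}, P(1) = {p_one:.2f}")  # P(0) = 0.50, P(1) = 0.50

# n classical bits hold one of 2**n values at a time; describing n qubits
# requires 2**n amplitudes at once, one source of the power gap.
n = 20
print(f"{n} bits: 1 value at a time; {n} qubits: {2**n} amplitudes")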
 
To illustrate, one of the manufacturers of quantum computers recently reported that their quantum computer performed a calculation in one second that would take a classical computer 10,000 years to perform. In general, it is anticipated that fully functioning quantum computers will be 100 million times more powerful than contemporary desktop computers, and at least 3,500 times more powerful than contemporary supercomputers.[7] It is important to appreciate at this stage that, with respect to certain problems, the potential speed of quantum computers is so superior to that of classical computers that problems that used to be impossible to solve by classical computers can now be solved by quantum computers. That does not mean, however, that quantum computers will replace classical computers across the board. Indeed, quantum computers are not per se faster or more powerful than classical computers. Both types of computers will co-exist, each within its own domain. But with respect to specific types of complex computations, quantum computers will certainly dramatically enhance the ability to use artificial intelligence for beneficial purposes. But they also dramatically enhance some of the risks that the utilization of AI entails. This brings us to the second question.
What do all of these technicalities have to do with law? There are again many answers to this question, but for now I will just focus on two particular aspects. The first aspect that I would like to focus on mirrors the first aspect discussed with respect to software: the fact that hardware, just like software, is created by people with specific ideas in mind. Again, this is not necessarily problematic, but it is a fact that one needs to be aware of. A comparison of hardware and architecture might help to illustrate this point.[8] When architects construct a building, they can construct it so that it is wheelchair accessible or not. They can also construct it in a manner that allows people to congregate in corridors, by creating space for gathering, or not. Or they can place the door handles very high so that children cannot reach them, and so on. Depending on what the building will be used for, none of these features is necessarily wrong, but each of them is a choice that is taken for a particular reason, and computer architecture is very similar. Some infrastructural choices are of course determined by physical necessities, just as with buildings. But many choices are informed by what the architects of a computer, of a processor, want a given machine to do. One assumption common to essentially all computational processes, whether quantum or classical, is for example that the human decision-making processes which AI mirrors follow certain rules of reason and rationality. This assumption might capture certain aspects of human reasoning, but it might not account for the whole range of human reasoning.
Again, this is not necessarily a problem, but it is important to keep in mind from a legal point of view, since there is an inherent risk that such assumptions, coupled with a belief in neutral technology, can come to dominate over alternative modes of thinking. There is of course also the risk that the views and interests of those who are able to construct hardware prevail over the interests of those who cannot. The second aspect of legal importance relates to the ever-increasing complexity and sophistication of the hardware required to run the most advanced AI programs. Quantum computers, for example, function only in a vacuum, and they need to be cooled down to around minus 273 degrees Celsius. This technical complexity of quantum computers means that only very few companies and countries are actually able to construct and utilize them.
 
This is significant. At present, it is assumed, for example, that quantum computers can overcome any encryption mechanism. This means that quantum computers can break conventional password protection mechanisms.
If this is the case, actors with quantum computers, companies or states, have a clear advantage over those who do not have quantum computers, since those without a quantum computer cannot protect their information from those with a quantum computer. Similar issues, though slightly less severe, exist also with respect to classical supercomputers, which also require a lot of technical skill and electricity to maintain. As a result, the possibility of reaping the remarkable benefits of AI is often limited to those who can access the hardware that is required to run AI programs. As mentioned just now with respect to the password problem, this situation can lead to significant inequality. In fact, it can amplify existing inequalities, since rich countries and large companies will be able to use the full range of AI applications while less well-off actors will be left behind.
Thus, from a legal point of view, one should think about ways to bridge these discrepancies, for example by mandating that those with access to the most advanced AI technologies share certain percentages of their computing resources with those actors who would otherwise be locked out. Ultimately, it is important to keep in mind that a consideration of the legal issues raised by digital processes must always consider both the legal issues raised by software and those raised by hardware. Neither software nor hardware is simply a neutral thing. They are normative phenomena in the sense that they are shaped by human choices, which can be questioned and debated.[9]

Introduction to Legal AI in the Public Sector

Public law is the law that governs the relationship between a state and individuals. Four distinct ways in which AI is used in the public law context can be explained to give the general public an understanding of its usage. The first aspect considers the link between AI and legal responsibility. For example, what should happen when an intelligent, autonomous machine makes a mistake that leads to damage? Who should compensate a pedestrian who is knocked down by an autonomous vehicle? Or who should be blamed when a robotic medical doctor makes an incorrect diagnosis of a patient?[10] The second aspect explains the use of AI in criminal law, specifically the use of AI during criminal investigations, and highlights the benefits and shortcomings of using CCTV, that is, surveillance cameras in public places, in combination with facial recognition to capture suspects. The third aspect explores the potential of AI to assist with the translation of analogue legal norms into digital code, in particular the idea of modelling law with the help of AI, and explains why this process could be beneficial. The fourth and final aspect elaborates on how AI can be used in the public sector, by public authorities and public administrations, to optimize the services they provide to citizens. Despite many benefits, the use of AI in the public sector comes with numerous challenges concerning data protection and the ability of citizens to understand why decisions were taken in a certain manner.[11]
 

AI and Legal Responsibility

It goes without saying that our society can benefit from AI in many ways. But what should we do when an intelligent autonomous machine makes a mistake that leads to damage? For example, who should compensate a pedestrian who is knocked down by a self-driving car? Or who should be blamed when a robot doctor makes an incorrect diagnosis of a patient? In the law, the possibility of holding someone responsible for causing damage has several functions. To hold someone responsible is often a necessary condition for obtaining compensation, and it is crucial to society's attribution of blame for wrongful conduct. But attributing responsibility is important not only retrospectively, to handle harm that has already occurred; it also has a prospective, preventive function by deterring people from causing damage for which they can be held liable. The law contains various tools for holding human beings responsible for harm that they cause. For example, a person who hurts another person by negligence, by being careless, might be held responsible by the state in the context of criminal law, or when sued by a private party in tort law.
But at the same time, today's legal toolbox was developed and adapted to fit the world as we know it today. Specifically, the attribution of legal responsibility is to a large extent justified by ideas of human free will and control. The introduction of intelligent machines that act autonomously creates challenges for this system. Is it meaningful to blame a robot? Can we ask a robot for compensation? If there is no practically useful way in which we can hold a machine responsible for the damage it causes, who then should be responsible? Well, perhaps we could hold the developer of the intelligent machine responsible. When normal, unintelligent machines like a hair dryer cause damage, we tend to look for the company that made the machine and attribute responsibility to it. But intelligent machines differ from traditional hardware and other unintelligent machines in ways that make it challenging to place the responsibility with the developer. This is specifically true for systems that use different kinds of machine learning techniques, like CCTV cameras that use advanced facial recognition technology. In short, machine learning means that the system learns from and adapts to its environment, that it is dynamic and changes over time. It is very difficult for the developer to predict or control how the system develops and how it will modify itself; it depends on the environment it interacts with and what it learns from that environment.
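A deliberately tiny sketch can illustrate why this is so: the same learning code ends up behaving differently depending on what its environment feeds it. The data streams and the simple running-mean "learner" below are invented for illustration, not a real facial recognition system.

```python
def train_threshold(observations):
    """Learn a decision cutoff as the running mean of everything seen so far."""
    threshold, count = 0.0, 0
    for x in observations:
        count += 1
        threshold += (x - threshold) / count  # incremental mean update
    return threshold

# Two deployments of the identical code, exposed to different environments:
env_a = [0.2, 0.3, 0.25, 0.4]   # e.g. scores observed by camera A
env_b = [0.7, 0.9, 0.8, 0.85]   # e.g. scores observed by camera B

print(train_threshold(env_a))  # ~0.29 -> this copy flags different cases...
print(train_threshold(env_b))  # ~0.81 -> ...than its twin does
```

The developer shipped one program, but after deployment there are effectively two different decision rules, neither of which the developer wrote by hand.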
 
Many machine learning systems are also highly opaque, which means that it can be hard or even impossible for human observers, including the developer, to understand why such a system behaves the way that it does. This raises the question, of course, whether it is reasonable to hold the developer, or perhaps the user, liable for damage caused by an autonomous machine in situations where they took all reasonable care but something went wrong anyway. In some legal domains, liability is strict, in the sense that it applies regardless of fault, which of course creates a large incentive for ensuring that the product is actually safe. But one could argue that it would be unfair to impose strict liability for damage caused by devices that by definition cannot be fully controlled, and would anyone even dare to develop or use these products under such conditions? Moreover, what happens if the developer of the system is not here anymore but the system that they created is still here and continuing to learn and change? Who should be responsible then? Attribution of legal responsibility for damage will certainly have some role to play in the regulation of AI, but it is unlikely to suffice to compensate for and prevent damage that machines cause, so we need alternative approaches. The tech industry has begun to answer this need by developing its own standards for responsible AI.[12]
Should we perhaps conclude that the regulation of AI must be left to the industry and to voluntary measures? Not necessarily. Many believe that the industry's voluntary approaches must be complemented by legal regulation and argue that the development of intelligent autonomous machines forces us to seek alternative legal ways to serve the functions that the traditional retrospective attribution of responsibility has so far served. Indeed, the legal toolbox contains various instruments for compensation and prevention, not only retrospective responsibility. For example, the law can require developers to buy insurance that would compensate people who are harmed by AI when there is no other person to hold legally responsible for the harm. Mandatory insurance schemes like this already apply in some domains, in healthcare for example. Or the law could require developers or users to take precautionary measures to prevent potential risks from materializing in the first place. This kind of proactive, preventive approach is common in modern environmental law, for example. And who knows? Maybe one day the law will come up with a meaningful way to acknowledge electronic persons, so that in the future we can actually hold machines responsible for what they do. To conclude, then: the rise of intelligent autonomous machines, and the legal challenges that they create, does not necessarily make legal regulation less relevant in this domain, but it does call for some legal engineering and for a great deal of legal ingenuity.[13]
 

AI and Criminal Law

Criminal law is cruel; perhaps it is even the cruellest part of law. So why do I say that? Well, because criminal law actually has the power to imprison people, and that, in a sense, is very evil. We know that imprisonment is not a good thing for anyone. But all Western democracies agree that we need a criminal justice system, and therefore we need criminal law as well.
 
We need a legal system that deals with questions like: what is a crime? When can you be responsible for it? What is a punishment, and what kinds of punishment are allowed? We also need to deal with questions like: what is a fair trial? And how do the police and prosecutors work during the preliminary investigation, before a case reaches the court? This investigatory part of the criminal justice system is very important because it actually sets the boundaries for the proceedings in the court. What sort of evidence the police and prosecutors manage to bring forward is absolutely vital, both to the criminal proceedings and to the verdict. One way of obtaining information that could be used as evidence is by using AI during the investigatory part of the criminal legal process.[14] More precisely, the example of CCTV cameras in public places will be used to elaborate the relationship between AI and criminal law.
CCTV cameras have been used worldwide for a few decades and have had quite a big impact on criminal legal investigations in many different settings. But the use of these cameras varies between countries. For example, in the US and in the UK, CCTV is quite commonly used, and in the US the term video surveillance cameras is also used. In other countries, like Germany, the attitude is much stricter and the use of surveillance cameras is less common. In Sweden, the use of these cameras has been increasing during the last decade. It was also one of the measures proposed in 2019 to counteract lethal violence committed by gangs and organized crime in Sweden. So, what is CCTV? Well, this abbreviation is short for closed-circuit television, and CCTV cameras are typically automated cameras placed in public spaces like streets, squares and parks. Broadly speaking, there are two purposes for using these cameras. One use is to identify suspects and provide information about where they have been and at what time.
So, the information has a lot to do with identification of the offender. Another aim is to prevent crime. But there is one major downside to the use of these cameras, and that is the individual's integrity and interest in not being surveilled and controlled. So, what sort of considerations should be made before allowing the use of CCTV cameras? What sort of information can CCTV cameras provide in general? And why could the use of CCTV cameras be problematic? Let's start with the first question: what sort of considerations should be made before allowing the use of CCTV cameras? These are general considerations applicable to most countries, but I will use examples from Swedish law. Quite commonly, an assessment has to be made by the authorities before they give permission to use a CCTV camera in a public space.
Generally, permission is given when the interest of camera surveillance is more important than the interest of the individual in not being controlled. One of the main arguments generally used in favour of using these cameras is typically to prevent, reveal and investigate crime. It needs to be stressed that the cameras should be a complement to other means and not the only one.[15] On the part of the individual's interest, the authorities look particularly at how the surveillance is being carried out. For example, the recording of visual material is normally needed, whereas the recording of sound should be considered particularly privacy-sensitive and would need a thorough examination before being allowed. Another matter is what kind of area is being surveilled.
Normally, it would be sufficient to show that there are problems with criminality at a square, in a street or at a train station, even if the application for permission for a CCTV camera concerns only a part of this square, street or station. Let's move on to the second question: what sort of information can CCTV cameras provide in general? The main information we get from these cameras is the identification of individuals. To be able to identify someone, it is generally necessary that identifying features are visible, for example the whole face or the whole body. Particular clothes, movements or a particular constitution of the body could also make an identification possible. Let's move on to the third question: how could the use of CCTV cameras be problematic? I would like to mention two main risks.
Firstly, I would like to point out the risk of uncertainty when it comes to ethnicity and gender in the interpretation of the identifying information. For example, American research shows that even the most advanced AI facial recognition tools make mistakes much more often in cases involving black women than in cases involving white men. Secondly, there is an obvious risk of unnecessary surveillance in general, and also a risk of surveillance of exposed areas where criminality is supposed to be more frequent. Consequently, specific neighbourhoods and certain population groups are surveilled more than others. I will let this quote from the American researcher Adam France, describing Chicago, illustrate the risk of unnecessary surveillance: "Chicago's system of video surveillance cameras has three critical features: their vast numbers, their tight integration, and their powerful abilities to gather and analyse information.[16]
Together, these features empower the city government to monitor anyone automatically, quickly, easily, inexpensively and surreptitiously, in all public places and at all times." This vision is, of course, not valid globally, but it is a necessary reminder of the downside of camera surveillance and, consequently, of the use of AI in preliminary investigations.
 

Using AI to model Law

The Blue Book is quite a substantial undertaking. It is quite heavy and is set in very tiny print. Imagine how much law it contains. The Blue Book includes most of the laws enacted by the Swedish parliament that are currently in effect. You can find a well-used copy of the Blue Book in every Swedish lawyer's office, and perhaps a second copy at the lawyer's home. The Blue Book and its digital equivalents are essential tools for the lawyer. How does the Swedish lawyer use the Blue Book today? She consults the Blue Book to locate the exact textual form of a particular rule that a law imposes. The lawyer may be roughly aware of the general outline of many laws, but only by recalling their exact verbal formulations can the lawyer competently proceed to advise the client. The Blue Book stands ready to fill this role. Can we imagine a technologically enhanced version of the Blue Book that would offer more functionality than merely reminding the lawyer of the precise form of a rule she likely already knows exists? Could an AI-enabled Blue Book assist the lawyer to operate even more effectively? This section examines the potential contribution of artificial intelligence to realizing advanced tools for the lawyer of the future. Consider the idea of the Blue Book. It is a single convenient resource containing all the laws a Swedish lawyer might need to know, but one may need to be a well-trained Swedish lawyer to access its contents. A lawyer understands the specialized language used in the Blue Book and knows the large structural elements that relate its many specific provisions. More importantly, the lawyer knows how to use the information contained in the Blue Book to exercise her profession.
 
Now, imagine that the contents of the Blue Book could be transformed into computer code so that they could be manipulated by artificial intelligence. What are the possibilities presented by the construction of an AI-enabled book of Swedish laws? An artificial intelligence Blue Book could conceivably answer legal questions put to it, perhaps by a Swedish lawyer, or perhaps even by a person untrained in law.[17] The AI-enabled Blue Book might effectively translate complex legislative texts into more comprehensible forms or provide useful examples illustrating the operation of a legal rule. The AI-enabled Blue Book might construct legal arguments or suggest the harvesting of particular facts that would be useful to the lawyer.
 
Finally, the AI-enabled Blue Book might signal the presence of ambiguities and inconsistencies which inevitably find their way into the larger body of law, permitting the lawyer to anticipate and favourably resolve them.
Artificial intelligence will enable us to model law, both individual provisions and the larger structures these provisions form, which in turn will unlock new possibilities. Currently, we routinely translate law from printed format into digital format. Modern lawyers spend more time reading computer screens than visiting law libraries, but digitization of texts only takes us so far. Both print and computer screens display the law in human language. To better model law and to unlock the power of artificial intelligence, we must convert law from human language into a form that a computer can recognize. In the process, we must transform law into a form that enables computation. We model law for a variety of purposes. First, computational models of law function as knowledge representation. These models contain the rule content of specific laws in a form that is recognizable by a computer. And since computers execute algorithms, the rule content must be stated in algorithmic form.[18]
Most legal rules take a conditional form, which suits a computer well: if certain conditions specified in a legal rule are satisfied, a particular legal conclusion follows. The conditions set out in the rule operate as inputs to the computer; the legal conclusion is the computer's output. Consider this provision of Swedish law: if the buyer has not put the seller on notice of the defect of the goods within two years after receiving the goods, he shall have forfeited the right to invoke the defect, unless otherwise provided by a warranty or other similar undertaking. To convert this provision into algorithmic form, we would encode the inputs as yes/no binary switches. Has the buyer provided the seller with notice that the goods are defective? Yes or no? Has the notice been given outside of two years from the buyer's receipt of the goods? Yes or no? Was there no warranty or other undertaking extending the seller's responsibility beyond two years? Yes or no? The output of the algorithm would be the resulting legal consequence: the buyer having forfeited the right to allege the goods are defective.
So, the algorithm in operation would flow like this: if notice of defect was given more than two years after delivery (yes), and there was no warranty extending beyond two years (yes), then the buyer has forfeited his right. Computers could be used to access information translated into algorithmic form. Artificial intelligence may be used to provide meaningful yet easily understood answers to questions about the law directly to the public, and computational models of law, such as the example above, can give the human operator access to law in its functional, or algorithmic, form.
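The encoding described above can be written out directly. The following is a minimal sketch only: real statutory drafting has edge cases (what counts as "notice", how the two-year period is computed) that a production model would have to encode explicitly.

```python
def buyer_forfeits_right(notice_within_two_years: bool,
                         warranty_extends_beyond_two_years: bool) -> bool:
    """True if the buyer has forfeited the right to invoke the defect."""
    if warranty_extends_beyond_two_years:
        return False  # a warranty or similar undertaking preserves the right
    # No notice within two years of receiving the goods forfeits the right.
    return not notice_within_two_years

# The worked example from the text: notice given more than two years after
# delivery (so not within two years), and no warranty extending beyond two years.
print(buyer_forfeits_right(notice_within_two_years=False,
                           warranty_extends_beyond_two_years=False))  # True
```

The point is not the triviality of this particular function but that the rule's conditional structure survives the translation intact: the statute's conditions become parameters, and its legal consequence becomes the return value.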
Legislation is expressed in a specialized form of human language and is intended to be read by expert readers: lawyers. Modelling legislation involves the use of AI to operationalize the rules expressed in the legislative text. Artificial intelligence will be utilized to create active connections between different legislative provisions, revealing the superstructures of the law. It will also enable links to decisions interpreting and executing the relevant legislative provisions. These connectors will exceed the functionality of the now familiar links that move the reader from one internet site to another. These links will permit complex or compound algorithmic relationships to be visualized, enabling smooth movement from one legal operation to another. There will be links to analogous provisions of law found in other places. The user will be brought to similar provisions and will be able to run these in parallel in a manner that reveals functional differences.
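One possible data structure for such "active connections" is a graph of provisions with typed links. The sketch below is an assumption about how this might be represented, not a description of any existing system; the identifiers are invented placeholders, not real citations.

```python
# Provisions as nodes; typed links connect rules, exceptions, and case law.
legal_graph = {
    "sale_of_goods:notice_of_defect": {
        "text": "Buyer must notify seller of defects within two years...",
        "links": {
            "interpreted_by": ["case:example_decision"],   # hypothetical case id
            "analogous_to":   ["consumer_sales:notice"],   # similar provision
            "exception":      ["sale_of_goods:warranty"],  # overriding rule
        },
    },
    "sale_of_goods:warranty": {
        "text": "A warranty may extend the seller's responsibility...",
        "links": {"modifies": ["sale_of_goods:notice_of_defect"]},
    },
}

def related(provision: str, relation: str) -> list[str]:
    """Follow one typed link outward from a provision."""
    return legal_graph[provision]["links"].get(relation, [])

print(related("sale_of_goods:notice_of_defect", "exception"))
# ['sale_of_goods:warranty']
```

Unlike a hyperlink, each edge here carries a meaning (exception, analogy, interpretation) that software can traverse and reason over, which is what would allow analogous provisions to be "run in parallel".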
AI-engineered legislative texts will serve as the foundation of effective legal prediction. By responding to questions posed by the AI, an operator will be able to input the particularities of a scenario. She will then be provided with a sound prediction as to how a particular case will be resolved, and judges may use AI to supplement their decision making. AI would guide judges through the necessary considerations and could ensure that judges properly understand and implement the business rules contained within the relevant legislative texts. AI will take existing legal texts and will output clear statements of their logical operation.[19] These will serve as intelligible articulations of business rules that can be followed by individuals and firms that seek to comply with legislative mandates. AI-enabled legislative texts might lead us down the pathway towards the autonomous administration of law. This notion might not be as terrifying as it first sounds, assuming there is built-in human oversight to avoid unlawful or unexpected outcomes. And it might best be deployed in areas of extreme rule complexity, like tax law, where AI might outperform human judges in fairness and accuracy.[20]
If properly implemented, AI-enabled law could be more coherent, more fair and more transparent. But for this to occur, the design and construction of AI must be closely and carefully supervised.
 

AI and Administrative Law

As in so many other parts of society, artificial intelligence brings both opportunities and risks. This is true in relation to the work of public authorities as well. Citizens want public services to be fast, effective and easily accessible. This is also reflected in legal instruments related to good administration, both in national law and European law, and to some extent on the international level. Automation of the work procedures of public authorities can contribute to fulfilling the goals in these legal instruments.[21] Administrative procedures can become faster, more efficient and more easily accessible from home. Risks of corruption and abuse of powers could also be eliminated when machines are programmed to consider only objective facts. Such automation of administrative procedures can be aided by AI technology. It can handle a multitude of data points and independently discover patterns. AI gives us the possibility to automate increasingly complex tasks. This, in turn, can contribute to fulfilling both the social and legal demands of good administration.[22] For example, there are now programs that aim to assess the need for social care for children based on several key data points. According to some evaluations, these programs seem to be more effective than humans. However, in practice, there are several challenges to the implementation of AI technology in administrative law. Many of them stem from the basic structure of machine learning technology. Its advantage, that it can independently discover correlations in huge datasets, may also become a disadvantage. It becomes difficult to predict the outcome beforehand and to give exact reasons for the results.[23] One kind of problem is that correlations don't necessarily reflect normative relations. For example, an AI application may be trained on a number of previous decisions from a social insurance agency and may then see correlations between approval rates and certain groups of people, such as disability benefits granted to people having a certain medical condition. But these correlations may be the result of biases of previous human decision-makers, not correct application of the rules. In this way, the AI application may repeat and even magnify existing false stereotypes.[24]
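A deliberately tiny sketch can make the paragraph's point concrete: a model that learns only the correlations in past decisions will repeat whatever bias those decisions contain. The ten historical cases below are invented for illustration; the "model" is just a frequency count, standing in for any statistical learner.

```python
# (group, approved) pairs from a hypothetical agency's decision history
past_decisions = [
    ("A", True), ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True), ("B", False),
]

def learned_approval_rate(group: str) -> float:
    """The 'model': the historical approval frequency for a group."""
    outcomes = [approved for g, approved in past_decisions if g == group]
    return sum(outcomes) / len(outcomes)

# The correlation is real, but it may reflect biased past decision-makers
# rather than the rules - a system trained on it will reproduce the pattern.
print(learned_approval_rate("A"))  # 0.8
print(learned_approval_rate("B"))  # 0.2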
Since the result of AI is dependent on a large number of correlations, it is often difficult to describe exactly which data points were most decisive. In fact, the more advanced the models used to represent statistical correlations correctly are, the more difficult it is to describe the role of each single data point. Therefore, there is a risk that both biases and other flaws go unnoticed. More generally, it becomes difficult to give citizens the reasons for administrative decisions, and thereby to live up to good administration standards. We don't exactly know how AI systems work and how they make their decisions. So how can we be sure that the decisions they make are fair? From an administrative law perspective, these are key aspects, since the law must ensure that administrative authorities make fair and transparent decisions. A second kind of problem with AI technology in administrative law relates to the lack of human participation in the administrative process.[25]
One aspect of this is that it may be difficult to hold anyone responsible for inaccurate results. Another aspect is that it may be an important part of the administrative process to assure citizens that other human beings really listen to their problems and their arguments.[26] A third kind of problem with AI relates to personal integrity. How much information about citizens should the government collect and analyse? And how far should the government go in trying to use such data to influence, nudge or even manipulate citizens? AI does indeed raise several concerns in relation to administrative procedures. This has also caused decision-makers to issue policy documents.[27] There are now such documents highlighting various kinds of concerns on the national level in many countries, as well as on the European and, to some extent, the international level. However, the question of how the problems highlighted here should be solved is still largely unanswered. Therefore, every use of AI must be evaluated in detail in relation to the law applicable to the particular area of administration. The EU General Data Protection Regulation, the GDPR, is of course important. It contains many general rules on personal data processing. Article 22 of the regulation also sets out specific requirements for when automated decision-making is allowed. In addition, other parts of administrative law must also be analysed.
For example, rules regarding decision-making processes and the obligation to give reasons for decisions. Public authorities that want to use AI face several difficult administrative law problems. Many of them relate to difficulties in controlling and explaining exactly how AI applications work. However, one could argue that the same is true for human decision-making. Even if an AI application is not perfect, it may be as good as or better than human decision-making, which is notorious for biases and all kinds of other flaws. As long as the result is better than human case handling, perhaps one could argue that some shortcomings are acceptable. AI definitely has the potential to improve the work of public authorities in many ways, but only if it can be implemented in ways that live up to the principle of good administration in an acceptable way.
 

AI and Health Law

AI systems are sometimes said to be the solution to many of the challenges we face in healthcare. Researchers and politicians today agree that healthcare is facing major economic, medical and social challenges in many parts of the world. In particular, there are fears that we will not be able to successfully manage demographic developments. At a rapid pace, the elderly are growing as a share of the total population. One of the greatest successes of the last century is the dramatic increase in life expectancy in many parts of the world. Due to rising living standards, better working conditions and significant medical progress, the average age has risen and is expected to continue to do so at an ever-increasing rate.[28] The elderly's consumption of healthcare constitutes a significant part of the total national healthcare production in a society. Although the general health of the population is improving, other causes of morbidity and disability are expected to become common, and they will require different efforts. For example, the population is estimated to have much higher levels of dementia and physical disabilities.[29] Even though we are getting healthier higher up in the years, healthcare costs during our last years of life have exploded. Of course, there are other major challenges that healthcare needs to tackle in the coming years: limited resources and personnel, increasing globalization and a rapidly changing society. We are also very vulnerable to sudden and unexpected changes in the world. During the corona pandemic, we experienced how viral attacks can quickly knock out entire communities and burden healthcare with an unreasonable and excessive workload. All of this may happen again.[30]
To face challenges like this, there is a need to find different solutions. AI technologies are often presented as such a solution to many of these problems. Through rapid technical and medical development, combined with a change in control and regulation, AI solutions are considered able to contribute to a high quality and quantity of healthcare. The healthcare system is estimated to be able to perform more care with fewer resources through new cost-effective and automated systems.[31] With this background, let me now turn to the question of what AI is from a health law perspective. The legal application of AI concerns many areas of medical law. It covers a variety of legal issues related to personal data, administrative procedures, commercial products and direct healthcare services. It should be borne in mind that the overall regulations governing healthcare in the national legal system should generally be usable for AI applications. AI in a legal perspective is therefore, to a large extent, subject to the regulatory framework that encompasses all other healthcare. This means that the starting point is that national and global legal regulations governing healthcare in general should also govern different AI solutions. Some basic principles and regulations of healthcare are already there to use. This is the main rule. Of course, there are exceptions and situations that we have not been able to foresee. Those have to be solved as we go and as the AI systems develop. But for the most part, in many countries, we already have a legal system that will work for AI as well. The way to find the answer to questions of how AI is affected by laws and regulations will be based on the basic legal principles, laws and regulations that govern each national healthcare system. The general framework of healthcare law should therefore apply regardless of whether eHealth, AI or other technology is part of a medical intervention or not. For example, using a Swedish context, Chapter 3 Section 1 of the Swedish Health Law Act states that "The goal of healthcare is good health and care on equal terms for the entire population. Care must be given with respect for the equal value of all people and for the dignity of the individual. Anyone with the greatest need of healthcare should be given priority." This means that these principles should also apply when, for example, an AI is to diagnose whether a person has cancer or heart disease, or in a patient's contact with an online doctor.
In the same way, the requirement for good healthcare in the Swedish legislation means that care must be of good quality, with good hygienic standards, and that it must meet the patient's need for safety. Furthermore, as the regulation also states, it should be easily accessible, build on respect for the patient's self-determination and integrity, and meet the patient's need for continuity and security. In this way, we can similarly take other laws, like the Patient Safety Act, the Patient Data Act and the Patient Act, and study what is required for good healthcare. The same rules must be met for care provided with the help of AI tools. For example, the Patient Act's rules on requirements for information and consent also apply in healthcare situations using AI. We have so far been able to establish that eHealth must comply with the regulations that apply to other healthcare. It is often possible to transfer existing rules to new situations involving this technology as well. This is possible because the technical solutions are mostly used for offering new tools to healthcare personnel, or for providing diagnostics and other evidence for their decisions.[32]
Digital instruments and AI systems can, for example, produce proposals for decisions which are then formally made by humans. Then the difference from traditional care will not be that great: it is generally the physical care personnel who make the crucial care decisions. This is a general regulation that AI has to adapt to. Then there are, of course, some special challenges, particularly when new systems or situations do not fit the current legal design. Within eHealth development, this is particularly present in cases that involve automatic decision-making. For example, we cannot just transfer the rules to an AI, since an AI cannot have legal responsibility. If decisions are made without human intervention, then legal issues are especially at the forefront.[33]
AI also raises legal questions about quality assurance, information management and accountability. Systems with AI are becoming increasingly independent and can, for example, find patterns for decision-making. They can become increasingly self-learning over time.
 
From a legal point of view, such systems mean that the application of our laws may be affected. For example, who becomes responsible if a diagnosis made automatically by a machine proves not to be correct? Or if the doctor does not follow the recommendation made by the AI, and it later turns out that the AI was right? Here we have no clear answers today, and there is a need to review our regulations in the applicable parts in the coming years. The legal system will gradually adapt to AI development, and new regulations can sometimes solve issues. One such example is when AI can be classified as medical equipment. For example, when algorithms are used in healthcare to detect different types of tumours, there is specific EU legislation to follow. Medical equipment in healthcare in Sweden is regulated by a special EU regulation on medical devices. This law contains supplementary provisions on clinical trials, supervision, sanctions and authorizations.
 
Another development that we can expect is that the continued implementation of AI leads to challenging issues regarding equal care and accessibility in care. We need to ask ourselves whether the requirement for equal care will be maintained if not everyone has access to good internet connections and computers. We also see increased involvement of private players, which is an important aspect from a legal perspective. Here, there are fundamental differences between the regulation of public and private actors, where the public sector is generally more regulated. Often, stricter requirements are also placed on government agencies than on private companies.
For example, regarding privacy protection and equal treatment. Therefore, in order to ensure equal care, public healthcare providers need to ensure that these requirements are met even in collaboration with private actors. We can also see increased internationalization: large resources and significant amounts of data are required to develop and operate eHealth solutions. We are therefore getting more involvement from multinational companies and other types of international cooperation. This may mean that data is stored on servers abroad or that foreign expertise is consulted. The virtual world has no geographical boundaries, and data that is collected and generated in healthcare can easily be moved between countries. This means that legally complicated situations can arise here. It can also affect patients' rights protection, as well as opportunities for transparency and accountability.
Of course, we also have the privacy issues to consider when our personal information leaves the country. One conclusion is that for each AI solution in healthcare, it is necessary to carefully analyse the legal conditions. One central requirement is that all eHealth solutions that are provided must meet the quality requirements that are set for all care, in accordance with the general regulations on requirements for good care in each national system. An important issue in healthcare is, of course, information management and the protection of personal privacy. Complex technical and organizational solutions can increase the risk that important information is not disseminated to those who need it, and that sensitive information is disseminated too widely. While medical law needs to continue to protect the individual's integrity and good care on equal terms for the entire population, it should not stand in the way of digital development that can promote healthcare.
The question is how fast the eHealth development should go, and in what ways the legal system may need to adapt to meet the values that healthcare stands for, like the principle of human dignity, care on equal terms and the requirement for privacy protection. If eHealth is to be successfully implemented, both patients and healthcare professionals need to have confidence in the new systems. Here, the legal system has an important role to play.
 
On the one hand, the regulations need to be continuously evaluated and necessary adjustments have to be made. On the other hand, legal solutions to new AI development need to be discussed in relation to ethics and human rights demands. The development of AI poses new questions that involve more than finding legal solutions. It is also about contributing to the design of future care.
In other words, AI is challenging and pushing the legal borders, and a third connection between law and AI concerns the structural and societal issues related to these large changes in society, as well as questions about who bears the responsibility for possible mistakes here. The implementation of AI raises a number of questions for national public healthcare systems, and a lot of these questions are legal in nature. For example, what is the extent of the public's obligation to provide healthcare if there are private AI services, and what should be the relationship between the public and private sectors in providing public services? Then there is also the question of responsibility. In our context, it is particularly important to know who is responsible if the AI makes a mistake. What if an AI gives the wrong diagnosis or carries out a care measure in a way that is harmful to the patient? How should we organize the AI development of society for the best and safest results, for different groups within society and for society as a whole? And how can we avoid discrimination, vulnerability and exclusion in this development? In this regard, AI is very much included as a part of the whole societal development of people's health and physical and psychological wellbeing.

AI and Labour Law

Labour law regulates working life and the relationship between employer and employee, as well as the relationship between trade union and employer. Broadly speaking, the purpose of labour law is to protect the weaker party to the employment contract, the employee. The European Commission has suggested a definition of AI that is useful here: AI "refers to systems that display intelligent behaviour by analysing their environment and taking actions with some degree of autonomy to achieve specific goals", and "AI-based systems can be purely software-based, acting in the virtual world, or AI can be embedded in hardware devices". This is important in relation to the regulation of the workplace. At work, AI is present both in the form of algorithmic processes in computers and in the form of robots, automatically linking AI and algorithms to robotics and the internet of things. As of today, no legislation explicitly regulates AI at work. Instead, the challenge is to fit the new technology of AI into the pre-existing legal framework, and courts have not yet presented any case law on AI at work.[34]
The standing of the law regarding AI at work is therefore not entirely clear. The analysis can be divided into five parts: employment protection when AI replaces humans; AI as an employer; workplace safety issues when working alongside robots; equal treatment, or whether robots can discriminate; and, lastly, data protection and surveillance issues. Let's start with employment protection. The introduction of AI and robotics into working life will imply that some jobs disappear and new ones are created. AI can make workers redundant, and labour law does not hinder an employer from replacing workers with robots and AI. The employer's decision to implement AI and robots results in workers being made redundant for economic, technological and structural reasons, and this constitutes just cause for termination of the employment contract.[35]
In some jurisdictions, a seniority principle governs the order in which employees are terminated, so that workers with shorter periods of employment are terminated before those with longer periods of employment. This is usually referred to as the last-in, first-out principle. Since the decision to introduce AI and robots into the workplace falls within the employer's managerial prerogative, workers must accept working alongside robots. Refusing to do so would provide the employer with just cause for termination of employment based on reasons pertaining to the individual worker. A worker must stay up to date with technological changes to work processes.[36] A key policy goal is then to retrain workers in jobs which are likely to disappear and help them transition into other professions. Let's now move on to AI as an employer. Is it possible for an AI to represent the employer at work? An algorithm or robot can perform the role of a manager at work to the extent that the actions taken can be construed as emanating from a human legally holding the power to allot and direct work. The employee and the employer can stipulate in the employment contract that an algorithm will represent the will of the employer and that the employee is to receive binding instructions from it. The legal responsibility for the actions of the algorithm is borne by the employer, and the instructions given must respect labour law and also the terms of the contract, for example the boundaries of the duty to perform work.
AI systems in a management role must not be in breach of data protection law, such as the right to transparency regarding processing and the right not to be profiled or subjected to decisions based solely on automated means. Because of the power imbalance between employer and employee, it is possible that employees, legally speaking, cannot freely consent to every type of data processing. Let's move on to health and safety when working alongside algorithms and robots. Robots and AI at the workplace present both challenges and opportunities for health and safety at work. To the extent it is reasonably practicable, the employer is required to ensure that the workplace, machinery, equipment, and processes under his or her control are safe and without risk to health. Firstly, algorithms and robots can be useful for workers engaged in dangerous work. It might very well be reasonably practicable to demand that the employer implement the assistance of this type of new technology at work.
 
Secondly, because of the autonomous and possibly unpredictable behaviour of robots and algorithms, humans working alongside thinking machines might experience new forms of stress and mental health risks. Employers are obliged to take measures to reduce these novel risks. Health and safety law provides workers with the right to training on new machinery and algorithms, and should a worker be injured by a robot, it would count as an occupational injury. Most existing legislation on health and safety at work operates under the assumption that machines and robots present dangers to workers and that there should be a safe distance between the two. Health and safety law must be updated so that it takes into account the implications of humans working closely with robots and AI.
AI systems might be involved in processes regarding the hiring and firing of workers and the management of the workforce. The use of AI can present new problems regarding both direct and indirect discrimination. Applicants for a position, as well as workers, are protected against direct discrimination, that is, being treated less favourably in a comparable situation because they have a protected characteristic, for example race or gender. An algorithm engaged in management must be instructed not to discriminate in this way. Indirect discrimination is also prohibited. This means that it is not allowed to implement a policy that applies in the same way to everybody but in effect disadvantages a group of people who share a protected characteristic. A policy that applies equally can still be discriminatory[37].
 
Requirements concerning height or language proficiency might constitute indirect discrimination on the grounds of sex and ethnicity respectively. AI must not be allowed to reproduce prejudices possibly held by the people who constructed the system. The algorithm must be instructed not to ask questions that are irrelevant to the particular context, for example a hiring process or the setting of wages. Since AI and machine learning collect and process data on historical events, it is key that algorithms are programmed in a way that does not perpetuate historical biases and exclusionary practices. A company's previous recruitment practices might have favoured a particular category of candidates, and the algorithm must not be allowed to carry this practice into future recruitment. This is particularly important because of the widely held notion that machines always operate in an objective and neutral manner.
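To make the idea of testing for indirect discrimination more concrete, the short sketch below applies the "four-fifths rule", a statistical rule of thumb drawn from United States employment-testing practice rather than from any instrument discussed in this paper, to the outcomes of a hypothetical hiring algorithm. The outcome data, function names, and threshold below are purely illustrative assumptions, not a prescribed legal test.

# Illustrative sketch only: the "four-fifths rule" heuristic for flagging
# possible indirect (disparate-impact) discrimination in hiring outcomes.
# All data below is invented for demonstration purposes.

def selection_rate(outcomes):
    """Share of candidates in a group whom the algorithm recommended hiring."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes: True means the algorithm recommended hiring.
men = [True, True, True, False, True, True, False, True, True, True]
women = [True, False, False, True, False, False, True, False, False, False]

ratio = disparate_impact_ratio(men, women)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths threshold, used here as a rule of thumb
    print("Potential indirect discrimination: review the algorithm's criteria.")

On these invented numbers the ratio is 0.38, well below the 0.8 threshold, which is the kind of signal that would prompt a review of facially neutral criteria such as the height or language requirements mentioned above.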
 
Now, for the last part: data protection and surveillance issues.
AI, algorithms, and robots must, in order to operate and to learn, collect and process vast amounts of data, and in the context of the workplace this information is personal data pertaining to the employees: their personnel records, their past work performance, and so on. AI systems must respect data protection legislation, which prescribes rights and duties on the part of the employer and employee. At the workplace, AI and robotics often presuppose that employees are subjected to different kinds of surveillance while working. An employer is allowed to implement surveillance systems at work, but these must respect employee privacy and be proportional in the individual instance to a legitimate overriding interest on the part of the employer, and employees must be informed of the surveillance in advance. AI systems must not be in breach of employees' right to privacy at work.
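As a purely illustrative aside, one safeguard commonly discussed under data protection law is pseudonymization: replacing direct identifiers before personnel records reach an analytics or AI system. The sketch below is a minimal illustration under invented assumptions (the record, field names, and key are hypothetical); it is not drawn from any statute cited here, and real compliance requires far more than this single measure.

# Minimal, hypothetical sketch: pseudonymizing an employee identifier
# before a personnel record is passed to an analytics or AI system.
import hashlib
import hmac

# Hypothetical secret key; in practice it would be stored separately
# from the data so the mapping cannot be trivially reversed.
SECRET_KEY = b"store-me-separately-and-rotate-me"

def pseudonymize(employee_id):
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256).hexdigest()[:16]

# Invented personnel record.
record = {"employee_id": "E-1042", "hours_worked": 38, "shift": "night"}
safe_record = {**record, "employee_id": pseudonymize(record["employee_id"])}
print(safe_record)  # the AI system sees a token, not the employee's identity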
To sum up, everything that labour law prohibits an ordinary human employer from doing is also prohibited when done by an algorithm. The employer is, legally speaking, responsible for the actions of algorithms and robots. AI and robotics must be implemented in the workplace in a way that complies with health and safety law, anti-discrimination legislation, data protection legislation, and workers' rights to personal integrity. AI reaches into many areas of labour protection and regulation of the workplace[38]. Labour lawyers must continue to engage with the topic of AI to ensure that the goal of labour protection can be realized in the future contexts of big data, robotics, the Internet of Things, and AI. It is also important that labour law responds to the call for a human-centred vision for AI put forward by international organizations such as the OECD, which ask that governments work closely with stakeholders to promote the responsible use of AI at work, enhance the safety of workers and the quality of jobs, foster entrepreneurship and productivity, and ensure that the benefits of AI are broadly and fairly shared.
 

Regulatory framework in India

At present, the Information Technology Act, 2000[39], along with the Digital Media Ethics Code[40], is in force to govern major privacy, online digital, and artificial intelligence-based operations in India. The Information Technology Act's mandate is primarily directed at giving recognition to electronic commerce and trade, in line with India's international obligations under the UNCITRAL Model Law, so as to give legal force to digital online e-transactions adopted in the market economy to boost trade and commerce[41]. This Model Law can be seen as the starting point for online banking, trade, and commerce across the globe. As a result, many multinational corporations and online digital businesses took shape and unfolded their wings into transboundary transactions, which no doubt eased and boosted the economy, trade, and jobs. But until the early 21st century, few anticipated that this would lead to issues of privacy and cybercrime. The massive use of advanced computing, cloud computing-based applications, and artificial intelligence-based automated systems across the globe has made AI ubiquitous, which ultimately became a source of privacy threats and made data and sensitive personal information in cyberspace more vulnerable. India faces additional challenges because of its rising population, which has made it one of the best market economies for investors in online services across the services, health, banking, and trade sectors.
 

Conclusion

This article has taken an overarching look at how artificial intelligence is going to support the field of legal practice, and that future is not far off. It analyses policies from around the world by raising questions that a layperson may have in mind but hesitates to ask for fear of confronting the reality too soon. Given the complex terrain of navigating the challenges posed by AI systems, it is essential that future deliberation, policy making, and regulation of AI is informed by multiple disciplines on an equal footing. These must be ethically, legally, technically, and philosophically informed throughout the process. The pace of development is quick, the nature of development is opaque, and the effects of development are profound and often irreversible. The tradition of building and deploying technology first and deliberating on its effects next will not work with AI. We hope the proposed framework will help other researchers, policy makers, lawyers, and technologists to explain, deliberate on, and understand the challenges and opportunities of AI in their unique contexts.
Advances in technology have undoubtedly altered the legal industry's outlook. It can be concluded that AI in the field of law has numerous advantages: it has assisted legal professionals in rapid research; it can aid judges in decision-making processes through its predictive technology; and it is useful to law firms for due diligence work, data collection, and other tasks, all of which make their work more efficient. Despite its numerous advantages, AI cannot replace lawyers. It can assist them in specific areas of work; however, AI lacks critical reasoning and is not creative in the way that people are. Machines lack emotional intelligence and empathy, as well as the capacity to improvise before a judge. Incorporating AI into the legal business raises numerous issues, including the fact that it is still vulnerable to a variety of risks, requiring the creation of a comprehensive legal framework to govern artificial intelligence and keep it from exploiting its users' data. Only when we have a legal framework governing AI's behaviour to diminish the risks associated with it can we reap the full benefits of AI.
Artificial intelligence can be considered both a boon and a bane in an evolving society. It tends to reduce the burden on lawyers and other professionals, but it is increasingly encroaching upon a field predominated by human beings. Law is a profession that requires not only human intellect but also human emotion. It is beyond question that artificial intelligence is useful for time-intensive work such as research, but it does not come without disadvantages.
 
Complete reliance on artificial intelligence is definitely a bane. Artificial intelligence has not been granted legal personality, and hence when a glitch arises due to technical or functional malfunctions, nobody may be held accountable. This tends to create an imbalance in society. Hence, artificial intelligence must be used judiciously.
The goal of this article was to provide a realistic, demystified view of Law and Artificial Intelligence. As it currently stands, AI is neither magic nor is it intelligent in the human-cognitive sense of the word. Rather, today’s AI technology is able to produce intelligent results without intelligence by harnessing patterns, rules, and heuristic proxies that allow it to make useful decisions in certain, narrow contexts. However, current AI technology has its limitations. Notably, it is not very good at dealing with abstractions, understanding meaning, transferring knowledge from one activity to another, and handling completely unstructured or open-ended tasks. Rather, most tasks where AI has proven successful (e.g., chess, credit card fraud, tumour detection) involve highly structured areas where there are clear right or wrong answers and strong underlying patterns that can be algorithmically detected. Knowing the strengths and limits of current AI technology is crucial to the understanding of AI within law. It helps us have a realistic understanding of where AI is likely to impact the practice and administration of law and, just as importantly, where it is not.
 


[1] https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2018.0087
[2] IJLMH, Volume 1, Issue 2 (2018), ISSN: 2581-5369, https://www.ijlmh.com.
[3] What is Artificial Intelligence? Available at http://www.aisb.org.uk/public-engagement/what-is-ai.
[4] Eastern Book Company v D.B. Modak, (2008) 1 SCC 1.
[5] Cowls J, Floridi L. 2018 Prolegomena to a White Paper on an Ethical Framework for a Good AI Society. Working Paper Series. See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3198732.
[6] Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. CHI. LEGAL F. 207, 208 (1996) (“Develop a sound law of intellectual property, then apply it to computer networks.”)
[7] Microsoft, THE FUTURE COMPUTED: ARTIFICIAL INTELLIGENCE AND ITS ROLE IN SOCIETY 28 (2018) https://blogs.microsoft.com/uploads/2018/02/The-Future-Computed_2.8.18.pdf.
[8] David Kelnar, The Fourth Industrial Revolution: A Primer on Artificial Intelligence, MEDIUM (Dec. 2, 2016), https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-aiff5e7fffcae1 (“All machine learning is AI, but not all AI is machine learning.”). There are more than 15 approaches to machine learning, each of which uses a different algorithmic structure to optimize predictions based on the data received. Id. For a more nuanced description of machine learning, see Ben Buchanan and Taylor Miller, Machine Learning for Policymakers, Belfer Center (June 2017).
[9] See BULK COLLECTION: SYSTEMATIC GOVERNMENT ACCESS TO PRIVATE SECTOR DATA (Fred H. Cate & James X. Dempsey, eds., Oxford 2017).
[10] Tathagata Chakraborti et al., The Emerging Landscape of Explainable AI Planning and Decision Making (02/26/2020), https://deepai.org/publication/the-emerging-landscape-of-explainable-ai-planning-and-decision-making.
[11] William D. Eggers et al., AI-Augmented Government: Using Cognitive Technologies to Redesign Public Sector Work, DELOITTE INSIGHTS (Apr. 26, 2017), https://www2.deloitte.com/insights/us/en/focus/cognitive-technologies/artificial-intelligencegovernment.html [https://perma.cc/4VLZ-8485].
[12] Kate Crawford and Trevor Paglen, Excavating AI: The Politics of Training Sets for Machine Learning (Sept. 19, 2019), https://excavating.ai. See Will Knight, AI Is Biased: Here’s How Scientists Are Trying to Fix It, WIRED (Dec. 19, 2019), https://www.wired.com/story/ai-biased-how-scientists-trying-fix.
[13] See, e.g., Peter Swire, The Golden Age of Surveillance, SLATE (July 15, 2015).
[14] See Quinn Emanuel Trial Lawyers, Artificial Intelligence Litigation: Can the Law Keep Pace with the Rise of the Machines (2018), https://www.quinnemanuel.com/the-firm/publications/article-december-2016-artificial-intelligence-litigation-can-the-law-keep-pace-with-the-rise-of-the-machines/.
[15] Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 HARV. J. LAW & TECH. 353 (2016).
[16] See Report to the European Parliament, with Recommendations to the Commission on Civil Law Rules on Robotics, from the Committee on Legal Affairs, Mady Delvaux, Rapporteur (Jan. 1, 2017), http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+REPORT+A8-2017-0005+0+DOC+PDF+V0//EN. For more on the current EU landscape around liability and ethics of AI, see Nathalie Nevejans, European Civil Law Rules in Robotics (Oct. 2016), published by the Directorate-General for Internal Policies, http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf.
[17] Michael Kushner, To Pay or Not to Pay: Free Legal Services at the Push of a Button, JETLAW (Oct. 15, 2018), http://www.jetlaw.org/2018/10/15/to-pay-or-not-to-pay-free-legal-services-at-the-push-of-a-button/ [https://perma.cc/AG94-YR2A].
[18] See generally Omer Tene & Jules Polonetsky, Taming the Golem: Challenges of Ethical Algorithmic Decision-Making, 19 N.C. J. L. & TECH. 125 (2017).
[19] Nick Wallace & Dan Castro, The Impact of the EU’s New Data Protection Regulation on AI, INFORMATION TECH. & INNOVATION FOUNDATION (Mar. 26, 2018), https://itif.org/publications/2018/03/26/impact-eu-new-data-protection-regulation-ai.
[20] Richardson, Schultz and Southerland, Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems (AI Now Institute, Sept. 2019), https://ainowinstitute.org/litigatingalgorithms-2019-us.pdf.
[21] See generally DANIELLE KEHL ET AL., ALGORITHMS IN THE CRIMINAL JUSTICE SYSTEM: ASSESSING THE USE OF RISK ASSESSMENTS IN SENTENCING (2017), https://dash.harvard.edu/bitstream/handle/1/33746041/2017-07_responsivecommunities_2.pdf?sequence=1&isAllowed=y [https://perma.cc/U6UC-8MCL] (analyzing and applying trends to the recent use of artificial intelligence in the courtroom).
[22] See Jonathan Howard, A Big Data Cheat Sheet: From Narrow AI to General AI, MEDIUM (May 23, 2017), https://blog.statsbot.co/3-types-of-artificial-intelligence-4fb7df20fdd8.
[23] Ian R. Kerr, Ensuring the Success of Contract Formation in Agent-Mediated Electronic Commerce, 1 ELEC. COMMERCE RESEARCH 183 (2001).
[25] See Peter Voss, From Narrow to General AI, MEDIUM (Oct. 3, 2017), https://medium.com/intuitionmachine/from-narrow-to-general-ai-e21b568155b9.
[26] Ben Dickson, What is Narrow, General, and Super Artificial Intelligence, TECHTALKS (May 12, 2017), https://bdtechtalks.com/2017/05/12/what-is-narrow-general-and-super-artificial-intelligence/ (“Narrow AI is the only form of Artificial Intelligence that humanity has achieved so far.”).
[27] McKinsey Global Institute, Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation (December 2017).
[28] See Statement from FDA Commissioner Scott Gottlieb, M.D. on steps toward a new, tailored review framework for artificial intelligence-based medical devices (April 2, 2019).
[29] In February 2019, the AMA Journal of Ethics devoted its entire issue to AI. https://journalofethics.ama-assn.org/issue/artificial-intelligence-health-care.
[30] https://health.economictimes.indiatimes.com/news/industry/artificial-intelligence-in-healthcare-applications-and-legal-implications/66690368
[31] FDA, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device: Discussion Paper and Request for Feedback.
[32] Paul Triolo, Elsa Kania, & Graham Webster, Chinese Government Outlines AI Ambitions Through 2020, NEW AM. (Jan. 26, 2018), https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-government-outlines-ai-ambitions-through-2020/.
[33] Fairness, Accountability, and Transparency in Machine Learning, https://www.fatml.org/.
[34] Elizabeth E. Joh, Artificial Intelligence and Policing: First Questions, 41 SEATTLE UNIV. L. REV. 1139 (2018), https://ssrn.com/abstract=3168779. See also Christopher Rigano, Using Artificial Intelligence to Address Criminal Justice Needs, National Institute of Justice (Oct. 8, 2018) (discussing NIJ support for AI research in four areas: video and image analysis, DNA analysis, gunshot detection, and crime forecasting).
[35] Jessica Cussins Newman, DECISION POINTS IN AI GOVERNANCE (May 2020), https://cltc.berkeley.edu/wp-content/uploads/2020/05/Decision_Points_AI_Governance.pdf.
[36] David Gunning, Explainable Artificial Intelligence, Defense Advanced Research Projects Agency, https://www.darpa.mil/program/explainable-artificial-intelligence.
[37] See Karen Yeung, Andrew Howes, and Ganna Pogrebna, AI Governance by Human Rights-Centered Design, Deliberation and Oversight: An End to Ethics Washing (June 21, 2019), in M. Dubber and F. Pasquale (eds.) THE OXFORD HANDBOOK OF AI ETHICS (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3435011; Karen Hao, In 2020, let’s stop AI ethics washing and actually do something, TECHNOLOGY REVIEW (Dec. 27, 2019).
[38] See Jeffrey K. Gurney, Crashing Into The Unknown: An Examination Of Crash-Optimization Algorithms Through The Two Lanes Of Ethics And Law, 79 ALB. L. REV. 183, 242 (2016).
[39] Section 43A provides for the protection of sensitive personal data or information (‘SPDI’), and section 72A protects personal information from unlawful disclosure in breach of contract.
[40] See Preamble of the Information Technology Act, 2000. Today India has jumped to 10th position in the cyber security index released by the ITU due to its stringent measures on privacy protection policy. But India is still grappling to bring in a robust data protection law: in 2019 it tabled a data protection bill in Parliament, on the recommendation of the Justice Srikrishna Committee Report, which highlighted the principle of fair use of data in the digital economy, to deal with the data privacy issues raised by AI and other applications and to cover the risks attached to data protection vulnerabilities. Thus, the above-mentioned law primarily operates to take care of digital governance and privacy-related issues on online platforms, which also takes into account the issues of AI.
[41] Notification dated the 25th February 2021, G.S.R. 139(E): the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
