Open Access Research Article

CHALLENGES OF ARTIFICIAL INTELLIGENCE IN HEALTH CARE SYSTEM: ITS LEGAL AND ETHICAL CONSEQUENCES.

Author(s):
HRISHIKESH RAJKHOWA
Journal IJLRA
ISSN 2582-6433
Published 2023/04/29
Access Open Access
Volume 2
Issue 7


CHALLENGES OF ARTIFICIAL INTELLIGENCE IN HEALTH CARE SYSTEM: ITS LEGAL AND ETHICAL CONSEQUENCES.
 
AUTHOR- HRISHIKESH RAJKHOWA
 
 
Abstract-
Medical practice has developed a sizable set of methods for treating diseases. The high investment allure of medical technologies, however, appears to be leading hospitals to take actions that run against doctors' Hippocratic oath, and this may hamper the widespread employment of cyborg-AI doctors, AI robots, and AI medical organizations in the near future. Choosing the best course of treatment, of course, requires an accurate diagnosis, and AI robots have recently exhibited the capacity to choose appropriate treatment procedures and efficient medications based on genomic data. This study, however, does not discuss the various advantages of AI; rather, it focuses on the problems arising from the introduction of AI in the health care and medical science sector. These problems range across aspects such as patient safety, the doctor-patient relationship, transparency, accountability, and privacy. The study gives a general overview of the existing uses and problems of AI in health care and, at the same time, surveys the laws that currently exist around the world for its regulation.
 
Keywords- AI (Artificial Intelligence), IBM (International Business Machines), ML (Machine Learning), GDPR (General Data Protection Regulation), CDS (Clinical Decision Support), EU (European Union).
 
 
I.                INTRODUCTION
The term "artificial intelligence" (AI) refers to a group of technological solutions that mimic human cognitive functions, including the capacity for independent learning and decision-making without the aid of predetermined algorithms. When used to accomplish certain tasks, AI can yield results that are on par with or superior to those produced by human intellectual activity. This group of technological solutions consists of tools and services for data processing, decision-making, and information and communication infrastructure, software (including programs that use machine learning techniques), and software applications.
 
The use of digital technology and artificial intelligence (AI) in healthcare has evolved over many years. MYCIN, an expert system developed by Stanford University, enabled medical professionals to recognize bacterial diseases such as bacteremia and meningitis and to recommend a suitable course of action. MYCIN was never utilized in daily practice; it served only as an experimental model of what AI is capable of. In 1986, the University of Massachusetts introduced a decision support system called DXplain, which generated a list of potential diagnoses from a patient's symptoms for the doctor's reference. The University of Washington then put the Germwatcher expert system into practice for identifying infections in patients[1]. Since the start of the twenty-first century, the creation of AI-based medical applications has been a priority concern for IT industries.
 
According to IBM, the company that created the Watson supercomputer, AI technologies can be used in healthcare to structure medical data: processing natural language and turning it into clinical text; analyzing patient information, such as abstracting treatment records into a patient's medical history; comparing clinical diagnostic findings to choose the best course of action; and advancing medical knowledge by developing models of patient therapy based on comparable cases, as well as confirming medical hypotheses.[2]
 
II. USE OF ARTIFICIAL INTELLIGENCE IN HEALTH CARE AND MEDICINE.
The sectors where AI has been extensively used in health care and medicine are as follows.
1.      Drug Development and Validation-
In the process of developing new medications, AI first made it possible to speed up the work by processing massive data sets. In this context, the fundamental goal of AI is to forecast how candidate drug molecules will interact with the proteins of human cells and, in turn, how well the medicine will work. AI can also be used to investigate illness mechanisms and to find and study biomarkers.
 
In order to quickly create a treatment for Ebola virus infections in 2015, Atomwise collaborated with the University of Toronto and IBM amid an outbreak of the disease in West Africa. Atomwise has made fundamental AI technology available for medicinal development.[3]
 
For instance, AI technology allowed researchers in 2020 to examine the activity of dozens of medications with regard to their capacity to inhibit an enzyme that the SARS-CoV-2 virus needs to replicate in human cells. In the context of the COVID-19 pandemic, the application of AI technology has enhanced the design of clinical trials, streamlined the development of novel vaccines and the analysis of trial findings, and supported the comparison and systematization of data from various patient groups.[4]
 
 
2.      Diagnosis of Disease and Off-Site AI Application-
The use of algorithms in medical AI offers many advantages over human cognition, such as the capacity to work nonstop and the absence of vulnerability to weariness or emotional bias. These advantages become especially crucial in the event of disease outbreaks (epidemics and pandemics), in the treatment of severe forms of disease, and upon the appearance of new diseases previously unknown to medicine. Physicians attest to AI's capacity to correctly identify a variety of illnesses. For example, it has been shown that AI can accurately identify colon polyps during a colonoscopy and can identify coagulopathy and inflammation in trauma.[5] AI is able to evaluate a patient's present state of health and instantaneously place it in the context of their whole medical history. One application of this functionality is Botkin, Russian software that in 2019 demonstrated the capacity to detect cancer.
 
Additionally, AI can assess external factors such as weather and climatic conditions (temperature, atmospheric pressure, and humidity level), sanitary and epidemiological conditions, a patient's genetic susceptibility to certain infections, economic factors (household income, living arrangements, and working capacity), degree of socioeconomic status, and environmental factors in real time. All forms of AI, including a cyborg-AI doctor, a robot doctor, an AI hospital, and an AI cloud doctor, can be used to address the aforementioned problems. AI can help in managing health issues, preventing diseases, and keeping an eye on the likelihood that diseases will spread.[6]
A remote medical examination of a patient based on their symptoms and medical history, and the determination of the need for hospitalization (based, for example, on the results of an ECG, coronary angiogram, or ultrasound examination), are examples of ways that AI can be used in healthcare off-site. The relevant areas include oncology, gastroenterology, orthopedics, ophthalmology, endocrinology, gynecology, and others.
 
Obviously, a home X-ray, MRI, or CAT scan is not currently available off-site. In many situations, physical contact with a patient is still necessary, and only a few medical tests can be performed at a patient's home with the portable equipment now available. Therefore, the deployment of cyborg-AI-doctors, AI-robots, and AI-medical organizations in healthcare facilities is required. Although an AI-cloud doctor is not yet able to conduct a thorough patient examination from a distance, some encouraging progress has been made in this area.[7]
 
 
3.      Treatment by Novel AI-Powered Solutions-
The field of medicine has produced a sizable set of methods for treating diseases. It appears that hospitals may make decisions that go against doctors' Hippocratic oath because of the high investment attraction of healthcare technology. This could limit the widespread usage of AI-cloud-doctors, as well as the predominance of cyborg-AI-doctors, AI-robots, and AI-medical organizations, in the near future. It should go without saying that selecting the proper treatment requires an accurate diagnosis. The AI robot Sophia recently exhibited the capacity to choose appropriate treatment procedures and efficient medications based on genomic data[8].
All types of AI, including cyborg-AI-doctors, AI-robots, AI-hospitals, and AI-cloud-doctors, can be used effectively in the following fields:
·                   medical interventions (surgery),
·                   pharmaceutical production and prescription (pharmaceutics and pharmacology),
·                   immunotherapy and herbal medicine, the epidemiology of epidemics, etc.
AI can function in three different ways:
·                   as a cyborg-AI-doctor,
·                   as an AI-robot/AI-hospital/AI-cloud doctor supporting a human physician, or
·                   as an autonomous and remotely controlled AI-robot/AI-hospital.
The chances of interacting with various types of AI appear good. An AI-robot assistance, for instance, could be useful to a human surgeon performing an intervention using a remote-controlled machine with surgical instruments. AI can quickly access a patient's medical history during both surgical and diagnostic procedures and assess the variables that may influence the choice of treatment (climatic circumstances, epidemiological situation, the patient's genetic susceptibility to infections, etc.).
Robotic surgery and pharmaceuticals, which would lower staffing expenses and enable 24-hour patient care, are the future of medicine. Even a medication regimen driven by mediocre AI has the potential to help a patient by inducing the placebo effect. Furthermore, because AI algorithms are designed to minimize errors, patients will be more inclined to believe, on a subconscious level, that the work of AI is error-free.
III. CHALLENGES IN THE USE OF ARTIFICIAL INTELLIGENCE IN HEALTH CARE AND MEDICINE.
As the previous section suggested, the application of AI in clinical healthcare practice has enormous potential to improve it, but its use in this sector also poses many ethical and legal issues that need to be examined.
 
1.      Consent for use-
The patient-physician interaction will change as a result of health AI applications in areas like imaging, diagnosis, and surgery. But how will the application of AI to health care interact with the concept of informed consent? Although informed consent will be one of the most urgent obstacles to successfully integrating AI into clinical practice, this burning question has not earned enough attention in the ethical discussion. It is necessary to investigate when informed-consent principles should be applied in the context of clinical AI.[9]
 
How much of a duty do clinicians have to inform patients about the intricacies of AI, especially the type of ML used, the data inputs involved, the system itself, and the potential for biases or other flaws in the underlying data? Under what conditions must the patient be informed that artificial intelligence is being employed at all?
 
These inquiries are particularly difficult to answer when the AI uses "black-box" algorithms, which may be the product of machine-learning methods too complex for practitioners to completely comprehend. For instance, Corti's algorithms[10] are "black boxes": not even the software's creator knows how the program determines when to notify emergency dispatchers that a person is having a cardiac arrest. Medical practitioners might be concerned about this lack of understanding. How far, for instance, should a clinician go in explaining that they are unable to completely understand the AI's diagnostic or therapy suggestions? How much openness is required? How does this interact with the GDPR's purported "right to explanation"? What about situations where the patient is hesitant to consent to the use of specific data categories (such as genetic information and family history)? How can we effectively strike a balance between patient privacy and the safety and efficiency of AI?
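One way practitioners probe such opaque systems is with model-agnostic, post-hoc techniques such as permutation importance, which asks how much a model's output changes when one input is scrambled. The sketch below is illustrative only: the `black_box_risk` scorer and its weights are hypothetical stand-ins, not any real clinical system.

```python
import random

random.seed(42)

# Hypothetical stand-in for a black-box clinical model: it maps three patient
# features to a risk score through fixed but undocumented weights, so no
# explicit rule can be read out of it.
_WEIGHTS = [0.9, 0.05, 0.05]

def black_box_risk(features):
    """Return a risk score in [0, 1] from three numeric features in [0, 1]."""
    return sum(w * x for w, x in zip(_WEIGHTS, features))

def permutation_importance(model, rows, n_features):
    """Post-hoc probe: shuffle one feature column at a time and measure the
    average change in the model's output. A large change suggests the model
    leans heavily on that feature."""
    base = [model(r) for r in rows]
    importances = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        random.shuffle(col)
        drift = sum(
            abs(model(r[:j] + [col[i]] + r[j + 1:]) - base[i])
            for i, r in enumerate(rows)
        ) / len(rows)
        importances.append(drift)
    return importances

rows = [[random.random() for _ in range(3)] for _ in range(200)]
imps = permutation_importance(black_box_risk, rows, 3)
# The probe reveals that the first feature dominates the opaque model.
```

Even such probes only approximate what the model does; they do not turn a black box into an explanation a patient could meaningfully consent to, which is precisely the tension in the "right to explanation" debate.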
 
AI healthcare apps and chatbots are also being utilized more often in a variety of health-related applications, such as diet advice, health evaluations, assistance with medication adherence, and analysis of data gathered by wearable sensors. These applications raise concerns from bioethicists regarding user agreements and how they relate to informed consent as conventionally understood. A user agreement is a contract that a person accepts without the face-to-face conversation that accompanies the informed-consent process. Most individuals habitually disregard user agreements because they don't take the time to study them.[11] The software's frequent updates further complicate people's ability to adhere to the terms of service they have accepted. What details should be provided to users of these apps and chatbots? Do users fully comprehend that continuing to use an AI health app or chatbot may require agreeing to modified terms of use? How closely should informed-consent documents mirror user agreements?[12]
 
 
2.      Safety and Transparency-
 One of the main obstacles for AI in healthcare is safety. To give an example that has received a lot of attention, IBM Watson for Oncology employs AI algorithms to analyse data from patient medical records and assist doctors in looking into cancer therapy options for their patients. It has recently drawn criticism, meanwhile, for allegedly making "unsafe and erroneous" suggestions regarding cancer treatments. The issue appears to be with Watson for Oncology's training process, which only used a small number of "synthetic" cancer cases developed by Memorial Sloan Kettering (MSK) Cancer Center physicians as opposed to genuine patient data.[13]
 
This real-world instance has brought negative publicity to the field. It also demonstrates how crucial it is that AI systems be reliable and efficient. But how can we ensure that they live up to this promise? Stakeholders, in particular AI developers, must ensure two critical factors in order to fully utilize AI's potential: (1) the authenticity and trustworthiness of the datasets; and (2) transparency.
 
The employed datasets must first be valid and dependable. When it comes to AI, the saying "garbage in, garbage out" applies: the performance of the AI will improve with more, and better-labelled, training data. To get reliable results, the algorithms frequently need to be refined further. Data sharing is a significant problem as well. For self-driving cars, for example, where the AI needs to reach a high level of confidence, more data sharing will be required.[14]
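The "garbage in, garbage out" point can be made concrete with a deliberately tiny sketch: a toy model that learns the majority diagnosis for each symptom profile. The symptom profiles and labels below are invented for illustration; the point is only that corrupted training labels propagate directly into the model's recommendations.

```python
from collections import Counter

def train(records):
    """Toy 'model': learn the majority diagnosis for each symptom profile."""
    by_profile = {}
    for symptoms, diagnosis in records:
        by_profile.setdefault(symptoms, []).append(diagnosis)
    return {p: Counter(ds).most_common(1)[0][0] for p, ds in by_profile.items()}

profile = ("fever", "cough")

# Clean training data: five correctly labelled records.
clean = [(profile, "flu")] * 5
# "Garbage in": the same records, but three were mislabelled at data entry.
noisy = [(profile, "flu")] * 2 + [(profile, "measles")] * 3

print(train(clean)[profile])   # flu
print(train(noisy)[profile])   # measles: the mislabelled majority wins
```

Real training pipelines fail in subtler ways, but the mechanism is the same: a model has no access to ground truth beyond its labels, which is why dataset validity must be audited before deployment.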
 
 Second, some degree of transparency must be guaranteed in the interest of patient confidence and safety. While in an ideal world all information and algorithms would be accessible to the public, there may be some real concerns about safeguarding investments and intellectual property as well as avoiding an increase in cybersecurity risk. Additionally, AI developers must be sufficiently transparent, for instance, regarding the types of data used and any software flaws such as data bias. We should draw conclusions from cases like Watson for Oncology, in which IBM concealed Watson's risky and ineffective therapy suggestions for more than a year.[15] Finally, trust among all parties, especially physicians and patients, is built through transparency, and this trust is essential for a successful application of AI in clinical practice.[16]
 
 
3.      Fairness and Algorithmic Bias-
AI has the potential to enhance healthcare not just in high-income settings, but also by "globalizing" it and making it accessible even to remote locations. Any ML system or human-trained algorithm, however, will only be as reliable, efficient, and equitable as the data it is trained with. AI is also susceptible to biases, which could lead to discrimination.[17] Therefore, it is crucial that AI developers are conscious of this danger and take steps to reduce potential biases at every phase of the product development process. They should pay particular attention to the risk of biases when choosing (1) the ML procedures they want to employ to train the algorithms and (2) the datasets, including an evaluation of their quality and diversity, they want to utilize for the programming.
 
Algorithms can display biases that lead to unfairness with regard to ethnic origin, skin colour, or gender, as a number of real-world incidents have shown. Biases might also exist with respect to other characteristics, such as age or disability.[18] Such biases have numerous, varied, and complex causes. For instance, they might be the outcome of the datasets themselves (which may not be representative), of the selection and analysis of the data by data scientists and ML systems, of the way the AI is used, etc.
 
Biased AI could, for example, result in incorrect diagnoses, render therapies ineffective for specific subgroups, endanger their safety, and so forth, in areas of medicine where phenotype- and occasionally genotype-related information is involved. Consider an AI-based clinical decision support (CDS) tool that helps doctors select the optimal care for patients with skin cancer. If the algorithm was primarily trained on Caucasian patients, the AI software will probably provide less accurate or even incorrect suggestions for subpopulations, such as African Americans, for which the training data was underinclusive.
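The underinclusive-training-data problem can be sketched in a few lines. The numbers below are fabricated for illustration (they are not dermatology data): a single "image contrast" feature predicts malignancy well in one group, but the same lesions present with lower contrast in the underrepresented group, so a cut-off fitted to the pooled, skewed sample serves that group poorly.

```python
def best_threshold(examples):
    """Fit a one-feature classifier: pick the cut-off that minimises
    training error over (feature_value, is_malignant) pairs."""
    candidates = sorted({x for x, _ in examples})
    return min(candidates, key=lambda t: sum((x >= t) != y for x, y in examples))

def group_accuracy(examples, t):
    """Fraction of a group's examples the threshold classifies correctly."""
    return sum((x >= t) == y for x, y in examples) / len(examples)

# Fabricated data: malignant lesions show contrast >= 0.7 in the majority
# group but only >= 0.45 in the underrepresented group.
majority = [(0.2, False), (0.5, False), (0.7, True), (0.8, True)] * 10
minority = [(0.1, False), (0.2, False), (0.45, True), (0.5, True)]

t = best_threshold(majority + minority)   # fitted to the skewed pool
print(t)                                  # 0.7: optimal for the majority
print(group_accuracy(majority, t))        # 1.0
print(group_accuracy(minority, t))        # 0.5: both malignant cases missed
```

Auditing performance per subgroup, rather than reporting a single pooled accuracy number, is what surfaces this kind of disparity before a CDS tool reaches the clinic.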
 
 
4.      Data Privacy-
The Royal Free NHS Foundation Trust was found to have violated the UK Data Protection Act 1998 when it gave Google DeepMind access to the personal information of about 1.6 million patients, according to a decision made in July 2017 by the UK Information Commissioner's Office (ICO). The data exchange took place during the clinical safety testing phase of "Streams", an app intended to assist with the diagnosis and identification of acute kidney injury. Patients were not properly informed about how their data was processed during this testing, though. Information Commissioner Elizabeth Denham aptly noted that "the erosion of fundamental privacy rights does not need to be the price of innovation."[19]
 
Recent case studies demonstrating patient privacy issues in the context of data sharing and the application of AI include the legal proceeding Dinerstein v. Google[20] and Project Nightingale by Google and Ascension[21]. What about who owns the data, though? Health data can be worth billions of dollars, but some evidence indicates that the public is uneasy about businesses or the government making money off of selling patient data[22]. However, there can be means other than actual ownership by which patients can feel valued. For instance, the Royal Free NHS Foundation Trust and Google DeepMind reached an agreement for the Trust to use Streams for free for five years in exchange for providing patient data for the app's testing. Ownership is not always necessary for reciprocity, but anyone wishing to utilize patient data must demonstrate how doing so will benefit the health of the very same patients whose data is being used.
 
 
IV.        LAW REGULATING AI IN THE FIELD OF HEALTH CARE AND MEDICINE.
The chances of interacting with various types of AI appear good. For instance, an AI-robot assistant may be useful to a human surgeon performing an operation with a remote-controlled device equipped with surgical instruments, lowering the cost of hiring workers and enabling 24/7 patient care. Even inadequate or subpar AI-driven treatment may be advantageous by inducing the placebo effect. Additionally, patients will unconsciously be more prone to view the AI's work as error-free, since its algorithms are created to reduce errors.[23]
 
A. Legal Aspects of AI in the field of medicine and health care-
A central legal issue is trust in AI in the field of medical science. Advances in science, technology, and digital technology have made possible the development of a single, trustworthy digital realm. Participants in this realm are assumed to have confidence in the information obtained from it, and as a result, their identification and authentication take place automatically. The topic of the "space of trust" was initially raised with regard to the acceptance of electronic signatures. The European Union passed Directive 1999/93/EC of the European Parliament and of the Council of December 13, 1999, on a Community Framework for Electronic Signatures to address this problem. This directive was later replaced by Regulation (EU) No 910/2014 of the European Parliament and of the Council of July 23, 2014, on Electronic Identification and Trust Services for Electronic Transactions in the Internal Market and Repealing Directive 1999/93/EC.[24]
 
The Eurasian Economic Union and other international organizations used a similar strategy. Information could be electronically transmitted remotely thanks to the Internet. Through the identification and verification of participants in the information exchange, the existence of the digital space of trust with regard to electronic signatures ensures that the information received is trustworthy and reliable.[25]
 
In general, any digital information is decoded by translating binary code, a series of ones and zeros, into text that can be understood by humans. It seems sensible to use the e-signature space of trust as a guide for creating a digital space of trust in AI, where uniform identification and permission procedures would apply to medical AI. The creation of a single digital platform for AI trust is expected to be hindered by the various procedures implemented in medical schools and medical customs around the world (the subjective factor). However, all medical professionals share a common duty, regardless of the patient's ethnicity, religion, gender, or colour, or the hospital's location: to treat the patient or prevent them from becoming ill. As a result, it is likely that international medical alliances and organizations (such as the World Health Organization, the International Federation of Red Cross and Red Crescent Societies, Médecins Sans Frontières, etc.) will play a significant part in the development of the AI trust space in the digital sphere.
 
The fundamental tenet guiding the creation of the unified digital space of trust in medical AI is that doctors, patients, the government, the state, and civil society as a whole will all recognize the veracity of the information transmitted in relation to AI.
 
B.     Laws in the European Union.
The following sources of legal regulation applicable to AI technology can be identified at the level of the European Union (EU):
·      In February 2017, the EU Parliament passed Resolution 2015/2103(INL) on Civil Law Rules on Robotics.[26] The European approach to robotics and AI is founded on Isaac Asimov's laws, which state that: (1) a robot shall not, through act or omission, cause harm to a human being;
(2) a robot must accept human commands if they do not conflict with the first law, in order to avoid harm to humans; and
(3) a robot is required to ensure its own safety to the degree that doing so does not violate the first two laws.
The terms of the aforementioned resolution primarily apply to robotics, although it might be assumed that, by analogy, they also apply to AI technology. The resolution proposes giving AI robots the status of electronic persons, establishing a European registration system for "smart" robots, and defining culpability for harm caused by robotics.
 
Following this, representatives of 25 European nations, including those outside the European Union, signed a Declaration of cooperation on artificial intelligence in April 2018. According to its provisions, the signatory nations committed to working on an integrated European approach to the development of artificial intelligence, pursuing cogent national policies to increase the competitiveness of the European Union, and fostering digital innovation.[27]
 
In 2018 the European Commission also created the Coordinated Plan on Artificial Intelligence of December 7, 2018, which offers a European strategy for the development of robots and AI. The participating States' overarching objective in cooperating is to ensure that Europe emerges as the world's premier region for the development and implementation of advanced, ethical and safe AI.[28]
 
Additionally, ethical concerns related to the usage of AI are regulated. In order to promote trustworthy AI, the European Commission approved Ethics Guidelines for Trustworthy AI in 2019 (Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence, 2019). Three requirements should be met over the course of a trustworthy AI system's complete life cycle: it should be (1) lawful,
(2) ethical, in order to ensure adherence to ethical principles and values, and
(3) robust from both a technical and social standpoint, since even with the best of intentions AI systems can hurt people unintentionally. The document states that respect for human autonomy, prevention of harm, fairness, and explicability are the four basic ethical considerations for using AI.
In order to digitalize European society and preserve competitiveness in cutting-edge technologies, such as robotics and AI, the EU has also adopted the Digital Europe Programme as a structural component of the EU financial framework for 2021-2027.[29] This programme's main goal is to promote digital transformation by funding the adoption of emerging technologies in the most crucial fields, each with a separate budget: high performance computing, artificial intelligence, cybersecurity and trust, advanced digital skills, and the deployment and best use of digital capacities.
C.    Laws in Russia
Russia has been creating the initial iterations of national guidelines in the area of AI in healthcare since 2020. The Technical Committee for Standardization "Artificial Intelligence" (TC 164), founded on the model of the Russian Venture Company, will oversee the development. Thus, by 2027, it is intended to develop about 50 standards pertaining to the use of AI technology in healthcare in specific areas, such as general requirements and classification of AI systems in clinical medicine, radiology and functional diagnostics, remote monitoring systems, histology, medical decision support systems, image reconstruction in diagnostics and treatment, big data in healthcare, medical analytics, and forecasting.
 
A distinct type of law (rules of conduct), medical custom, can be recognized in addition to the conventional legal sources.
René David and Camille Jauffret-Spinosi, researchers in comparative law, contend that although legal customs need to be given legal legitimacy, this does not preclude them from being regarded as an independent, impartial source of law.[30] Legal customs are typically accepted by academics as sources of law on par with legislative acts (see, for example, the writings of Panagiotis Zepos). The works of Raymond Legeais provide a complete description of the contribution of legal customs to the system of sources of law in various legal traditions. In line with this theory, it might be claimed that medical legal customs recognised by the government or other public organizations (such as the World Health Organization) can be seen as sources of AI regulation.
 
The topic is not without debate, though. How do we integrate medical customs, which have been formed over many years of effort by the medical community, into the legal framework governing AI technologies (the cyborg-AI-doctor, AI-robot, AI-medical organization, and AI-cloud-doctor)? Can such legal practices evolve in the future as a result of cooperation between doctors and AI systems? The short history of AI makes it challenging to answer these questions. It is anticipated that the state will play a significant part in these issues, since its competent authorities will have to decide which medical legal customs (written or unwritten) may regulate AI in healthcare. The distinction between legal and illegal medical customs is crucial; the latter are not approved by the state and do not serve as a source of legislation.
Medical legal customs need to be standardized and made accessible to AI. This is essential to guarantee the quality and objectivity of AI. The rules and procedures for using AI technology (such as an AI-hospital, AI-robot, and cyborg-AI-doctor) will be heavily influenced by the legal system of a particular nation.
We will then need to develop a worldwide database of medical legal norms adopted by all nations taking part in the integration effort, which will serve as the foundation for global AI (i.e., for an AI-cloud-doctor).[31]
 
V.             CONCLUSION
Informed consent, high standards of data protection, safety and effectiveness, algorithmic fairness, resilience and cybersecurity, and appropriate transparency and regulatory oversight: all of these crucial elements must be considered and handled in order to successfully develop an AI-driven healthcare system. In this sense, we must do more than simply update existing regulatory frameworks to reflect new technological advancements. It is equally crucial to have political and public debates that centre on the ethics of AI-driven healthcare and its effects on the workforce and society at large. AI has enormous potential to enhance our healthcare system, but we can only fully realise that potential if we begin now to address the moral and legal issues we face.
 


[1]G. Michael, A. Steib, C.D. Wiliam, and F.J. Victoria, “Monitoring expert system performance using continuous user feedback” 3 Journal of the American Medical Informatics Association 216 (1996).
[2] What is Artificial Intelligence in Medicine? 2021, available at <https://www.ibm.com/watson-health/learn/artificial-intelligence-medicine> (last visited on April 20th 2021)
 
[3] New Ebola Treatment Using Artificial Intelligence, available at <https://www.atomwise.com/2015/03/24/new-ebola-treatment-using-artificial-intelligence/> (last visited on 22nd October 2021).
[4] Phulan Sarma, S. V. Rana, Bikash Medhi, and Manisha Naithani, “Emerging role of artificial intelligence in therapeutics for COVID-19: A systematic review” 10  Journal of Biomolecular Structure and Dynamics 16 (2020).
[5] A. Sami, “Challenges facing the detection of colonic polyps: What can deep learning do?” 55 Medicina Journal 473 (2019).
[6] Wenbo Yang, Jehane Michael Le Grange, Peng Wang, Wei Huang, and Ye Zhewei,  “Smart healthcare: Making medical care more intelligent” 3 Global Health Journal 65 (2020).
[7] Eric Campo, Daniel Esteve, and Jean-Yves Fourniols. “Smart homes—Current features and future perspective” 64 Maturitas 97 (2019).
[8] Sophia AI Reaches Key Milestone by Helping to Better Diagnose 200,000 Patients Worldwide, available at (last visited on 22 May 2021).
[9] I.G. Cohen, R. Amarasingham, A. Shah, B. Xie, “The legal and ethical concerns that arise from using complex predictive analytics in health care” 14 Health Aff 1134, available at <https://doi.org/10.1377/hlthaff.2014.0048> (last visited on 5th 2021).
[10] J. Vincent, “AI that detects cardiac arrests during emergency calls will be tested across Europe this summer” Verge, available at <https://www.theverge.com/2018/4/25/17278994/aicardiac- arrest-corti-emergency-call-response 2018> (last visited March 21st 2021).
[11] I.G, Cohen, A. Pearlman, “Smart pills can transmit data to your doctors, but what about privacy?” N Scientist, May 2019, available at <https://www.newscientist.com/article/2180158-smartpills-can-transmit-data-to-your-doctors-but-what-about-privacy> (last visited on May 16th 2022).
[12] Ibid.
[13] C. Ross and I. Swetlitz, “IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show” STAT, July 2018, available at <https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments> (last visited on 7th October 2021).
[14] Figure Eight, What is training data? 2020, available at <https://www.figure-eight.com/resources/what-is-training-data> (last visited on Jan 1st 2021).
[15] Ibid.
[16] J. Brown, “IBM Watson reportedly recommended cancer treatments that were unsafe and incorrect” Gizmodo, 2018, available at <https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882> (last visited on Feb. 23rd 2021).
[17] B. Wahl, A. Cossy-Gantner, and S. Germann, “Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings?” BMJ Glob Health, March 2018 (last visited on July 21st 2021).
[18] N. Sharkey, “The impact of gender and race bias in AI”, Humanitarian Law Policy, Sept 2018, available at <https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai> (last visited on Jan. 3rd 2021).
[19] ICO, “Royal Free - Google DeepMind trial failed to comply with data protection law”, available at <https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law> (last visited on Oct. 13, 2021).
[21] R. Copeland, “Google’s ‘Project Nightingale’ gathers personal health data on millions of Americans” The Wall Street Journal, available at <https://www.wsj.com/articles/google-s-secret-project-nightingale-gathers-personal-health-data-on-millions-of-americans-1157349679> (last visited on March 23rd 2021).
[22] S. Gerke and I.G. Cohen, “Potential liability for physicians using artificial intelligence” JAMA, 2019, available at <https://doi.org/10.1001/jama.2019.15064> (last visited on July 21st 2021).
[23] Sophia AI Reaches Key Milestone by Helping to Better Diagnose 200,000 Patients Worldwide, 2018. Available at <https://www.prnewswire.com/news-releases/sophia-ai-reaches-key-milestone-by-helping-to-better-diagnose-200000-patients-worldwide-680907791.html> (last visited on March 21st 2021).
[24] Official Journal of the European Union, available at <https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2000:013:FULL&from=EN> (last visited on May 15th 2021).
[25] Treaty on the Eurasian Economic Union, 2014, available at <https://www.un.org/en/ga/sixth/70/docs/treaty_on_eeu.pdf> (last visited on 20th October 2021).
[26] European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL), available at <http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf> (last visited on Jan. 2nd 2021).
 
[27] EU Declaration of Cooperation on Artificial Intelligence Signed at Digital Day on 10th April 2018, available at <https://ec.europa.eu/digital-single-market/en/events/digital-day-2018> (last visited on May 6th 2021).
[28] Coordinated Plan on Artificial Intelligence 2021 Review, 2021, available at <https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review> (last visited on Nov. 23rd 2021).
[29] The Digital Europe Programme for the Period 2021–2027, 2018, available at <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52018PC0434> (last visited on Sept. 5th 2020).
[30] René David and Camille Jauffret-Spinosi, Les Grands Systèmes de Droit Contemporains 107 (The Major Contemporary Legal Systems, Paris, 10th edn., 2002).
[31] Ibid.
 
AUTHOR- HRISHIKESH RAJKHOWA
 
 
Abstract-
A sizable set of methods for treating diseases have been developed in medical practice. The high investment allure of medical technologies seems to be leading hospitals to do actions that are against the Hippocratic oath of doctors. The widespread employment of cyborg-AI doctors, AI robots, and AI medical organizations in the near future may be hampered by this. Of course, choosing the best course of treatment requires having the accurate diagnosis. The AI-robot recently exhibited the capacity to choose appropriate treatment procedures and efficient medications based on genomic data. However, this study is not for the discussion of the various advantages that AI has and ratter it focuses on the problems that are upcoming because of the introduction of AI in the sector of health care and medical science. The problems ranges from various aspects such as patients safety, doctor and patient relationship, transparency, accountability, privacy etc. This study makes a general overview of the existing use and problems of AI in health care and at the same time trace any law that stands existing in the world for its regulation.
 
Keywords- AI- Artificial intelligence, IBM- International Business Machines, ML- Machine Learning, GDPR- General Data Protection Regulation, CDS- Clinical Decision Support, EU- European Union,
 
 
I.                INTRODUCTION
The term "artificial intelligence" (AI) refers to a group of technological solutions that mimic human cognitive functions, including the capacity for independent learning and decision-making without the aid of predetermined algorithms. When used to accomplish certain tasks, AI can yield results that are on par with or superior to those produced by human intellectual activity. This group of technological solutions consists of tools and services for data processing, decision-making, and information and communication infrastructure, software (including program that use machine learning techniques), and software applications.
 
The evolution of how digital technology and artificial intelligence (AI) have been used in healthcare has evolved over many years. MYCIN, an expert system developed by Stanford University enabled medical professionals to recognize bacterial diseases such bacteremia and meningitis, and to recommend a suitable course of action. MYCIN was not utilized in daily life. It was just used as an experimental model to show what AI is capable of. In 1986, with the help of a decision assistance system called DXplain, the University of Massachusetts using the patient's symptoms created a list of potential diagnoses which was  generated for the doctor's reference.  The University of Washington then put the Germwatcher expert system into practice for the patient's identification of infections[1]. Since the start of the creation of AI-based medical applications in the twenty-first century has been a priority concern for IT based industries.
 
According to IBM, the company that created the Watson supercomputer, AI technologies can be used in healthcare to structure medical data. For example- processing natural language and turning it into clinical text, analyze  patient information, such as abstracting treatment records into a patient's medical history comparing clinical diagnostic findings to choose the best course of action,  advancing medical philosophy by developing models of patient therapy based on comparable examples, as well as confirming medical theories.[2]
 
II. USE OF ARTIFICIAL INTELLIGENCE IN HEALTH CARE AND MEDICINE.
The various sectors where AI has been extensively used in the field of AI and health care are as follows.
1.      Development of Drug and its Validation-
In the process of developing new medications, AI first made it possible to speed up the work using massive data. In this context, the fundamental goal of AI is to forecast how future drug molecules will interact with the proteins of human cells and, in turn, how well the medicine will work. AI can also be used to investigate illness mechanisms, find biomarkers, and study them.
 
In order to quickly create a treatment for Ebola virus infections in 2015, Atomwise collaborated with the University of Toronto and IBM amid an outbreak of the disease in West Africa. Atomwise has made fundamental AI technology available for medicinal development.[3]
 
For instance, AI technology allowed researchers to examine the activity of dozens of medications in 2020 in connection to their capacity to inhibit an enzyme that the SARS-CoV-2 virus needs to replicate in human cells.  The application of AI technology has enhanced the design of clinical trials, streamlined the development of novel vaccinations and the analysis of trial findings, in addition to the comparison and systematization of data from various patient groups in the perspective of the COVID-19 pandemic.[4]
 
 
2.      Diagnosis of disease and off-site AI application-
The use of algorithms in medical AI offers many advantages over human cognition, such as the capacity to work nonstop and the absence of vulnerability to weariness or emotional bias. In the event of a disease outbreak (epidemics and pandemics), the treatment of severe types of diseases, and the appearance of new diseases that were previously unknown to medicine and such other advantages become extremely crucial. Physicians attest to AI's capacity to correctly identify a variety of illnesses. For example it has been shown that AI can accurately identify colon polyps during a colonoscopy and can identify coagulopathy and inflammation in trauma.[5] AI is able to evaluate a patient's present state of health and instantaneously place it in the context of their whole medical history. One use of this functionality is in Botkin which is a Russian software that detect cancer and provide answers in 2019.
 
Additionally, AI can assess external factors such as weather and climatic conditions (temperature, atmospheric pressure, and humidity level), sanitary and epidemiological conditions, a patient's genetic susceptibility to certain infections, economic factors (household income, living arrangements, and working capacity), degree of socioeconomic status, and environmental factors in real time. All forms of AI, including a cyborg-AI doctor, a robot doctor, an AI hospital, and an AI cloud doctor, can be used to address the aforementioned problems. AI can help in managing health issues, preventing diseases, and keeping an eye on the likelihood that diseases will spread.[6]
A remote medical examination of a patient based on their symptoms and medical history, and the determination of the need for hospitalization (based, for example, on the results of an ECG, coronary angiogram, or ultrasound examination) are all examples of ways that AI can be used in healthcare off-site. These areas include oncology, gastroenterology, orthopedics, ophthalmology, endocrinology, gynecology, and others.
 
Obviously, a home X-ray, MRI, or CAT scan are not now available (off-site). In many situations, having physical contact with a patient is still necessary. Only a few medical tests may be performed at a patient's house because to the portable equipment that is now available. Therefore, the deployment of cyborg-AI-doctors, AI-robots, and AI-medical organizations in healthcare facilities is required. Although an AI-cloud doctor is not yet able to conduct a thorough patient examination from a distance, some encouraging progress has been made in this area.[7]
 
 
3.      Treatment by noble AI powered solution.
The field of medicine has produced a sizable set of methods for treating diseases. It appears that hospitals may make decisions that are against the Hippocratic oath of doctors due to the high investment attraction of healthcare technology. This could limit the widespread usage of AI-cloud-doctors, as well as the predominance of cyborg-AI-doctors, AI-robots, and AI-medical organizations in the near future. It should be obvious saying that selecting the proper treatment requires having the accurate diagnosis. The AI robot Sophia recently exhibited the capacity to choose appropriate treatment procedures and efficient medications based on genomic data[8].
All types of AI, including cyborg-AI-doctors, AI-robots, AI-hospitals, and AI-cloud-doctors, can be used effectively in the following fields:
·                   medical interventions (surgery), pharmaceutical production and prescription (pharmaceutics and pharmacology),
·                    immunotherapy and herbal medicine, epidemic epidemiology, etc.
·                   A cyborg AI doctor
·                   AI-robot/AI-hospital/AI-cloud doctor supporting a human physician, or
·                   an autonomous and remotely controlled AI-robot/AI-hospital are the three various ways AI can function.
The chances of interacting with various types of AI appear good. An AI-robot assistance, for instance, could be useful to a human surgeon performing an intervention using a remote-controlled machine with surgical instruments. AI can quickly access a patient's medical history during both surgical and diagnostic procedures and assess the variables that may influence the choice of treatment (climatic circumstances, epidemiological situation, the patient's genetic susceptibility to infections, etc.).
Robotic surgery and pharmaceuticals, which would enable lowering the expenses of staffing and 24-hour patient care, are the future of medicine. Even a medication driven by mediocre AI has the potential to help a patient by inducing the placebo effect. Furthermore, due to the fact that AI's algorithms are created to minimize errors, patients will be more inclined to believe that the work of AI is error-free on a subconscious level.
III CHALLENGES IN THE USE OF ARTIFICIAL INTELLIGENCE IN HEALTH CARE AND MEDICINE.
The application of AI in clinical healthcare practice has enormous potential to improve it, as the previous section suggested, but it also poses many ethical and legal issues when used in this sector that are necessary to be examined.
 
1.      Consent for use-
 The patient-physician interaction will change as a result of health AI applications in areas like imaging, diagnosis, and surgery. But how will the application of AI to health care interact with the concepts of informed consent? Although informed consent will be one of the most urgent obstacles to successfully Integrating into clinical practice, this burning question has not earned enough attention in the ethical discussion. It is necessary to investigate when the informed consent principles should be used in the context of clinical AI.[9]
 
How much of a duty does clinicians have to inform patients on the intricacies of AI, especially the type of ML used? The data inputs used, the system, and the potential for biases or are there any other flaws in the data being used. In what conditions must the patient be informed that artificial intelligence is being employed at all?
 
These inquiries are particularly difficult to respond to when the AI uses "black-box" algorithms, which may be the consequence of imprecise machine-learning methods that are highly complex for therapists to completely comprehend. For instance, Corti's algorithms[10] are "black boxes" since not even the software's creator is aware of how the program determines when to notify emergency dispatchers that a person is having a cardiac arrest. Medical practitioners might be concerned about this lack of understanding. How much should a clinician, for instance, explain that they are unable to completely understand the AI's diagnostic or therapy suggestions? How much openness is required? How does this interact with the GDPR's purported "right to explanation"? What about situations when the patient would be hesitant to consent to the use of specific data categories (such genetic information and family history)? How can we effectively strike a balance between patient privacy and the security and efficiency of AI?
 
AI healthcare apps and chatbots are also being utilized more often in a variety of health-related applications, such as diet advice, health evaluations, assistance with medication adherence, and analysis of data gathered by wearable sensors.  These applications raise concerns from bioethicists regarding user agreements and how they relate to conscious consent as opposed to the conventional. A user agreement is a contract that a person signs following the informed consent process without a face-to-face conversation. Most individuals habitually disregard user contracts because they don't spend the time to study them.[11] People's ability to adhere to the terms of service they have accepted is further complicated by the software's frequent updates. What details should be provided to users of these apps and chatbots? Do users fully comprehend that continuing to use the AI health app or chatbot may require agreeing to modified terms of use? How much should informed consent documents mirror user agreements?[12]
 
 
2.      Safety and Transparency-
 One of the main obstacles for AI in healthcare is safety. To give an example that has received a lot of attention, IBM Watson for Oncology employs AI algorithms to analyse data from patient medical records and assist doctors in looking into cancer therapy options for their patients. It has recently drawn criticism, meanwhile, for allegedly making "unsafe and erroneous" suggestions regarding cancer treatments. The issue appears to be with Watson for Oncology's training process, which only used a small number of "synthetic" cancer cases developed by Memorial Sloan Kettering (MSK) Cancer Center physicians as opposed to genuine patient data.[13]
 
The field has received negative publicity as a result of this actual instance. It also demonstrates how crucial it is that AIs be reliable and efficient. But how can we make sure that AIs maintain their word? Stakeholders, in particular AI developers, must ensure two critical factors in order to fully utilize AI's potential: (1) the authenticity and trustworthiness of the datasets; and (2) transparency.
 
The employed datasets must first be valid and dependable. When it comes to AI, the saying "garbage in, trash out" is applicable. The performance of the AI will improve with greater training data or the labelled data. To get reliable results, the algorithms frequently need to be improved further. Data sharing is a significant problem as well. For self-driving cars, for example, where the AI needs to have a high level of confidence, more data sharing will be required.[14]
 
 Second, some degree of transparency must be guaranteed in the interest of patient confidence and safety. While in an ideal world all information and algorithms would be accessible to the public, there may be some real concerns about safeguarding investments and intellectual property as well as avoiding an increase in cybersecurity risk. Additionally, AI developers must be sufficiently transparent, for instance, regarding the types of data used and any software flaws such as data bias. We should draw conclusions from cases like Watson for Oncology, in which IBM concealed Watson's risky and ineffective therapy suggestions for more than a year.[15] Finally, trust among all parties, especially physicians and patients, is built through transparency, and this trust is essential for a successful application of AI in clinical practice.[16]
 
 
3.      Fairness and Algorithm Biasness.
AI has the potential to enhance healthcare not just in high-income settings, but also by "globalizing" it and making it accessible to even remote locations. Any ML system or human-trained algorithm, however, will only be as reliable, efficient, and equitable as the data it is taught with. AI is also susceptible to biases, which could lead to prejudice.[17] Therefore, it is crucial that AI developers are conscious of this danger and take steps to reduce any potential biases at every phase of the product development process. When choosing (1) the ML procedures they want to employ to train the algorithms and (2) what datasets including evaluating their quality and diversity, they want to utilize for the programming, they should pay particular attention to the risk for biases.
 
Algorithms can display biases that can lead to unfairness with regard to ethnic origins, skin colour, or gender, as shown by a number of real-world incidents. Biases might also exist with reference to different characteristics, including age or a disability.[18] Such prejudices have numerous, varied, and complex justifications. For instance, they might be the outcome of the datasets themselves (which are not representative), the selection and analysis of the data by data scientists and ML systems, the use of AI, etc.
 
Biased AI could, for example, result in incorrect diagnoses, render therapies ineffective for specific subgroups, endanger their safety, and so forth in the medical sector when phenotype- and occasionally genotype-related information is involved. Consider an AI-based clinical decision support (CDS) tool that enables doctors to help patients with skin cancer receive the optimal care. The algorithm, however, was primarily trained on patients who were Caucasian. As a result, the AI software will probably provide less accurate or even incorrect suggestions for subpopulations like African Americans for which the training data was underinclusive.
 
 
4.      Data Privacy-
The Royal Free NHS Foundation Trust was found to have violated the UK Data Protection Act 1998 when it gave Google DeepMind access to the personal information of about 1.6 million patients, according to a decision made in July 2017 by the UK Information Commissioner's Office (ICO). Data exchange took place for "Streams", an app that intends to assist with the diagnosis and identification of acute kidney injury, throughout the clinical safety testing phase. Patients weren't properly informed about how their data was processed during the exam, though. Elizabeth Denham of the Information Commissioner properly noted that "the erosion of fundamental privacy rights does not need to be the price of innovation."[19]
 
Recent case studies demonstrating patient privacy issues in the context of data sharing and the application of AI include the legal proceeding Dinerstein v. Google[20] and Project Nightingale by Google and Ascension[21]. What about who owns the data, though? Health data can be worth billions of dollars, but some data indicates that the public is uneasy about businesses or the government making money off of selling patient data[22]. However, there can be other means by which patients can feel valued than actual ownership. However, there can be other means by which patients can feel valued than actual ownership. Enabling instance, the Royal Free NHS Foundation Trust and Google DeepMind reached an agreement for the Trust to use Streams for free for the next five years in exchange for providing patient data for the app's testing. Ownership is not always necessary for reciprocity, but anyone wishing to utilize patient data must demonstrate how doing so will benefit the health of the very same patients whose data is being used.
 
 
IV.        LAW REGULATING AI IN THE FIELD OF HEALTH CARE AND MEDICINE.
The chances of interacting with various types of AI appear good. For instance, a human surgeon doing an operation while using a remote-controlled device, a n AI-robot assistance may be useful for those using surgical equipment.  lowering the cost of hiring workers and providing 24/7 patient care. Even inadequate or subpar AI-driven treatment may be advantageous by inducing the placebo effect in a person. Additionally, patients will unconsciously be more prone to view the AI's work is error-free since its algorithms are created to reduce errors.[23]
 
A. Legal Aspects of AI in the field of medicine and health care-
AI and the issue of trust in the field of Medical Science. The development of a single, trustworthy digital realm has been made possible by advances in science, technology, and digital technology. Participants in this area are assumed to have confidence in the information obtained from it, and as a result, their identification and authentication take place automatically. Regarding the acceptance of electronic signatures, the topic of the "space of trust" was initially raised. The European Union passed Directive 1999/93/EC of the European Parliament and of the Council of December 13, 1999, on a Community Framework for Electronic Signatures to address this problem. This directive was later replaced by Regulation (EU) No 910/2014 of the European Parliament and of the Council of July 23, 2014, on Electronic Identification and Trust Services for Electronic Transactions in the Internal Market and Repealing Directive 1999/93/EC.[24]
 
The Eurasian Economic Union and other international organizations used a similar strategy. Information could be electronically transmitted remotely thanks to the Internet. Through the identification and verification of participants in the information exchange, the existence of the digital space of trust with regard to electronic signatures ensures that the information received is trustworthy and reliable.[25]
 
In general, any digital information is decoded by translating binary code—a series of ones and zeros—into text that can be understood by humans. It seems sensible to utilize the using the e-signature space of trust as a guide for creating a digital space of trust in the AI, where uniform identification and permission procedures would apply to medical AI. The creation of a single digital platform for AI trust is anticipated to be hindered by various procedures implemented in medical schools and medical customs around the world (the subjective factor). However, all medical professionals share a common bond regardless of ethnicity, to treat the patient or prevent them from becoming ill religion, gender, and color, as well as the hospital's location. As a result, it is likely that international medical alliances and organizations (such as the World Health Organization, the International Federation of Red Cross and Red Crescent Societies, Médecins Sans Frontières, etc.) will play a significant part in the development of the AI trust space in the digital sphere.
 
Doctors, patients, the government, the state, and civil society as a whole will all recognize the veracity of the information transmitted related to AI, which is the fundamental tenet guiding the creation of the unified digital space of trust in medical AI.
 
B.     Laws in European Union.
The following sources of legal regulation apply at the level of the European Union (EU): The application of AI technology can be identified.
·      The EU Parliament passed Resolution 2015/2103(INL) Civil Law in February 2017. Robotics regulations.[26] Robotics regulations in Europe AI is founded on Isaac Asimov's rules, which state that: (1) A robot shall not, through act or omission, cause
(2) A robot must accept human commands if they do not conflict with the first in order to avoid harm to humans.
(3) A robot is required to ensure its safety to the degree that doing so does not violate the one of the two laws.
The terms of the aforementioned regulation primarily apply to robotics, although it might be assumed that, by analogy, they also apply to AI technology. The resolution proposes giving AI robots the status of electronic persons, establishing a European registration system for "smart" robots, and defining culpability for harm caused by robotics.
 
Following this, representatives of 25 European nations, including those outside the European Union, signed a Declaration of cooperation on artificial intelligence in April 2018. According to its provisions, the signatory nations committed to working on an integrated European approach to the development of artificial intelligence, pursuing cogent national policies to increase the competitiveness of the European Union, and fostering digital innovation.[27]
 
The Coordinated Plan for Artificial Intelligence of December 7, 2018, which offers a European strategy for the development of robots and AI, was created by the European Commission also in 2018. The participating States' overarching objective in cooperating is to make sure that Europe emerges as the world's premier region for the development and implementation of more advanced, ethical and safe AI.[28]
 
Additionally, ethical concerns related to the usage of AI are regulated. In order to promote trustworthy AI, the European Commission approved Ethics Guidelines for Trustworthy AI in 2019 (Ethics Guidelines for Trustworthy AI. High-Level Expert Group on Artificial Intelligence 2019). Three requirements for trustworthy AI should be met over the course of the system's complete life cycle: In order to ensure adherence to ethical principles and values, it should be (1) lawful,
(2) ethical, and
(3) resilient from both a technical and social standpoint. Even with the best of intentions, AI systems have the potential to hurt people unintentionally. The contents of this document state that respecting human autonomy, preventing harm, being fair, and being explicable are the four basic ethical considerations for using AI.
In order to digitalize European society and preserve competitiveness in cutting-edge technologies, such as robotics and AI, the EU has also adopted the Digital Europe Strategy Programme as a structural component of the EU financial consolidation for 2021–2027.[29] This program's main goal is to promote digital transformation by funding the adoption of emerging technologies in the most crucial fields, each with a separate budget, such as high performance computing, artificial intelligence, cyber security, and trust, as well as the implementation and best use of digital literacy.
C.    Laws in Russia
Russia has been creating the initial iterations of national guidelines in the area of AI in healthcare since 2020. The Technical Committee for Standardization "Artificial Intelligence" (TC 164), founded on the model of the Russian Venture Company, will oversee the development. Thus, by 2027, it is intended to develop about 50 standards pertaining to the use of AI technology in healthcare in specific areas, such as general requirements and classification of AI systems in clinical medicine, radiology and functional diagnostics, remote monitoring systems, histology, medical decision support systems, image reconstruction in diagnostics and treatment, big data in healthcare, medical analytics, and forecasting.
 
A distinct type of law (rules of conduct) can be recognized in addition to the conventional legal sources—a medical custom.
René David and Camille Jauffret-Spinosi, researchers in comparative law, contend that although legal customs need to be given legal legitimacy, this does not preclude them from being regarded as an independent, impartial source of law.[30] Legal traditions are typically accepted by academics as sources of law on par with legislative acts (for examples, see Panagiotis Zepos' writings). The works of Raymond Legeais provide a complete description of the contribution of legal customs to the system of sources of law in various legal traditions. In line with this theory, it might be claimed that medical legal traditions recognised by the government or other public organizations (such as the World Health Organization) can be seen as sources of AI control.
 
The topic is not without debate, though. How do we combine medical conventions, which have been formed over many years of effort by the medical community (cyborg-AI-doctor, AI-robot, AI-medical organization, and AI-cloud-doctor), into the legal framework governing AI technologies? Can such legal practices evolve in the future as a result of the cooperation between doctors and AI systems? The short history of AI makes it challenging to provide solutions to these queries. It is anticipated that the state will play a significant part in these issues since its competent authorities will have to decide which medical legal conventions (written or unwritten) may regulate AI in healthcare. The distinction between legal and illegal medical customs, the latter of which is not approved by the state and does not serve as a source of legislation, is crucial.
Medical legal customs must be standardized and made accessible to AI systems; this is essential to guarantee the quality and objectivity of AI. The rules and procedures for using AI technology (such as an AI-hospital, AI-robot, or cyborg-AI-doctor) will be heavily influenced by the legal system of the particular nation.
We will then need to develop a worldwide database of medical legal customs adopted by all nations taking part in the integration effort, which will serve as the foundation for global AI (i.e., for an AI-cloud-doctor).[31]
 
V.             CONCLUSION
Cybersecurity, informed consent, high standards of data protection, safety and effectiveness, algorithmic fairness, resilience, and appropriate transparency and regulatory oversight: all of these crucial elements must be considered and addressed in order to successfully develop an AI-driven healthcare system. In this sense, we must do more than simply update existing regulatory frameworks to reflect new technological advancements. It is equally crucial to hold political and public debates centred on the ethics of AI-driven healthcare and its effects on the workforce and society at large. AI has enormous potential to enhance our healthcare system, but we can only realise that potential fully if we begin now to address the moral and legal issues we face.
 


[1]G. Michael, A. Steib, C.D. Wiliam, and F.J. Victoria, “Monitoring expert system performance using continuous user feedback” 3 Journal of the American Medical Informatics Association 216 (1996).
[2] What is Artificial Intelligence in Medicine? 2021, available at <https://www.ibm.com/watson-health/learn/artificial-intelligence-medicine> (last visited on April 20th 2021).
 
[3] New Ebola Treatment Using Artificial Intelligence, available at <https://www.atomwise.com/2015/03/24/new-ebola-treatment-using-artificial-intelligence/> (last visited on 22nd October 2021).
[4] Phulan Sarma, S. V. Rana, Bikash Medhi, and Manisha Naithani, “Emerging role of artificial intelligence in therapeutics for COVID-19: A systematic review” 10 Journal of Biomolecular Structure and Dynamics 16 (2020).
[5] A. Sami, “Challenges facing the detection of colonic polyps: What can deep learning do?” 55 Medicina Journal 473 (2019).
[6] Wenbo Yang, Jehane Michael Le Grange, Peng Wang, Wei Huang, and Ye Zhewei, “Smart healthcare: Making medical care more intelligent” 3 Global Health Journal 65 (2020).
[7] Eric Campo, Daniel Esteve, and Jean-Yves Fourniols. “Smart homes—Current features and future perspective” 64 Maturitas 97 (2019).
[8] Sophia AI Reaches Key Milestone by Helping to Better Diagnose 200,000 Patients Worldwide, available at (last visited on 22 May 2021).
[9] I.G Cohen, R. Amarasingham, A. Shah, B. Xie, “The legal and ethical concerns that arise from using complex predictive analytics in health care” 14 Health Aff  1134, available at <https://doi.org/10.1377/hlthaff.2014.0048.> (last visited on 5th 2021).
[10] J. Vincent, “AI that detects cardiac arrests during emergency calls will be tested across Europe this summer” Verge, available at <https://www.theverge.com/2018/4/25/17278994/ai-cardiac-arrest-corti-emergency-call-response> (last visited on March 21st 2021).
[11] I.G. Cohen and A. Pearlman, “Smart pills can transmit data to your doctors, but what about privacy?” New Scientist, May 2019, available at <https://www.newscientist.com/article/2180158-smart-pills-can-transmit-data-to-your-doctors-but-what-about-privacy> (last visited on May 16th 2022).
[12] Ibid.
[13] C. Ross and I. Swetlitz, “IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show”, STAT, available at <https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments> (last visited on 7th October 2021).
[14] Figure Eight, What is training data?, available at <https://www.figure-eight.com/resources/what-is-training-data> (last visited on Jan 1st 2021).
[15] Ibid.
[16] J. Brown, “IBM Watson reportedly recommended cancer treatments that were unsafe and incorrect”, Gizmodo, available at <https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882> (last visited on Feb. 23rd 2021).
[17] B. Wahl, A. Cossy-Gantner, and S. Germann, “Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings?”, BMJ Glob Health, March 2018, available at (last visited on July 21st 2021).
[18] N. Sharkey, “The impact of gender and race bias in AI”, Humanitarian Law Policy, Sept 2018, available at <https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai> (last visited on Jan. 3rd 2021).
[19] ICO, “Royal Free – Google DeepMind trial failed to comply with data protection law”, available at <https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law> (last visited on Oct. 13, 2021).
[21] R. Copeland, “Google’s ‘Project Nightingale’ gathers personal health data on millions of Americans”, Wall Street Journal, available at <https://www.wsj.com/articles/google-s-secret-project-nightingale-gathers-personal-health-data-on-millions-of-americans-1157349679> (last visited on March 23rd 2021).
[22] S. Gerke and I.G. Cohen, “Potential liability for physicians using artificial intelligence” JAMA, May 2019, available at <https://doi.org/10.1001/jama.2019.15064> (last visited on July 21st 2021).
[23] Sophia AI Reaches Key Milestone by Helping to Better Diagnose 200,000 Patients Worldwide, 2018. Available at <https://www.prnewswire.com/news-releases/sophia-ai-reaches-key-milestone-by-helping-to-better-diagnose-200000-patients-worldwide-680907791.html> (last visited on March 21st 2021).
[24] Official Journal of the European Union, available at <https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2000:013:FULL&from=EN> (last visited on May 15th 2021).
[25] Treaty on the Eurasian Economic Union, 2014, available at <https://www.un.org/en/ga/sixth/70/docs/treaty_on_eeu.pdf> (last visited on 20th October 2021).
[26] European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics, 2017, available at <http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf> (last visited on Jan. 2nd 2021).
[27] EU Declaration of Cooperation on Artificial Intelligence Signed at Digital Day on 10th April 2018, available at <https://ec.europa.eu/digital-single-market/en/events/digital-day-2018> (last visited on May 6th 2021).
[28] Coordinated Plan on Artificial Intelligence 2021 Review, 2021, available at <https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review> (last visited on Nov. 23rd 2021).
[29] The Digital Europe Programme for the Period 2021–2027, 2018, available at <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52018PC0434> (last visited on Sept. 5th 2020).
[30] René David and Camille Jauffret-Spinosi, Les Grands Systèmes de Droit Contemporains (The Major Contemporary Legal Systems) 107 (Paris, 10th edn., 2002).
[31] Ibid.
