NAVIGATING LEGAL AND ETHICAL CHALLENGES IN AI-DRIVEN HEALTHCARE: ENSURING ACCOUNTABILITY, TRANSPARENCY, AND PATIENT PROTECTION
AUTHORED BY - JANVI ASHIKA G
ABSTRACT:
The integration of
artificial intelligence (AI) into healthcare systems has the potential to
revolutionize medical practices, improve patient outcomes, and enhance
operational efficiency. However, the rapid adoption of AI technologies also
raises significant legal and ethical concerns, particularly regarding
accountability for misuse. This paper explores the legal frameworks surrounding
AI in healthcare, with a focus on the challenges posed by algorithmic bias,
data privacy, transparency, and the shifting nature of liability. Through an
analysis of existing laws, regulatory gaps, and case studies of AI failures in
healthcare, the paper examines how current legal structures address (or fail to
address) the risks associated with AI misuse, such as misdiagnoses, breaches of
patient confidentiality, and biased treatment recommendations. It also
investigates potential legal reforms needed to ensure that healthcare
providers, developers, and AI systems are held accountable for harms caused by
malfunctioning or discriminatory algorithms. The paper argues for a
multi-faceted approach to accountability, including the development of robust
regulatory standards, greater transparency in AI decision-making processes, and
clear guidelines for liability. In doing so, it aims to contribute to the
ongoing conversation about balancing innovation with patient protection in the
age of AI.
KEYWORDS: Artificial Intelligence,
Accountability, Healthcare, Patient Protection, Human Rights.
1. INTRODUCTION:
Article 25 of the Universal
Declaration of Human Rights (UDHR)[1]
recognizes the right to a standard of living adequate for health and
well-being, including access to medical care. Article 25 underlines the
importance of accessible healthcare, framing it as a fundamental human right.
This aligns with global discussions on the right to health and access to essential
medical services, which have been significantly influenced by international
law, including frameworks such as the International Covenant on Economic,
Social and Cultural Rights (ICESCR)[2].
Artificial intelligence
(AI) has become a driving force in modern healthcare, offering unprecedented
opportunities to improve patient outcomes, enhance operational efficiencies,
and advance clinical decision-making. From diagnostic tools that analyse
medical images with remarkable accuracy to AI-driven systems that recommend
personalized treatment plans, the potential benefits are vast. AI technologies
are reshaping the way healthcare providers approach patient care, enabling
faster diagnoses, more precise treatments, and streamlined administrative
processes.
However, alongside these
advancements, there are significant risks associated with the integration of AI
into healthcare systems. The same capabilities that promise to revolutionize
healthcare also pose the potential for misuse, errors, and unintended
consequences.[3]
Misapplications of AI can lead to incorrect diagnoses, biased treatment
recommendations, breaches of patient privacy, and other harmful outcomes. These
risks raise important legal and ethical questions about accountability, particularly in cases where AI-driven decisions may harm patients or lead to
failures in care delivery.
This article will explore
both the transformative potential of AI in healthcare and the perils that arise
when these technologies are misused or inadequately regulated. By understanding
these dual aspects, we can better assess the legal frameworks and mechanisms
needed to ensure accountability in the age of AI, protecting both patients and
providers while fostering innovation.
2. LEGAL FRAMEWORKS AND ACCOUNTABILITY IN HEALTHCARE IN THE AGE OF AI
The integration of Artificial Intelligence (AI) in healthcare has the potential to revolutionize diagnostics, treatment planning, patient management, and overall care efficiency. However, it also raises significant legal and ethical concerns, especially around accountability, patient safety, and malpractice. In India, AI in healthcare is subject to a growing but still-evolving legal and regulatory landscape. This section explores the current legal frameworks and the role of medical malpractice law in the age of AI.
2.1. CURRENT LAWS AND REGULATIONS GOVERNING AI IN HEALTHCARE
The use of AI in
healthcare is governed by a variety of legal frameworks and regulations. These
regulations are evolving as the technology matures, and the specifics vary by
country, but the following broad areas are commonly regulated:
2.1.1. Medical Device Regulations (FDA, EMA, CDSCO, etc.)
In many jurisdictions, AI
applications in healthcare are classified as medical devices when they are used
for diagnostic, therapeutic, or monitoring purposes. Regulatory bodies like the
U.S. Food and Drug Administration (FDA)[4],
the European Medicines Agency (EMA)[5],
and other national health authorities assess AI systems before they can be
deployed in clinical settings. In India, the Central Drugs Standard Control
Organization (CDSCO)[6],
under the Drugs and Cosmetics Act, 1940[7],
regulates the approval of medical devices, including AI-based technologies.
i. FDA (U.S.): The FDA classifies AI-driven software as Software as a Medical Device (SaMD)[8] and evaluates these products for safety and efficacy, much as it does other medical devices. It has recently streamlined its approval process to account for AI's dynamic learning and adaptation capabilities: AI systems that learn and evolve post-deployment may be subject to ongoing oversight and monitoring through mechanisms like post-market surveillance (a minimal monitoring sketch follows item iii below).
ii. EMA (Europe): The European Union regulates AI under the Medical Device Regulation (MDR)[9] and the In-vitro Diagnostic Medical Device Regulation (IVDR)[10]. AI software that provides medical decisions or clinical guidance is regulated as a medical device, and manufacturers must undergo clinical evaluations and provide ongoing post-market surveillance. The EU has also enacted the EU AI Act[11], the first comprehensive legislative framework for AI.
iii. CDSCO (India): AI/ML-based Software as a Medical Device (SaMD)[12] is regulated by the CDSCO in India. AI-based software that diagnoses, monitors, or assists in clinical decision-making is considered a medical device and requires approval before use; regulatory requirements involve testing for safety and efficacy, clinical trials, and post-market surveillance. The Ministry of Health and Family Welfare (MoHFW) issued guidelines (AI Medical Device Guideline) in 2021 for regulating AI/ML-based SaMD, outlining a risk-based classification for AI products that takes into account the intended purpose and the potential risk to patients. For example, diagnostic tools that analyze medical imaging or perform predictive analytics are classified as high-risk devices.
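Both the FDA and CDSCO regimes contemplate continued post-market surveillance of adaptive AI. As a minimal sketch of what such monitoring might look like in practice (the baseline accuracy, window size, and tolerance below are illustrative assumptions, not regulatory requirements):

```python
from collections import deque

class PostMarketMonitor:
    """Rolling check of a deployed model's agreement with confirmed
    clinical outcomes; flags drift below the accuracy shown at approval."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy     # accuracy demonstrated at approval
        self.tolerance = tolerance            # permitted drop before an alert
        self.outcomes = deque(maxlen=window)  # rolling window of hit/miss flags

    def record(self, prediction, confirmed_outcome) -> None:
        self.outcomes.append(prediction == confirmed_outcome)

    def drifted(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-deployment data yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance

# Each confirmed case updates the window; drift triggers escalation.
monitor = PostMarketMonitor(baseline_accuracy=0.92)
monitor.record("malignant", "benign")
if monitor.drifted():
    print("Performance drift detected: escalate to manufacturer/regulator.")
```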
2.2. Gaps in Regulatory Oversight:
While these existing
frameworks offer essential regulatory oversight for AI in healthcare, there are
notable gaps that leave significant risks unaddressed.
- Lack of Standardized AI Evaluation: AI technologies in
healthcare are evolving rapidly, but regulatory bodies have struggled to
keep up with the speed of innovation. The FDA and EMA's existing
frameworks often do not account for the adaptive learning capabilities[13]
of AI systems. For instance, AI algorithms that evolve over time through
machine learning might require continuous monitoring and approval[14],
something traditional medical devices do not need. Current regulations
often fail to address the post-market surveillance of AI systems after
their approval, allowing for potential safety concerns to remain
undetected.
- Absence of Clear Liability Frameworks: Another significant
gap in AI regulation is the lack of clear guidelines on liability when AI
systems cause harm or errors in healthcare. Regulatory oversight should
address how liability is distributed in cases of AI-related malpractice or
errors.
- Algorithmic Transparency and Explainability: Many AI systems in
healthcare operate as “black boxes”[15],
meaning the decision-making process is often not transparent to users or patients.
Regulatory frameworks have not sufficiently addressed the need for
transparency and explainability in AI decision-making processes.
- Bias and Discrimination in AI: AI systems in healthcare, if not properly regulated, can inadvertently perpetuate or even exacerbate biases in treatment, diagnosis, or care. Algorithmic bias can lead to discriminatory practices, affecting marginalized groups based on gender, race, age, or socioeconomic status.[16] Regulatory bodies have not yet fully developed methodologies for evaluating and mitigating bias in AI healthcare systems, leaving significant gaps in oversight (a minimal auditing sketch follows this list).
- Global Discrepancies in Regulation: AI regulation in
healthcare varies significantly across jurisdictions. For example, while
the EU has a more robust data privacy framework through GDPR[17],
other regions may lack sufficient AI-specific healthcare regulations,
leading to cross-border regulatory challenges. This inconsistency
complicates the development and deployment of AI systems globally,
especially for multinational healthcare organizations or developers.
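To make the bias-evaluation gap above concrete, a minimal auditing sketch follows. The column names and the choice of sensitivity as the metric are assumptions for illustration, not a regulator-endorsed methodology:

```python
import pandas as pd

def subgroup_sensitivity(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare the true-positive rate of a diagnostic model across
    demographic groups; expects 'y_true' and 'y_pred' columns (1 = disease)."""
    rows = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub["y_true"] == 1]
        tpr = (positives["y_pred"] == 1).mean() if len(positives) else float("nan")
        rows.append({"group": group, "n": len(sub), "sensitivity": tpr})
    return pd.DataFrame(rows)

# A large sensitivity gap between groups signals that the model may
# systematically under-detect disease in some populations.
```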
3. CHALLENGES IN DETERMINING LIABILITY
Determining “liability”
and “accountability” in the context of “AI errors” in healthcare is a highly
complex issue that raises fundamental questions about the roles and
responsibilities of the key stakeholders involved: AI developers, healthcare
providers, and manufacturers. As AI technologies become increasingly integrated
into healthcare systems, these challenges are amplified by issues of autonomy,
decision-making, and human oversight. UNESCO's Recommendation on the Ethics of AI[18] focuses on ensuring that AI development aligns with human rights, fairness, and transparency, and emphasizes that those who design and deploy AI systems must remain responsible and accountable for their outcomes.
3.1. Dissecting Issues of Liability and Accountability
3.1.1. Role of AI Developers
AI developers are
responsible for creating the underlying algorithms, training data, and models
that power healthcare AI systems. However, determining their liability when an
AI system makes an error is a nuanced issue. Potential issues related to AI
developer liability:
Algorithmic Errors: If the AI algorithm makes
an incorrect diagnosis or treatment recommendation due to errors in its
programming or design, the developers could be held responsible. This could
include issues like insufficient training data, biased datasets, or flawed
model architecture.[19]
For example, if an AI system used for diagnostic imaging incorrectly identifies
a tumour due to incomplete training data or algorithmic bias, the developers of
the system may be held liable for negligence in developing the algorithm.[20]
Training Data: AI systems learn from data, and the quality of this data is essential for the performance of the system. If the training data is flawed, developers may be held accountable for failing to use accurate or diverse data, resulting in poor AI predictions. For example, an AI diagnostic tool trained predominantly on data from one demographic group (e.g., white patients) may fail to accurately detect conditions in patients from other groups (e.g., Black or Asian patients), leading to misdiagnoses.[21] (A minimal composition-audit sketch follows at the end of this subsection.)
Product Liability: Developers could also
face liability under product liability laws if the AI system is considered a
defective product. If an AI system causes harm due to a defect in its design or
functionality, developers could be held responsible for the defect,
particularly if the system does not perform as intended or as promised. In
India, the Consumer Protection Act, 2019[22]
provides a framework for product liability, and AI tools, if considered
“products”, might fall under this law, making developers and manufacturers
liable for harm caused by defective products.
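On the training-data point above, a minimal composition-audit sketch: the demographic column names here are assumptions, and a real audit would use the dataset's own schema and applicable data-protection rules.

```python
import pandas as pd

def representation_report(train_df: pd.DataFrame,
                          cols=("race", "sex", "age_band")) -> None:
    """Print the demographic make-up of a training set so that
    under-represented groups are visible before a model is trained."""
    for col in cols:
        shares = train_df[col].value_counts(normalize=True).round(3)
        print(f"--- {col} ---")
        print(shares.to_string())
```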
3.1.2. Role of Healthcare Providers (Doctors, Hospitals)
Healthcare providers,
including doctors, nurses, and hospitals, have traditionally been the main parties
held responsible for patient care. However, with the introduction of AI tools,
the question arises as to whether their liability extends to errors caused by
the AI systems they use. Potential issues related to healthcare provider
liability:
Failure to Validate AI
Recommendations: Healthcare providers are still expected to use professional judgment in
diagnosing and treating patients, even when using AI-based tools.[23]
If a healthcare provider blindly follows an AI recommendation without
critically assessing its appropriateness for the patient, they may be held
liable for negligence.
Duty of Care: In cases where AI
assists in diagnostics, healthcare providers have a duty to ensure that the AI
system is functioning correctly, is appropriate for the case at hand, and is
validated for its accuracy. A failure to check or question the output of AI
tools might lead to legal exposure for the healthcare provider. For example, if
a doctor relies on an AI system for diagnosing a rare condition but fails to
verify the AI’s output, and the misdiagnosis results in harm, the doctor could
be deemed negligent in failing to exercise due diligence.[24]
Shared Responsibility: In collaborative
settings, such as hospitals or healthcare organizations, AI tools may be used
across multiple levels of care. In these cases, liability could be spread
across various individuals or entities, depending on the structure of the
organization and the AI’s specific role in the treatment.
3.1.3. Role of Manufacturers (Hardware and AI System Vendors)
The manufacturers of
AI-powered medical devices or systems play a key role in ensuring that the
hardware and software meet safety standards and are designed to function
correctly. In cases of AI error, the question is whether manufacturers are
liable for faulty products or insufficient safety measures. Potential issues
related to manufacturer liability:
Defective Products: If an AI medical device
malfunctions due to a hardware failure or a bug in the software, the
manufacturer may be held liable under product liability laws. This could apply
to the hardware, such as medical imaging devices with AI capabilities, or to
the AI software itself.[25]
For example, if a medical imaging AI system consistently misidentifies conditions due to faulty image processing or software bugs,[26] the manufacturer may be held liable for producing a defective product.
Failure to Warn: Manufacturers of AI tools
might also face liability if they fail to properly warn healthcare providers
and patients about the limitations and risks associated with their systems.
This could include inadequate labeling or failure to provide proper training
for users. For example, if a hospital uses an AI system to predict patient
outcomes, and the manufacturer does not adequately disclose the system’s limitations,
the manufacturer may be held responsible for harm caused by relying on
inaccurate results.
Post-Market Surveillance: Manufacturers have an
ongoing obligation to monitor the performance of their AI systems after they
are deployed in clinical settings. If the manufacturer fails to conduct proper
post-market surveillance, or if they ignore warnings about potential risks or
defects, they could be liable for harm caused by the system.[27]
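A minimal sketch of the kind of incident log a vendor might keep to evidence this surveillance obligation (the field names and JSON Lines format are assumptions, not a mandated schema):

```python
import json
import datetime

def log_incident(path: str, device_id: str, model_version: str,
                 description: str, severity: str) -> None:
    """Append one post-market incident record per line (JSON Lines)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "device_id": device_id,
        "model_version": model_version,
        "description": description,
        "severity": severity,  # e.g. "near-miss" or "patient-harm"
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_incident("incidents.jsonl", "imaging-ai-07", "2.3.1",
             "Repeated false negatives on chest CT series", "patient-harm")
```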
4. ALGORITHMIC BIAS AND DISCRIMINATION: LEGAL IMPLICATIONS
Algorithmic bias in AI
refers to the systematic favouritism or prejudice that occurs when an AI
system's decisions, predictions, or outputs disproportionately benefit or
disadvantage certain groups based on factors such as race, gender, age, or
socio-economic status. AI systems that perpetuate bias or discrimination may
violate the core principles of the Universal Declaration of Human Rights, particularly Article 1,[28] which affirms that all human beings are born free and equal in dignity and rights. In healthcare, the consequences of AI bias
can be particularly serious, leading to discriminatory practices in diagnosis,
treatment, and overall patient care. This raises significant legal and ethical
challenges in terms of accountability, patient safety, and fairness. AI systems
are trained on large datasets to recognize patterns and make decisions, but if
these datasets are incomplete or unbalanced, the AI may develop biased
decision-making, leading to biased treatment recommendations, misdiagnoses, or
unequal care in healthcare.
Examples:
- Racial Bias: AI systems used for diagnostic purposes might be less accurate for minority groups if the data used to train the models is predominantly from one racial group. For instance, an AI system designed to predict the risk of heart disease might underdiagnose Black patients if the model was primarily trained on data from white patients.[29]
- Gender Bias: A diagnostic AI tool could misinterpret symptoms of heart disease in women, who may experience symptoms differently than men, leading to delayed or incorrect treatment.[30]
- Socioeconomic Bias: If data reflects socioeconomic disparities, AI systems might prioritize care for wealthier individuals, leaving lower-income patients with fewer resources or less optimal treatment plans.[31]
4.1. Impact on Patient Care
AI algorithms used in
radiology, pathology, and medical imaging can be affected by biases if they are
trained on images that predominantly represent certain groups of people. For
instance, if an algorithm is trained mostly on data from a specific demographic
(e.g., white patients), it may struggle to accurately diagnose conditions in
patients outside of that demographic, leading to worse outcomes for
underrepresented groups. For example, an AI system used for skin cancer
detection may fail to correctly identify melanoma in patients with darker skin
tones because it was trained mostly on images of lighter-skinned individuals.[32]
Even though AI has the
potential to make healthcare more accessible, the unequal development and
deployment of AI tools could deepen existing healthcare disparities. Rural
areas or underprivileged communities might have less access to AI-powered
diagnostic tools, or the algorithms used in those regions might not be
appropriately calibrated to reflect local population needs. If patients believe
that AI systems are unfair or biased, they may lose trust in healthcare
providers who rely on these technologies, which can result in poorer health
outcomes, lower patient satisfaction, and hesitancy to engage with the
healthcare system.
4.2. Legal Consequences of Discriminatory Outcomes in AI-Driven Healthcare
AI bias in healthcare not
only threatens patient well-being but also exposes healthcare providers and
developers to significant legal risks. The consequences of AI-driven
discrimination can be profound, with various legal frameworks addressing these
issues.
4.2.1. Violations of Anti-Discrimination Laws:
In many countries, there
are specific anti-discrimination laws that prohibit unequal treatment based on
race, gender, ethnicity, and other protected characteristics. In India, laws
such as the Constitution of India (Article 15)[33]
and the Rights of Persons with Disabilities Act, 2016[34]
mandate non-discriminatory practices, and failure to comply with these
principles can expose healthcare providers to legal action.
Consumer Protection Act,
2019 (India)[35]: AI tools in healthcare
that are discriminatory may also face legal scrutiny under the Consumer
Protection Act. If a healthcare provider uses an AI system that leads to
discriminatory outcomes, patients could seek legal recourse for unfair trade
practices, including the failure to provide equal treatment.
Equal Protection Clauses
(U.S. and other jurisdictions): In the U.S., the Civil Rights Act[36]
and Affordable Care Act[37]
have provisions that prohibit discrimination in healthcare based on race,
colour, national origin, sex, age, or disability. If AI systems result in
unequal care or outcomes, healthcare providers could face lawsuits or
regulatory actions.
4.2.2. Negligence and Malpractice Claims:
Healthcare providers might
be found negligent if they fail to monitor and intervene when AI systems
provide discriminatory or biased outputs. If a healthcare provider relies on an
AI system that leads to harm due to bias (such as overlooking a diagnosis for a
specific demographic group), they could be held liable for medical malpractice
or negligence under laws that mandate providing a reasonable standard of care.
In India and many other jurisdictions, healthcare providers have a duty of care
to ensure that AI tools are effective, accurate, and free from bias.[38]
A failure to properly vet AI tools for discriminatory outcomes could lead to
lawsuits if patients are harmed.
4.2.3. Violation of Data Protection and Privacy Laws:
In countries like India,
AI tools often use sensitive health data, and biased outcomes can raise
concerns related to data privacy and fairness. If AI systems disproportionately
affect certain groups due to biased data or algorithms, it may constitute a violation
of data protection regulations like the Digital Personal Data Protection Act, 2023[39]
in India, or the General Data Protection Regulation (GDPR)[40]
in the EU. Regulatory bodies such as India’s Central Drugs
Standard Control Organization (CDSCO), and the U.S. Food and Drug
Administration (FDA), can take actions against AI tools that lead to
discriminatory outcomes. Regulators can mandate companies to withdraw products
from the market, issue fines, or require manufacturers to make improvements to
prevent discrimination.
5. CASE STUDIES
In the IBM Watson for Oncology controversy (2018),[41] Watson for Oncology was introduced as a tool to assist doctors in making
cancer treatment decisions by analyzing data from clinical trials and patient
records. However, the system faced significant criticism for recommending
unsafe and incorrect treatments. This case raised questions about the
responsibility of healthcare providers when relying on AI tools for critical
medical decisions. Legal challenges focused on whether Watson’s creators or the
healthcare providers were liable for the harm caused by incorrect treatment
recommendations. This case highlights the issue of accountability in
AI-driven decision-making, where patients or their families may pursue
malpractice claims against healthcare providers using AI tools.
In the Google Health AI
Misdiagnosis Lawsuit (2020),[42]
Google Health’s AI system for analyzing medical images showed promise in
diagnosing breast cancer. However, a study revealed that the AI made errors in
certain contexts, leading to false negatives and incorrect diagnoses. The
lawsuit that followed highlighted the potential for AI systems to misdiagnose
and how patients might hold AI developers or healthcare providers accountable
for errors that lead to harm. The case echoes concerns about the limits
of AI technology in healthcare and who bears the burden of liability when AI
systems fail to perform as expected.
In The Rise of Diagnostic Errors Due to AI in Radiology,[43] AI is increasingly used in the field of radiology to interpret medical images like CT scans, MRIs, and X-rays. However, some patients have reported
misdiagnoses, including missed cancer diagnoses, due to AI’s inability to
correctly interpret images. Legal proceedings in these cases focus on
the allocation of liability, especially when an AI misdiagnosis leads to
delayed treatments or wrongful death. Questions arise about whether the
radiologists, AI developers, or healthcare institutions are responsible. Such
cases are likely to set precedents on how the courts view the role of AI in
medical malpractice suits, particularly in determining whether AI acts as a
substitute or tool for human decision-makers.
In AI-Powered Predictive Models for Patient Outcomes,[44] AI systems are being
developed to predict patient outcomes, such as readmission risks or future
health complications. However, there have been instances where predictions were
not accurate, leading to missed opportunities for early interventions. If
a predictive model fails and results in harm (e.g., a preventable death),
patients or families might seek legal redress against healthcare providers or
AI developers. The question here is whether the AI system should have been
thoroughly vetted before being used in clinical practice. This scenario
raises issues related to medical negligence, informed consent, and whether AI
systems need to undergo the same level of regulatory scrutiny as other medical
devices.
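A minimal sketch of what pre-deployment vetting of such a predictive model might involve, checking both discrimination (AUROC) and calibration on a held-out set (the ten-bin calibration table is an illustrative choice, not a regulatory standard):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def vet_risk_model(y_true: np.ndarray, y_prob: np.ndarray,
                   n_bins: int = 10) -> None:
    """Pre-deployment check: discrimination (AUROC) plus a simple
    calibration table comparing predicted and observed event rates."""
    print(f"AUROC: {roc_auc_score(y_true, y_prob):.3f}")
    bins = np.clip((y_prob * n_bins).astype(int), 0, n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            print(f"bin {b}: mean predicted {y_prob[mask].mean():.2f}, "
                  f"observed {y_true[mask].mean():.2f} (n={mask.sum()})")
```

A model that ranks patients well (high AUROC) but is poorly calibrated can still mislead clinicians about absolute risk, which is precisely the failure mode described above.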
In Autonomous Robotic Surgery and Surgical Malpractice,[45] robotic surgery powered by AI has been gaining popularity for its precision and minimal invasiveness. However, cases have emerged where robotic systems malfunctioned or failed to follow surgeon commands, resulting in patient harm. These cases often revolve around determining who is liable when an autonomous system fails: the surgeon who used the technology, the manufacturer of the AI system, or the hospital. Such cases
are critical in setting precedents regarding the limits of AI autonomy in
surgical procedures and the need for adequate human oversight.
These case studies provide
valuable lessons in the intersection of AI, healthcare, and law, and will
likely continue to shape how courts handle malpractice claims involving AI in
the medical field.
6. CONCLUSION & WAY FORWARD
As AI technologies
continue to reshape healthcare, ensuring accountability becomes critical to
balancing innovation and patient protection. A multi-faceted approach to
accountability involves the development of robust regulatory standards,
enhancing transparency in AI decision-making processes, and establishing clear
guidelines for liability. This approach aims to address the ethical, legal, and
operational challenges posed by AI systems in healthcare, ensuring that these
innovations benefit patients while minimizing potential harm. Governments and
regulatory bodies must develop comprehensive standards for the use of AI in
healthcare. These standards should focus on safety, effectiveness, and ethical
considerations while allowing for innovation. This includes guidelines for data
privacy, algorithm validation, and risk management. While some aspects of
healthcare AI are regulated through existing frameworks (like FDA guidelines
for medical devices), new AI-specific regulations are necessary to address the
unique challenges posed by machine learning and autonomous systems.
AI systems in healthcare
must be transparent in their decision-making. This means that the algorithms
should be interpretable by healthcare providers and patients, ensuring that
both can understand how decisions are made. AI developers and healthcare
providers should communicate openly with the public and medical professionals
about how AI is used, its benefits, and its limitations. Informed consent and
patient education are essential to maintaining trust in AI-powered healthcare
solutions.
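One way to operationalize this transparency requirement is to prefer inherently interpretable models where performance permits. A minimal sketch on synthetic data (the feature names are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fit an inherently interpretable model on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # stand-ins for age, BP, BMI
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic outcome
model = LogisticRegression().fit(X, y)

# Each coefficient states how a feature moves the predicted risk,
# giving clinicians a rationale they can inspect and contest.
for name, coef in zip(["age", "blood_pressure", "bmi"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

Where a black-box model must be used instead, post-hoc explanation tools can approximate this rationale, but the interpretable baseline sets a floor for what "transparent" should mean in practice.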
When AI systems cause harm
or lead to errors in healthcare, liability must be clearly defined. This
includes determining who is responsible: the AI developer, the healthcare
provider, or a combination of both. Clear legal frameworks should address
issues such as medical malpractice involving AI tools, establishing who is
accountable in case of misdiagnosis, treatment errors, or data breaches. New
insurance models may be needed to cover the risks associated with AI in
healthcare. These models would ensure that patients can receive compensation
for harm caused by AI, and they would provide financial protection for
healthcare providers and developers against legal claims. Accountability
mechanisms are essential to ensuring that AI systems are not only effective and
efficient but also responsible and transparent in their decision-making. The
increasing integration of AI in healthcare demands a legal infrastructure that
can evolve to address emerging challenges while keeping pace with the rapid
advancement of technology.
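Clear liability allocation presupposes that each AI-assisted decision can be reconstructed after the fact. A minimal audit-trail sketch follows; the record fields are assumptions, not a prescribed standard:

```python
import hashlib
import json
import datetime

def audit_record(model_version: str, patient_input: dict,
                 ai_output: str, clinician_action: str) -> dict:
    """One audit-trail entry per AI-assisted decision: what the model
    saw (hashed to limit privacy exposure), what it recommended, and
    what the clinician actually did."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(patient_input, sort_keys=True).encode()).hexdigest(),
        "ai_output": ai_output,
        "clinician_action": clinician_action,  # "accepted" or "overridden"
    }
```

Such records would let a court or regulator distinguish a model error from a clinician's failure to exercise due diligence, which is central to the malpractice questions discussed above.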
The balance between
innovation and accountability will also require global cooperation. AI in
healthcare is a global phenomenon, and legal frameworks should promote
international harmonization of standards to avoid conflicts and ensure
consistency in the application of regulations across borders. The ongoing
development of global ethical standards, along with cross-border data
protection agreements and continuous monitoring of AI systems, will be vital in
creating a safe and equitable healthcare environment worldwide.
[1] Universal Declaration
of Human Rights (UDHR), Art. 25.
[2] International Covenant
on Economic, Social and Cultural Rights (ICESCR), Art. 12.
[3] Qiao Jin, et al. Hidden Flaws Behind Expert-Level Accuracy of Multimodal GPT-4 Vision in Medicine. npj Digital Medicine, 2024, 10.1038/s41746-024-01185-7.
[4] U.S. Food and Drug Administration (FDA), available at: https://www.fda.gov/
[5] European Medicines Agency (EMA), available at: https://www.ema.europa.eu/
[6] Central Drugs Standard
Control Organization (CDSCO), available at: https://cdsco.gov.in/opencms/opencms/en/Home/
[7] Act No. 23 of 1940.
[8] Artificial Intelligence
and Machine Learning in Software as a Medical Device, Food and Drug
Administration; 2024 https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
[9] Regulation (EU)
2017/745 of the European Parliament and of the Council of 5 April 2017 on
medical devices, 2017.
[10] Regulation (EU) 2017/746 of the
European Parliament and of the Council of 5 April 2017 on in vitro diagnostic
medical devices, 2017.
[11] Regulation of the European
Parliament and of the Council laying down harmonised rules on Artificial
Intelligence (Artificial Intelligence Act), 2024.
[12] Regulation of Software as Medical
Device (SaMD) in India, Freyr, (Aug. 9, 2022), https://www.freyrsolutions.com/blog/regulation-of-software-as-medical-device-samd-in-india
[13] Tumaini Kabudi, et al. AI-enabled
adaptive learning systems: A systematic mapping of the literature, Computers
and Education: Artificial Intelligence, Volume 2, 100017, (2021), https://doi.org/10.1016/j.caeai.2021.100017.
[14] V. Sounderajah, et
al. Developing specific reporting guidelines for diagnostic accuracy
studies assessing AI interventions: The STARD-AI Steering Group. Nat
Med 26, 807–808 (2020). https://doi.org/10.1038/s41591-020-0941-1
[15] A. Marey, et al. Explainability,
transparency and black box challenges of AI in radiology: impact on patient
care in cardiovascular radiology. Egypt J Radiol Nucl Med 55,
183 (2024). https://doi.org/10.1186/s43055-024-01356-2
[16] Xavier Ferrer, et al. Bias and
Discrimination in AI: A Cross-Disciplinary Perspective. IEEE Technology and
Society Magazine. 40. 72-80, (2021). 10.1109/MTS.2021.3056293
[17] General Data Protection
Regulation, L119, p. 1–88, 4 May 2016.
[18] UNESCO, Recommendation on the Ethics of Artificial Intelligence, adopted in 2021, applicable to all 194 member states of UNESCO.
[19] Grote, Thomas, and Geoff Keeling.
“On Algorithmic Fairness in Medical Practice.” Cambridge Quarterly of
Healthcare Ethics, vol. 31, no. 1, pp. 83-94, (2022),
[20] Bernstein MH, et al. Can incorrect
artificial intelligence (AI) results impact radiologists, and if so, what can
we do about it? A multi-reader pilot study of lung cancer detection with chest
radiography. Eur Radiol. (Nov 2023); 33(11):8263-8269. doi:
10.1007/s00330-023-09747-1.
[21] Obermeyer, Ziad, et al.
“Dissecting racial bias in an algorithm used to manage the health of
populations.” Science (American Association for the Advancement of
Science), vol. 366, no. 6464, pp. 447–453, (2019), https://www.science.org/doi/10.1126/science.aax2342
[22] Act No. 35 of 2019
[23] Ahsan MM, et al.
Machine-learning-based disease diagnosis: a comprehensive review. Healthcare.
10(3):541, (2022), https://doi.org/10.3390/healthcare10030541.
[24] Myszczynska MA, et al.
Applications of machine learning to diagnosis and treatment of neuro
degenerative Diseases, Nat Reviews Neurol, 16(8):440–56, (2020), https://doi.org/10.1038/s41582-020-0377-8.
[25] Channa R, et al. Autonomous
artificial intelligence in diabetic retinopathy: from algorithm to clinical
application, J Diabetes Sci Technol, (2021), https://journals.sagepub.com/doi/10.1177/1932296820909900
[26] Fenton JJ, et al. Influence of
computer-aided detection on performance of screening mammography. N Engl J
Med. 356(14):1399-1409, (2007).
[27] Karnika Singh
&Praveen Selvam, Medical device risk management, Trends in Development
of Medical Devices, Academic Press, Pages 65-76, (2020), https://doi.org/10.1016/B978-0-12-820960-8.00005-8
[28] Universal Declaration of Human
Rights, art. 1
[29] Parikh RB, Teeple
S, Navathe AS. Addressing Bias in Artificial Intelligence in Health
Care. JAMA.322(24):2377–2378 (2019) https://jamanetwork.com/journals/jama/article-abstract/2756196
[30] D. Cirillo, et al. Sex and
gender differences and biases in artificial intelligence for biomedicine and
healthcare. npj Digit. Med. 3, 81 (2020). https://doi.org/10.1038/s41746-020-0288-5
[31] Juhn YJ, et al. Assessing
socioeconomic bias in machine learning algorithms in health care: a case study
of the HOUSES index. J Am Med Inform Assoc.29(7):1142-1151, (Jun 14, 2022), https://pmc.ncbi.nlm.nih.gov/articles/PMC9196683/
[32] Esteva A, et al.
Dermatologist-level classification of skin cancer with deep neural networks.
Nature. 542(7639):115–8, (2017). https://doi.org/10.1038/nature21056.
[33] India Const. art. 15.
[34] Act No. 49 of 2016
[35] Act No. 35 of 2019
[36] Civil Rights Act of 1964;
7/2/1964; Enrolled Acts and Resolutions of Congress, 1789 - 2011; General
Records of the United States Government, Record Group 11; National Archives
Building, Washington, DC.
[37] 124 Stat. 119.
[38] Williams, Betsy Anne, et al. “How
Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and
Policy Implications.” Journal of Information Policy (University Park, Pa.),
vol. 8, pp. 78-115, (2018), https://doi.org/10.5325/jinfopoli.8.2018.0078.
[39] Digital Personal Data Protection Act, 2023 (Act No. 22 of 2023), Government of India.
[40] Regulation (EU) 2016/679 of the
European Parliament and of the Council of 27 April 2016.
[41] S. Bhattacharya, "IBM Watson
for Oncology and the AI revolution in healthcare: A critical review."
Journal of Cancer Policy, 24, 100191, (2020).
https://doi.org/10.1016/j.jcpo.2020.100191
[42] Sweeney, L. "Google Health's
AI and the 2020 Misdiagnosis Lawsuit: Implications for Healthcare and Data
Privacy." Journal of Health Law and Policy, 32(2), 215-233. https://doi.org/10.1016/j.jhlp.2020.05.003
[43] Le, M. H., & Nguyen, T.
"AI in Radiology: Implications for Diagnostic Accuracy and Errors."
Journal of Medical Imaging and Radiation Sciences, 51(4), 468-474, (2020). https://doi.org/10.1016/j.jmir.2020.02.002
[44] Rajkomar, A., et al.
"Scalable and accurate deep learning for electronic health records."
npj Digital Medicine, 1, 18, (2018). https://doi.org/10.1038/s41746-018-0029-1
[45] Mazzone, P., et al. "Legal
and Ethical Implications of Autonomous Robotic Surgery." Journal of
Medical Robotics and Computer Assisted Surgery, 15(2), 115-124, (2019). https://doi.org/10.1002/rob.21815