Open Access Research Article

RESPONSIBLE AI IMPLEMENTATION: ETHICAL DILEMMAS, DATA PRIVACY, AND ALGORITHMIC BIAS IN TODAY'S TECHNOLOGICAL LANDSCAPE

Author(s):
JENNIFER GHOSHRAY
Journal IJLRA
ISSN 2582-6433
Published 2023/07/17
Access Open Access
Issue 7


RESPONSIBLE AI IMPLEMENTATION:
ETHICAL DILEMMAS, DATA PRIVACY, AND ALGORITHMIC BIAS IN TODAY'S TECHNOLOGICAL LANDSCAPE
 
AUTHORED BY - JENNIFER GHOSHRAY *
 
 
ABSTRACT
Delving into the complex and dynamic realm of artificial intelligence (AI) and technology, this article explores three paramount challenges that demand careful consideration for the responsible implementation of AI. Specifically, ethical dilemmas, data privacy concerns, and algorithmic bias are examined, shedding light on their significance and proposing pathways towards a more equitable and ethically grounded future. Emphasizing transparency, fairness, and accountability, the article underscores the critical role of establishing robust frameworks for responsible AI decision-making. By adhering to ethical principles, practitioners can effectively navigate the intricate landscape of AI and technology.
 
The importance of safeguarding data privacy and security within the context of AI is further emphasized. Comprehensive data protection regulations, transparent data collection practices, and empowering individuals in controlling their personal data are highlighted as essential measures. Such actions not only foster trust and confidence in AI technologies but also mitigate the risks associated with data misuse, while safeguarding individual privacy rights. Addressing algorithmic bias emerges as another crucial aspect, stressing the significance of recognizing and mitigating bias through comprehensive auditing, diverse training data, and ongoing monitoring. By promoting fairness in AI systems, equal opportunities can be ensured, particularly in areas like the legal field, where biased AI algorithms have profound societal consequences.
 
The article concludes by providing practitioners, policymakers, and researchers with actionable insights and recommendations to effectively navigate the intricate landscape of AI and technology. It offers practical guidance for the responsible implementation of AI, placing emphasis on transparency, fairness, and accountability. By considering the issues discussed, stakeholders can make informed, responsible choices within the evolving landscape of AI, to the benefit of society.
 
Keywords: Artificial intelligence (AI), Ethical dilemmas, Data privacy, Algorithmic bias, Transparency
 
“… the machine does not control us. We control the machine...”[1]
 
RESPONSIBLE IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE
In today's rapidly evolving technological landscape, the advent of AI has brought transformative advancements across various sectors. However, with these advancements come a host of challenges that necessitate careful consideration for the responsible implementation of AI. This article aims to delve into the intricate realm of AI and shed light on three paramount challenges that arise in this context: ethical dilemmas, data privacy concerns, and algorithmic bias. By addressing these challenges, we aim to pave the way towards a more equitable and ethically grounded future in the realm of AI.
 
Pre-AI Era: Issues and Problems
Before the emergence of AI, technological advancements already presented their fair share of challenges. Issues surrounding data privacy and the ethical implications of decision-making processes have long been subjects of intense scrutiny. The increasing reliance on technology in various domains, such as healthcare, finance, and transportation, raised concerns about the security and privacy of individuals' personal information.
 
For instance, in the healthcare sector, the digitalization of medical records and the exchange of sensitive patient data raised questions about data security and unauthorized access.[2] Similarly, in the financial industry, the use of online banking and mobile payment systems led to discussions about the protection of financial information and the vulnerability of electronic transactions.[3] Moreover, the rapid automation of tasks and the potential displacement of human workers prompted discussions on the societal impact of technology-driven changes. The advent of automated systems in manufacturing raised concerns about job losses and the need for upskilling the workforce to adapt to the changing technological landscape.[4]
 
As technology continues to advance, the integration of AI technologies marks a paradigm shift, introducing new dimensions and complexities to the existing challenges. The emergence of AI brings forth unique ethical dilemmas,[5] data privacy concerns,[6] and the potential for algorithmic bias,[7] necessitating fresh perspectives and approaches to ensure responsible AI implementation.
 
The Shifting Paradigm: Challenges in the Age of AI
With the rapid advancement of AI technology, the challenges confronting both technology and society have reached a new level of significance. Ethical dilemmas have become increasingly pronounced as AI systems gain the ability to make autonomous decisions, prompting concerns about the biases and privacy infringements that may be inherent in these systems.[8] The growing reliance on AI systems, which rely on vast amounts of personal data, has intensified the need for robust regulations and transparent practices to safeguard individuals' privacy rights.[9] The potential for algorithmic bias further compounds these challenges, as biased AI algorithms can perpetuate discrimination and impede equal opportunities across various domains.[10]
 
Consider the specific challenge of bias in AI algorithms used in employment hiring processes. Research has revealed that these algorithms can lead to unfair outcomes and hinder equal opportunities for job applicants. For example, a study found that an AI algorithm used in a hiring process exhibited bias against certain demographic groups, resulting in discriminatory outcomes and perpetuating existing inequalities.[11] This highlights the importance of addressing algorithmic bias to ensure fair and equitable hiring practices.
 
Another concern revolves around the privacy of personal data used by AI systems. AI relies heavily on collecting and analyzing vast amounts of data, including sensitive personal information. This raises significant data privacy concerns, as the mishandling of, or unauthorized access to, such data can lead to breaches of privacy and potential harm to individuals.[12] Robust regulations and transparent data collection practices are essential to mitigate these risks and protect individuals' privacy rights in the age of AI.
 
Furthermore, the complex ethical implications of autonomous decision-making by AI systems cannot be ignored. As AI algorithms become more sophisticated, they increasingly make decisions that impact individuals' lives, from credit scoring[13] to healthcare diagnosis[14] to even criminal matters.[15] Ensuring transparency, fairness, and accountability in AI decision-making processes is crucial to address ethical dilemmas and uphold societal values. The sections that follow discuss these challenges in greater depth, exploring the intricacies of ethical dilemmas, data privacy concerns, and algorithmic bias in the context of AI implementation. By tackling these challenges and establishing robust frameworks, we can harness the potential of AI while safeguarding individuals' rights, promoting fairness, and fostering a more equitable and transparent usage of AI technology.
 
ETHICAL DILEMMAS IN AI DECISION-MAKING
The ethical application of AI has become a pressing concern in light of rapid technological advancements. It is imperative to ensure that AI systems are designed and implemented in an ethical manner, taking into account the potential biases, privacy infringements, and broader societal implications they may entail. Ethical considerations hold particular significance when AI systems autonomously make decisions that directly impact individuals' lives, such as in healthcare, finance, and criminal justice.
 
In the realm of healthcare, the use of AI algorithms for diagnostic and treatment decisions poses significant ethical questions.[16] While AI has the potential to enhance medical decision-making and improve patient outcomes, careful implementation is necessary to address risks. Notably, Topol[17] demonstrated the superior diagnostic performance of an AI algorithm compared to human cardiologists in certain heart conditions. However, concerns arise regarding the transparency and explainability of AI-driven diagnoses. Ensuring that patients and healthcare providers have confidence in AI system decisions, along with the ability to understand and contest them when needed, is crucial.
 
Similarly, in the financial industry, the application of AI algorithms for credit scoring and lending decisions raises ethical concerns with societal implications. While AI algorithms may offer efficiency and accuracy in assessments, the fairness and potential for discrimination must be carefully addressed. Research by Fuster et al.[18] revealed biases in certain AI credit scoring models, resulting in unequal access to credit for marginalized communities. These biases perpetuate existing social inequalities and hinder equal opportunities for individuals.
 
AI bias in criminal justice applications is a critical ethical concern that demands immediate attention. One alarming example is the risk assessment algorithms used in sentencing decisions, which have demonstrated a propensity for racial bias. A study by Dressel and Farid analyzed the COMPAS algorithm,[19] a widely used tool in the United States for predicting recidivism risk. The research revealed that the algorithm exhibited racial disparities, as it falsely flagged black defendants as future criminals at almost twice the rate of white defendants.[20] Conversely, white defendants were more likely to be mislabeled as low risk compared to their black counterparts.[21] This kind of bias perpetuates systemic inequalities and undermines the principles of fairness and equal treatment in the criminal justice system. Hence, the responsible implementation of AI systems also necessitates safeguarding data privacy and security, ensuring the protection of individuals' privacy rights and mitigating the risks associated with data misuse and unauthorized access.
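The disparity described above can be quantified as a gap in false positive rates: the share of people who did not in fact reoffend but were nonetheless flagged as high risk, computed separately for each demographic group. The following is a minimal illustrative sketch of that calculation; the records are made up for demonstration and are not drawn from the COMPAS dataset.

```python
# Sketch of the fairness metric behind the COMPAS critique: comparing
# false positive rates (FPR) across groups. Data below is hypothetical.

def false_positive_rate(records, group):
    """FPR = share of non-reoffenders in `group` who were flagged high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Hypothetical records: none of these individuals reoffended,
# yet group A is flagged high-risk twice as often as group B.
records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
]

fpr_a = false_positive_rate(records, "A")  # 0.50
fpr_b = false_positive_rate(records, "B")  # 0.25
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
```

An audit of a real risk-assessment tool would run the same comparison over actual case outcomes; a large gap between groups, as reported in the ProPublica analysis, is the disparity at issue.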
 
SAFEGUARDING DATA PRIVACY AND SECURITY
The Cambridge Analytica Scandal: A Cautionary Tale
The protection of data privacy and security has become a matter of utmost importance within the context of AI systems. As AI technologies rely heavily on vast amounts of data, ensuring the safeguarding of personal information has emerged as a pressing concern. The notorious Cambridge Analytica scandal serves as a cautionary tale, highlighting the risks associated with data misuse and the erosion of individual control over personal information.[22] In this scandal, it was revealed that personal data of millions of Facebook users had been harvested without their consent and subsequently used for targeted political advertising.[23] This incident raised significant concerns about the lack of control individuals have over their personal information and highlighted the need for robust measures to protect user privacy in the digital age.
 
To address these concerns, there is a growing recognition of the importance of data protection regulations and transparency in data collection practices. Scholars have extensively studied the implications of the Cambridge Analytica scandal and its impact on data privacy. For example, a Stanford University report[24] examined the legal and ethical implications of data misuse in the context of social media platforms, highlighting the need for stronger regulations and user control over personal data. Vössing et al.[25] examined the role of transparency in data collection and emphasized the importance of informing individuals about how their data is collected, used, and shared by AI systems. These scholarly works shed light on the significance of regulatory frameworks and transparent practices in safeguarding data privacy and security.
 
Empowering individuals with control and understanding of their data is a crucial step in fostering trust and confidence in AI technologies. The implementation of measures that provide individuals with greater control over their personal information can contribute to a more privacy-centric approach to AI. For instance, the European Union's General Data Protection Regulation (GDPR) has introduced strict guidelines for data protection and privacy, giving individuals more control over their data and requiring transparency in data collection and usage.[26] By adopting similar approaches and best practices, organizations and policymakers can effectively safeguard data privacy and security in the age of AI. The discussion now turns to the challenges posed by algorithmic bias in AI systems and to strategies for promoting fairness and mitigating bias in decision-making processes.
 
ALGORITHMIC BIAS AND FOSTERING FAIRNESS
Promoting Ethical AI Practices
This section critically examines the issue of algorithmic bias in AI systems,[27] focusing on its potential to generate unfair outcomes and hinder equal opportunities in various domains, including criminal and civil law. Research has revealed instances where these algorithms exhibit biases that perpetuate discrimination and reinforce existing inequalities. These biases pose significant challenges, causing harm and injury to individuals who are disproportionately affected by biased AI systems.
 
For instance, in the field of criminal justice, a study highlighted the case of an AI risk assessment algorithm used in sentencing that exhibited racial bias.[28] The algorithm assigned higher risk scores to black defendants compared to white defendants, potentially leading to harsher sentences for individuals from marginalized communities.[29] This biased algorithmic decision-making undermines the principle of equal justice under the law and perpetuates the systemic discrimination faced by minority groups.
 
Consider other studies, such as "Predictive Policing and Reasonable Suspicion" by Ferguson (2012).[30] This study examined the use of predictive policing algorithms, which aim to identify areas with a higher likelihood of criminal activity. Ferguson found that these algorithms tend to concentrate enforcement efforts on low-income communities and communities of color, resulting in disproportionate surveillance and a heightened police presence in these areas. This biased allocation of resources can contribute to the over-policing and targeting of marginalized communities, perpetuating systemic biases and eroding trust between law enforcement agencies and the communities they serve.
 
Similarly, the study "Discrimination in Online Ad Delivery" by Sweeney (2013)[31] explored the presence of algorithmic bias in online ad delivery systems. The research found that these systems could exhibit discriminatory behavior by selectively displaying ads to users based on their gender or race. For instance, online job ads promoting high-paying positions were more likely to be shown to male users than to female users, perpetuating gender-based disparities in employment opportunities. Such biases in ad delivery systems reinforce existing inequalities and limit access to resources and opportunities for certain groups.
 
Turning to civil law, biased AI algorithms used for predictive analytics in insurance claims have also raised concerns. These algorithms have been shown to discriminate against specific demographic groups, resulting in unequal treatment and denial of coverage for individuals belonging to these groups.[32] For example, an algorithm used to assess insurance claims may disproportionately deny coverage to individuals from lower socioeconomic backgrounds or specific geographic areas, perpetuating socioeconomic disparities and limiting access to resources.
 
These studies shed light on the challenges posed by algorithmic bias in criminal and civil law, highlighting the potential for discriminatory outcomes. They emphasize the need for robust measures to mitigate bias and ensure fairness in AI systems. Algorithmic bias has real-world consequences for individuals and communities, raising important ethical and legal concerns that must be addressed to uphold principles of equality, justice, and non-discrimination in the age of AI. The challenges presented by algorithmic bias in these domains underscore the urgency of implementing strategies that promote fairness, equal treatment, and justice.
 
MITIGATING ALGORITHMIC BIAS: RECOMMENDATIONS
Comprehensive Auditing and Bias Identification
To address algorithmic bias in the legal field, organizations should prioritize conducting thorough audits of their AI systems.[33] These audits serve as essential tools for identifying and rectifying biases that may be embedded within the algorithms. By examining the data inputs, algorithms, and decision outputs, organizations can gain insights into the potential biases or discriminatory patterns present in their AI systems. For example, in the hiring process, audits can assess whether AI algorithms exhibit biases based on gender, race, or other protected characteristics.[34] These comprehensive audits allow organizations to uncover biases that may perpetuate discrimination and take corrective actions to promote fairness and equal opportunities for all applicants.
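One common statistical check in such a hiring audit is a selection-rate comparison of the kind used in US employment-discrimination analysis, sometimes called the "four-fifths rule": a group's selection rate falling below roughly 80% of the most-favored group's rate is treated as evidence of possible adverse impact. The sketch below illustrates the check with hypothetical counts, not data from any real audit.

```python
# Illustrative audit sketch: four-fifths rule check on selection rates.
# The counts in `outcomes` are hypothetical, for demonstration only.

def selection_rates(outcomes):
    """Map group -> fraction of applicants selected."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return group -> True if its selection rate is at least `threshold`
    times the highest group's rate; False signals possible adverse impact."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top >= threshold for g, rate in rates.items()}

# Hypothetical data: (selected, total applicants) per group.
outcomes = {"group_x": (50, 100), "group_y": (30, 100)}

print(four_fifths_check(outcomes))
# group_y's selection rate (0.30) is only 60% of group_x's (0.50),
# so it falls below the 80% threshold and warrants investigation.
```

A check like this is only a first-pass screen; a full audit would also examine the training data, feature choices, and error rates, as the surrounding discussion notes.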
 
In the era of AI technology, where criminal and civil matters are increasingly being processed through automated systems, the importance of audits and oversight cannot be overstated. The use of AI algorithms in decision-making processes, such as sentencing, case prediction, and legal analytics, necessitates regular audits to evaluate their performance and impact.[35] Audits provide transparency and accountability, enabling the identification and rectification of embedded biases, and help ensure that the outcomes of AI-powered legal processes uphold the principles of justice, equality, and due process. Oversight can be carried out by regulatory bodies, legislative committees, and judicial review mechanisms.[36] This oversight helps prevent the misuse or abuse of AI technology, safeguarding against unjust outcomes or violations of individuals' rights. By implementing robust oversight measures, stakeholders can ensure that AI systems are used responsibly, ethically, and in compliance with legal and constitutional principles.
 
CONCLUSION
This article has addressed three key issues in the responsible implementation of AI in diverse domains. The specific issues of ethical dilemmas in AI decision-making, safeguarding data privacy and security, and algorithmic bias were explored. This article has provided valuable insights and recommendations to policymakers, organizations, and stakeholders. In terms of ethical dilemmas, the article emphasizes the importance of establishing robust frameworks for responsible AI implementation, upholding principles such as transparency, fairness, and accountability. Regarding data privacy and security, comprehensive data protection regulations, transparent data collection practices, and empowering individuals in controlling their personal data are highlighted as crucial measures to foster trust and confidence in AI technologies. The article also delves into the significant challenge of algorithmic bias, stressing the need for comprehensive audits and ongoing monitoring to ensure equal opportunities, particularly in criminal and civil law contexts where biased AI algorithms can have far-reaching societal consequences.
 
By offering insights, examples, and recommendations, this article contributes to scholarly literature on the ethical, privacy, and bias challenges associated with AI implementation. Moving forward, policymakers and stakeholders must carefully consider the issues raised in this article and take proactive steps to address them. Prioritizing the development and implementation of ethical frameworks, robust data protection measures, and strategies to mitigate algorithmic bias are crucial. Collaboration among academia, legal practitioners, policymakers, and technology experts is essential to advance research, refine practices, and ensure that AI systems promote fairness, transparency, and accountability. Responsible implementation of AI requires a multifaceted approach that addresses ethical dilemmas, safeguards data privacy and security, and mitigates algorithmic bias. It is through these collective efforts that we can navigate the complex landscape of AI and technology, ultimately contributing to a more equitable and ethically grounded future for all.
 


* Jennifer Ghoshray is a legal researcher and cognitive psychologist, who studied law, psychology, and economic theory at the prestigious Rutgers University and Florida International University in USA. She utilizes a multi-dimensional lens through which to view mergers and acquisition and anti-trust issues, while placing common citizens’ rights in the context. She has developed a unique expertise in the conflict between development of artificial intelligence fuelled Metaverse and existing laws, areas on which she has lectured at various National Law Schools, such as the NUJS, Kolkata, NLIU, Bhopal, and Symbiosis Law. Her presentation at the 2022 National Conference on Fintech, Data Protection and Cybersecurity held at National Law University, Delhi traced the emergence of digital capitalism at the praxis of technology’s ambition and law’s inadequacy. The author can be reached at Jennifer.Ghoshray@gmail.com.
[1] Tegmark, M. AI and Physics, Lex Fridman Podcast (Jan. 18, 2021), episode #155, at 2:217 (Paraphrased from the podcast). https://www.happyscribe.com/public/lex-fridman-podcast-artificial-intelligence-ai/155-max-tegmark-ai-and-physics.
[2] Murdoch, B. Privacy and Artificial Intelligence: Challenges For Protecting Health Information In A New Era. BMC Med Ethics 22, 122 (2021). [doi: https://doi.org/10.1186/s12910-021-00687-3].
[3] Moon, I., Shamsuzzaman, M., Mridha, M., & Rahaman, A. (2022). Towards the Advancement of Cashless Transaction: A Security Analysis of Electronic Payment Systems. Journal of Computer and Communications, 10, 103-129. doi: 10.4236/jcc.2022.107007.
[4] Vermeulen, B., Kesselhut, J., Pyka, A., & Saviotti, P. P. (2018). The Impact of Automation on Employment: Just the Usual Structural Change? Sustainability, 10(5), 1661. doi: 10.3390/su10051661.
[5] Bankins, S., & Formosa, P. (2023). The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work. Journal of Business Ethics, 185, 725–740. doi: 10.1007/s10551-023-05339-7.
[6] Bartneck, C., Lütge, C., Wagner, A., & Welsh, S. (2021). Privacy Issues of AI. In An Introduction to Ethics in Robotics and AI. SpringerBriefs in Ethics. Springer, Cham. https://doi.org/10.1007/978-3-030-51110-4_8.
[7] Jackson, M. C. (2021). Artificial Intelligence & Algorithmic Bias: The Issues With Technology Reflecting History & Humans, 16 J. Bus. & Tech. L. 299. https://digitalcommons.law.umaryland.edu/jbtl/vol16/iss2/5.
[8] Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electron Markets, 31(3), 447–464. https://doi.org/10.1007/s12525-020-00441-4.
[9] Rodrigues, R. (2020). Legal And Human Rights Issues Of AI: Gaps, Challenges And Vulnerabilities. Journal of Responsible Technology, 4, 100005. ISSN 2666-6596. https://doi.org/10.1016/j.jrt.2020.100005.
[10] Zajko, M. (2022). Artificial Intelligence, Algorithms, And Social Inequality: Sociological Contributions To Contemporary Debates. Sociology Compass, 16(3), e12962. https://doi.org/10.1111/soc4.12962.
[11] Dastin, J. (2018, October 11). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
[12] Pearce, Guy. "Beware the Privacy Violations in Artificial Intelligence Applications." ISACA Now Blog. May 28, 2021. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2021/beware-the-privacy-violations-in-artificial-intelligence-applications.
[13] Garcia, A.C.B., Garcia, M.G.P. & Rigobon, R. "Algorithmic Discrimination in the Credit Domain: What Do We Know About It?" AI & Soc (2023), DOI: 10.1007/s00146-023-01676-3.
[14] Murdoch, B. Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era. BMC Med Ethics. 2021 Sep 15;22(1):122. doi: 10.1186/s12910-021-00687-3. PMID: 34525993; PMCID: PMC8442400.
[15] Nicol Turner Lee, Paul Resnick, and Genie Barton, "Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms," Brookings (May 22, 2019), https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.
[16] Bak, M., Madai, V. I., Fritzsche, M. C., Mayrhofer, M. Th., & McLennan, S. (2022). You Can't Have AI Both Ways: Balancing Health Data Privacy and Access Fairly. Frontiers in Genetics, 13. https://www.frontiersin.org/articles/10.3389/fgene.2022.929453. doi: 10.3389/fgene.2022.929453. ISSN 1664-8021.
[17] Topol, E.J. High-Performance Medicine: The Convergence Of Human And Artificial Intelligence, Nat Med 25, 44–56 (2019), https://doi.org/10.1038/s41591-018-0300-7.
[18] Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., & Walther, A. (2022). Predictably Unequal? The Effects of Machine Learning on Credit Markets. Journal of Finance, 77: 5–47.
[19] Dressel, J., & Farid, H. (2018). The Accuracy, Fairness, And Limits Of Predicting Recidivism. Sci. Adv., 4, eaao5580. DOI: 10.1126/sciadv.aao5580.
[20] Angwin, J., Larson, J., Mattu, S., Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[21] Ibid.
[22] Boldyreva, E. (2018). Cambridge Analytica: Ethics and Online Manipulation With Decision-Making Process. Proceedings of the International Scientific Conference "Contemporary Issues in Business, Management and Education" (Vol. 12, pp. 91-102). doi: 10.15405/epsbs.2018.12.02.10.
[23] Rehman, I. u. (2019). Facebook-Cambridge Analytica data harvesting: What you need to know. Library Philosophy and Practice (e-journal), 2497. https://digitalcommons.unl.edu/libphilprac/2497.
[24] Maslej, N., Fattorini, L., Brynjolfsson, E., et al. (2023). The AI Index 2023 Annual Report. AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023.
[25] Vössing, M., Kühl, N., Lind, M. et al. Designing Transparency for Effective Human-AI Collaboration. Inf Syst Front 24, 877–895 (2022). https://doi.org/10.1007/s10796-022-10284-3
[26] General Data Protection Regulation (GDPR), Regulation (EU) 2016/679 (General Data Protection Regulation). https://gdpr-info.eu/.
[27] Shin, D., Hameleers, M., Park, Y. J., et al. (2022). Countering Algorithmic Bias and Disinformation and Effectively Harnessing the Power of AI in Media. Journalism & Mass Communication Quarterly, 99(4), 887–907. https://doi.org/10.1177/10776990221129245.
[28] Gravett, W. (2021). Sentenced by an algorithm--Bias and lack of accuracy in risk-assessment software in the United States criminal justice system. South African Journal of Criminal Justice, 34(1), 31+. https://link.gale.com/apps/doc/A688856609/AONE?u=anon~7f6f2f9&sid=googleScholar&xid=f24028e5.
[29] Ibid.
[30] Ferguson, A. G. (2012). Predictive Policing and Reasonable Suspicion. 62 Emory Law Journal, 259. https://ssrn.com/abstract=2050001.
[31] Sweeney, L. (2013). Discrimination in Online Ad Delivery. Communications of the ACM, 56. DOI: 10.2139/ssrn.2208240.
[32] National Association of Insurance Commissioners. (2021). AI-Enabled Underwriting Brings New Challenges for Life Insurance: Policy and Regulatory Considerations. Journal of Insurance Regulation, 40(8), JIR-ZA-40-08.
[33] Simbeck, K. They Shall Be Fair, Transparent, And Robust: Auditing Learning Analytics Systems. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00292-7.
[34] Chowdhury, R., & Mulani, N. (2018). Auditing Algorithms for Bias. Harvard Business Review. https://hbr.org/2018/10/auditing-algorithms-for-bias
[35] Rigano, C. (2019). Using Artificial Intelligence to Address Criminal Justice Needs. NIJ Journal, (Issue No. 280), January. National Institute of Justice.
[36] Zuiderwijk, A., Chen, Y.-C., & Salem, F. (2021). Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Government Information Quarterly, 38(3), 101577. https://doi.org/10.1016/j.giq.2021.101577

About Journal

International Journal for Legal Research and Analysis

  • Abbreviation IJLRA
  • ISSN 2582-6433
  • Access Open Access
  • License CC 4.0

All research articles published in International Journal for Legal Research and Analysis are open access and available to read, download and share, subject to proper citation of the original work.


Disclaimer: The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of International Journal for Legal Research and Analysis.