AI IN CRIMINAL JUSTICE: A FORCE FOR PROGRESS OR A THREAT TO FAIRNESS?
AUTHORED BY SRINIVASULA GOUTHAM
INTRODUCTION:
The criminal justice system serves as
a cornerstone of a safe and secure society. It upholds the rule of law by
deterring crime through punishment, protecting innocent individuals through
fair trials and striving to rehabilitate offenders to prevent future harm. This
detailed network of law enforcement ensures a balanced response to criminal
activity, fostering a sense of security and order that allows communities to
thrive. The criminal justice system hinges on a series of critical decisions
made at every stage. From law enforcement on the streets to judges in
courtrooms, these choices determine who is investigated, arrested, charged and
ultimately punished. Ideally, these decisions are made fairly, objectively and
based on a thorough examination of evidence. However, human biases and
limitations creep into the process. Enter Artificial Intelligence (AI), a
rapidly developing technology with the potential to revolutionize
decision-making within the criminal justice system. AI can analyse vast
datasets and identify patterns invisible to the human eye, potentially leading
to fairer and more consistent outcomes. However, AI also has real
disadvantages, and these must be weighed before it is entrusted with
decision-making in the criminal justice system.
In this article, I examine how AI can move the
criminal justice system in both positive and negative directions, depending on
how it is used. I argue that AI-assisted decision-making in the criminal
justice system is progressive but subject to significant limitations.
AI
AND ITS PROGRESS:
AI has been booming in recent
years. In her article “Trial by Machine”, Andrea Roth examines the growing
application of artificial intelligence (AI) in the criminal justice system.[1] Surveying
its use in areas such as evidence analysis and sentencing, she
highlights the emergence of AI-driven decision support systems that aid judges.
This analysis reflects a rising recognition of AI's significance in these
evaluations, as strictly clinical and retributive approaches are giving way to
probabilistic and actuarial assessments in penology. The growth of "truth-in-sentencing"
legislation and associated parole board guidelines which may incorporate AI
decision-making is also discussed in detail. The author also looks at the
application of AI software in forensic diagnosis and interpretation,
speculating about a future in which the need for eyewitness testimony to prove
guilt may be lessened. Nonetheless, the author highlights that institutional
dynamics and deeply held beliefs inside the legal system are significant
factors in deciding whether AI systems are accepted as reliable sources of
evidence. By citing instances of faulty computer-assisted legal reasoning in
administrative law and faults found in software used for sentencing under
federal guidelines, she acknowledges that AI systems can err.
The criminal
justice system works with a plethora of data, ranging from witness accounts and
crime reports to DNA evidence and recidivism rates. To improve
productivity and data analysis across these duties, artificial intelligence (AI)
provides a potent set of tools that could produce a more effective and
efficient legal system. Here is an overview of several important fields where
AI can have a significant impact:
Automating
Repetitive Tasks:
The legal
system is burdened by a multitude of time-consuming and repetitive tasks. Data
entry, case management, and evidence assessment are just a few examples that
can bog down progress and limit the focus on core legal issues. Artificial
intelligence (AI) offers a powerful solution, automating these tasks and
freeing up valuable time for human law enforcement officials and legal
professionals. Imagine a scenario where AI systems handle the initial drudgery
of data entry. Police reports, witness
statements, and case documents can be automatically scanned and categorised,
saving officers and lawyers countless hours spent on manual data input. This
allows them to dedicate more time to complex investigations, strategic case
planning, and client interaction. Beyond data entry, AI can delve into the
heart of legal work: evidence assessment. AI-powered tools can analyse vast
amounts of evidence, including witness testimonies, video footage, and digital
records. These tools can identify
patterns, inconsistencies, and potential leads that human reviewers might miss
due to time constraints or cognitive biases. For example, AI might detect
inconsistencies in witness accounts based on subtle language cues or flag
inconsistencies in timelines across different pieces of evidence. This can
significantly expedite investigations and direct human investigators towards
areas requiring closer scrutiny. A prime example of AI automation in action is
Kira Systems. This company offers AI-powered legal review tools that automate
the tedious process of contract analysis.
By using natural language processing (NLP), Kira Systems can extract key
information from contracts, identify potential risks or clauses requiring
negotiation, and highlight areas for further review.[2] This not only saves lawyers significant time
but also improves the accuracy and consistency of contract reviews.
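Tools like the contract-review software described above typically pair trained NLP models with rule-based extraction. As a rough, purely hypothetical illustration of the idea, not Kira Systems' actual method, a minimal keyword-based clause flagger might look like this (the risk categories and patterns are invented for the example):

```python
import re

# Hypothetical risk indicators a reviewer might want flagged; a real
# system would use trained NLP models rather than a fixed keyword list.
RISK_PATTERNS = {
    "indemnity": r"\bindemnif\w+",
    "auto-renewal": r"\bautomatic(ally)? renew\w*",
    "unlimited liability": r"\bunlimited liability\b",
}

def flag_clauses(contract_text: str) -> dict:
    """Map each risk category to the sentences that triggered it."""
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    hits = {}
    for label, pattern in RISK_PATTERNS.items():
        matched = [s for s in sentences if re.search(pattern, s, re.IGNORECASE)]
        if matched:
            hits[label] = matched
    return hits

sample = ("The Supplier shall indemnify the Buyer against all claims. "
          "This agreement shall automatically renew for successive one-year terms.")
print(flag_clauses(sample))
```

Even this crude sketch shows the division of labour the section describes: the machine surfaces candidate clauses in seconds, while the lawyer's time is reserved for judging what the flagged language actually means.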
Enhanced
Crime Pattern Detection:
Law
enforcement has traditionally relied on detective work and patrol strategies to
combat crime. However, Artificial Intelligence (AI) offers a powerful new
approach: proactive crime prevention through pattern recognition. AI algorithms
can analyze vast datasets of historical crime statistics, including locations,
times of day, types of crimes committed, and even factors like gang activity or
repeat offenders. By combing through this information, AI can identify hidden
connections and emerging trends that human detectives might miss due to
cognitive limitations and the sheer volume of data. Imagine a system that can
not only pinpoint areas with historically high crime rates but also predict
future hotspots. AI can detect seasonal fluctuations in crime (e.g., property
crimes increasing during holidays), recognize patterns associated with specific
criminal activities (e.g., identifying burglary methods used by serial
offenders), and even account for external influences like social media activity
that might signal gang activity or planned violence. This allows law
enforcement to shift from reactive response to proactive prevention. By
anticipating high-risk locations and potential criminal activities, resources
can be deployed more effectively. This could involve strategically placing
plainclothes officers or mobile surveillance units in predicted hotspots,
increasing foot patrols in vulnerable areas during high-risk times, or even
implementing targeted sting operations to disrupt criminal enterprises before they
strike. Pattern recognition, which has been improved by AI and machine
learning, is essential for law enforcement's threat assessment and strategic
planning. It facilitates the strategic distribution of resources, patrol
scheduling, and the use of crime prevention initiatives by assisting in the
identification of crime patterns, hotspots, and common criminal tactics.[3]
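At its simplest, the pattern recognition described above amounts to aggregating historical incidents over place and time and ranking the densest cells. The following sketch uses made-up incident data and a deliberately naive count-based score; production systems are far more sophisticated, but the core aggregation step looks like this:

```python
from collections import Counter

# Each incident: (neighbourhood, hour_of_day) — toy data for illustration.
incidents = [
    ("Riverside", 22), ("Riverside", 23), ("Riverside", 22),
    ("Old Town", 14), ("Riverside", 21), ("Old Town", 2),
    ("Hillcrest", 22), ("Riverside", 23),
]

def top_hotspots(records, n=2):
    """Rank (area, hour) cells by incident count — a crude 'hotspot' score."""
    counts = Counter(records)
    return counts.most_common(n)

# Patrols could then be weighted toward the highest-count cells and hours.
print(top_hotspots(incidents))
```

Note that this toy example also makes the section's later criticism concrete: the "hotspots" it finds are hotspots of *recorded* incidents, so any bias in where incidents were recorded flows straight into the ranking.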
Improved
Data-Based Decision Making:
To create
risk evaluations, AI may examine enormous databases containing social,
demographic and criminal-history information. Decisions regarding parole
eligibility, sentencing, and pre-trial detention may be made using the results of
these evaluations. Although there may be biases associated with these
instruments, they can provide judges and parole boards with insightful
information that helps them make better decisions. Also, the judicial system,
while striving for fairness, is not immune to human biases. Landmark cases like
Mahmood Farooqui v State,[4]
where a woman's education and other background influenced the court's
perception of her ability to deny consent, exemplify this challenge. Artificial
intelligence (AI) has the potential to offer a valuable tool in mitigating such
biases. AI algorithms, when trained on comprehensive datasets devoid of
personal characteristics, can analyze evidence and legal precedents more
objectively. This could potentially reduce the influence of unconscious biases
that can creep into human decision-making.
Streamlining
Evidence Analysis:
Artificial
intelligence could help when forensic professionals are assessing complex
evidence including DNA samples and video footage. AI systems can identify
trends and speed up the analysis process resulting in quicker case resolutions.
AI can also assist in locating tiny pieces of evidence that humans could
possibly overlook, which could result in breakthroughs in cold cases. For
example, Clearview AI has become highly prominent in the USA and has been used
around 1 million times by US police. “Clearview's system allows a law enforcement
customer to upload a photo of a face and find matches in a database of billions
of images it has collected. It then provides links to where matching images
appear online. It is considered one of the most powerful and accurate facial
recognition companies in the world”.[5]
Beyond
facial recognition, AI can also be applied to other areas of forensic science.
For instance, AI algorithms can analyse vast datasets of fingerprints to
identify patterns and potential matches more efficiently. In DNA analysis, AI
can assist in interpreting complex genetic profiles and identifying potential
suspects. Additionally, AI-powered systems can analyze video footage to detect anomalies,
track objects, and enhance image quality, aiding in the investigation of crimes
such as robberies and assaults. These applications of AI have the potential to
revolutionize forensic science by improving the speed, accuracy, and efficiency
of evidence analysis.
Enhanced
Communication and Collaboration:
Language
obstacles can be overcome by AI-powered translation systems, allowing law
enforcement organizations on different continents to communicate more easily.
This is essential for international investigations and collaboration in the
fight against transnational crime. AI can also be used to establish centralized
databases, accessible only to authorised staff, that would enhance
communication and cooperation across the criminal justice system. A real-life
example of progress in this area is INTERPOL's I-24/7 secure global police
communication system.[6]
This system allows law enforcement agencies from member countries to share
information and coordinate investigations in real time. While not solely reliant
on AI for translation, such systems can be further enhanced by integrating
AI-powered translation tools to streamline communication across language
barriers. Beyond facilitating communication between law enforcement
agencies, AI in translation can also assist with witness interviews and victim
support. Consider, for instance, a witness from a different country who
provides crucial information for a case. AI translation tools can break down
language barriers in real-time, allowing investigators to gather accurate and
timely statements. Similarly, for victims of crime who speak a different
language, AI translation can ensure they receive proper support and understand
their rights throughout the legal process. This fosters trust and cooperation
within the criminal justice system, ultimately contributing to better outcomes
for everyone involved.
THE SHADY SIDE OF AI: BIAS, TRANSPARENCY AND INEQUALITY:
Even
though AI has a lot of potential for use in criminal justice, there are still
concerns about bias, lack of transparency,
and the possibility of escalating already existing disparities. Here’s a closer
look at these important concerns:
Biases
Lurking in Data:
The
quality of AI algorithms depends on the quality of the training data.
Unfortunately, when real-world data is used in AI systems for criminal justice,
it frequently replicates societal biases, producing unfair results. Historical
biases, in particular, can creep in. For example, AI may forecast future
crimes based on police arrest data that reflects racial profiling in
neighbourhoods where minorities predominate. The problem lies with the data the
algorithms feed upon. For one thing, predictive algorithms are easily skewed by
arrest rates. For example, PredPol is a predictive policing software used by
some police departments across the US and it analyzes historical crime data to
identify areas with a higher likelihood of future crime. “According to US
Department of Justice figures, you are more than twice as likely to be
arrested if you are Black than if you are white. A Black person is five
times as likely to be stopped without just cause as a white person”. [8]
A lack of information about socio-economic characteristics may distort risk
evaluations, misclassifying low-risk individuals from underprivileged
families. Consequently, increased police presence and surveillance in
minority areas might result from algorithmic prejudice, which can exacerbate
feelings of alienation and mistrust. “The state’s use of such proxies in
criminal law to reduce false negatives and increase efficiency also conforms to
a more general pattern of simplifying legal
decision-making into determinable elements, with an eye toward efficiency and accuracy, but a tendency to oversimplify or entrench existing biases.”[9]
Moreover,
the reliance on historical data can perpetuate existing systemic biases. For
instance, if an algorithm is trained on arrest data that disproportionately targets
marginalized communities, it may erroneously predict higher crime rates in
those areas. This can lead to a self-fulfilling prophecy, as increased police
presence and surveillance can exacerbate tensions and lead to more arrests.
Additionally, the lack of diversity in the development teams behind these
algorithms can contribute to blind spots and biases.
The Black
Box of AI Decisions:
A large
number of AI algorithms used in criminal justice lack transparency in their
decision-making process. “This situation is commonly
referred to as the black box problem in AI. Without understanding how AI
reaches its conclusions, it is an open question to what extent we can trust
these systems”.[10] This
lack of transparency raises various issues. Questions regarding fairness and
due process arise when human monitoring is rendered less effective due to a
lack of comprehension of the reasoning behind AI choices. “The use of
technology with inherent black box problems, i.e., an inability to explain a
certain result, in a criminal proceeding comes at a price. Triers of fact will
have to decide whether to trust an AI-generated statement that can only
partially be explained by experts.”[11]
Recent
studies (2018-2019) by MIT, Microsoft Research, and the US National Institute
of Standards and Technology (NIST) revealed significant racial and gender
biases in facial recognition algorithms. These algorithms, often used by law
enforcement for identification purposes, exhibited higher error rates when
analysing the faces of people of colour, particularly women. The largest error
rate, reaching 35%, was found for darker-skinned women in the MIT/Microsoft study.[12]
This highlights a crucial concern regarding "black box" AI
decisions. Facial recognition algorithms
function as black boxes because the internal decision-making processes are not
readily apparent. These studies demonstrate
how such opaque AI systems can perpetuate biases within the training data, leading
to discriminatory outcomes in real-world applications.
AI
Amplifying Inequality:
If AI
systems are not properly developed and applied, they run the risk of
escalating already existing racial and socio-economic disparities in the
criminal justice system. Minorities may be subjected to harsher penalties if AI
risk assessments repeatedly classify them as high-risk, which would result in
additional data points supporting the initial prejudice in subsequent training
cycles. Low-income people or those who live in high-crime regions may be
unfairly disadvantaged by AI algorithms that rely on variables like zip code or
work history. AI-driven risk evaluations may be a factor in the increase in
mass imprisonment that unfairly affects minority communities. For instance, in
Detroit, a growing trend of using facial recognition software by police led to
a wrongful arrest in January 2020.[13]
Robert Williams, an African American man, was mistakenly identified by the
software as a suspect who stole watches from a Shinola store. This incident
wasn't isolated, as Michael Oliver and Nijeer Parks faced similar situations in
2019 due to facial recognition misidentification. These cases highlight the
potential dangers of this technology, especially when it leads to wrongful accusations.
However, one way to
improve algorithmic decision-making in criminal justice to ensure racial equity
is to use risk assessment tools that are neutral concerning race. For example,
the Public Safety Assessment (PSA) is a tool that assesses an individual's risk
factors without taking into account gender, race, or economic conditions.[14]
Another improvement could be to
reduce the reliance on biased data in algorithms. The ab origine collection of
discriminatory data, such as mapping certain urban areas or collecting data on
potential criminals or victims, can consolidate prejudices and lead to unequal
treatment. Ensuring that algorithms are not based on discriminatory data can
help prevent bias and promote racial equity.[15]
In addition to these measures, it is
crucial to implement robust auditing and oversight mechanisms to monitor the
performance of AI algorithms in the criminal justice system. Regular
audits can help identify and address biases that may emerge over time.
Moreover, transparency in the development and deployment of AI systems is
essential to ensure public trust and accountability. By making the algorithms
and data used in decision-making processes public, stakeholders can scrutinize
their fairness and identify potential sources of bias. Furthermore, it is
imperative to involve diverse teams in the development and testing of AI systems
to ensure that they are representative of the populations they serve. By
incorporating diverse perspectives, developers can help mitigate biases and
ensure that the algorithms are equitable and effective. Ultimately, the goal is
to create a criminal justice system that is fair, just, and free from racial
and socioeconomic disparities.
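One concrete form the audits proposed above can take is comparing error rates across demographic groups. The sketch below uses synthetic outcome data and an invented disparity threshold, but the metric itself, each group's false-positive rate (how often people who did not reoffend were labelled high-risk), is a standard fairness check:

```python
# Synthetic audit data: (group, predicted_high_risk, actually_reoffended).
outcomes = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def false_positive_rates(records):
    """FPR per group: high-risk predictions among people who did not reoffend."""
    rates = {}
    for group in {g for g, _, _ in records}:
        negatives = [(p, a) for g, p, a in records if g == group and not a]
        fp = sum(1 for p, _ in negatives if p)
        rates[group] = fp / len(negatives) if negatives else 0.0
    return rates

rates = false_positive_rates(outcomes)
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", round(gap, 2))
if gap > 0.2:  # invented audit threshold for illustration
    print("Audit flag: group disparity in false-positive rates")
```

In this toy data, group B's false-positive rate is double group A's, exactly the kind of disparity a regular audit would surface and require developers to explain or correct.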
Conclusion:
The
integration of Artificial Intelligence (AI) into the criminal justice system
presents a fascinating paradox. On one hand, AI offers a multitude of progressive
tools: automating repetitive tasks, streamlining workflows, and leveraging data
analysis to predict and prevent crime. From Kira Systems' automated contract
review to AI-powered crime prediction models, the potential for efficiency and
proactive crime-fighting is undeniable. On the other hand, the limitations and
ethical considerations surrounding AI require careful attention. Biases within
training data can perpetuate discrimination, while the opaque nature of some
algorithms hinders accountability. Additionally, the spectre of AI replacing
human judgment entirely raises concerns about fairness and the erosion of due
process. The path forward lies in acknowledging both the promise and peril of
AI. We must embrace AI as a powerful supplement to human expertise, not a
replacement. Law enforcement officials and legal professionals must remain in
the driver's seat, utilizing AI tools for data analysis, pattern recognition, and
communication facilitation, while reserving human judgment for critical
decision-making and ethical considerations. Furthermore, robust regulations and
oversight mechanisms are crucial. Data
privacy must be a top priority, with safeguards in place to prevent misuse and
discrimination. Transparency in AI algorithms needs to be addressed, ensuring
explainable AI models that allow for human scrutiny and accountability.
Ultimately,
AI in criminal justice holds immense potential to improve efficiency, reduce
crime, and streamline legal processes. However, responsible development,
ethical considerations, and unwavering human oversight will be paramount in
ensuring that AI serves as a force for progress, not a detriment to the very
justice system it seeks to enhance.
[1] Andrea Roth, Trial by Machine,
104 Georgetown Law Journal 1, 2-25
(2016).
[2] Technology Evaluation Centres, Kira Systems Reviews, Pricing and
Features - 2024 (technologyevaluation.com) (last visited Mar. 24, 2024).
[3] Sabine Gless, AI in the Courtroom: A Comparative Analysis of Machine Evidence
in Criminal Trials, 51 Georgetown Law Journal 195, 197-199 (2020).
[4] Mahmood Farooqui v. State (Govt.
of NCT of Delhi), MANU/DE/2901/2017.
[5] James Clayton & Ben Derico, Clearview AI used nearly 1m times by US
police, it tells the BBC, BBC (Mar. 28, 2023).
[6] United Nations, ctc_cted_factsheet_law_enforcement_dec_2021.pdf
(un.org) (last visited Mar. 1, 2024).
[7] Id.
[8] Will Douglas Heaven, Predictive policing algorithms are racist. They need
to be dismantled, MIT Technology Review (Jul. 17, 2020).
[9] Gless, supra note 3, at 211.
[10] Warren J. von Eschenbach, Transparency
and the Black Box Problem: Why We Do Not Trust AI, 34 Philosophy and Technology 1607,
1607-1622 (2021).
[11] Gless, supra note 3, at 207.
[12] Sidney Perkowitz, The Bias in the Machine: Facial Recognition Technology
and Racial Disparities, MIT Schwarzman College of Computing (Feb. 06, 2021).
[13] Id.
[14] Maria Stefania Cataleta, Humane
Artificial Intelligence: The Fragility of Human Rights Facing AI 5-7 (East-West Center, Working Paper No. 2,
2020).
[15] Id.