Open Access Research Article

ALGORITHMS, ARTIFICIAL INTELLIGENCE & PREDICTIVE POLICING: A BRIEF COMMENT BY: AKSHAY SREEVATSA

Author(s):
AKSHAY SREEVATSA
Journal IJLRA
ISSN 2582-6433
Published 2024/05/13
Access Open Access
Issue 7


ALGORITHMS, ARTIFICIAL INTELLIGENCE & PREDICTIVE POLICING: A Brief Comment
 
AUTHORED BY: AKSHAY SREEVATSA
 
 
INTRODUCTION: A DANGEROUS NEW PARADIGM FOR CRIMINAL JUSTICE
Yuval Noah Harari warned in his prophetic book, “21 Lessons for the 21st Century,” that automation is an inevitability we must prepare for, and cautioned us that all that glitters is not gold. As a corollary, we may now deduce that all that is gold does not glitter, as J.R.R. Tolkien wisely wrote in a time when artifice was great, although in a vastly different iteration. Today, Artificial Intelligence[1] is the latest globally chic buzzword, social media is its greatest ally, and both are driven forward by the bellowing locomotive we call technology.
 
However, people have now begun to experience the dungeons of this, our latest once-edified artifice: human emotion and intelligence are bartered away for lassitude and privileged ennui, and the value of human capital and resources is reduced to a non est, which in so many ways exemplifies the irony of the human experience throughout history. While virtually everyone knows of and has benefitted from artificial intelligence, irrespective of the moral philosophies and ethical principles that guided them, it is my opinion that the dangers it poses cannot be overstated at this crucial juncture in our evolution as a species.
 
The scope and focus of this comment traverse only predictive policing through the use of AI, and are confined to identifying risks and seeking out safeguards against the tyranny it could cause. That said, this is only one part of a larger existential problem facing the criminal justice system, and the legal enterprise more generally. In the present context, the focus remains on the potentially adverse impact it could have on anyone, even unbeknownst to them.
 
I. ALGORITHMS & PREDICTIVE POLICING
An ‘algorithm’ is simply a set of automated instructions that processes a pool of data and produces outputs accordingly. In the context of predictive policing, an algorithm supplies the equation used to make predictions. Crucially, such algorithms are created by researchers who study large pools of data to identify the factors statistically most associated with offending – being male, association with criminal friends, previous criminal history, and often residential location and race – and who use those factors to classify individuals as likely offenders through algorithm-driven predictions.[2] Quoting another scholar’s (Selbst, 2017) simpler formulation, predictive policing refers to ‘criminal profiling using computer technology and data.’[3]
“[Predictive policing means] the use of historical data to create a forecast of areas of criminality or crime hot spots, or high-risk offender characteristic profiles, that will be one component of police resource allocation decisions. The resources will be allocated with the expectation that, with targeted deployment, criminal activity can be prevented, reduced, or disrupted.”[4]
Predictive policing is used by law enforcement today to prevent and pre-empt crime, by predicting who poses the greatest risk of turning into an offender in the future, or by identifying offenders at large through algorithmic predictions. Increasingly, law enforcement agencies are deploying Artificial Intelligence (‘AI’) or Automated Decision-Making (‘ADM’) systems to assess and identify potential threats or offenders – which can have devastating consequences when an individual is profiled and targeted as a criminal or potential offender without ever having committed an offence.
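
To make the mechanism concrete, what follows is a minimal, purely illustrative sketch of the kind of weighted scoring logic described above. It is not drawn from any actual policing system: the feature names, weights, and threshold are invented for exposition only.

    # Hypothetical illustration only: the features, weights and threshold below are
    # invented for exposition and do not reflect any real predictive-policing system.

    RISK_WEIGHTS = {
        "prior_arrests": 0.4,            # previous criminal history
        "known_offender_contacts": 0.3,  # association with criminal acquaintances
        "lives_in_hotspot_area": 0.2,    # residential location flagged in police data
        "is_young_male": 0.1,            # demographic proxy often criticised as biased
    }

    HIGH_RISK_THRESHOLD = 0.5


    def risk_score(person: dict) -> float:
        """Weighted sum of (0/1) risk factors for one individual."""
        return sum(weight * float(person.get(feature, 0))
                   for feature, weight in RISK_WEIGHTS.items())


    def classify(person: dict) -> str:
        """Label a person 'high risk' purely from the weighted score."""
        return "high risk" if risk_score(person) >= HIGH_RISK_THRESHOLD else "low risk"


    if __name__ == "__main__":
        # A teenager with no record, living in a heavily policed neighbourhood,
        # can still cross the threshold on factors outside his control.
        teenager = {"prior_arrests": 0, "known_offender_contacts": 1,
                    "lives_in_hotspot_area": 1, "is_young_male": 1}
        print(classify(teenager), round(risk_score(teenager), 2))

In this toy example, an individual with no prior record at all is still pushed over the ‘high risk’ threshold purely by demographic and locational factors, which is precisely the danger discussed in the sections that follow.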
 
Such a profile is the product of an algorithm designed to automate real-time decision-making from a pool of data previously gathered by law enforcement agencies around the world. That data is supplied to researchers, who assess it according to pre-defined factors and convert it into the algorithm that drives predictive policing – a technology conceptualised to eliminate human error in law enforcement and thereby aid in deterring crime.
 
However, recent studies, especially in the European Union, have demonstrated significant negative effects of predictive policing by personal profiling: the social cost of undermining faith in law enforcement agencies’ ability to do their jobs effectively, and a breach of trust in the criminal justice system and its fidelity to the Rule of Law and the principles that uphold it. This is to say nothing of the loss of confidence in the judiciary, as individuals are most often unaware that decisions about them are arrived at by automation, and not by human value judgement, the exercise of sound discretion, and the conscientious application of the law.
 
The result: reliance on AI has produced many negative consequences and has come at a significant social cost, often achieving the exact opposite of what it was intended to do. For the predicted offender, the legal implications range from being unduly harassed and restricted for no plausible reason, to being denied the presumption of innocence, thereby compromising the inviolable constitutional right to a fair trial – a right especially vital to the conduct of any criminal proceedings, given the harm they can cause an unfairly judged person.
 
For instance, in the Netherlands, the ‘Top 600’ is a list of youths deemed most likely to commit criminal offences. One in three of the individuals named on this list – many of whom report being repeatedly harassed by the police – is of Moroccan descent. In Italy, a predictive system known as ‘Delia’ incorporates ethnicity data to predict a person’s ‘future criminality,’ while in other jurisdictions, systems have been developed to ‘predict’ where crimes are likely to be committed, using as a baseline areas with large deprived communities or those populated by a specific race.[5] This clearly denigrates the right to equality in society and bypasses essential facets of humanity: the uniqueness of identity, the right to free choice, and the other unique circumstances that contribute to the development of individual identity. It deprives an individual of the right to choose and shape his own identity, which is instead chosen for him by computer technology that substitutes logic for experience and humanity. The most biting irony that completes this vicious cycle is that humanity may have advanced too far for its own good.
 
II. AI IN CRIMINAL JUSTICE: CHALLENGES & THE ROAD AHEAD
a. Discrimination and Inherent Bias
Studies in various jurisdictions, including Europe, provide substantial evidence that AI and machine-learning systems have a markedly negative influence on criminal justice, to the extent that such interfaces and platforms directly generate and reinforce outcomes that are discriminatory and manifestly unjust. These include infringements of fundamental rights, and there is no cogent or coherent evidence of any positive influence on human decisions, their quality, or their consistency. Rather, such systems have been shown to do the opposite, and have been widely criticised for inherent design flaws that abrogate basic human rights and, in doing so, directly denude human dignity by failing to comply with peremptory human rights standards.[6]
 
The reason is that the resource pool informing an automated decision-making system or other AI platform is the crime data gathered by law enforcement agencies, which contains only reports of outcomes and excludes the peculiarities of each crime and its occurrence. This necessarily excludes circumstances that could affect the decision-making process, reducing it to an unfair, black-or-white, either-or determination of guilt or innocence.
 
Thus, we are now faced with an automated, pre-designed system that produces ‘evaluative’ decisions built on pre-existing structural and institutional biases in policing, originating from human error. This could result in people being unfairly incarcerated on the basis of a mechanical algorithm incapable of accounting for factors beyond its intransigent pre-programmed logic. The result? Reproducing and exacerbating existing biases and discrimination based on race, socio-economic factors, ethnicity and other grounds, as the case may be – a feedback loop illustrated in the sketch below.
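
The following toy simulation sketches that feedback loop under invented numbers: the area labels, crime rates, patrol counts and historical records are all hypothetical. Two areas have an identical underlying rate of offending, but because patrols are allocated in proportion to recorded crime, the historically over-policed area keeps generating more records and therefore attracts ever more patrols.

    # Hypothetical simulation of the feedback loop described above: two areas have
    # the SAME true crime rate, but biased historical records mean more of area "B"'s
    # crime is observed, so the algorithm keeps sending patrols back to "B".
    import random

    random.seed(0)

    TRUE_CRIME_RATE = {"A": 0.30, "B": 0.30}   # identical underlying behaviour
    recorded_crimes = {"A": 10, "B": 30}       # unequal, biased historical records

    def allocate_patrols(records: dict, total: int = 10) -> dict:
        """Assign patrols in proportion to *recorded* (not actual) crime."""
        total_recorded = sum(records.values())
        return {area: round(total * n / total_recorded) for area, n in records.items()}

    for year in range(5):
        patrols = allocate_patrols(recorded_crimes)
        for area, n_patrols in patrols.items():
            # A patrol can only record crime where it is present, so the
            # over-policed area accumulates records faster despite equal rates.
            observed = sum(random.random() < TRUE_CRIME_RATE[area]
                           for _ in range(n_patrols))
            recorded_crimes[area] += observed
        print(year, patrols, recorded_crimes)

The point of the sketch is not the particular numbers but the structure: the system’s outputs shape the very data on which its future outputs are based.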
 
Even if free of bias, predictive policing can lead to inaccurate results. For instance, even where the location of a potential offence is identified correctly, there remains the problem of apprehending the correct suspect, which once again presents many opportunities for mistakes in human judgment, precisely because of the artificial intellect that informs it.
 
b. Undermining the Right to a Fair Trial & Presumption of Innocence
Common to almost all democratic and civilised systems of law, and indeed a pillar of the Rule of Law itself, is the right to be presumed innocent until proven guilty as part of the due process of law. When an automated system of justice is used to make this determination, people are at risk of being ‘profiled’ as guilty without the commission of an offence.
 
Such profiles and decisions are most often based not on an individual’s behaviour and actions alone, but also on factors far beyond their personal control: demographic information such as the neighbourhoods in which they live, or even the people with whom they may be in contact for completely legitimate and lawful reasons, yet which marks them as potential criminals because of the actions of others, wholly unrelated or unbeknownst to them.[7]
 
This undermines their right to a fair trial, and even the chance to defend themselves and defeat this presumption of guilt through an impartial adjudicatory process, which necessarily operates on a case-by-case basis, accounting for the unique facts, circumstances and other factors presented in each case. It discards the value of human intellect, empathy and emotion, deprives the defendant and the adjudicator of ‘judicial discretion,’ and precludes value judgements made on that basis. It promotes and produces a one-size-fits-all solution in matters of guilt or innocence, rather than judging cases on their individual merits (supposing there is a case to be made at all).
 
The ethical conundrum AI presents is this: for want of human emotion and the judicial discretion to make value judgments as needed, it is by design equipped to provide only one answer, or produce one set outcome, for every set of facts presented to it. The absolute logic that guides its determinations (whether correct or erroneous) cannot be controverted by any information that its guiding, opaque algorithm is not programmed to process.
 
The problem is especially acute in profiling and in predictive assessments of individuals or communities intended to identify offenders or estimate the probability of criminal activity, because the crime data with which such systems are populated and informed, supplied by law enforcement, is often uncorroborated.
 
The irony here is that AI and predictive policing claim to harness scientific precision to aid in the prevention of crime. Yet uncorroborated data is the very definition of an ‘unscientific methodology’ – unreliable, and therefore unacceptable, especially in what is now hailed as the latest frontier of scientific and technological advancement. Taking the above factors into account, it cannot be asserted with confidence that such a system is unbiased, nor can it be denied that it is prone to making irreparable errors.
 
Moreover, in this author’s opinion, such a system lacks the capacity to determine the value of human life. To an AI system, the value of a life is not something it can process, so a teenage youth growing up in a dangerous neighbourhood, even one with no affiliation to any criminal activity, could be assessed as a potential recidivist solely because of factors beyond his control, comprehension or knowledge. Nor does the predictive model possess the ability to determine the opportunity cost of the deprivation of life, liberty, and now, increasingly, the right to privacy. This argument is buttressed by the fact that no intellect – artificial, human, or of any other form present in the universe – can ever predict an individual’s potential for good, irrespective of his algorithmic profile’s conclusions to the contrary. Thus, when automated systems fail, the technology is rendered immune from sanction by virtue of the authority that automation confers on it, and is exonerated of any responsibility to indemnify the opportunity cost it has caused; the blame is instead unfairly placed on the human members of law enforcement and other branches of the justice system, who stand accused of professional incompetence.
 
These are serious threats that cannot be easily justified or dismissed. Lacking any present, active, and impartial application of the uniquely human intellect and its ability to draw crucial distinctions, informed by experience and by uniquely human emotional capacities – compassion, empathy, real-time understanding of unique circumstances, ethics and morality – the use of Artificial Intelligence can never be said to be a substitute for human judgment or intellect.
 
c. Lack of Transparency and Avenues for Redressal
It is widely agreed that State authorities, as custodians of the public’s trust, owe a duty of transparency to the public in the performance of their duties. Nowhere is transparency more relevant and pertinent than in the institutions of criminal justice and in any system that influences their administration and decisions. In contemporary times, however, technology is often a barrier to transparency rather than a gateway to it. Further, driven by profit motives, developers have been known to make deliberate efforts to conceal how these systems work, most likely because they are unwilling to expose flaws, questionable practices or developmental defects to public scrutiny, for fear of backlash.
 
Thus, people are left without the knowledge that they have been judged by an automated system, quite possibly running on a flawed algorithm, with no human checks and no element of human discretion in the decision-making process. In this context, criminal profiling and predictive assessments provide no clear avenues for righting any wrongs, nor for challenging the decisions themselves. It follows that, absent an effective method of challenge, avenues for redressal are severely restricted or lacking altogether.
 
The end result is that ordinary, innocent lives are put in jeopardy, especially when an erroneous profile makes a fair trial impossible because one’s guilt has been pre-determined by an unaccountable, opaque, automated logic. This is the perfect antithesis of Justice Oliver Wendell Holmes’ famous declaration that the life of the law has not been logic, but experience.
 
Conclusion
Given the relatively recent emergence of AI into the legal system in such a significant way, much of the literature referred to in this paper, and the ideas presented therein, is in large part a result of the resolutions and recommendations made by the Council of Europe and its several collaborators in this project.
 
Much of what has been stated above draws on the Report of the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs, which exhaustively describes the usefulness of AI in criminal justice while laying down very strong recommendations for how it must be harnessed and regulated to protect human interests, rights and dignity. It insists that the ultimate decision-maker must always be a human, and lays down a framework for how this might be accomplished through the cautious use of AI, safeguarding against the abrogation of fundamental freedoms and inalienable human rights. It deals with the matter exhaustively, from how AI must be used by law enforcement as well as judicial authorities, to the principle that AI must always be subordinated to justice for humanity.[8]
 
Given that the literature on the subject is still emerging, the technologies are still being perfected, and regulatory mechanisms are still being developed (and remain jurisdictionally bound), this paper has not attempted to provide an exhaustive list of the pros and cons of predictive policing, nor does it claim to be the final word on the subject.
 
Rather, my main aim has been to emphasise the potential dangers of relying on machine learning to tackle crime and to identify potential safeguards against them, while underscoring the danger of over-reliance on AI in predictive policing specifically, and of using its conclusions as a substitute for human intelligence, experience and judgment. And although predictive policing is only a very small part of the use of AI in criminal justice delivery, it is nevertheless imperative that it be used only as a reference to streamline the human decision-maker’s task, wherever it is deployed and for whatever purpose it is used.
 
As a concluding statement, I believe it necessary to emphasise once more that, within the narrow scope of predictive risk assessment and policing in criminal justice, algorithmic predictions must only ever be a tool to aid the human endeavour of crime prevention and criminal justice, rather than making human intellect and effort its subordinate and thereby rendering law enforcement officials so reliant on AI as to disincentivise them from improving their own expertise and sharpening the irreplaceable human skill required to actually deter crime and address criminal activity in real time. Those who deploy such systems must also maintain transparency in their operations, remain accountable for them, provide meaningful avenues for the redressal of wrongs, and be prepared to offer acceptable justifications when called upon to do so.


[1] According to the National Institute of Justice’s literature, Artificial Intelligence is defined as “the science and engineering of making intelligent machines,” a definition credited to John McCarthy, regarded as the father of AI.
Christopher Rigano, “Using Artificial Intelligence to Address Criminal Justice Needs,” available online at https://nij.ojp.gov/topics/articles/using-artificial-intelligence-address-criminal-justice-needs/ (published on 10.08.2018), last visited on 01.05.2024.
[2] Melissa Hamilton, “Predictive Policing through Risk Assessment,” in Predictive Policing and Artificial Intelligence, p. 60 (Routledge, 2021).
[3] As cited in Supra. Note 2.
[4] As cited in Supra. Note 2.
 
[5] “Artificial Intelligence (AI), Data and Criminal Justice,” available online at https://www.fairtrials.org/campaigns/ai-algorithms-data/#what-are-the-problems-with-ai-83, last visited on 02.05.2024.
[6] “Regulating Artificial Intelligence for Use in Criminal Justice Systems in the EU,” Policy Paper, available at Regulating-Artificial-Intelligence-for-Use-in-Criminal-Justice-Systems-Fair-Trials.pdf (fairtrials.org).
[7] Id.                  
[8] The complete report, officially titled “Report on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters” (A9-0232/2021), dated 13.07.2021, is available online at https://www.europarl.europa.eu/doceo/document/A-9-2021-0232_EN.html#_ftn8, last accessed on 28.04.2024.
