
Photo by Tim Hüfner on Unsplash

In a tense courtroom, a defendant stands waiting for a decision that could shape the rest of their life. The judge still presides, but beyond legal files and testimony lies something new: a risk score. Calculated by an algorithm from personal data such as age, employment, prior convictions, even neighborhood, this number claims to predict whether the individual is likely to reoffend. Presented as neutral and data-driven, it carries the weight of scientific objectivity. For some judges, it offers a sense of security. For defendants, it becomes a digital phantom that might determine their freedom. This is the new face of modern justice: algorithms of justice, where machines increasingly influence who walks free and who does not.

This is not a speculative scenario. In the United States, a system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) has been widely used to assess recidivism risk. Its logic is simple: a high-risk score suggests a higher likelihood of reoffending, which influences judges’ decisions on bail or sentencing. However, an investigative report by ProPublica (2016) revealed alarming biases. Black defendants were far more likely to receive high-risk scores even when they did not reoffend, while white defendants were often scored as low-risk even when they went on to commit further crimes. What was meant to be a neutral tool actually reinforced long-standing racial disparities in the justice system.
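To see what such a disparity audit looks like in practice, here is a minimal sketch of the kind of error-rate comparison ProPublica ran: for each group, compute how often people who did not reoffend were nevertheless flagged as high risk. The groups, scores, and the 0.6 threshold below are synthetic illustrations, not COMPAS outputs or ProPublica's actual code.

```python
# Minimal sketch of a ProPublica-style error-rate audit on synthetic data.
# Groups, scores, and the 0.6 threshold are invented; these are not COMPAS outputs.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)                 # hypothetical demographic groups
reoffended = rng.random(n) < 0.35                      # synthetic ground truth
# A deliberately biased score: group B is pushed upward regardless of outcome.
score = rng.random(n) + np.where(group == "B", 0.15, 0.0)
flagged = score > 0.6                                  # "high risk" label

for g in ("A", "B"):
    no_reoffense = (group == g) & ~reoffended          # people who did not reoffend
    fpr = flagged[no_reoffense].mean()                 # share wrongly labeled high risk
    print(f"group {g}: false positive rate = {fpr:.1%}")
```

The same few lines, pointed at real outcomes instead of synthetic ones, are essentially what turns a "neutral" score into an auditable claim about who it misclassifies.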

Academic critiques have grown louder. Dressel and Farid (2018) found that algorithmic predictions of recidivism were no more accurate than guesses made by ordinary people given only a few basic facts about each defendant. Worse, the logic behind these scores is often hidden from public scrutiny. Proprietary protections mean that neither defendants nor judges know how the risk scores are calculated. The result is a legal “black box” in a system that is supposed to be transparent and just.
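Dressel and Farid's point can be illustrated with a rough sketch of that kind of accuracy comparison: a model given only two basic facts often does about as well as one given many more. The features, effect sizes, and sample below are invented for illustration; real replications use the Broward County records that ProPublica released.

```python
# Rough, synthetic-data sketch of a Dressel & Farid style comparison:
# does a model with many inputs beat one with only two basic facts?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 5_000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)
extra = rng.normal(size=(n, 5))                        # stand-ins for additional inputs
risk = 0.08 * priors - 0.03 * (age - 18)               # simple underlying signal
reoffend = rng.random(n) < 1 / (1 + np.exp(-risk))

X_small = np.column_stack([age, priors])
X_large = np.column_stack([age, priors, extra])
train, test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

for name, X in [("2 features", X_small), ("7 features", X_large)]:
    model = LogisticRegression(max_iter=1000).fit(X[train], reoffend[train])
    acc = accuracy_score(reoffend[test], model.predict(X[test]))
    print(f"{name}: accuracy = {acc:.3f}")
```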

Beyond algorithmic opacity, the problem runs deeper. Almasoud and Idowu (2024) argue that predictive policing and sentencing algorithms often reinforce structural discrimination. Historical over-policing of certain communities leads to more arrests, which in turn generates more “risk data” associated with those neighborhoods. This creates a feedback loop in which marginalized populations are disproportionately flagged by systems trained on biased data. Far from correcting injustice, these tools encode and amplify it, wrapped in the false authority of technology.
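A deliberately simplified toy simulation (not a model of any real deployment) makes the mechanics of this loop visible: if patrols are simply sent wherever recorded arrests are highest, the neighborhood with the larger historical record keeps generating new records, even when underlying offense rates are identical. All numbers below are invented.

```python
# Toy simulation of a runaway feedback loop: patrols follow recorded arrests,
# so the neighborhood with the larger historical record keeps generating records.
import numpy as np

rng = np.random.default_rng(2)
true_rate = np.array([0.05, 0.05])            # two neighborhoods, same true offense rate
recorded = np.array([12.0, 10.0])             # slightly skewed historical arrest counts

for day in range(1000):
    target = int(np.argmax(recorded))         # "predictive" rule: go where the data points
    if rng.random() < true_rate[target]:      # arrests only happen where police look
        recorded[target] += 1

print("recorded arrests after 1000 days:", recorded)   # roughly [60+, 10]
```

The point is not realism but the shape of the dynamic: the data the system learns from is itself a product of where the system chose to look.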

Moreover, research by Han, Greenewald, and Shah (2025) shows that disparities don’t just exist in who is flagged as risky, but in how quickly they are judged to reoffend. Their study on time-to-recidivism found that even when controlling for relevant variables, racial disparities persisted. The algorithm did not simply predict outcomes; it shaped timelines and urgency, imposing a different temporal reality on different racial groups.
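A minimal time-to-event sketch helps make the idea concrete: instead of asking only "who reoffends," compare how quickly the "no recorded rearrest yet" curve falls for each group. The groups, follow-up window, and rearrest rates below are synthetic, and this hand-rolled Kaplan-Meier estimate is only an illustration, not the causal methodology Han, Greenewald, and Shah actually use.

```python
# Minimal time-to-event sketch on synthetic data: months to recorded rearrest
# within a 36-month follow-up window, compared across two invented groups.
import numpy as np

def kaplan_meier(durations, observed):
    """Product-limit estimate of P(no recorded rearrest yet) at each event time."""
    times = np.sort(np.unique(durations[observed]))
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)               # still being followed at time t
        events = np.sum((durations == t) & observed)   # rearrests recorded at time t
        s *= 1.0 - events / at_risk
        surv.append(s)
    return times, np.array(surv)

rng = np.random.default_rng(3)
for group, scale in [("A", 30.0), ("B", 20.0)]:        # group B is rearrested sooner
    durations = np.minimum(rng.exponential(scale, 500), 36.0)
    observed = durations < 36.0                        # False = censored at window end
    t, s = kaplan_meier(durations, observed)
    median = t[np.searchsorted(-s, -0.5)]              # first time survival drops to 50%
    print(f"group {group}: median months to recorded rearrest ≈ {median:.1f}")
```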

These concerns are compounded by findings from the ACM Conference on Fairness, Accountability, and Transparency (2023), which highlight how risk assessment tools, when used within unequal legal systems, exacerbate existing inequalities. Algorithms are often treated as scientifically valid and objective, yet their outputs are deeply shaped by biased histories and flawed assumptions. Once labeled “high-risk,” a defendant can face harsher sentences, greater surveillance, and fewer opportunities for rehabilitation, all based on a number they had no chance to question.

The rise of such tools challenges our understanding of democracy and justice itself. For centuries, justice has been understood as a deeply human process, the result of deliberation, dialogue, and careful consideration of moral responsibility. Judges, juries, and legal practitioners have always balanced the rigidity of the law with the fluidity of human experience: context, empathy, intention, and circumstance. A courtroom has never been merely about facts and figures; it has been a space where the complexity of human life collides with the formal structures of law. Yet, as algorithms enter this arena, those nuanced deliberations risk being flattened into mathematical calculations. A defendant’s future is reduced to a percentile, a likelihood, a label that claims to predict who they are and who they will become. The danger is not simply that numbers dominate the narrative, but that they erase the lived realities of individuals, silencing voices that the justice system was meant to hear. What was once a forum for moral reckoning threatens to become a mechanical sorting system, where the dignity of the person is overshadowed by the efficiency of the machine.

The key question is not whether we can use algorithms in courtrooms, but whether we should, and what ethical safeguards must govern such use. Legal systems have always evolved with technology, but rarely has there been a moment when the core principles of justice (fairness, accountability, and transparency) stood so directly at risk. If we surrender decision-making to opaque systems trained on biased and incomplete data, we risk constructing a façade of fairness that hides structural injustice. Such systems might appear efficient, neutral, and modern, but beneath the surface they carry the weight of historical prejudice and institutional inequality. If left unchecked, these digital ghosts will haunt our courtrooms, replacing the scales of justice with cold probability scores that pretend to be impartial but are anything but. In such a world, freedom itself may no longer rest on human judgment but on algorithmic fate, a fate shaped not by reasoned debate but by the hidden logic of code, owned and controlled by private entities with little accountability. The question for society is whether we will demand transparency and limits before this future takes hold, or whether we will accept the quiet erosion of justice in exchange for the illusion of certainty.

References

Almasoud, A. S., & Idowu, J. A. (2024). Algorithmic fairness in predictive policing. AI and Ethics, 5(3), 2323–2337. https://doi.org/10.1007/s43681-024-00541-3

Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580

Han, J. X., Greenewald, K., & Shah, D. (2025). Fairness is more than algorithms: Racial disparities in time-to-recidivism. arXiv. https://arxiv.org/abs/2504.18629

ProPublica. (2016, May 23). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

The progression of disparities within the criminal justice system: Differential enforcement and risk assessment instruments. (2023). In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). ACM. https://doi.org/10.1145/3593013.3594079
