AI is showing up in court cases – but only a human jury can grapple with the moral weight of assessing guilt
- Written by Sonali Chakravarti, Professor of Government, Wesleyan University
“Mercy[1],” a film released in January 2026, depicts a dystopian Los Angeles in the near future: a city riddled with violence, homelessness and civic disorder. California’s response is to set up the Mercy Capital Court, run entirely by an AI bot that goes by the name Judge Maddox. The judge can analyze evidence, determine whether the threshold for guilt has been met and execute the defendant – all within the span of 90 minutes.
Actor Chris Pratt plays a police officer named Chris Raven, who stands accused of murdering his wife. If he wants to leave the Mercy Court alive, he must do everything he can to lower his “guilt score” – the AI’s assessment of whether he’s the killer – from 97.5% to 92%.
AI judges may still be in the realm of science fiction, but AI tools are entering the courtroom. Risk-assessment tools now help judges make decisions about bail[2], and lawyers and judges have used AI to research legal precedent[3]. Some judges are even experimenting with it to formulate rulings[4], and simulations have used AI tools[5] to stand in for human jurors.
“Mercy” does not appear to take itself too seriously as a commentary on the legal system. But the idea that an AI bot can determine a verdict by assessing evidence distorts the meaning of legal judgment.
As a scholar who studies juries[6], I believe AI obscures the importance of what human decision-makers bring to the task, and why they are essential for the legitimacy of the legal system. Since the Middle Ages, jurors have had to grapple with the weight of determining guilt – including having serious reservations about the quality of the evidence, the legitimacy of punishment and the impossibility of complete knowledge about the case.
Features, not bugs
The weight of evidence in a criminal case cannot easily be tallied on a scoreboard. Interpreting what the evidence means is often difficult – not just intellectually but emotionally[7]. The gravity of possibly inflicting pain on an innocent person is an essential part of judgment.
Jurors are linked in a web of relationships to the defendant, the victim and others affected by the crime. They can’t help but consider the consequences of the crime and of the verdict, and they imagine what it would feel like to be in the defendant’s shoes. How could a juror not feel doubt about their decision with all these factors weighing on them?
AI systems are trained to maximize predictive certainty[9]: That is, they offer suggestions based on previous patterns or on the training they have received. They cannot weigh different outcomes in light of prior experiences or collective ideals. Getting information from AI can feel like a salve for the thorny work of complex moral and legal decision-making, but it is the wrong kind of answer for the question of whether someone should be punished by the state.
Philosopher Brian Cantwell Smith[10] argued that while AI can make powerful, calculative decisions[11], judgment requires something else: human deliberation about how to apply ethical ideals under particular conditions, and grappling with others’ views about what is at stake. It is neither purely rational nor purely emotional. In order to take responsibility for its own decision, a jury needs judgment, not mere calculation inspired by what a machine considers the optimal outcome.
Wrestling with doubt
AI systems will likely continue to improve their performance on benchmarked tasks relevant to law and jurisprudence[12] – aiding with research, identifying patterns in large troves of evidence, expediting administrative tasks – but they cannot perform the task of jurors themselves. This is especially true when it comes to doubt: Whereas an AI tool registers the quantity of uncertainty, jurors must be attuned to the quality of their uncertainty. They must weigh whether it signals the need for more discussion or whether the evidence is simply insufficient.
Jurors are told to determine whether the prosecution has proved its case “beyond a reasonable doubt[13].” That is meant to set a very high bar for the evidence and for jurors’ confidence about its meaning. Yet grappling with what the reasonable doubt standard means[14] is one of the most intellectually challenging aspects of being a juror. Judges tend to give a minimal description to jurors – saying that jurors should be firmly convinced before convicting someone, for example. Each group of jurors must discuss how to interpret the standard and whether the threshold for evidence has been met.
Legal scholar James Q. Whitman’s[15] research on the history of reasonable doubt[16] traces its origins back to the Middle Ages. Christian jurors were afraid to take on the tasks of judgment and punishment, which they believed properly belonged to God.
Eventually, by the 1700s, courts codified the phrase “guilt beyond a reasonable doubt” to acknowledge human hesitation over jurors’ role in punishment. Jurors are not asked to be omnipotent. Confidence in a conviction can coexist with appropriate ambivalence about the process and their own fallibility.
In order to convict, a jury must be unanimous[17] – a requirement that Whitman suggested can provide “moral comfort[18]” to mortals issuing a guilty verdict. Unanimity raises the bar for evidence and also allows “the twelve to share the heavy moral responsibility for judgment, and therefore to diffuse it among themselves.”
It is a distinct moral landscape: neither divine judgment nor algorithmic reckoning. A room of people deliberating may seem less efficient than AI, but it is a necessary component of the justice system’s moral legitimacy. Wrestling with doubt about the evidence, the verdict and its impact on the world is a way for jurors to remember their responsibility; it is not a step to be erased en route to the verdict. A jury decision symbolizes willingness to bear accountability for imposing a punishment.
Uniquely human
AI cannot replace human judges and jurors, but perhaps it can help them see their task more clearly.
In the 1800s, Karl Marx used the term “species-being[20]” to refer to conscious, purposeful activities that only humans can do, especially creative activities. Today, in light of AI’s pervasiveness, there is value in considering where we want to experience a sense of species-being.
By cordoning off certain parts of our lives from AI, we can practice the feeling of unease that can come from not having an easy tool to tell us what we should do – whether in a jury room or anywhere else. Decisions that cause unease are often ones that make us choose between different values, and we must be prepared to live with the consequences.
Fantasizing that AI tools will deliver us from the messy, tedious and emotionally wrenching work of criminal legal decisions is understandable. But collective governance is something only humans can achieve – acutely aware of our capacities for both good and evil.
References
- ^ Mercy (www.imdb.com)
- ^ make decisions about bail (bailproject.org)
- ^ used AI to research legal precedent (www.americanbar.org)
- ^ to formulate rulings (www.theguardian.com)
- ^ have used AI tools (www.theguardian.com)
- ^ a scholar who studies juries (www.wesleyan.edu)
- ^ not just intellectually but emotionally (doi.org)
- ^ to maximize predictive certainty (www.ibm.com)
- ^ Philosopher Brian Cantwell Smith (scholar.google.com)
- ^ AI can make powerful, calculative decisions (mitpress.mit.edu)
- ^ benchmarked tasks relevant to law and jurisprudence (www.thomsonreuters.com)
- ^ beyond a reasonable doubt (www.law.cornell.edu)
- ^ what the reasonable doubt standard means (www.ce9.uscourts.gov)
- ^ Legal scholar James Q. Whitman’s (law.yale.edu)
- ^ research on the history of reasonable doubt (yalebooks.yale.edu)
- ^ must be unanimous (doi.org)
- ^ moral comfort (yalebooks.yale.edu)
- ^ used the term “species-being (www.marxists.org)

