Cornell Law School

Volume 109, Issue 3

Article

The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice

Brandon L. Garrett & Cynthia Rudin

Brandon L. Garrett is the L. Neil Williams, Jr. Distinguished Professor of Law, Duke University School of Law, and Faculty Director, Wilson Center for Science and Justice. Cynthia Rudin is the Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics, Duke University.

23 Apr 2024

Artificial intelligence (“AI”) is increasingly used to make important decisions that affect individuals and society. As governments and corporations use AI more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create AI models too complex for people to understand, or they conceal how the AI functions. Policymakers and the public increasingly sound alarms about black box AI. A particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as DNA mixture interpretation, facial recognition, recidivism risk assessments, and predictive policing. Despite constitutional criminal procedure protections, judges have often embraced claims that AI should remain undisclosed in court.

Both champions and critics of AI, however, mistakenly assume that we inevitably face a trade-off: black box AI may be incomprehensible, but it performs more accurately. But that is not so. In this Article, we question the basis for this assumption, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error prone, and it may reflect preexisting racial and socioeconomic disparities. Unless AI is interpretable, decisionmakers like lawyers and judges who must use it will not be able to detect those underlying errors, much less understand what the AI recommendation means.

Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI a black box, and given the substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should be—the presumption should be in favor of glass box AI, absent strong evidence to the contrary. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.
