COMPAS (software)
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is case management and decision support software developed and owned by Northpointe (now Equivant), used by U.S. courts to assess the likelihood of a defendant becoming a recidivist.
COMPAS has been used by the U.S. states of New York, Wisconsin, California, Florida's Broward County, and other jurisdictions.
Background
COMPAS was created in 1998 by Northpointe, Inc., which merged with other justice technology firms to become Equivant in January 2017. It is categorized as a "fourth-generation" risk assessment instrument because it evaluates both static data (such as prior criminal record) and dynamic "criminogenic" needs (such as current social environment and employment status).
While originally designed for correctional rehabilitation planning, its use expanded to judicial sentencing. In 2012, the Wisconsin Department of Corrections adopted COMPAS statewide, a move that later led to the landmark legal challenge in Loomis v. Wisconsin.
Risk assessment
The COMPAS software uses an algorithm to assess potential recidivism risk. Northpointe created risk scales for general and violent recidivism, and for pretrial misconduct. According to the COMPAS Practitioner's Guide, the scales were designed using behavioral and psychological constructs "of very high relevance to recidivism and criminal careers."
Pretrial release risk scale
Pretrial risk is a measure of the potential for an individual to fail to appear and/or to commit new felonies while on release. According to the research that informed the creation of the scale, "current charges, pending charges, prior arrest history, previous pretrial failure, residential stability, employment status, community ties, and substance abuse" are the most significant indicators affecting pretrial risk scores.
General recidivism scale
The general recidivism scale is designed to predict new offenses committed after release, or after the COMPAS assessment is given. The scale draws on an individual's criminal history and associates, drug involvement, and indications of juvenile delinquency.
Violent recidivism scale
The violent recidivism score is meant to predict violent offenses following release. The scale uses data or indicators that include a person's "history of violence, history of non-compliance, vocational/educational problems, the person's age-at-intake and the person's age-at-first-arrest."
The violent recidivism risk scale is calculated as follows:
s = a(−w) + a_first(−w) + h_violence·w + v_edu·w + h_nc·w
where s is the violent recidivism risk score, w is a weight multiplier, a is current age, a_first is the age at first arrest, h_violence is the history of violence, v_edu is the vocational/educational problems scale, and h_nc is the history of noncompliance. The weight, w, is "determined by the strength of the item's relationship to person offense recidivism that we observed in our study data."
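The published description of the scale can be sketched directly from the formula above. The weight value below is a hypothetical placeholder: Northpointe's actual item weights are proprietary and have not been released.

```python
def violent_risk_score(age, age_first_arrest, history_violence,
                       vocational_edu, history_noncompliance, w=1.0):
    """Raw score s = a(-w) + a_first(-w) + h_violence*w + v_edu*w + h_nc*w.

    The two age terms carry a negative weight (older defendants, and those
    first arrested later in life, score lower); the three history/needs
    items add to the score with a positive weight. The default w=1.0 is
    illustrative only.
    """
    return (age * -w
            + age_first_arrest * -w
            + history_violence * w
            + vocational_edu * w
            + history_noncompliance * w)
```

In practice the raw score would then be mapped onto COMPAS's reported decile scale; that mapping is not public, so it is omitted here.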
Support and criticism
Risk assessment tools such as COMPAS are used because of the desire for objective, evidence-based sentencing procedures as well as increased efficiency in the court system. Proponents of using AI and algorithms in the courtroom tend to argue that these solutions will mitigate predictable biases and errors in judges' reasoning, such as the hungry judge effect (the phenomenon that judges are more likely to make lenient decisions after eating a meal). Alternatives to risk assessment tools are possible, but are difficult to implement.
A general critique of the use of proprietary software such as COMPAS is that since the algorithms it uses are trade secrets, they cannot be examined by the public and affected parties, which has been described as a violation of due process. Additionally, simple, transparent and more interpretable algorithms have been shown to perform predictions approximately as well as the COMPAS algorithm. Existing analyses of the algorithms have used the publicly available questionnaires and reverse-engineered approximations based on the publicly available data.
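The "simple, transparent" alternatives referred to above are typically rules built from only a handful of features. The two-feature rule below is an invented illustration of that style of model, not any published classifier; the features (age and prior convictions) echo those studied in the literature, but the thresholds are arbitrary.

```python
def simple_risk_rule(age, prior_convictions):
    """A hypothetical two-feature 'high risk' rule.

    Fully inspectable: a defendant is flagged either for an extensive
    prior record, or for being young with any prior record at all.
    """
    return prior_convictions > 2 or (age < 23 and prior_convictions > 0)
```

The point of such rules is not their exact accuracy but their auditability: every flagged case can be traced to two observable inputs, unlike a proprietary score.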
Another general criticism of machine-learning based algorithms is that since they are data-dependent, if the data are biased, the software will likely yield biased results. Similarly, the initial version of the related LSI-R algorithm was primarily trained on Caucasian offenders, which resulted in lower validity for black and Latino offenders. Algorithms may also exhibit other types of bias which are given less attention due to the focus on racial bias.
COMPAS risk assessments have been argued to violate 14th Amendment Equal Protection rights on the basis of race, since the algorithms are argued to be racially discriminatory, to result in disparate treatment, and to not be narrowly tailored.
Accuracy
Empirical analysis of algorithmic risk assessment tools was spurred by a 2016 ProPublica investigation of COMPAS and a subsequent study by Dressel and Farid (2018). ProPublica found that COMPAS was racially biased against black defendants, while Northpointe responded that the algorithm predicted recidivism accurately regardless of race. Counterintuitively, both statements can be true, because they rest on different definitions of fairness. ProPublica focused on error rates: black defendants who did not go on to reoffend were flagged as high risk at a higher rate than comparable white defendants. Northpointe focused on calibration: a given risk score corresponds to roughly the same probability of recidivism in either group. When base rates of rearrest differ between groups, an algorithm cannot satisfy both criteria at once; if it is fair on one metric, it will appear biased on the other.
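The tension between the two fairness definitions can be shown with a small numerical sketch. All figures below are invented for illustration; they are not the Broward County data.

```python
# Suppose a score is perfectly calibrated: 60% of people flagged
# "high risk" go on to reoffend, in BOTH groups, but the groups have
# different underlying (base) rates of reoffending.
#
# Group A: 100 people, base rate 50% -> 50 reoffend, 50 do not.
#   50 are flagged; calibration => 30 flagged reoffend, 20 do not.
# Group B: 100 people, base rate 30% -> 30 reoffend, 70 do not.
#   25 are flagged; calibration => 15 flagged reoffend, 10 do not.

def false_positive_rate(flagged_nonreoffenders, total_nonreoffenders):
    """Share of people who did NOT reoffend but were flagged high risk."""
    return flagged_nonreoffenders / total_nonreoffenders

fpr_a = false_positive_rate(20, 50)  # group A non-reoffenders: 100 - 50
fpr_b = false_positive_rate(10, 70)  # group B non-reoffenders: 100 - 30

# The score is equally well calibrated for both groups, yet group A's
# non-reoffenders are flagged high risk far more often than group B's.
```

Here `fpr_a` is 0.40 while `fpr_b` is about 0.14: equal calibration plus unequal base rates mechanically produces unequal error rates, which is the mathematical core of the ProPublica/Northpointe disagreement.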
The study by Dressel and Farid (2018) found that COMPAS software is somewhat more accurate than individuals with little or no criminal justice expertise, yet less accurate than groups of such individuals. However, a subsequent review found that these results "seem[ed] like a specific occurrence and less reflective of general and real conditions" and that algorithms performed better than humans under conditions closer to the real world. For example, a replication study found that the algorithms did better when the chance (base rate) of rearrest was low, while Dressel and Farid assumed that recidivism and non-recidivism were about equally likely.
Risk assessment tools do not explicitly incorporate race, and doing so would likely violate the US constitution. However, because factors such as education level or employment status are correlated with race, algorithms using these factors produce different results for different racial groups.
One of the proposed benefits of risk assessment tools is an expected reduction in incarceration rates. In 2024, an analysis of the practical impact of COMPAS in Broward County found that its use led to a reduced rate of confinement across demographic groups, but that it also exacerbated the differences between racial groups.
Legal rulings
In July 2016, the Wisconsin Supreme Court ruled that judges may consider COMPAS risk scores during sentencing, but that the scores must be accompanied by warnings describing the tool's "limitations and cautions."
See also
- Algorithmic bias
- Garbage in, garbage out
- Legal expert systems
- Loomis v. Wisconsin
- Criminal sentencing in the United States
Further reading
- Northpointe (March 15, 2015). (PDF).
- Angwin, Julia; Larson, Jeff (May 23, 2016). . ProPublica.
- Flores, Anthony; Lowenkamp, Christopher; Bechtel, Kristin. (PDF). Community Resources for Justice.