The justice system uses pre- and post-trial risk assessment tools to measure several risk factors for defendants, such as rearrest, failure to appear in court, and committing further violations. These tools have been widely used by officers to determine which individuals are suitable for release and to help implement supervision strategies (Administrative Office of the U.S. Courts, 2021). Risk assessment tools are highly debated within the justice system: some believe these tools can improve the system by reclassifying offenders who otherwise would have been considered high risk, eventually decreasing overincarceration and recidivism; others raise concerns that risk assessment tools may exacerbate racial and ethnic disparities in the criminal justice system.
This concern over the potential to increase punitive sanctions for racial and ethnic minorities is not unfounded: many risk assessment tools assign higher risk scores to defendants from racial and ethnic minority groups than to White defendants. For example, minorities and people of color are more likely to live in poverty or to have other disadvantages that might inflate their risk scores, which can then be used to justify harsher sentences.
In “Impact of risk assessment instruments on rates of pretrial detention, postconviction placements, and release: A systematic review and meta-analysis,” Viljoen and colleagues (2019) examined the existing research on risk assessment tools to identify whether their use can have a positive effect on criminal justice proceedings. A meta-analysis of 22 studies on pre- and post-trial risk assessment tools showed that using these tools had some small effects on restrictive placements and recidivism. However, the authors also encountered several problems with the existing body of research on risk assessment.
Specifically, these studies are often poor in quality, do not examine racial and ethnic disparities, and have produced widely inconsistent findings. Even within this meta-analysis of 22 studies, the significant effects on restrictive placements and recidivism do not hold up when highly biased studies are removed from the analysis. Although this cannot tell us much about risk assessment as a concept, it does showcase major problems with the current line of research on risk assessment.
Therefore, the authors concluded that more extensive research is required to determine how risk assessment tools impact our justice system. Viljoen and colleagues (2019) even recommended using controlled experimental designs to observe how these tools influence judges’ decision-making processes.
Thus, risk assessment tools raise concerns, but their actual impact on the justice system remains unclear. There has been a movement to reduce bias in the system by eliminating subjective decisions (by probation officers or judges) in favor of more objective decision options (such as risk assessment tools or algorithms), which sounds ideal! But ultimately, these “objective” systems are only as unbiased as the programmers and data used to create them. Often, it is even more difficult to identify and challenge the bias within these “objective” systems. One major issue, as Viljoen and colleagues (2019) identified, is the lack of reliable data.
There are several obstacles preventing society from accessing the data. First, it might not exist. Many agencies making these determinations do not collaborate with social scientists when deciding what information to collect, so they may not collect or store all of the information researchers would want.

Second, even when the information exists, it is difficult for researchers to access. Some researchers (like Dr. Reed and other Legal Psychology professors at UTEP) have had the fortune of working closely with agencies that allow them to access data in order to identify areas of improvement. But there are many hurdles to these collaborations: agencies are busy, don’t know (or don’t want to admit) there is a problem, lack the resources to do research, or do not have relationships with researchers.

Furthermore, there are political limitations. Agencies that overcome these hurdles and bring in researchers to help identify and solve their issues do not necessarily want those issues made public (especially if doing so calls into question all of the decisions they have made up to that point). This creates a difficult balance for researchers: you appreciate the agencies for calling you in to make evidence-based, data-driven changes, and you do not want to burn bridges by publishing their problematic data. Thus, even when data exists and researchers have access to it, they may be unable to publish it, face restrictions on what they can publish, or be limited by what was originally collected. Ultimately, we as researchers have a responsibility to establish relationships with these agencies and work to make data-driven changes internally, publishing to the academic community when possible. We also need to use a variety of methods to get a better perspective on the real impact of risk assessment tools in the justice system.
This is especially urgent if these “objective” tools perpetuate biases against racial and ethnic minorities in the criminal justice system.