As artificial intelligence (AI) marches forward, it is expanding into many corners of society, including the legal system. Machine learning (ML) is a field of study under artificial intelligence in which a machine (in other words, a computer) uses algorithms to learn patterns from data and perform tasks without being explicitly programmed for each one. Human programmers create these algorithms to automate certain tasks, such as email spam filters, or to customize experiences, such as your Instagram or Facebook feed.
Studies comparing machine and human decision making in medicine have found that algorithms make equal or better decisions than humans in a wide range of situations. Algorithms are often applauded for making consistent, bias-free decisions. However, these algorithms are created by humans, and the personal and systemic biases of their creators can be built into the very tasks ML is supposedly completing bias-free. Research on the use of AI in policing and bail determinations has revealed racial biases, including increased surveillance of Black communities and higher bail amounts for members of racial and ethnic minorities.
In a recent study, Mori et al. assessed the use of ML in Japanese Assessment Centers as a means of predicting whether a person would commit another crime (recidivism). The study was based on the Ministry of Justice Case Assessment Tool (MJCA), which predicts recidivism from factors such as previous criminal acts, abuse in the home, and history of delinquency. The researchers applied several types of ML to 5,942 cases involving Japanese youth (mean age 16.5 years; 638 were female, with a mean age of 16.1 years) to see which methods predicted recidivism most accurately compared with the MJCA. They were able to identify an ML method that was more accurate than the MJCA at predicting recidivism.
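To give a sense of how such a comparison works, here is a minimal sketch in Python of benchmarking several ML classifiers on tabular risk-factor data using cross-validated AUC, a common accuracy measure for risk assessment tools. The synthetic data, feature setup, and particular models below are illustrative assumptions, not the actual data or methods used by Mori et al.

```python
# Illustrative sketch (not the study's actual code): comparing several ML
# classifiers on tabular risk-factor data using cross-validated AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Stand-in for risk-assessment data: each row is a case, each column a risk
# factor (e.g., prior offenses, history of delinquency), and the label marks
# whether the person reoffended. Sample size mirrors the study's 5,942 cases.
X, y = make_classification(n_samples=5942, n_features=20,
                           weights=[0.7], random_state=0)

# Candidate ML methods to compare (model choices are assumptions here).
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each model with 5-fold cross-validated AUC; the method with the
# highest mean AUC would be the most accurate predictor on this data.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f} (+/- {auc.std():.3f})")
```

In practice, each candidate model's score would also be compared against the AUC of the existing tool (here, the MJCA) on the same cases, so that any improvement can be measured directly.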
The use of this technology going forward is inevitable. In this study, it is being used to identify methods that successfully predict future crime for youth involved in the justice system. This improves upon existing tools and allows more accurate determinations of the level of supervision required. There is still a concern that because ML is programmed by humans, and the data it learns from are chosen and collected by humans, it can still be subject to bias. Nevertheless, improving risk assessment tools is one instance in which the use of ML might benefit the legal system.