Failure No. 2: Risk Assessments Are Wrong Much of the Time

Using an algorithm to predict a defendant's risk of failing to appear or of committing a new crime is fraught with problems.  As early as 2017, commentators pointed out that when big data contains bad data, it can lead to big problems for the organizations that rely on it.  Nate Silver, who makes his living making predictions from data, has been quoted as saying, "We're not that much smarter than we used to be, even though we have much more information-- and that means the real skill now is learning how to pick out the useful information from all this noise."

The issues that cause risk assessment tools to be wrong much of the time fall into two areas: (1) data issues; and (2) systemic algorithm issues.  The first area is when the data put into and considered by the risk assessment is incorrect, and these problems take several forms.  In jurisdictions where a defendant's criminal history is not available in a central database, the information relied upon for the assessment may be incomplete and produce a wrong result.  Additionally, current risk assessment tools normally consider only convictions, not arrests.  A criminal history may therefore show that the defendant has entered a downward spiral over the last month, but because those recent arrests have not produced convictions, the tool ignores them and returns a lower risk score than it would if the arrests had been considered.  Further, a tool may limit the criminal history it considers to the last two years.  In some situations these limitations may put the tool out of compliance with state law; if the state mandates the tool's use but has not funded the programming changes needed to bring it into compliance, the tool will keep producing results that do not comply with the state's laws.  The same would be true any time the state changed the factors to be considered in setting bail: when the law changes, the tool must be re-programmed.  In addition, in some smaller counties, staff may rely on the defendant's own reporting of his criminal history, and that information may be incorrect for any number of reasons.  In those situations, the result of the assessment will also be wrong.
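To see how these input limitations can skew a score, consider a minimal sketch in Python.  The records, scoring rule, weights, and two-year lookback window below are invented for illustration; they are assumptions, not the actual logic of any real risk assessment tool.

    from datetime import date, timedelta

    # Hypothetical criminal history: (date, type), where type is
    # "arrest" or "conviction".  Numbers and dates are invented.
    criminal_history = [
        (date(2020, 6, 1), "arrest"),       # recent arrest -- part of a "downward spiral"
        (date(2020, 6, 15), "arrest"),      # another recent arrest
        (date(2017, 3, 10), "conviction"),  # older conviction, outside a 2-year window
        (date(2019, 1, 5), "conviction"),   # conviction inside a 2-year window
    ]

    def risk_score(history, today, lookback_years=2):
        """Toy scoring rule: count only convictions within the lookback window.

        This mirrors the two limitations described above: arrests are
        ignored entirely, and anything older than the window is dropped.
        """
        cutoff = today - timedelta(days=365 * lookback_years)
        counted = [e for e in history if e[1] == "conviction" and e[0] >= cutoff]
        return len(counted)  # one "point" per counted conviction

    today = date(2020, 7, 1)
    print(risk_score(criminal_history, today))      # 1 -- the recent arrests never count
    print(risk_score(criminal_history, today, 10))  # 2 -- a longer window changes the score

Under these assumed rules, a defendant arrested twice in the last month scores exactly the same as one with no recent contact with law enforcement at all, which is the understatement problem described above.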

In Texas, the Governor issued an executive order during the COVID-19 pandemic requiring that criminal histories be reviewed before bail is set and providing that individuals with certain criminal histories are not entitled to PR bonds.

It would seem that a review of a defendant's criminal history would give the magistrate more information than the result of a risk assessment tool.  It would give the magistrate the entire picture of the defendant instead of withholding or excluding valid, relevant information from the court's consideration.

The second area that causes the results of the risk assessment tool to be wrong arises from systemic problems with the tool itself.  The algorithm was designed to predict what a group would do, not what an individual would do, so the tool is being applied in situations for which it was not designed.  In predicting what an individual Caucasian defendant will do, the algorithm relies in part on historical data for Caucasians as a group, causing the assessment to start from a lower base score simply because of race.  The opposite is true as well: when an individual Black defendant is assessed, the algorithm bases its score in part on historical data for Black defendants as a group, causing that defendant's base score to be higher simply because of the color of his skin.
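A toy sketch can illustrate how a group-level base rate shifts an individual's score before any individual facts are considered.  The base rates, weights, and neutral group labels below are invented for illustration only and do not come from any real tool.

    # Hypothetical group base rates, as an algorithm trained on historical
    # group-level data might encode them (numbers invented for illustration).
    GROUP_BASE_RATE = {
        "group_a": 0.20,  # lower historical failure rate for this group
        "group_b": 0.35,  # higher historical failure rate for this group
    }

    def individual_score(group, prior_convictions, base_weight=0.6, record_weight=0.1):
        """Toy score: a weighted group base rate plus a per-conviction increment.

        Two defendants with identical records receive different scores
        purely because of their group membership -- the effect described above.
        """
        return base_weight * GROUP_BASE_RATE[group] + record_weight * prior_convictions

    # Identical individual records, different starting points:
    print(individual_score("group_a", prior_convictions=1))  # 0.22
    print(individual_score("group_b", prior_convictions=1))  # 0.31

In this sketch the individual's own record contributes the same amount to both scores; the entire difference comes from the group term, which the individual defendant can do nothing about.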

Two recent studies have reviewed these problems.  The first, released in July 2019, demonstrated that risk assessment tools were incorrect regarding certain racial groups at least 37% of the time.  The second, released on July 1, 2020, documented that risk assessment tools were incorrect as much as 50% of the time for the same racial group.

Finally, when a risk assessment is wrong, it has a substantial impact on the defendant.  As failures to appear increase, and if trial courts decide to take action to address growing numbers of pending cases, these systemic issues will cause a disproportionate percentage of certain races to be detained for no reason other than the increased risk built into the assessment tool through its consideration of race.

Because risk assessment tools are wrong much of the time, they should not be used as part of criminal justice reform.
