The use of psychological expert witnesses in court cases has been somewhat controversial in the United States. There is disagreement in the legal field as to whether expert psychological testimony rests on a proper scientific foundation. Though such experts can provide valuable insight into the case at hand, there is a question of whether a psychological expert can present reliable and valid evidence, and whether a judge or juror would be able to recognize when an expert’s testimony is not reliable or valid. If a juror is unable to distinguish between reliable and unreliable information, how might that impact their decision-making in a case?
In “Variations in reliability and validity do not influence judge, attorney, and mock juror decisions about psychological expert evidence,” Chorn and Kovera (2019) examined this very issue. Across two studies, the researchers explored how variations in the reliability and validity of expert evidence affect judges’, attorneys’, and jurors’ ability to identify and evaluate the quality of the evidence presented to them. Overall, the evidence suggests that none of these parties (judges, attorneys, jurors) are able to accurately identify when evidence lacks reliability and validity.
In the first study, the researchers highlighted evidence that judges generally are unable to distinguish whether psychological expert evidence is reliable or valid, and will frequently admit unreliable and invalid data into evidence. Additionally, both judges and attorneys were insensitive to variations in the validity of previous research presented to them, and attorneys were not able to catch unreliable and invalid evidence that judges had admitted. The researchers also investigated whether judges and attorneys would be able to craft cross-examination questions that specifically targeted the reliability and validity of the psychological expert’s evidence. Though there was a small increase in the number of these targeted questions when validity was threatened, still fewer than half of judges and only a fifth of attorneys asked such questions in cross-examination.
The second study examined how the findings from study 1 might impact jurors’ ability to identify unreliable and invalid data. The researchers wanted to see whether attorneys could educate jurors about reliability and validity through their cross-examinations of the psychological expert witnesses. Results suggested that cross-examination helped jurors identify bad evidence only when the problem with reliability and validity was obvious and easy for jurors to understand (e.g., a missing control group), and not when the threat was less obvious or simple (e.g., poor construct validity).
When unreliable or invalid data is admitted into evidence at trial, it can compromise the integrity of a verdict, especially if it goes unrecognized by judges, attorneys, and jurors alike. Though the results of these two studies suggest that, at present, actors in our legal system are unable to identify bad data presented to them, there is room for improvement. The authors suggest that one solution is to better train judges and attorneys to identify unreliable and invalid data. This would take the burden off jurors to understand and identify threats to reliability and validity, and would place the responsibility on judges and attorneys to ensure that bad evidence never makes its way into the courtroom in the first place. With the ability to better identify (and exclude) unreliable and invalid data, perhaps we can help mitigate some of the controversy around psychological expert testimony.