Generating Bias: Why Artificial Intelligence is Not an Intelligent Choice for Jury Selection

Sarah Ferguson, Winner of PMC's 2025 Law Student Writing Competition

After a three-day jury trial, criminal defendant Miguel Angel Peña-Rodriguez was found guilty of unlawful sexual contact and harassment.1 Following the trial, two jurors stayed behind and asked to speak with defense counsel privately.2 During this conversation, the jurors revealed that, throughout jury deliberations, one juror had expressed numerous anti-Hispanic views in an effort to influence the other jurors.3 These views included statements that he “believed the defendant was guilty because, in [his] experience as an ex-law enforcement officer, Mexican men had a bravado that caused them to believe they could do whatever they wanted with women,” that “nine times out of ten Mexican men were guilty of being aggressive toward women and young girls,” and that the defendant’s witness was not credible because he was “an illegal.”

While the Supreme Court ultimately reversed and remanded the case in 2017 after these comments came to light, the episode raises a question: could this outcome have been avoided with more advanced screening during jury selection? And given the growing capabilities of artificial intelligence (AI) software, is AI the advanced screening solution that is needed?

Click here to read the full paper.