John Campbell and Jury Trial Data Analytics
By: Bell Law Firm
“Face the Jury” is a podcast dedicated to all issues involving medical malpractice – what it is, how to spot it and how to prevent it while protecting yourself and your family.
In this episode, we’re joined by John Campbell, founder of Empirical Jury and law professor at the University of Denver. John is making a tremendous impact in the industry by bringing data and analytics to trial lawyers, helping us make informed decisions and best present our case during a trial.
Lloyd: I always enjoy learning from you, John. Give our audience a thumbnail of who you are, a little about your background, what you do at Empirical Jury.
John: Sure. I started as a lawyer at the Simon Law Firm in St. Louis. It’s a firm started and still run by John Simon, a wonderful trial lawyer, Inner Circle member, and a very good man. That’s where I began trying cases. They told me they’d get me in the courtroom right away, and they were true to their word: I was in the courtroom early and often.
As I was there, I started running a class action department because we were facing some tort reform headwinds, and we were diversifying a little bit. I really enjoyed that, but of course, you don’t try many class-action lawsuits. You do a lot more motion practice and evidentiary hearings.
While that was going on, I got lucky, and the opportunity came around to join the University of Denver Law School. I started teaching legal research, writing, and torts. While I was there, I started to learn about people doing research using large samples, online samples, more statistical analysis, and they were doing it across all different fields. I got interested.
A small group of academics does this kind of work on juries, and I tried to get to know all of them. We published academic work that I think is valuable to the practicing bar. That led to studying a couple of cases and using a big data approach to understanding them (vs. a small, focus-group approach). It worked, and six years later, here we are at Empirical Jury studying cases all over the country. It’s the most fun I’ve probably ever had.
Lloyd: John, there have been so many people before you that have tried to crack the code of juries. Focus groups were used for many years to try to understand what a real jury might do. How is what you’re doing different from bringing in a random group of 12 people in a focus group setting to run your case?
John: The simple answer is, it’s a lot more people. Because of the ability today to access people who are online and looking for work, we can get four or five hundred jurors instead of 12. That brings real gains in statistical accuracy and completeness. If you have a focus group of 10 or 12 people, you learn what they think, and that’s great. But it’s tough to know whether those 10 or 12 people are outliers that day or whether that is, in fact, how most people would think about the case. In-person focus groups are still valuable in providing information, but you have to be cautious: if you start thinking in generalities based on that small sample, you might be way off. It could be a false positive. You think you’ve seen a pattern when really, you’ve just seen a coincidence.
Lloyd: I’m familiar with the concept that if you get a large group to answer a discrete question, the aggregate converges on the accurate answer. For example, take a huge jar of jellybeans. You’ll get widely divergent answers if you get 10 people guessing how many jellybeans there are. If you get 10,000 people guessing, weirdly, the average comes out very close to the true count. Can you talk about that concept and how it relates to what you’re doing with jury analytics?
John: Yeah, that’s part of it. Without getting too nerdy, there’s a lot of evidence from studies showing that if you get two or three hundred people to do something, you’ll wash out the outliers, and you can start to trust statistics like the average or the median. I think there’s one more piece to the puzzle. Deliberation tends to wash out outliers in a similar manner. It tends to move people towards something that looks like an average.
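The jellybean point can be illustrated with a quick simulation. This is a minimal sketch, not anything from Empirical Jury’s actual methodology: each guesser’s estimate is modeled as the true count distorted by a wide random error (the error range here is invented purely for illustration), and averaging many such guesses washes the noise out.

```python
import random

random.seed(42)
TRUE_COUNT = 4000  # hypothetical number of jellybeans in the jar

def mean_guess(n_guessers):
    """Average of n noisy individual guesses.

    Each guess is the true count scaled by a wide random error,
    so any single guess can be far off the mark.
    """
    guesses = [TRUE_COUNT * random.uniform(0.3, 1.7) for _ in range(n_guessers)]
    return sum(guesses) / len(guesses)

print(f"10 guessers:     {mean_guess(10):,.0f}")      # a focus-group-sized sample
print(f"10,000 guessers: {mean_guess(10_000):,.0f}")  # a crowd-sized sample
```

Run it a few times with different seeds and the 10-person average swings widely while the 10,000-person average stays pinned near the true count, which is the same reason a 500-juror sample is more trustworthy than a 12-person focus group.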
If we know the likely value jurors will assign to our case, we can have more confidence in the potential trial outcome. With big data, we can predict a likely average that maps well to what jurors will do in deliberation.
Lloyd: I am incredibly excited to begin working with you. Next, I’d like you to bring it from the theoretical 10,000-foot view to the ground. Explain to lawyers listening to this episode how this works on a practical level and your process and results.
John: The process, in a simple way, is this: when we meet with lawyers, we want to understand the case. We listen and show jurors a mediation statement – a plaintiff case and a defense case that’s some combination of text, images, and video. Jurors experience it almost like a website, which is intentional because jurors are good at consuming information that way. They read, look at accident or medical diagrams, see pictures of the plaintiff, and watch animations. Whatever is needed.
We work with the lawyers to get that together for both a plaintiff and defense case. We are adamant that the defense case needs to be as rich and detailed as the plaintiff case. We want jurors to look at both sides and have zero instinct about who is sponsoring the study. That part of the process is phase one.
Lloyd: Many trial lawyers – myself included – rely so much on prior experiences and small focus groups. There’s that instinct you touched on that if people know what you’re looking for, they tend to give you what you’re looking for. I’ve seen that in focus groups I’ve facilitated. I’m really impressed by your use of data. The way I think about it is that it combats a lot of the biases human beings are burdened with. The data seems to have a more useful role in protecting you from yourself and giving you real numbers and real responses from people to make decisions.
Talk to the lawyers listening about the data samplings and what they do with the data once it’s received. How can it be incorporated into their process?
John: The first thing is to think of it as a diagnostic. We send out reports with charts and language lawyers understand, and of course, a debrief call to walk through it. The first thing I always look at when I get the data is the win rate. What percentage of jurors vote for liability? We think we know that about cases, but I can tell you that sometimes the data tells a different story. Lawyers need to know that about their cases.
Next, I look at the awards. What are jurors doing with the damages? I just sent out a report for a case where an attorney felt strongly that his client had a demonstrable and clear TBI, and that no juror would doubt it. We looked at the awards, and about one out of every four jurors awarded just medical bills, which told us that they did not believe there was any ongoing, lasting injury. We needed to know that because we must check our own blind spots. How, then, do we convince these unconvinced jurors that there is a traumatic brain injury? It told us we need to be smart about using the patient as a witness. When you have comparative fault, you need to understand how much fault your client is likely to be assigned and how likely it is to impact the verdict. Those are some of the big diagnostic things.
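The two diagnostics described above can be sketched as a few lines over mock-juror results. This is a hedged illustration, not Empirical Jury’s reporting pipeline; every verdict and dollar figure below is invented.

```python
# Hypothetical mock-juror verdicts: a liability vote plus damages split
# into economic (medical bills) and non-economic components.
verdicts = [
    {"liable": True,  "economic": 250_000, "non_economic": 1_000_000},
    {"liable": True,  "economic": 250_000, "non_economic": 0},          # bills only
    {"liable": True,  "economic": 250_000, "non_economic": 600_000},
    {"liable": False, "economic": 0,       "non_economic": 0},
    {"liable": True,  "economic": 250_000, "non_economic": 0},          # bills only
    {"liable": True,  "economic": 250_000, "non_economic": 800_000},
    {"liable": True,  "economic": 250_000, "non_economic": 1_500_000},
    {"liable": False, "economic": 0,       "non_economic": 0},
]

# Diagnostic 1: win rate — what share of jurors vote for liability?
win_rate = sum(v["liable"] for v in verdicts) / len(verdicts)

# Diagnostic 2: among jurors who found liability, how many awarded only
# the medical bills? A high share signals doubt about any lasting injury.
plaintiff = [v for v in verdicts if v["liable"]]
bills_only = sum(v["non_economic"] == 0 for v in plaintiff) / len(plaintiff)

print(f"win rate:   {win_rate:.0%}")
print(f"bills only: {bills_only:.0%} of plaintiff verdicts")
```

With a sample of several hundred jurors rather than eight, these same ratios become the trustworthy diagnostics the interview describes.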
Then, when people vote for the defense, we ask them to identify the most important arguments. Which arguments do defense jurors keep mentioning at the highest rate? We can use data to start understanding why we lose when we lose.
If you think about political polling, it’s easier to imagine. If you ask someone to predict whether a person they can’t see and don’t know voted for Trump or Biden in the last presidential election, the smart money is on Biden because he received more votes. But a smart guesser would ask a few questions first: is the person male or female, white or non-white, college-educated or not? With those answers, they could make a much smarter guess about how that person voted. We can do the same thing with juries. If we have 500 people who decided a case and awarded damages, we can look backward and ask whether people with different levels of education receive the case differently.
From there, you can start to make smart decisions about whom to seat during jury selection to improve your clients’ chances.
Lloyd: That’s just fascinating, and it makes me a little surprised it’s taken this long for somebody to meld data analytics with jury research. We’re always looking for an advantage and ways to serve our clients better.
It’s a fascinating conversation and one that will take us another episode to unpack in full. Stay tuned for Season 2, Episode 6 of Face the Jury.