Now that you mention it directly, it’s flabbergasting that no one’s ever said what percentage level “beyond a reasonable doubt” corresponds to (legal eagles: correct me if I’m wrong). That’s a pretty gaping deviation from a properly Bayesian legal system right there.
Well, the number could hardly be made explicit, for political reasons (“you mean it’s acceptable to have x wrongful convictions per year?? We shouldn’t tolerate any at all!”).
In any case, let me not be interpreted as arguing that the legal system was designed by people with a deep understanding of Bayesianism. I say only that we, as Bayesians, are not prevented from working rationally within it.
This is the third time on LW that I’ve seen the percentage of certainty for convictions conflated with the percentage of wrongful convictions (I suspect it’s just quick writing or perhaps my overwillingness to see that implication on this particular post). They’re not identical.
Suppose we had a quantified standard of 99% certainty, and juries were entirely rational actors who understood how thin a slice 1% is and were given unskewed evidence. The percentage of wrongful convictions at trial would then be well under 1%: juries would convict only on cases ranging from 99% certainty to nearly 100% certainty, and the actual percentage of wrongful convictions would depend on how cases are distributed within that range.
Yes, the certainty level provides an upper bound on the rate of wrongful convictions: a 99% certainty requirement means at least 99% certainty in each conviction, so an error rate of at most 1%.
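To see why the realized error rate depends on how cases are distributed above the bar, here is a minimal sketch in Python. The posterior distributions are made up for illustration, and `wrongful_rate` is a hypothetical helper, not anything from the thread:

```python
import random

def wrongful_rate(posteriors, threshold=0.99):
    """Expected fraction of wrongful convictions among cases convicted at
    `threshold`: the mean innocence probability (1 - p) of those cases."""
    convicted = [p for p in posteriors if p >= threshold]
    if not convicted:
        return 0.0
    return sum(1 - p for p in convicted) / len(convicted)

# Two invented skews of cases that all clear the 99% bar:
bunched_near_bar = [0.990 + 0.001 * random.random() for _ in range(10_000)]
bunched_near_one = [0.999 + 0.001 * random.random() for _ in range(10_000)]

print(wrongful_rate(bunched_near_bar))  # ~0.0095: close to the 1% ceiling
print(wrongful_rate(bunched_near_one))  # ~0.0005: well under 1%
```

Either way the error rate stays under the 1% bound; where it lands within that bound is purely a function of the case distribution.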
It is, in fact, illegal to argue a quantification of “reasonable doubt.”
I’m a fan of the jury system, but I do think quantification would lead to less, not more, accuracy from juries. Arguing math to lawyers is bad enough; having lawyers argue math to juries generally is not going to work. (I like lawyers and juries, but mathy criminal lawyers are quite rare.)
Probably because the math isn’t explained properly.
That said, I do agree in the sense that juries could still come to the same verdict the same way they do now (by intuition) and then just jigger the likelihood ratios to rationalize the decision. Even so, it would be a significant improvement, because questionable judgments become transparent.
For example, “Wait a sec—you gave 10 bits of evidence to Amanda Knox having a sex toy, but only 2 bits to her DNA being nowhere at the crime scene? What?”
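As a sketch of the bookkeeping being proposed here: log-odds measured in bits simply add, so each juror’s weighting is laid out in the open. The prior and the bit assignments below are invented for illustration, echoing the numbers in the comment:

```python
import math

def posterior_log_odds(prior_log_odds, evidence_bits):
    """Bayesian updating in log-odds form: each piece of evidence
    contributes log2 of its likelihood ratio, and contributions add."""
    return prior_log_odds + sum(evidence_bits)

prior = math.log2(1 / 1000)   # assumed 1:1000 prior odds of guilt
evidence = [+10,              # sex-toy evidence, plausibly overweighted
            -2]               # DNA absent from the crime scene
log_odds = posterior_log_odds(prior, evidence)

odds = 2 ** log_odds
prob_guilt = odds / (1 + odds)
print(round(prob_guilt, 3))   # ~0.204: nowhere near "beyond a reasonable doubt"
```

The point is auditability: once the bits are written down, a juror who gave a lurid detail 10 bits and the missing DNA only 2 has to defend that weighting in the open.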
Illegal??
From wikipedia:
One of the earliest attempts to quantify reasonable doubt was a 1971 article… In a later analysis of the question (“Distributions of Interest for Quantifying Reasonable Doubt and Their Applications,” 2006[9]), three students at Valparaiso University presented a trial to groups of students… From these samples, they concluded that the standard was between 0.70 and 0.74.
The majority of law theorists believe that reasonable doubt cannot be quantified. It is more a qualitative than a quantitative concept. As Rembar notes, “Proof beyond a reasonable doubt is a quantum without a number.”[10]
It’s illegal for the prosecution or defense to do so in court. Apologies for the lack of context.
The .70–.74 numbers cause me to believe that the participants were unbelievably bad at quantification, or that the flaws the 2006 paper points out in the 1971 paper are sufficient to destroy the value of that finding, or that this is one of many studies with fatal flaws. I expect there are very few jurors indeed who would convict while believing there was a 25% chance the defendant was innocent.
I wonder if quantification interferes with analysis for some large group of people? Perhaps just the mention of math interferes with efficient analysis. I don’t know; I can say that in math- or physics-intensive cases, both sides try to simplify for the jury.
In fact, some types of cases have fact patterns that give us fairly narrow confidence ranges. If there’s a case where I’m 75% certain the guy did it, and no likely evidence or investigation will improve that number, the case either isn’t issued or, if that state is reached post-issuance, is dismissed.