My question for Less Wrong: Just how innocent is Cameron Todd Willingham? Intuitively, it seems to me that the evidence for Willingham’s innocence is of higher magnitude than the evidence for Amanda Knox’s innocence.
In both instances, the prosecution case amounts to roughly zero bits of evidence. However, demographics give Willingham a higher prior of guilt than Knox, perhaps by something like an order of magnitude (1 to 4 bits). I am therefore about an order of magnitude more confident in Knox’s innocence than Willingham’s.
Challenge question: What does an idealized form of Bayesian Justice look like?
Bayesian jurors (preferably along with Bayesian prosecutors and judges); that’s really all it comes down to.
In particular, discussions about the structure of the judicial system are pretty much beside the point, in my view. (The Knox case is not about the Italian justice system, pace just about everyone.) Such systematic rules exist mostly as an attempt at correcting for predictable Bayesian failures on the part of the people involved. In fact, most legal rules of evidence are nothing but crude analogues of a corresponding Bayesian principle. For example, the “presumption of innocence” is a direct counterpart of the Bayesian prohibition against privileging the hypothesis.
There is this notion that Bayesian and legal reasoning are in some kind of constant conflict or tension, and oh-whatever-are-we-to-do as rationalists when judging a criminal case. (See here for a classic example of this kind of hand-wringing.) I would like to dispel this notion. It’s really quite simple: “beyond a reasonable doubt” just means P(guilty|evidence) has to be above some threshold, like 99%, or something. In which case, if it’s 85%, you don’t convict. That’s all there is to it. (In particular, away with this nonsense about how P(guilty|evidence) is not the quantity jurors should be interested in; of course it is!)
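The threshold picture above can be sketched in a few lines of Python; all numbers here (the prior odds, the likelihood ratios, the 99% cutoff) are invented purely for illustration, not figures from any real case:

```python
def posterior_guilt(prior_odds, likelihood_ratios):
    """Combine prior odds with (assumed independent) likelihood ratios via Bayes."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)  # convert odds back to a probability

# Toy case: prior odds of guilt 1:1000, three pieces of evidence
# with likelihood ratios 50, 20, and 4.
p = posterior_guilt(1 / 1000, [50, 20, 4])
print(round(p, 3))   # 0.8: strong evidence, but...
print(p >= 0.99)     # False: still short of a 99% conviction threshold
```

On this picture "beyond a reasonable doubt" is just the final comparison on the last line; everything before it is ordinary evidence-accounting.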
From our perspective as rationality-advocates, the best means of improving justice is not some systematic reform of legal systems, but rather is simply to raise the sanity waterline of the population in general.
Now that you mention it directly, it’s flabbergasting that no one’s ever said what percentage level “beyond a reasonable doubt” corresponds to (legal eagles: correct me if I’m wrong). That’s a gaping deviation from a properly Bayesian legal system right there.
Well, the number could hardly be made explicit, for political reasons (“you mean it’s acceptable to have x wrongful convictions per year?? We shouldn’t tolerate any at all!”).
In any case, let me not be interpreted as arguing that the legal system was designed by people with a deep understanding of Bayesianism. I say only that we, as Bayesians, are not prevented from working rationally within it.
This is the third time on LW that I’ve seen the percentage of certainty for convictions conflated with the percentage of wrongful convictions (I suspect it’s just quick writing or perhaps my overwillingness to see that implication on this particular post). They’re not identical.
Suppose we had a quantation standard of 99% certainty, juries were entirely rational actors who understood how thin a slice 1% is, and the evidence was unskewed. The percentage of wrongful convictions would be well under 1% at trial; juries would convict on cases ranging from 99% certainty to c. 100% certainty. The actual percentage of wrongful convictions would depend on the skew of the cases within that range.
Yes, the certainty level provides a bound on the number of wrongful convictions. A 99% certainty requirement means at least 99% certainty, so an error rate of at most 1%.
It is, in fact, illegal to argue a quantation of “reasonable doubt.”
I’m a fan of the jury system, but I do think quantation would lead to less, not more, accuracy by juries. Arguing math to lawyers is bad enough; to have lawyers generally arguing math to juries is not going to work. (I like lawyers and juries, but mathy lawyers in criminal law are quite rare.)
Probably because the math isn’t explained properly.
That said, I do agree in the sense that I think juries can still come to the same verdict, the same way they do now (by intuition), and then just jigger the likelihood ratios to rationalize their decision. However, it’s still a significant improvement in that questionable judgments are made transparent.
For example, “Wait a sec—you gave 10 bits of evidence to Amanda Knox having a sex toy, but only 2 bits to her DNA being nowhere at the crime scene? What?”
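A ledger like the one being mocked up in that quote might look as follows; every item and number here is invented for illustration (and the independence assumption it leans on is addressed later in the thread):

```python
import math

def bits(likelihood_ratio):
    """Weight of evidence in bits: log2 of the likelihood ratio."""
    return math.log2(likelihood_ratio)

# Hypothetical evidence ledger: positive bits favor guilt,
# negative bits favor innocence.
ledger = {
    "owned a sex toy":             bits(1.05),    # ~0.07 bits: barely informative
    "DNA absent from crime scene": bits(1 / 16),  # -4 bits: strongly exculpatory
}

prior_bits = math.log2(1 / 1000)   # prior odds of guilt 1:1000
total_bits = prior_bits + sum(ledger.values())
posterior_odds = 2 ** total_bits
print(posterior_odds / (1 + posterior_odds))   # tiny posterior probability of guilt
```

Disagreements then become arguments about specific entries (“why 10 bits for the sex toy?”) rather than about an opaque gestalt verdict.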
Illegal??

From wikipedia:

One of the earliest attempts to quantify reasonable doubt was a 1971 article… In a later analysis of the question (“Distributions of Interest for Quantifying Reasonable Doubt and Their Applications,” 2006[9]), three students at Valparaiso University presented a trial to groups of students… From these samples, they concluded that the standard was between 0.70 and 0.74.

The majority of law theorists believe that reasonable doubt cannot be quantified. It is more a qualitative than a quantitative concept. As Rembar notes, “Proof beyond a reasonable doubt is a quantum without a number.”[10]
It’s illegal for the prosecution or defense to do so in court. Apologies for the lack of context.
The 1971 paper that cites the .70-.74 numbers causes me to believe either that the people who participated were unbelievably bad at quantation, or that the flaws the 2006 paper points out in the 1971 paper are sufficient to destroy the value of that finding, or that this is one of many studies with fatal flaws. I expect there are very few jurors indeed who would convict while believing there was a 25% chance the defendant was innocent.
I wonder if quantation interferes with analysis for some large group of people? Perhaps just the mention of math interferes with efficient analysis. I don’t know; I can say that in math- or physics-intensive cases, both sides try to simplify for the jury.
In fact, we have some types of cases with fact patterns that give us fairly narrow confidence ranges; if there’s a case where I’m 75% certain the guy did it, and no likely evidence or investigation will improve that number, that’s either not issued, or if that state has been reached post-issuance, the case is dismissed.
I wouldn’t go that far. There are many cases where the legal system explicitly deviates from Bayesianism. Some examples:
Despite the fact that Demographic Group X is more/less likely to have committed crime Y, neither side can introduce this as evidence, e.g. “Since my client is a woman, you should reduce the odds you assign to her having committed a murder by a factor of 4.” (Obviously, the jury will notice the race/gender of the defendant, but you can’t argue that this is informative about the odds of guilt.)
Prohibition on many types of prejudicial evidence that is informative about the probability of guilt (like whether the defendant is a felon). (This can be justified on grounds of cognitive bias maybe, but not Bayesian grounds.)
In the US, the Constitutional prohibition on using the defendant’s silence as evidence, despite its informativeness, e.g., “If he’s really innocent, why doesn’t he just tell his side of the story? What’s the big deal? Why did he wait hours before even saying what happened? Did he need to get his story straight first?” (Again, the jury will notice that the defendant didn’t take the stand, but you can’t draw their attention to this as the prosecution.)
The exclusionary rule. Whether physical evidence was collected illegally (i.e. not forced confessions, but e.g. warrantless searches) has a small to non-existent impact on the evidence’s strength. The policy of excluding illegally-obtained evidence may be justified on decision-theoretic grounds, but not on Bayesian grounds.
Outside of trials, the fact that you have to wait years before you hear a judge’s binding opinion on whether or not a law actually can be enforced (i.e. is Constitutional).

You give the legal system way too much credit.
But notice that these are examples of restrictions on evidence of guilt. The assumption (very reasonable, it seems to me) is that human irrationality tends in the direction of false positives, i.e. wrongful convictions. (Possibly along with the assumption that our values require a lower tolerance for false positives than false negatives.)
If juries are capable of convicting on the sort of evidence presented at the Knox/Sollecito trial (and they are, whether in Italy, the U.S., or anywhere else)...well, can you imagine all the false convictions we would have if such rules as you listed were relaxed?
The bias toward false positives is probably especially strong in criminal cases. The archetypal criminal offense is such that it unambiguously happened (not quite like the Willingham case), and in the ancestral human environment there were far fewer people around who could have done it. That makes the priors for everyone higher, which means that for whatever level of probability you’re asking for it takes less additional evidence to get there. That a person is acting strangely might well be enough—especially since you’d have enough familiarity with that person to establish a valid baseline, which doesn’t and can’t happen in any modern trial system.
Now add in the effects of other cognitive biases: we tend to magnify the importance of evidence against people we don’t like and excessively discount evidence against people we do. That’s strictly noise when dealing with modern criminal defendants, but ancestral humans actually knew the people in question, and had better reason for liking or disliking them. That might count as weak evidence by itself, and a perfect Bayesian would count it while also giving due consideration to the other evidence. But these weren’t just suspects, but your personal allies or rivals. Misweighing evidence could be a convenient way of strengthening your position in the tribe, and having a cognitive bias let you do that in all good conscience. We can’t just turn that off when we’re dealing with strangers, especially when the media creates a bogus familiarity.
But notice that these are examples of restrictions on evidence of guilt.
No, they’re not. The first one I listed can go either way.
“Since my client is a woman, you should reduce the odds you assign to her having committed a murder by a factor of 4.”
The second one can go either way too; it just as much excludes e.g. hearsay evidence that implicates someone else.
The assumption (very reasonable, it seems to me) is that human irrationality tends in the direction of false positives, i.e. wrongful convictions.
Sure, but that needs to be accounted for via the guilt probability threshold, not by reducing the accuracy of the evidence. Favoring acquittal through a high burden and biasing evidence in favor of the defendant is “double-dipping”.
If juries are capable of convicting on the sort of evidence presented at the Knox/Sollecito trial (and they are, whether in Italy, the U.S., or anywhere else)...well, can you imagine all the false convictions we would have if such rules as you listed were relaxed?
I only listed a few examples off the top of my head. The appropriate comparison is to the general policy of, per Bayesianism, incorporating all informative evidence. This would probably lead to more accurate assessments of guilt. In particularly egregious cases like K/S, it would have been a tremendous boon to the defendants to have an explicit guilt threshold and a tally of the (log) likelihood ratios of all the evidence.
In any case, remember that there’s a cost to false negatives as well. Although that’s heavily muddled by the fundamental injustice of so many laws for which such a cost is non-existent.
Let me take a step back here, because despite the fact that it sounds like we’re arguing, I find myself in total agreement with other comments of yours in this thread, in particular your description of how trials should work; I could scarcely have said it better myself.
Here’s what I claim: the rules of evidence constitute crude attempts to impose some degree of rationality on jurors and prosecutors who are otherwise not particularly inclined to be rational. These hacks are not always successful, and occasionally even backfire; and they would not be necessary or useful for Bayesian juries who could be counted on to evaluate evidence properly. However, removing such rules without improving the rationality of jurors would be a disaster.
(Let’s not forget, after all, that there were people here on LW who reacted with indignation at my dismissal of certain discredited evidence in the Knox case, protesting that legal rules of admissibility don’t apply to Bayesian calculations—as if I had been trying to pass off some kind of legal loophole as a Bayesian argument. Such people were apparently taking it for granted that this evidence was significant, which suggests to me that it is very difficult for people—even aspiring rationalists—to discount information they come across. This provides support for the necessity of rules that exclude certain kinds of information from courtrooms, given the population currently doing the judging.)
Okay, then I think we’re in agreement. I guess I had interpreted your earlier comment as a much stronger claim about the mapping between pure Bayesianism and existing legal systems, but I definitely agree with what you’ve said here. I would just note that it would probably be more accurate to say that the rules of evidence are hacks to approximate Bayes and correct for predictable cognitive biases, though perhaps in this context those aren’t quite separate categories.
I think that is an incomplete description of the justification of the rules of evidence—some of these rules are also introduced to discourage particular abuses of the system, such as unreasonable searches. Otherwise, agreed.
The policy on excluding illegally-obtained evidence may be justified on decision-theoretic grounds, but not on Bayesian grounds.
In that case, why should we design the system on Bayesian grounds?
I think that’s really why I concur with komponisto—our system may not be optimal, but optimal for a system has to work as a system, including resistance to gaming. Aside from what you suggest about constitutionality, on which I have no comment, your changes are generally unlikely to improve the ability of a legal system to prosecute the guilty and acquit the innocent.
I think the proper response to illegally obtained evidence is to allow it to be presented as evidence, but to charge those who obtained it with whatever crimes made its obtainment illegal.
The problem with implementing this in the current system is that the government has a monopoly on prosecuting criminal charges, so that agents of the government can get away with criminal acts. If ordinary citizens had the same power as district attorneys to seek indictments and prosecute criminal charges, it would provide a huge disincentive for illegally obtaining evidence, and many other government abuses.
In that case, why should we design the system on Bayesian grounds?
Maybe we shouldn’t; I was just disputing komponisto’s insinuation that there’s some unappreciated, general mapping between Bayesianism and the existing justice system.
I think that’s really why I concur with komponisto—our system may not be optimal, but optimal for a system has to work as a system, including resistance to gaming.
Even when it allows so much relative weight to be given to sociological “evidence” (“she had a wild sex life”) compared to physical evidence?
Maybe we shouldn’t; I was just disputing komponisto’s insinuation that there’s some unappreciated, general mapping between Bayesianism and the existing justice system.
I agree that the necessity of a mapping has not been shown, although that’s not what I read into komponisto’s comment.
Even when it allows so much relative weight to be given to sociological “evidence” (“she had a wild sex life”) compared to physical evidence?
No. But that would be best corrected by sanity and education, not by changing the law. A jury of people interested primarily in the physical evidence would not be distracted by trivia about countercultural tendencies on the parts of relevant persons.
that would be best corrected by sanity and education, not by changing the law. A jury of people interested primarily in the physical evidence would not be distracted by trivia about countercultural tendencies on the parts of relevant persons.
But I think it would make a big (positive) difference if everything had to be phrased in terms of likelihood ratios against a prior and guilt threshold.
Individual pieces of evidence are not independent. That Mortimer Q. Snodgrass left his home at 11:50, arrived at the scene of the crime at midnight, and returned home fifteen minutes later is damning if the victim died at midnight and exculpatory if the victim died three hours later. There’s a combinatorial explosion in trying to describe the effects of every piece of evidence separately.
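The Snodgrass point can be made concrete with a tiny joint model. The conditional probabilities below are invented; the point is only that the same timeline evidence carries a likelihood ratio above or below 1 depending on the time of death, so per-item ratios cannot be assigned in isolation:

```python
# P(observe the midnight round trip | guilty?, did the victim die at midnight?)
# All numbers are hypothetical.
p_trip = {
    (True,  True):  0.90,  # guilty, death at midnight: the trip is expected
    (False, True):  0.05,  # innocent: the trip is a coincidence
    (True,  False): 0.02,  # guilty, death at 3 a.m.: a midnight trip is unlikely
    (False, False): 0.05,
}

def likelihood_ratio(death_at_midnight):
    """Likelihood ratio of the trip evidence, conditional on the time of death."""
    return (p_trip[(True, death_at_midnight)]
            / p_trip[(False, death_at_midnight)])

print(likelihood_ratio(True))   # well above 1: damning
print(likelihood_ratio(False))  # below 1: mildly exculpatory
```

Any courtroom accounting in likelihood ratios would have to condition on a shared causal story, which is exactly where the combinatorial explosion bites.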
Sure, but at least each side can draw its theorized causal diagram, how the evidence fits in, how the likelihood ratios interplay (per Pearl’s method of separating inferential and causal evidence flows), and what probability that justifies. It would still lend a clarity of thought not currently present among the mouthbreathers on juries that haven’t been exposed to any of this, even if you had to train them in it first.
(And that would be easy if the trainers really understood [at Level 2 at least] causal diagrams and read my forthcoming article on guidelines for explaining...)
Please make this come forth promptly. I plan to explain some pretty complicated stuff to a bunch of people soon, and could use the help!

I’ll do my best.

I’ll put off judgment until after your article, then.

Thanks, but I don’t see that the points being discussed here hinge much on it.
Are you saying that you’re skeptical that Pearl’s networks and Bayesian inference can be quickly (e.g. over a day or so) explained to random people selected for jury duty, but might be convinced of the ease of such training after seeing my exposition of how to enhance your explanatory abilities?

Related: maybe you just suck at explaining

LOL, you have no idea how many times I’ve thought that about people who claim something’s hard to explain …

Yes. Edit: That’s probably a better summary of my thoughts than I could give at the moment, even.

Can I call ’em or what? ;-)

I aim to be predictable. (-:
Hm, now that I think about it, that by itself should be evidence I have some abnormally high explanatory mojo—if I could explain your position to you better than you could explain it to yourself. :-P

Don’t promote the hypothesis excessively—you’re comparing yourself to The Worst Debater In The World with sleep deprivation. (-;

Damn I’m good B-)

Were defense attorneys left out by accident, or do you think it’s not important that they be Bayesian?
It’s important that everyone be Bayesian, of course.
To address the implied subtext: yes, I’m in general more worried about false convictions than false acquittals.
Arguably, if investigators and jurors were pure Bayesian epistemic rationalists, attorneys (on either side) wouldn’t even be necessary. That’s an extremely fanciful state of affairs, however.