You can’t use two pieces of contradictory evidence to support the same argument. If the most highly contested cases still have a chance at success, finding a 0% success rate at the furthest distance from the last break (because they are the longest cases and therefore placed last) should not increase your belief that there is no bias at work. It should reduce it. How significantly your belief is reduced depends on just how likely it is that you would see 0% success rates at a high distance from a break due to scheduling alone, but I can’t see any way it could legitimately raise your belief that there is no bias.
I kinda doubt it . . . it goes against common sense that there are judges who, once they get hungry, rule against any parole applications no matter how compelling.
You can’t use two pieces of contradictory evidence to support the same argument.
Yes you can, and I can demonstrate it by stepping back and making the point with an abstract example:
Let’s suppose that we are debating whether Hypothesis X is correct or Hypothesis Y is correct. I am relying on Evidence A, which seems to support Hypothesis X. You are relying on Evidence B, which seems to support Hypothesis Y.
Ok, now suppose you present Evidence C which contradicts my hypothesis—Hypothesis X. Does Evidence C make my hypothesis less likely to be correct? Not necessarily. If Evidence C contradicts Hypothesis Y even more acutely than Hypothesis X, then Hypothesis X is actually more likely than it was before.
So situations can arise where evidence comes out which contradicts a hypothesis but still makes that hypothesis more likely to be correct.
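The abstract point is easy to check numerically. Here is a minimal sketch; the priors and likelihoods are made-up numbers, chosen only so that Evidence C is unlikely under X but even more unlikely under Y:

```python
# Two mutually exclusive and exhaustive hypotheses, X and Y, with equal priors.
prior_x, prior_y = 0.5, 0.5

# Evidence C "contradicts" X (it is unlikely if X is true),
# but contradicts Y even more acutely.
p_c_given_x = 0.01
p_c_given_y = 0.001

# Bayes' rule: posterior is proportional to prior times likelihood.
joint_x = prior_x * p_c_given_x
joint_y = prior_y * p_c_given_y
posterior_x = joint_x / (joint_x + joint_y)
posterior_y = joint_y / (joint_x + joint_y)

# posterior_x is about 0.91: within the X-or-Y frame, the "contradictory"
# evidence raised X's probability from 0.5.
```

Within the two-hypothesis frame, C raises X from 50% to about 91%; if other explanations are allowed, C can lower both X and Y at once, which is the distinction argued over below.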
And that’s pretty much the situation here. Your observation about a zero percent success rate at the end of the day in some cases undermines the ‘hunger’ hypothesis at least as much as it undermines the hypothesis that contested cases are being put at the end of the session (or the hypothesis that there is some other ordering factor at work).
No, Hypothesis X and Hypothesis Y are now both less likely.
No, since the debate (in my abstract example) is over whether Hypothesis X is correct or Hypothesis Y is correct. In my abstract example, one and only one hypothesis is correct. So you can’t have a situation where both are less likely.
I have no idea what you think the word “likely” means, or why on Earth anyone should care about that thing.
The thing I talk about when I talk about likelihood is a thing that is affected by evidence. That thing absolutely goes down for X given evidence contradicting X, and goes down for Y given evidence contradicting Y, and is not affected by what debate I might or might not be having at any given moment.
I have no idea what you think the word “likely” means, or why on Earth anyone should care about that thing.
“more likely” means greater probability. “less likely” means lower probability.
That thing absolutely goes down for X given evidence contradicting X
Technically I agree with you, but in interpreting bigjeff5′s post, I understood him to be using a more flexible definition of “contradictory evidence” and that is what I used in my response. Either way, my basic point is the same.
You’ve lost me completely.
If we’re talking about the probabilities of X and Y, as you say here, then the evidence against them lowers those probabilities, and the fact that the debate in your abstract example is over whether X or Y is correct doesn’t change that. It is a situation where both are less likely than they were before C was known.
If your basic point is consistent with that, then I do not understand your basic point. It sure sounds to me like your basic point was that C made one of those assertions more likely, which is false.
Let me propose a charitable interpretation of what brazil84 is saying (he can correct me if I am wrong). Here is an example:
We are discussing who committed a crime. There are three and only three suspects: Peter, Paul and Mary. Mary has an excellent alibi, so she’s basically out of the running. There is some evidence both for Peter’s and for Paul’s guilt. Let’s say we agree that the probabilities of each being guilty are: Mary 2%, Peter 49%, Paul 49%.
Then a witness comes up who saw someone wearing a dress at the scene of the crime. Since men are a priori unlikely to wear dresses, this lowers the probability of Peter or Paul doing it. Let’s say, however, that for whatever reason we agree that it is slightly less unlikely a priori that Peter would wear a dress than that Paul would. Mary’s alibi is so good that the new evidence raises her probability of being guilty only very slightly. The posterior probabilities are: Mary: 6%, Peter: 48%, Paul: 46%.
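For concreteness, the update can be reproduced with a short calculation. The likelihood numbers below are my own invention, chosen only to roughly yield the posteriors in the example:

```python
priors = {"Mary": 0.02, "Peter": 0.49, "Paul": 0.49}

# Assumed probability, for each suspect, that a dress would have been
# seen at the scene if that suspect were the culprit.
likelihood = {"Mary": 0.90, "Peter": 0.30, "Paul": 0.28}

joint = {name: priors[name] * likelihood[name] for name in priors}
total = sum(joint.values())
posterior = {name: joint[name] / total for name in priors}

# Mary rises (to roughly 6%), while Peter and Paul both fall,
# with Peter still slightly ahead of Paul.
```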
This seems like a situation which might be described with brazil84′s quote
“so situations can arise where evidence comes out which contradicts a hypothesis but still makes that hypothesis more likely to be correct”
in the sense that Peter’s guilt, even though in the absolute sense less likely (the evidence “contradicted” it) should now be our top hypothesis; it is “more likely to be correct” compared to the only plausible alternative.
I agree that brazil84′s way of putting it was a bit confusing, if this is what he meant.
I certainly agree that the situation you describe can occur. (I could quibble about whether the probability-shift for Mary actually depends on the quality of her alibi here, as that seems like double-counting evidence, but either way it’s entirely possible for the posterior probabilities to come out the way you describe.)
And, OK, sure, if “more likely to be correct” is understood as “more likely [than some other hypothesis] to be correct”, rather than “more likely [than it was before that evidence arrived] to be correct”, I agree that the phrase describes the situation. That is, as you say, a bit confusing, but not false.
So, OK. Provisionally adopting that interpretation and returning to the original comment… their initial comment was “situations can arise where evidence comes out which contradicts a hypothesis but still makes that hypothesis more likely to be correct”. Which, sure, if I understand that to mean “more likely [than some other hypothesis] to be correct” is absolutely true.
All of which was meant, I think, to refute bigjeff5′s comment about what sort of evidence should increase confidence in the belief that there is no bias. Which I understood to refer to increasing confidence relative to earlier confidence.
I think that’s pretty close. If I am arguing that Paul committed the murder (and you are arguing that Peter committed the murder) it doesn’t really help your argument to point out that there is evidence the murderer was wearing a dress since it undermines your own position just as much as it undermines the position you have taken.
Getting back to the original discussion, another poster pointed out that my “contested cases later” hypothesis is undermined by the observation that for some judges there is a zero percent approval rate for later cases. The problem with this argument is that it undermines the “hunger” hypothesis even more than the “contested cases later” hypothesis.
If we’re talking about the probabilities of X and Y, as you say here, then the evidence against them lowers those probabilities
Not if it’s just a matter of choosing X or Y. It’s impossible in such a situation for a piece of evidence to lower both probabilities.
Perhaps an example will make it clearer:
Let’s suppose that a victim is found dead in a pool of blood, apparently having died from a gunshot wound.
There are two possibilities: (1) He was shot from a distance with a rifle; and (2) He was shot at close range with a small caliber handgun. I favor the first hypothesis and you favor the second.
Ok, now let’s suppose we find a new piece of evidence: no bullet is found inside or around the victim’s body. Further, it is known that if somebody is shot from a distance with a rifle, a bullet will be found in or around the person’s body 99.99% of the time.
In common parlance, one might say that such a piece of evidence contradicts or undermines the hypothesis that the person was shot from a distance with a rifle, since we have just seen something which is totally unexpected if that hypothesis is correct.
On the other hand, suppose we know that being shot at close range with a handgun carries a 99.999% chance of finding a bullet in or around the victim’s body. In that case, what has been reasonably described as “contradictory evidence” actually increases the chances that the first hypothesis is correct.
The probability of both, in that case, plummets, and you should start looking at other explanations. Like, say, that the victim was shot with a rifle at close range, which only leaves a bullet in the body 1% of the time (or whatever).
It might be true that, between two hypotheses one is now more likely to be true than the other, but the probability for both still dropped, and your confidence in your pet hypothesis should still drop right along with its probability of being correct.
So say you have Hypothesis X at 60% confidence and Hypothesis Y at 40%. New evidence comes along that shifts your confidence in X down to 20%, and Y down to 35%. Y didn’t just “win”. Y is now even more likely to be wrong than it was before the new evidence came in. The only substantive difference is that now X is probably wrong too. If you notice, there’s 45% probability there we haven’t accounted for. If this is all bound up in a single Hypothesis Z, then Z is the one that is most likely to be correct.
Contradictory evidence shouldn’t make you more confident in your hypothesis.
That’s just not so, since the total of the two probabilities equals one. If the probability of murder with a rifle drops, the probability of murder with a handgun necessarily rises. I’m not sure how to make this point any clearer . . . perhaps a couple of equations will help:
Let’s suppose that X and Y are mutually exclusive and collectively exhaustive hypotheses.
In that case, do you agree that P(X) + P(Y) = 1?
Also, do you agree that P(X|E) + P(Y|E) = 1?
If either X or Y has to be true, you cannot have 20% for X and 35% for Y. The remaining 45% would be a contradiction (Neither X nor Y, but “X or Y”).
While you can work with those numbers (20 and 35), they are not probabilities any more—they are relative probabilities.
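That conversion from relative weights back to probabilities is just renormalization. A sketch, using bigjeff5′s 20/35 figures:

```python
# Post-evidence weights for X and Y; they no longer sum to 1,
# so they are not probabilities of an exhaustive pair of hypotheses.
weight_x, weight_y = 0.20, 0.35

# If either X or Y must be true, renormalize so the pair sums to 1:
p_x = weight_x / (weight_x + weight_y)   # about 0.36
p_y = weight_y / (weight_x + weight_y)   # about 0.64
```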
It is very unlikely that the murderer won the lottery. However, if a suspect did win the lottery, this does not reduce the probability that he is guilty—he has the same (low) probability as all the others.
I’m talking about probability estimates. The actual probability of what happened is 1, because it is what happened. However, we don’t know what happened, that’s why we make a probability estimate in the first place!
Forcing yourself to commit to only one of two possibilities in the real world (which is what all of these analogies are supposed to tie back to), when there are a lot of initially low probability possibilities that are initially ignored (and rightly so), seems incredibly foolish.
Also, your analogy doesn’t fit brazil84′s murder example. What evidence does the lottery win give that allows us to adjust our probability estimate for how the gun was fired? I’m not sure where you’re going with that, at all.
The real probability of however the bullet was fired is 100%. All we’ve been talking about are our probability estimates based on the limited evidence we have. They are necessarily incomplete. If new evidence makes both of our hypotheses less likely, then it’s probably smart to check and see if a third hypothesis is now feasible, where it wasn’t before.
brazil84 stated that there are just two options, so let’s stick to that example first.
“[rifle] no bullet will be found in or around the person’s body 0.01% of the time” is contradictory evidence against the rifle (and for the handgun). But “[handgun] no bullet will be found in or around the person’s body 0.001% of the time” is even stronger evidence against the handgun (and for the rifle). In total, we have some evidence for the rifle.
Now let’s add a .001% probability that it was not a gunshot wound—in this case, the probability of finding no bullet is (close to) 100%. Rifle gets an initial probability of 60% and handgun gets 40% (+ rounding error).
So let’s update:
No gunshot: 0.001 → 0.001
Rifle: 60 → 0.006
Handgun: 40 → 0.0004
Of course, the probability that one of those 3 happened has to be 1 (counting all guns as “handgun” or “rifle”), so let’s convert that back to probabilities:
0.001+0.006+0.0004 = 0.0074
No gunshot: 0.001/0.0074=13.5%
Rifle: 0.006/0.0074=81.1%
Handgun: 0.0004/0.0074=5.4%
The rifle and handgun numbers increased the probability of a rifle shot, as the probability for “no gunshot” was very small. All numbers are our estimates, of course.
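The same update, written out as a short calculation using the example’s own numbers (priors are in percent):

```python
priors = {"no gunshot": 0.001, "rifle": 60.0, "handgun": 40.0}

# Probability of finding no bullet under each hypothesis:
likelihood = {"no gunshot": 1.0, "rifle": 0.0001, "handgun": 0.00001}

joint = {h: priors[h] * likelihood[h] for h in priors}
total = sum(joint.values())  # about 0.0074

# Renormalize back to percentages:
posterior = {h: 100 * joint[h] / total for h in priors}
# no gunshot: ~13.5%, rifle: ~81.1%, handgun: ~5.4%
```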
I believe brazil84 is describing this:
P(X | C & (X v Y)) > P(X | X v Y)
P(Y | C & (X v Y)) < P(Y | X v Y)
while you are describing this:
P(X | C) < P(X)
P(Y | C) < P(Y)
All four of these statements can be true.
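All four statements can be checked against a single toy distribution. The numbers below are my own, with a catch-all third hypothesis Z:

```python
p = {"X": 0.4, "Y": 0.4, "Z": 0.2}      # priors; Z is everything else
p_c = {"X": 0.2, "Y": 0.1, "Z": 0.9}    # P(C | each hypothesis)

# Unconditional posteriors:
p_of_c = sum(p[h] * p_c[h] for h in p)          # P(C)
post_x = p["X"] * p_c["X"] / p_of_c             # P(X | C), below P(X)
post_y = p["Y"] * p_c["Y"] / p_of_c             # P(Y | C), below P(Y)

# Posteriors within the debate frame "X or Y":
p_xy = p["X"] + p["Y"]                          # P(X v Y)
px_given_xy = p["X"] / p_xy                     # P(X | X v Y)
py_given_xy = p["Y"] / p_xy                     # P(Y | X v Y)
c_and_xy = p["X"] * p_c["X"] + p["Y"] * p_c["Y"]
px_given_c_xy = p["X"] * p_c["X"] / c_and_xy    # above P(X | X v Y)
py_given_c_xy = p["Y"] * p_c["Y"] / c_and_xy    # below P(Y | X v Y)
```

So C lowers both X and Y outright, yet raises X within the X-or-Y frame — which is exactly the distinction the two sides were talking past.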
(nods)
See also alejandro1′s sibling comment and my reply.