Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW…
That’s not what he is saying. His argument is not that the hacked emails actually should raise our confidence in AGW. His argument is that there is a possible scenario under which this should happen, and the probability that this scenario is true is not infinitesimal. The alternative possibility—that the scientists really are smearing the opposition with no good reason—is far more likely, and thus the net effect on our posteriors is to reduce them—or at least keep them the same if you agree with Robin Hanson.
Here’s (part of) what Tyler actually said:
Another response, not entirely out of the ballpark, is: 2. “These people behaved dishonorably. They must have thought this issue was really important, worth risking their scientific reputations for. I will revise upward my estimate of the seriousness of the problem.”
I am not saying that #2 is correct, I am only saying that #2 deserves more than p = 0. Yet I have not seen anyone raise the possibility of #2.
Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW...
That’s not what he is saying. His argument is not that the hacked emails actually should raise our confidence in AGW. His argument is that there is a possible scenario under which this should happen, and the probability that this scenario is true is not infinitesimal
Right—that is what I called “giving a reason why the hacked emails...” and I believe that characterization is accurate: he’s described a reason why they would raise our confidence in AGW.
The alternative possibility—that the scientists really are smearing the opposition with no good reason—is far more likely, and thus the net effect on our posteriors is to reduce them
This is a reason why Tyler’s argument for a positive Bayes factor is in error, not a reason why my characterization was inaccurate.
The alternative possibility—that the scientists really are smearing the opposition with no good reason—is far more likely, and thus the net effect on our posteriors is to reduce them
This is a reason why Tyler’s argument for a positive Bayes factor is in error, not a reason why my characterization was inaccurate.
Tyler isn’t arguing for a positive Bayes factor. (I assume that by “Bayes factor” you mean the net effect on the posterior probability). He posted a followup because many people misunderstood him. Excerpt:
I did not try to justify any absolute level of belief in AGW, or government spending for that matter. I’ll repeat my main point about our broader Bayesian calculations:
I am only saying that #2 [scientists behaving badly because they think the future of the world is at stake] deserves more than p = 0.
edited to add:
I’m not sure I understand your criticism, so here’s how I understood his argument. There are two major possibilities worth considering:
1) “These people behaved dishonorably.”
and
2) “These people behaved dishonorably. They must have thought this issue was really important, worth risking their scientific reputations for.”
Then the argument goes that the net effect of 1 is to lower our posteriors for AGW while the net effect of 2 is to raise them.
Finally, p(2 is true) != 0.
This doesn’t tell us the net effect of the event on our posteriors—for that we need p(1), p(2) and p(anything else). Presumably, Tyler thinks p(anything else) ~ 0, but that’s a side issue.
Is this how you read him? If so, which part do you disagree with?
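To spell out the bookkeeping behind this reading in symbols (notation mine, not Tyler’s or anyone’s in this thread: E stands for the event of the emails, and 1, 2, and “other” are treated as mutually exclusive explanations of it), the law of total probability gives

\[
P(\text{AGW} \mid E) = P(\text{AGW} \mid E, 1)\,P(1 \mid E) + P(\text{AGW} \mid E, 2)\,P(2 \mid E) + P(\text{AGW} \mid E, \text{other})\,P(\text{other} \mid E).
\]

Knowing only that the weight on explanation 2 is nonzero fixes none of these terms, which is why that fact cannot, by itself, tell you which way the posterior moves.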
I assume that by “Bayes factor” you mean the net effect on the posterior probability).
I’m using the standard meaning: for a hypothesis H and evidence E, the Bayes factor is p(E|H)/p(E|~H). It’s easiest to think of it as the factor you multiply your prior odds by to get posterior odds. (Odds, not probabilities.) Which means I goofed and said “positive” when I meant “above unity” :-/
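A toy calculation, with numbers made up purely for illustration, shows how the definition works:

\[
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{Bayes factor}}.
\]

Holding p(E|H) = 0.1 fixed: if p(E|~H) = 0.05 the factor is 2 and prior odds of 4:1 become 8:1, while if p(E|~H) = 0.3 the factor is 1/3 and the same prior odds drop to 4:3. The direction of the update depends entirely on the comparison with p(E|~H), not on p(E|H) being positive.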
I read Tyler as not knowing what he’s talking about. For one thing, do you notice how he’s trying to justify why something should have p>0 under a Bayesian analysis … when Bayesian inference already requires p’s to be greater than zero?
In his original post, he was explaining a scenario under which seeing fraud should make you raise your p(AGW). Though he’s not thinking clearly enough to say it, this is equivalent to describing a scenario under which the Bayes factor is greater than unity. (I admit I probably shouldn’t have said “argument for >1 Bayes factor”, but rather, “suggestion of plausibility of >1 Bayes factor”.)
That’s the charitable interpretation of what he said. If he didn’t mean that, as you seem to think, then he’s presenting metrics that aren’t helpful, and this is clear from the fact that he thinks it’s some profound insight to put p(fraud due to importance of issue) greater than zero. Yes, there are cases where AGW is true despite this evidence—but what’s the impact on the Bayes factor?
Why should we care about arbitrarily small probabilities?
Tyler was not misunderstood: he used probability and Bayesian inference incorrectly and vacuously, then tried to backpedal. (My comment on page 2.)
Anyway, I think we agree on the substance:
- The fact that the p Tyler referred to is greater than zero is insufficient information to know how to update.
- The scenario Tyler described is insufficient to give Climategate a Bayes factor above 1.
(I was going to drop the issue, but you seem serious about de-Aumanning this, so I gave a full reply.)
I think we are arguing past each other, but it’s about interpreting someone else so I’m not that worried about it. I’ll add one more bullet to your list to clarify what I think Tyler is saying. If that doesn’t resolve it, oh well.
- If we know with certainty that the scenario Tyler described is true, that is, if we know that the scientists fudged things because they knew that AGW was real and that the consequences were worth risking their reputations on, then Climategate has a Bayes factor above 1.
I don’t think Tyler was saying anything more than that. (Well, and P(his scenario) is non-negligible.)
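For what it’s worth, the two remaining bullets can be put side by side in symbols (again my notation, assuming the candidate explanations s form a mutually exclusive, exhaustive set given either hypothesis):

\[
\frac{P(E \mid H)}{P(E \mid \neg H)}
= \frac{\sum_{s} P(E \mid H, s)\,P(s \mid H)}{\sum_{s} P(E \mid \neg H, s)\,P(s \mid \neg H)}.
\]

Conditional on one particular s, such as the scenario Tyler sketched, the evidence can favor AGW; but the ratio that actually multiplies the prior odds averages over every explanation, so a single scenario with positive probability does not by itself settle whether that ratio exceeds 1.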