So, okay, how would you tell the difference between an argument that “sounds convincing” and one which should actually be considered rationally persuasive?
It’s not an easy problem, in general—hence LW!
But we can always start by doing the Bayesian calculation. What’s your prior for the hypothesis that the U.S. government was complicit in the 9/11 attacks? What’s your estimate of the strength of each of those pieces of evidence you think is indicative of a conspiracy?
I’d be curious to know what kind of ideas with substantial numbers of adherents you would feel safe in dismissing without bothering to research.
None that I can think of. Again, what’s your point? I am not “dismissing” the dominant conclusion; I am questioning it.
You misunderstood. I was talking about your failure to dismiss 9/11 conspiracy theories. I was asking whether there were any conspiracy theories that you would be willing to dismiss without research.
Again, I think this question is a diversion from what I have been arguing; its truth or falsity does not substantially affect the truth or falsity of my actual claims (as opposed to beliefs mentioned in passing).
That said, I made a start at a Bayesian analysis, but ran out of mental swap-space. If someone wants to suggest what I need to do next, I might be able to do it.
Also vaguely relevant—this matrix is set up much more like a classical Bayesian word problem: it lists the various pieces of evidence we would expect to observe for each known manner in which a high-rise steel-frame building might run down the curtain and join the choir invisible, and then shows what was actually observed in the cases of WTC 1, 2, and 7.
Is there enough information there to calculate some odds, or are there still bits missing?
You misunderstood. I was talking about your failure to dismiss 9/11 conspiracy theories. I was asking whether there were any conspiracy theories that you would be willing to dismiss without research.
No, not really. I think of that as my “job” at Issuepedia: don’t dismiss anything without looking at it. Document the process of examination so that others don’t have to repeat it, and so that those who aren’t sure what to believe can quickly see the evidence for themselves (rather than having to collect it), and can add any new arguments or questions they might have.
Does that process seem inherently flawed somehow? I’m not sure what you’re suggesting by your use of the word “failure” here.
(Some folks have expressed disapproval of this conversation continuing in this thread; ironically, though, it’s becoming more and more an explicit lesson in Bayesianism—as this comment in particular will demonstrate. Nevertheless, after this comment, I am willing to move it elsewhere, if people insist.)
Again, I think this question is a diversion from what I have been arguing; its truth or falsity does not substantially affect the truth or falsity of my actual claims (as opposed to beliefs mentioned in passing).
You’re in Bayes-land here, not a debating society. Beliefs are what we’re interested in. There’s no distinction between an argument that a certain point of view should be taken seriously and an argument that the point of view in question has a significant probability of being true. If you want to make a case for the former, you’ll necessarily have to make a case for the latter.
That said, I made a start at a Bayesian analysis, but ran out of mental swap-space. If someone wants to suggest what I need to do next, I might be able to do it.
Here’s how you do a Bayesian analysis: you start with a prior probability P(H). Then you consider how much more likely the evidence is to occur if your hypothesis is true (P(E|H)) than it is in general (P(E)) -- that is, you calculate P(E|H)/P(E). Multiplying this “strength of evidence” ratio P(E|H)/P(E) by the prior probability P(H) gives you your posterior (updated) probability P(H|E).
Alternatively, you could think in terms of odds: starting with the prior odds P(H)/P(~H), and considering how much more likely the evidence is to occur if your hypothesis is true (P(E|H)) than if it is false (P(E|~H)); the ratio P(E|H)/P(E|~H) is called the “likelihood ratio” of the evidence. Multiplying the prior odds by the likelihood ratio gives you the posterior odds P(H|E)/P(~H|E).
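For concreteness, here is a minimal sketch of both calculations in Python; every number is an illustrative placeholder, not an estimate for any actual hypothesis.

def posterior_probability(p_h, p_e_given_h, p_e):
    """P(H|E) = P(H) * P(E|H) / P(E)."""
    return p_h * p_e_given_h / p_e

def posterior_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * P(E|H)/P(E|~H)."""
    return prior_odds * likelihood_ratio

# Probability form: prior of 1%, evidence three times likelier than baseline.
print(posterior_probability(0.01, 0.30, 0.10))   # -> 0.03

# Odds form: prior odds of 1:99, evidence five times likelier under H than ~H.
print(posterior_odds(1 / 99, 5.0))               # -> ~0.0505, roughly 1:20

The odds form is often the easier one to chain: each new piece of evidence just multiplies in another likelihood ratio.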
One of the two questions you need to answer is: by what factor do you think the evidence raises the probability/odds of your hypothesis being true? Are we talking twice as likely? Ten times? A hundred times?
If you know that, plus your current estimate of how likely your hypothesis is, division will tell you what your prior was—which is the other question you need to answer.
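To illustrate the division with made-up numbers: if you currently put the odds at 1:9, and you believe the evidence multiplied the odds by a factor of 10, then your prior odds must have been (1/9) / 10 = 1/90, i.e. about 1:90.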
Is there enough information there to calculate some odds, or are there still bits missing?
If there’s enough information for you to have a belief, then there’s enough information to calculate the odds. Because, if you’re a Bayesian, that’s what these numbers represent in the first place: your degree of belief.
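As a purely hypothetical sketch of how such a matrix could be turned into numbers (the mechanism names, priors, and likelihoods below are invented for illustration, not taken from the Issuepedia page): treat each row as a candidate mechanism with a prior, multiply in the likelihood of each observed piece of evidence under that mechanism, and normalize.

# Hypothetical sketch only; all mechanisms and numbers are invented.
priors = {"fire-induced collapse": 0.70,
          "controlled demolition": 0.05,
          "earthquake": 0.25}

# P(observed evidence item | mechanism), for three evidence items.
likelihoods = {
    "fire-induced collapse": [0.3, 0.5, 0.2],
    "controlled demolition": [0.8, 0.6, 0.7],
    "earthquake":            [0.1, 0.2, 0.1],
}

scores = {}
for mechanism, prior in priors.items():
    score = prior
    for p in likelihoods[mechanism]:
        # Multiplying assumes the evidence items are independent
        # given the mechanism (a naive-Bayes simplification).
        score *= p
    scores[mechanism] = score

total = sum(scores.values())
for mechanism, score in scores.items():
    print(mechanism, round(score / total, 3))  # normalized posterior

The missing bits, if any, are exactly those numbers: a prior for each mechanism and the conditional probability of each observation under it. The matrix supplies the qualitative pattern, but someone still has to quantify it.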
I’m not sure what you’re suggesting by your use of the word “failure” here
“Your failure to dismiss...” is simply an English-language locution that means “The fact that you did not dismiss...”
This is a flat-out Bayesian contradiction.