Part of the point is you’re not allowed to do this!
I’m allowed to believe whatever I want; I’m just not allowed to try to convince you of it unless I have a rational argument.
Isn’t this what Bayesianism is all about—reaching the most likely conclusion in the face of weak or inconclusive evidence? Or am I misunderstanding something?
I do have arguments for my belief, but I’m not really prepared to spend the time getting into it; it’s not essential to my main thesis, and I mentioned it only in passing as a way of giving context, to wit: “some people believe this, and I’m not trying to dismiss them, partly because I happen to agree with them, but that belief is entirely beside the point”.
On your OT: You win a cookie! I had to research this a bit to figure out what happened, but apparently some 9/11 researchers found a list of passenger-victims and thought it was a passenger manifest. One anomaly does remain in that 6 of the alleged hijackers have turned up alive, but I wouldn’t call that enough of an anomaly to be worth worrying about.
(Found the offending factoid under “comments” on the position page; fixing it...)
I’m allowed to believe whatever I want; I’m just not allowed to try to convince you of it unless I have a rational argument.
Traditional Rationality is often expressed as social rules, under which this claim might work. But in Bayesian Rationality, there is math that tells you exactly what you ought to believe given the evidence you have observed.
See No One Can Exempt You From Rationality’s Laws.
Okay—but in practice, what if I don’t have time (or mental focus, or whatever resources it takes) to explicitly identify, enumerate, and evaluate each piece of evidence that I may be considering? It took me over an hour just to get this far with a Bayesian analysis of one hypothesis, which I’m probably not even doing right.
Or do we step outside the realm of Bayesian Rationality when we look at practical considerations like “finite computing resources”?
I’d actually say, start with the prior and with the strongest piece of evidence you think you have. This by itself should reveal something interesting and disputable.
As someone who recently failed at an attempt at Bayesian analysis, let me try to offer a few pointers:
You correctly conclude that “What is the likelihood that evidence E would occur even if H were false?” is more immediately relevant than “What is the likelihood that evidence E would not occur if H were true?”, which you only asked because you got the syntax wrong; “the likelihood that evidence E would occur even if H were false” would be P(E|~H).
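Spelled out in the thread’s own notation, Bayes’ rule shows where P(E|~H) enters:

P(H|E) = P(E|H)*P(H) / (P(E|H)*P(H) + P(E|~H)*P(~H))

Dividing numerator and denominator by P(E|H)*P(H) shows that E moves the posterior only through the ratio P(E|~H)/P(E|H), which is why the likelihood of the evidence under ~H is the quantity to pin down.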
P(H) is your prior, the probability before considering any evidence E, not the probability in the absence of any evidence.
The considerations you list under evidence against are of the sort you would make when determining the priors. Asking “What is the likelihood that Bush is a twit if H were true?” and so on would be very difficult to set probabilities for; you can treat it that way, but it’s far from straightforward.
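If you did want to treat several such considerations as separate pieces of evidence, the mechanical part is just repeated updating. Here is a minimal sketch with made-up numbers, assuming the pieces of evidence are conditionally independent given H (itself a strong assumption):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: P(H|E) from P(H), P(E|H), and P(E|~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Three hypothetical pieces of evidence, each as (P(E|H), P(E|~H)).
p = 0.5  # start from an indifferent prior
for p_e_h, p_e_not_h in [(0.8, 0.4), (0.6, 0.5), (0.2, 0.7)]:
    p = update(p, p_e_h, p_e_not_h)
print(round(p, 3))  # the third pair is evidence *against* H, pulling p back down
```

The hard part, as noted above, is choosing defensible numbers, not the arithmetic.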
Actually, I have never seen a non-trivial example of this sort of analysis for this sort of real-world problem done right on this site.
H = this sort of analysis is practical
E = user FAWS has not seen any example of this sort of analysis done right.
P(H) = 0.9. Smart people like Eliezer seem to praise Bayesian thinking, and people ask for priors and so on.
P(E|H) = 0.3. I haven’t read every comment, probably not even 10%, but if this is used anywhere it would be here, and if it’s practical it should be used at least somewhat regularly.
P(E|~H) = 0.9. Might still be done even if impractical when it’s a point of pride and/or group identification, which could be argued to be the case.
Calculating the posterior probability P(H|E):
P(H|E) = P(H&E)/P(E) = P(H)*P(E|H)/P(E) = P(H)*P(E|H)/(P(E|H)*P(H) + P(E|~H)*P(~H)) = 0.9 * 0.3 / (0.3 * 0.9 + 0.9 * 0.1) = 0.75
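FAWS’s numbers can be double-checked in odds form (a quick sketch; the probabilities themselves are FAWS’s estimates): posterior odds are prior odds times the likelihood ratio.

```python
prior_odds = 0.9 / 0.1              # P(H) = 0.9 means odds of 9:1 in favor of H
likelihood_ratio = 0.3 / 0.9        # P(E|H) / P(E|~H) = 1/3
posterior_odds = prior_odds * likelihood_ratio  # 9 * 1/3 = 3:1
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 2))  # 0.75
```

The odds form makes the size of the update easy to read off: evidence with a likelihood ratio of 1/3 cuts the odds for H to a third of their prior value.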
I’m allowed to believe whatever I want; I’m just not allowed to try to convince you of it unless I have a rational argument.
Isn’t this what Bayesianism is all about—reaching the most likely conclusion in the face of weak or inconclusive evidence? Or am I misunderstanding something?
The best source to look at here is Probability is Subjectively Objective. You cannot (in the Bayesian sense) believe whatever you ‘want’. There is precisely one set of beliefs to which you are epistemically entitled given your current evidence, even though I am obliged to form a different set of beliefs given what I have been exposed to.
Typo in the link syntax. Corrected: Probability is Subjectively Objective.
Isn’t this what Bayesianism is all about—reaching the most likely conclusion in the face of weak or inconclusive evidence? Or am I misunderstanding something?
Reaching the most likely conclusion while uncertain, yes. But that doesn’t mean believing things without evidence.
One anomaly does remain in that 6 of the alleged hijackers have turned up alive, but I wouldn’t call that enough of an anomaly to be worth worrying about.
Really? I’d worry about that. That would be a big deal. At the least it would be really embarrassing for the FBI. But it isn’t true either!
But that doesn’t mean believing things without evidence.
Lacking sufficient resources (time, energy, focus) to be able to enumerate one’s evidence is not the same as not having any. I believe that I have sufficient evidence to believe what I believe, but I do not currently have a transcript of the reasoning by which I arrived at this belief.
But it isn’t true either!
What is your evidence that it isn’t true? Here’s mine. Note that each claim is footnoted with a reference to a mainstream source.
What is your evidence that it isn’t true? Here’s mine.
What you provide is evidence that some people shared names and some other data with the hijackers. You haven’t shown that the actual people identified by the FBI later turned up alive.
Here’s Wikipedia on the subject.