I don’t rule out mundane explanations. Hence my repeated disclaimers on each use of the word “haunted.” If anything “supernatural” exists, it isn’t supernatural, it’s merely natural, and we simply haven’t pinned down what’s going on yet. Empiricism and reductionism don’t get broken.
Fine, replace ‘mundane causes’ with ‘mundane causes minus cause X’ and ‘supernatural’ with ‘cause X’ in my examples. -_-
I could construct related examples without any anthropic reasoning at all:
And they fail. In the desert island case, Aumann is perfectly applicable: if you have more evidence than he does, then this will be incorporated appropriately; in fact, the desert island case is a great example of Aumann in practice: that you’ve been lying to him merely shows that ‘disagreements are not honest’ (you are the dishonest party here).
The fact that it is happening to me, rather than another person, is a kind of contextual framing, in much the same sense that calling heads first frames the coin-flipping event.
How so? You didn’t predict it would be a haunted house before you went, to point out the most obvious disanalogy.
I feel like your disagreement is getting a little slippery here.
My rejection of Aumann is that there is no common knowledge of our posteriors. It’s not necessary for me to have lied to him before, after all; I could have been trying to cheer him up entirely honestly.
If I -had- predicted it would be a haunted house, I’d be suspicious of any evidence that suggested it was. The point isn’t the prediction—prediction is just one mechanism of framing an outcome. The point is in the priors; my prior odds of -somebody- experiencing a series of weird events in a given house are pretty high; there are a lot of people out there to experience such weird events, and some of them will experience several. My prior odds of -me- experiencing a series of weird events in a given house should be pretty low. It’s thus much more significant for -me- to experience a series of weird events in a given house than for some stranger who I wouldn’t have known about except for their reporting such. If I’m not updating my priors after being surprised, what am I doing?
It’s not necessary for me to have lied to him before, after all; I could have been trying to cheer him up entirely honestly.
Then why does he distrust you? If you have never lied and will never lie in trying to cheer him up, then he is wrong to distrust you and this is simply an example of irrationality and not uncommunicable knowledge; if he is right to suspect that you or people like you would lie in such circumstances, then ‘disagreements are not honest’ and this is again not uncommunicable knowledge.
The point is in the priors; my prior odds of -somebody- experiencing a series of weird events in a given house are pretty high; there are a lot of people out there to experience such weird events, and some of them will experience several. My prior odds of -me- experiencing a series of weird events in a given house should be pretty low. It’s thus much more significant for -me- to experience a series of weird events in a given house than for some stranger who I wouldn’t have known about except for their reporting such. If I’m not updating my priors after being surprised, what am I doing?
And you talk about me being slippery. We’re right back to where we began:
1.) to the extent you have any knowledge in this case of observing a rare but known event, the knowledge is communicable to an outsider who can also update; it is no more ‘significant’ to you than to a stranger, and any significance may just reflect known cognitive biases like anecdotes, salience, base-rate neglect, etc. (and fall under the irrationality rubric)
2.) to the extent that this knowledge is anthropic, one can argue that this is uncommunicable, but anthropic arguments are so divergent and unreliable that it’s not clear you have learned uncommunicable knowledge rather than found yet another case in which anthropic arguments are unreliable and give absurd conclusions.
You have not shown any examples which simultaneously involve uncommunicable knowledge that does not involve anthropics (what you are claiming is possible) and rationality and honesty on the part of all participants.
You’re presupposing that trustworthiness is communicable, or that rationality demands trusting somebody absent evidence to do otherwise. There’s definitely incommunicable knowledge—what red looks like to me, for example. You’re stretching “rationality” to places it has no business being to defend a proposition, or else adding conditions (honesty) to make the proposition work.
What, exactly, are you calling anthropics? What I’m describing doesn’t depend on either SIA or SSA. If you’re saying that I’m depending upon an argument which treats any given observer as a special case—yes, yes I am, that is in fact the thrust of my argument. Your argument against anthropics was that it leads to “absurd” results. However, your judgment of “absurd” seems tautological; you seem to be treating the idea of being unable to arrive at the same posterior odds as absurd in itself. That’s not an argument. That’s begging the question.
So—where exactly am I absurd in the following statements:
1.) My prior odds of some stranger experiencing highly unlikely circumstances (and me hearing about them—assuming such circumstances are of general interest) should be relatively high
2.) My prior odds of me experiencing highly unlikely circumstances should be low
3.) Per Bayesian inference, given that the likelihood of the two distinct events is different, the posterior distribution is different
4.) Therefore, there is a different amount of information in something happening personally than in something happening to a stranger
Or, hopefully without error, as it’s been a while since I’ve mucked about with this stuff:
M is the event happening to me, O is the event happening to somebody else, X is some idea for which M and O are evidence, and Z is the population size (assuming random distribution of events):
p(X|M)=p(M|X)*P(X)/P(M)
Assuming X guarantees M and O, we get:
p(X|M)=P(X)/P(M)
p(X|O)=P(X)/P(O)
where p(M) = p(O) / Z
Which means
p(X|M) = p(X|O) * Z
Which is to say, M is stronger evidence than O for X:
p(X|M, M1) = p(M|X)*P(X|M1)/p(M|M1)
p(X|O, O1) = p(O|X)*P(X|O1)/p(O|O1)
Using the above assumption that X guarantees M and O:
p(X|M, M1) = p(X|M1)/P(M|M1)
p(X|O, O1) = p(X|O1)/P(O|O1)
Substituting, again where P(O1) = p(M1) * Z, and where p(M1|M) and p(O1|O) are both 1:
p(X|M, M1) = p(X|M1)/p(M|M1)
p(X|O, O1) = (P(X)/(Z*P(M1))) / ((Z*P(M)) / (Z*P(M1)))
= p(X)/(Z*P(M))
Or, in short—the posteriors are different. The information is different. There is a piece of incommunicable evidence when something happens to me as opposed to somebody else.
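Just to sanity-check the single-observation result above, here is a minimal Python sketch; the values of Z, p(X), and p(M) are illustrative numbers I am assuming, not anything established in this discussion:

    # Illustrative check that the stated assumptions force p(X|M) = Z * p(X|O).
    # All numbers are made up for the example; only the relationships matter.
    Z = 100            # population size
    p_X = 0.001        # prior on the hypothesis X
    p_M = 0.005        # prior probability the event happens to me
    p_O = Z * p_M      # prior probability it happens to somebody else, from p(M) = p(O)/Z

    # Assume X guarantees both M and O, so p(M|X) = p(O|X) = 1.
    p_X_given_M = 1 * p_X / p_M   # Bayes: p(X|M) = p(M|X) * p(X) / p(M)
    p_X_given_O = 1 * p_X / p_O   # Bayes: p(X|O) = p(O|X) * p(X) / p(O)

    print(p_X_given_M)                # ~0.2
    print(p_X_given_O)                # ~0.002
    print(p_X_given_M / p_X_given_O)  # ~100, i.e. Z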
You’re presupposing that trustworthiness is communicable, or that rationality demands trusting somebody absent evidence to do otherwise. There’s definitely incommunicable knowledge—what red looks like to me, for example. You’re stretching “rationality” to places it has no business being to defend a proposition, or else adding conditions (honesty) to make the proposition work.
The conditions are right there in the Aumann proofs, are they not? I’m not adding anything, I’m dividing up the possible outcomes: anthropics (questionable), communicable knowledge (contra you), or Aumann is inapplicable (honesty etc).
What I’m describing doesn’t depend on either SIA or SSA.
I’d be interested to see if you could prove that the result holds independently of them.
That’s not an argument. That’s begging the question.
That’s the point of the modus tollens vs modus ponens saying. You claim to derive a result, but using premises more questionable than the conclusion, in which case you may have merely disproven the premises via reductio ad absurdum. If this is begging the question (which it isn’t, since that’s when your premise contains the conclusion), then every proof by contradiction or reductio ad absurdum is question-begging.
Or, in short—the posteriors are different. The information is different. There is a piece of incommunicable evidence when something happens to me as opposed to somebody else.
Correct me if I am wrong, but in your example, M is not increased when O fails to happen—more concretely, you assume the number of spooked people you will hear of is constant—when it would be more appropriate to increase the number of observations of O by 1, since if you don’t go into the spooky house someone else does. Then you are merely deriving the uninteresting observation that if there are more events (by 1) consistent with X, X will be more likely. Well, yeah. But there is nothing special about one’s own observations in this case; if someone else went into the house and reported observations, you would update a little more, just like if you went into the house, and in both cases, more than if no one went into the house (or they went in and saw nothing).
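As a rough sketch of that point (the prior and the two likelihoods below are numbers I am inventing purely for illustration), the update depends only on how many trusted reports there are, not on whether I am the one reporting:

    # Illustrative only: the posterior on X depends on the count of (trusted) reports,
    # not on who made them. All probabilities here are invented for the example.
    p_X = 0.001                   # assumed prior on cause X
    p_weird_given_X = 0.9         # assumed chance a visit yields a weird-events report if X holds
    p_weird_given_not_X = 0.05    # assumed chance of such a report anyway

    def posterior(prior, reports, quiet_visits):
        # Posterior on X after `reports` confirming visits and `quiet_visits` uneventful ones.
        like_X = (p_weird_given_X ** reports) * ((1 - p_weird_given_X) ** quiet_visits)
        like_not_X = (p_weird_given_not_X ** reports) * ((1 - p_weird_given_not_X) ** quiet_visits)
        joint_X = prior * like_X
        joint_not_X = (1 - prior) * like_not_X
        return joint_X / (joint_X + joint_not_X)

    print(posterior(p_X, reports=0, quiet_visits=1))  # nobody saw anything: posterior falls below the prior
    print(posterior(p_X, reports=1, quiet_visits=0))  # one report, mine or a stranger's: posterior rises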
Also, your equations are messed up. I think you need to escape some stuff.
Aha! I think the issue here is that you’re thinking of it in terms of two identical observers. The observers aren’t identical in my arguments—one is post-hoc. Your example has made me realize where the discrepancy between our arguments is coming from: it’s the way I keep framing problems as being about the observer. Suppose I and a friend, Bob, are arguing about who goes in the house. In this case, there’s no practical difference between our evidence. The difference isn’t between me and other; the difference is between me and (other who I wouldn’t have known about except for said experience).
Bob and I are identical (I did say this wasn’t necessarily anthropic!) for the purposes of calculation. Bob is included in p(M).
Steve, who wrote a post on a rationality forum describing his experiences, is -not- identical with me for the purposes of calculation. Steve is included in p(O).
Does my argument make more sense now? Bob’s information is fully transferable—in terms of flipping coins, he called heads before flipping ten heads in a row. Steve’s information is -not-: he’s the guy who flipped ten heads in a row without calling anything.
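To put rough numbers on the coin-flip version (the population size below is an arbitrary figure I am assuming for illustration): ten heads from the person named in advance is roughly a one-in-a-thousand event, while “some reported stranger flipped ten heads” is close to certain once enough people are flipping.

    # Illustrative arithmetic for the Bob/Steve distinction; PEOPLE is a made-up figure.
    P_TEN_HEADS = 0.5 ** 10                        # one pre-specified flipper (Bob, or me) gets ten heads
    PEOPLE = 10_000                                # assumed number of people out there flipping coins
    p_someone = 1 - (1 - P_TEN_HEADS) ** PEOPLE    # at least one of them does, and can then report it

    print(P_TEN_HEADS)  # ~0.001: surprising if it happens to the person called in advance
    print(p_someone)    # ~0.9999: hardly surprising that some "Steve" exists to post about it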
(ETA: I have no idea how to make them look right. How do you escape stuff?)
In certain contexts an asterisk is a magic character and you need to precede it with a backslash to keep it from turning into <em> or </em>. To get
p(X|M, M1) = p(M|X)*P(X|M1)/p(M|M1)
do
p(X|M, M1) = p(M|X)\*P(X|M1)/p(M|M1)
Or you can just put equations in their own paragraphs that are indented by four spaces, in which case no characters will have their magic meaning. (This is how I did the above paragraph where the backslash is visible.)
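So the unescaped version can also be shown simply by indenting it four spaces:

    p(X|M, M1) = p(M|X)*P(X|M1)/p(M|M1)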
Is there a reference somewhere on LessWrong or the Wiki for the mark-up used in the comments?
There’s a “Show help” button on the right underneath comment fields. The quick reference it reveals includes a link to the wiki page.
The formatting language used is a (not totally bug-free) subset of Markdown.
*Laughs* I’m so used to useless help screens I ignored that button, looked for it manually on the wiki, couldn’t find it, and gave up. Thanks!