I get the feeling that there must be an “anthropic weirdness” literature out there that I don’t know about. I don’t know how else to explain why no one else is reacting to these paradoxes in the way that seems to me to be obvious. But perhaps my reaction would be quickly dismissed as naïve by those who have thought more about this.
The “obvious” reaction seems to me to be this:
The winner of the lottery, or Barack Obama for that matter, has no more evidence that he or she is in a holodeck than anyone else has.
Take the lottery winner. We all, including the winner, made the same observation. No one, including the winner, observed anything unusual. What we all saw was this: someone won the lottery that week. This is not an uncommon event; in many weeks, someone wins the lottery.
Perhaps people are confused because the winner will report this shared observation by saying
“I won the lottery this week.”
And, indeed, in that regard, the winner is unique. But, in the very way that I formulated this fact, the referent of “I” is defined to be the winner. Therefore, the above remark is logically equivalent to
“The winner of the lottery this week won the lottery this week.”
That hardly seems the sort of surprising evidence that might lead one to suspect holodecks. Moreover, it’s what we all observed. With regards to evidence for holodecks or what have you, the winner is not in a special position.
Maybe people think, “But the winner predicted what the numbers would be beforehand, and he or she then observed those predictions come true. That gives the winner strong evidence for the false conclusion that he or she can predict lotteries.”
But that conclusion just doesn’t follow. We all observed the same thing: Millions of people tried to guess the numbers, and one (or a few) got them right. That’s all that any of us saw. The number of correct predictions that we all saw was perfectly consistent with chance.
If the winner were unaware of all the other people who tried to guess the numbers, then he or she would be in trouble. Then he or she might validly reason “Just one person tried to guess the numbers, and that person got it right. Therefore, that person must have a special ability to predict the numbers.” That’s the person I pity, someone who had the misfortune to be exposed to extremely misleading observations. But normal lottery winners are not in that position.
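To put rough numbers on this (the odds and the number of players below are invented for illustration, not real lottery figures): under pure chance, “someone among millions of guessers got it right” is close to a sure thing, while “the lone guesser got it right” would be a one-in-ten-million fluke.

```python
# Rough sketch of the argument above. Both numbers are assumptions made up
# for illustration, not real lottery figures.
p_correct = 1 / 10_000_000   # chance that a single guess matches the winning numbers
n_players = 30_000_000       # number of independent guesses made that week

# Probability that at least one of the many guessers is right, by chance alone:
p_someone_right = 1 - (1 - p_correct) ** n_players
print(f"P(someone guesses right | chance) ~ {p_someone_right:.2f}")   # about 0.95

# Probability that a *lone* guesser is right, by chance alone:
print(f"P(the only guesser is right | chance) = {p_correct:.0e}")     # 1e-07
```

So the observation we all share carries essentially no surprise, and only the imaginary lone guesser sees something that chance struggles to explain.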
I also don’t see the asymmetry in the Quantum Theory of Immortality scenario. You and your friend both make the same observation: the version of you in the Everett branch where the gun doesn’t go off doesn’t get shot. Assuming that you both believe Many-Worlds, you both know that there are scads of branches out there where both your friend and your bullet-punctured remains “observed” (i.e., recorded in their physical structure) the gun’s firing. And if you weren’t convinced of Many-Worlds, then you would likely conclude that your model of physics is wrong because of the high probability that it assigned to the gun’s firing. Rather than conclude that Many-Worlds is true, you would probably throw out QM altogether. (You might do this eventually even if you did go in believing Many-Worlds.) But, again, you have no privileged position over your friend here, because you don’t see anything that he doesn’t see.
Am I missing something? Are these paradoxes really this easy to dismiss?
The idea of a holodeck is that it’s a simulated reality centred around you. In fact, many, most, or all of the simulated people in the holodeck may not be conscious observers at all.
So, either I am one of 6 billion conscious people on Earth, or I am the centre of some relatively tiny simulation. Winning the lottery seems like evidence for the latter, because if I am in a holodeck, interesting things are more likely to happen to me.
As you say, when someone wins the lottery, all 6 billion people on Earth get the same information. But that’s assuming they’re real in the first place, and so seems to beg the question.
I’m not yet seeing that other people’s consciousness per se is relevant here. All that matters is that there be a vast pool of potential winners, conscious or otherwise. All that I (the winner, say) observed was that one of the members of this pool won.
If my prior belief had been that every member of the pool had an equal probability of winning, then I have no new evidence for the holodeck hypothesis after I observe my winning as opposed to any other member’s. I would have predicted in advance that some member of the pool would win that week, and that’s what I saw.
However, I take your point to be that it would not be rational to suppose that there were millions and millions of potential winners, each with an equal chance of winning. So, I now concede that initially there is a certain asymmetry between the lottery winner and a non-winner: The non-winner initially has stronger evidence that he or she was among the pool of potential winners, and that the odds of winning were distributed evenly throughout that pool. Of course, the winner has strong evidence for this, too. But I agree that the non-winner’s evidence is initially even stronger.
However, I disagree that these respective bodies of evidence are incommunicable, as Eliezer claimed. If I, the winner, observe you, the non-winner, sufficiently closely, then I will eventually have as much evidence as you have that you were a potential winner who had the same chance that I had. (And if it matters, I will eventually have as much evidence as you have that you are conscious. I side with Dennett in denying you an in-principle privileged access to your own consciousness.)
In the event that you win, you gain the information that a conscious person has won the lottery. When someone else wins, you merely gain the information that a “person” who may or may not be conscious has “won the lottery”.
The holodeck hypothesis predicts that interesting events are more likely to happen to conscious persons. Since you know that you are conscious, if you receive more than your fair share of interesting events, this seems to be (rather weak, but still real) evidence for the holodeck hypothesis.
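For concreteness, here is that update written out as a bare odds calculation; every number in it (the prior on the holodeck, the factor by which a holodeck favours its occupant) is a placeholder I made up, since the argument only depends on the likelihood ratio being greater than one.

```python
# Hedged sketch of the Bayesian update described above. All numbers are
# invented placeholders, not claims about actual probabilities.
p_win_if_real     = 1 / 10_000_000        # my chance of winning as one of billions of real people
holodeck_boost    = 1_000                 # assumed factor by which a holodeck favours its occupant
p_win_if_holodeck = holodeck_boost * p_win_if_real

prior_odds       = 1 / 1_000_000          # assumed prior odds (holodeck : ordinary reality)
likelihood_ratio = p_win_if_holodeck / p_win_if_real
posterior_odds   = prior_odds * likelihood_ratio

print(f"likelihood ratio = {likelihood_ratio:.0f}")      # 1000
print(f"posterior odds   = {posterior_odds:.4f}")        # 0.0010, still long odds against
```

On these made-up numbers the win multiplies the odds on the holodeck by a thousand and still leaves it very unlikely, which is one way of cashing out “rather weak, but still real” evidence.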
I will eventually have as much evidence as you have that you are conscious.
For as long as you are studying me, yes. And then afterwards I get deleted and what you see of me is again just a few lines in an algorithm using up a couple of CPU cycles every hour.
(This post brought to you by universe.c, line 22,454,398,462,203)
Heh, true. But I confront the same possibility with regards to my observation of my own consciousness.
You believe in p-zombies?
No. But the simulation doesn’t need to run perfect simulations of humans who aren’t currently the focus of the, uh, holodeck customer’s attention.
You are missing something, and would benefit greatly from reading Nick Bostrom’s Anthropic Bias:
http://books.google.com/books?id=TZ5FLwnCTMAC&dq=nick+bostrom+anthropic&printsec=frontcover&source=bn&hl=en&ei=ImbaSe_GO-LVlQfXqLnoDA&sa=X&oi=book_result&ct=result&resnum=4
I found Chapters 1-5 on Bostrom’s website at
http://www.anthropic-principle.com/book/
Will those chapters explain the error in my thinking?
ETA: If someone could summarize the rebuttal contained in Bostrom’s book, I would also appreciate it.
I tentatively agree with all the points you make above. This is a general principle: it shouldn’t matter where or when the mind making a decision is; the decision should come out the same, given the same evidence. In the case of instrumental rationality, this results in timeless decision theory (at least my variety of it), where the mind, by its own choice, makes the same decision that its past instance would’ve precommitted to making. In the case of the Prisoner’s Dilemma, the same applies to the conclusions reached by the players running in parallel (as a special case). And in cases of anthropic hazard, the same conclusions should be reached by the target of the paradoxes and by the other agents.
The genuine problems begin when the mind gets directly copied or otherwise modified in such a way that the representation of evidence gets corrupted, becoming incorrect for the target environment. Another source of genuine problems is indexical uncertainty, as in the Sleeping Beauty problem, a case I haven’t yet thought through carefully, and one that just might invalidate the whole of the above position on anthropics.
This is exactly what I was thinking. That someone won the lottery isn’t improbable at all, and shouldn’t be evidence of something weird going on, even for the person who won. Being the person who won the lottery three weeks in a row seems like something that shouldn’t happen, but being right next to that guy seems like it would provide the same evidence.
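To put illustrative numbers on that last intuition (the odds and player counts are again invented): a three-week winning streak is astronomically unlikely under chance even after accounting for how many people play, and the winner and the bystander observe exactly the same streak.

```python
# Illustrative arithmetic only; the odds and player counts are assumptions.
p_win     = 1 / 10_000_000   # chance that a given ticket wins in a given week
n_players = 30_000_000       # tickets sold each week
n_weeks   = 520              # about a decade of weekly draws

# Chance that one particular person wins three specified weeks in a row:
p_specific_streak = p_win ** 3                                # 1e-21

# Rough upper bound on the chance that *anyone* wins three weeks in a row
# at some point during the decade: someone wins a given week, and that same
# person then wins the next two weeks as well.
p_any_streak = n_weeks * (1 - (1 - p_win) ** n_players) * p_win ** 2
print(f"{p_specific_streak:.0e}")   # 1e-21
print(f"{p_any_streak:.0e}")        # about 5e-12
```

Either way it is the same roughly one-in-10^11-per-decade observation, and it is just as available to the person standing next to the three-time winner as to the winner himself.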