Real-Life Anthropic Weirdness
In passing, I said:
From a statistical standpoint, lottery winners don’t exist—you would never encounter one in your lifetime, if it weren’t for the selective reporting.
And lo, CronoDAS said:
Well… one of my grandmothers’ neighbors, whose son I played with as a child, did indeed win the lottery. (AFAIK, it was a relatively modest jackpot, but he did win!)
To which I replied:
Well, yes, some of the modest jackpots are statistically almost possible, in the sense that on a large enough web forum, someone else’s grandmother’s neighbor will have won it. Just not your own grandmother’s neighbor.
Sorry about your statistical anomalatude, CronoDAS—it had to happen to someone, just not me.
There’s a certain resemblance here—though not an actual analogy—to the strange position your friend ends up in, after you test the Quantum Theory of Immortality.
For those unfamiliar with QTI, it’s a simple simultaneous test of many-worlds plus a particular interpretation of anthropic observer-selection effects: You put a gun to your head and wire up the trigger to a quantum coinflipper. After flipping a million coins, if the gun still hasn’t gone off, you can be pretty sure of the simultaneous truth of MWI+QTI.
But what is your watching friend supposed to think? Though his predicament is perfectly predictable to you—that is, you expected before starting the experiment to see his confusion—from his perspective it is just a pure 100% unexplained miracle. What you have reason to believe and what he has reason to believe would now seem separated by an uncrossable gap, which no amount of explanation can bridge. This is the main plausible exception I know to Aumann’s Agreement Theorem.
Pity those poor folk who actually win the lottery! If the hypothesis “this world is a holodeck” is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)
It’s a sad situation to be in—but don’t worry: it will always happen to someone else, not you.
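For concreteness, here is a toy Bayes update behind the lottery winner’s predicament; every number in it (the prior, the holodeck likelihood) is an illustrative assumption, not a figure from the post:

```python
# Toy Bayesian update: how strongly does winning a ~1-in-10^8 lottery
# favor "I'm in a holodeck"? All numbers are illustrative assumptions.

def posterior_holodeck(prior_holodeck, p_win_real, p_win_holodeck):
    """P(holodeck | I won) via Bayes' rule over the two hypotheses."""
    joint_holo = prior_holodeck * p_win_holodeck
    joint_real = (1 - prior_holodeck) * p_win_real
    return joint_holo / (joint_holo + joint_real)

prior = 1e-6        # assumed prior on being in a holodeck
p_win_real = 1e-8   # lottery odds in the ordinary world (assumed)
p_win_holo = 0.1    # holodecks favor dramatic events for their occupant (assumed)

print(posterior_holodeck(prior, p_win_real, p_win_holo))  # ≈ 0.91
```

Under these made-up numbers, a single win moves the holodeck hypothesis from one-in-a-million to better than nine-in-ten, which is the sense in which the winner’s evidence is overwhelming even though it cannot be transferred to anyone else.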
So with what probability should Barack Obama believe he is on a holodeck, and how should this belief influence his behavior?
And not only Obama. The closer you are to the center of human history, the more likely you are to be on a holodeck. People simulating others should be more likely to simulate people in historically interesting times, and people simulating themselves for fun and blocking their memory should be more likely to simulate themselves as close to interesting events as possible.
And...if Singularity theory is true, the Singularity will be the most interesting and important event in all human history. Now, all of us are suspiciously close to the Singularity, with a suspiciously large ability to influence its course. Even I, a not-too-involved person who’s just donated a few hundred dollars to SIAI and gets to sit here talking to the SIAI leadership each night, am probably within the top millionth of humans who have ever lived in terms of Singularity “proximity”.
And Michael Vassar and Eliezer are so close to the theorized center of human history that they should assume they’re holodecking with probability ~1.
After all, which is more likely from their perspective—that they’re one of the dozen or so people most responsible for creating the Singularity and ensuring Friendly AI, or that they’re some posthuman history buff who wanted to know what being the guy who led the Singularity Institute was like?
(The alternate explanation, of course, is that we’re all on the completely wrong track and that we’re simply in the larger percentage of humans who think they’re extremely important.)
Still, I think that in most EU calculations, the weight of “holy crap this is improbable, how am I actually this important?” on the one side, and of “well, if I am this dude, I’d really better not @#$% this up” on the other should more or less scale together. I don’t think I’m stepping into Pascalian territory here.
The “with probability ~1” part is wrong, AFAICT. I’m confused about how to think about anthropics, and everybody I’ve talked to is also confused as far as I’ve noticed. Given this confusion, we can perhaps obtain simulation-probabilities by estimating the odds that our best-guess means of calculating anthropic probabilities is reliable, and then obtaining an estimate that we’re in a holodeck conditional on our anthropic calculation methods being correct. But it would be foolish to assign more than, say, a 90% estimate to “our best-guess means of calculating anthropic probabilities is basically correct”, unless someone has a better analysis of such methods than I’d expect.
Shouldn’t the fact that they can probably imagine better versions of themselves reduce this probability? If you’re in a holodeck, in addition to putting yourself at the center of the Singularity, why wouldn’t you also give yourself the looks of Brad Pitt and the wealth of Bill Gates?
We are actually in a ‘chip-punk’ version of the past in which silicon based computers became available all the way back in the late 20th century. The original Eliezer made friendly AI with vacuum tubes.
The more powerful computers are when you turn 15, the higher the difficulty level.
No, if they are in a historical simulation. The real architects of the Singularity weren’t billionaires.
No, if they are in some kind of holo-game, for the same reason that people playing computer games don’t hack them to make their character level infinity and impervious to bullets. Where would be the fun in that?
Not really. Think of Nozick’s experience machine. If you were to use the machine to simulate yourself in a situation extremely close to the center of the singularity, would you also give yourself the looks of Brad Pitt and the wealth of Bill Gates?
a) Would this not make the experience feel so ‘unreal’ that your simulated self would have trouble believing it’s not a simulation, and therefore not enjoy the simulation at all? In constructing the simulation, you need to define how many positive attributes you can give your simulated self before it realizes that its situation is so improbable that it must be a simulation. I’d use caution and not make my simulated self too ‘lucky.’
b) More importantly, you may believe that a) doesn’t apply, and that your simulated self would take the blue pill, and willingly choose to continue to live in the simulation. Even then, having great looks and great wealth would probably distract you from creating the singularity. All I’d care about is the singularity, and I’d design the simulation so that I have a comfortable, not too distracting life that would allow me to focus maximally on the singularity, and nothing else.
I agree these are possibilities. However, it seems to me that if you’re going to use improbable good fortune in some areas as evidence for being in a holodeck, it only makes sense to use misfortune (or at least lack of optimization, or below-averageness) in other areas as evidence against it. It doesn’t sit well with me to write off every shortcoming as an intentional contrivance to make the simulation more “real” for you, or to give you additional challenges. Of course, we’re only talking a priori probability here; if, say, Eliezer directly catalyzed the Singularity and found himself historically renowned, the odds would have to go way up.
The alternate explanation is of course far more likely a priori.
How likely is it that, say, at least 10 people think they’re Barack Obama, only one of which is correct?
Being mistaken about your importance is different from, and much more common than, being mistaken about who/where you are.
Unless most conscious observers are ancestor simulations of people in positions of historical importance, in which case most people are correct about the importance of the position and incorrect about who/where they are.
(Vide Doomsday Argument, Simulation Argument, and the “surprise” of finding yourself on Ancient Earth rather than much later in a civilization’s development. Of course these are all long-standing controversies in anthropics, I’m just raising their existence.)
Among people who believe themselves to be Barack Obama, most are mistaken about their position rather than the importance of the position.
Agreed.
Not all that unlikely. There have certainly been a lot of people who have believed themselves to be Napoleon or Jesus. I’d say 10 Obamas seems a little much right now, but I wouldn’t be at all surprised by, say, three.
The idea of eternal inflation might cut against this. Under eternal inflation, new universes are always being created at an exponentially increasing rate, so there are always far more young than old universes. So under this theory, if you are uncertain whether you are at a relatively early (pre-singularity) or relatively late (post-singularity) point in the universe, you are almost certainly in the relatively early state, because there are so many more universes in that state.
Note: Eliezer and Robin object to this idea for reasons I don’t understand.
James, I don’t think inflation implies there are more early than late universes, nor do I object to inflation. I just don’t think inflation solves time-asymmetry.
Note that the alternate explanation is MUCH more probable.
I don’t think it should influence his behavior very much. Even if he assigns strong probability to being in a holodeck, his expected utility calculations should, I think, be dominated by the case in which he is in fact PotUS, since a president is in a better position to purchase utility.
I think The Onion has this one covered.
So if you find you ARE that friend, presumably you’d have no fear of stepping in front of that gun barrel yourself for a few million flips right afterwards. I mean it’s pretty convincing proof. Then you get to see the confusion in each other’s face!
Though you’re both more likely to end up mopping your friend’s blood off the floor.
On the whole, I think a good friend probably doesn’t let a friend test the Quantum Theory of Immortality.
Even if QTI is true, a good friend doesn’t test it, for fear of leaving behind (many copies of) a bereaved friend.
I don’t believe so. While the person who underwent the experiment has a completely convincing proof of MWI+QTI, the friend doesn’t. What he saw is just as unlikely under MWI as it is under Copenhagen.
If you didn’t believe in MWI+QTI (i.e. had a very low prior for it), and you saw your friend claim to do a billion coin flips and not get shot, then even if you “refuse” to significantly increase your belief in MWI+QTI, wouldn’t you at least increase your belief in the possibility that the gun is broken, and thus would not shoot you?
I would assume that the gun was broken and thus would not shoot me.
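That intuition is easy to check in log space. The prior on a secretly broken gun below is an arbitrary, deliberately tiny assumption, and even so it dominates:

```python
import math

# After a friend survives n quantum coin flips, compare two hypotheses:
# "the gun is secretly broken" vs. "the gun works and we got lucky."
# The 1e-9 prior on a broken gun is an arbitrary, deliberately tiny assumption.
n = 1_000_000

log_post_broken = math.log(1e-9) + 0.0                       # survival certain
log_post_works = math.log(1 - 1e-9) + n * math.log(0.5)      # 2^-n survival

log_odds = log_post_broken - log_post_works
print(log_odds)  # ≈ 693,000 nats in favor of "broken gun"
```

Working in log space matters: `0.5 ** 1_000_000` underflows to zero as a float, but the comparison itself is trivial, which is why the surviving friend (and the survivor, if he is honest) should reach for a mundane explanation long before revising physics.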
Although it was not via the lottery, my wife’s sister won one million dollars on a TV show in the 1980s called “The one million dollar chance of a lifetime”. It turns out that she and her husband would get $40,000 a year for 25 years, but they got divorced a few years later, so she received $20,000 a year until recently. It was quite a contrast between the show’s promise to “make you a millionaire” and the actual very modest improvement in lifestyle from an extra $20,000 a year.
Anyway, none of you know her so this doesn’t disprove the principle for you, but maybe it makes it a little more likely that I am in a simulation. The real problem with this conclusion is that it seems to require believing that most people (i.e. all of you readers among others) are zombies, which seems untenable. Otherwise my sister in law’s presence on the holodeck puts me there, and you as well.
I get the feeling that there must be an “anthropic weirdness” literature out there that I don’t know about. I don’t know how else to explain why no one else is reacting to these paradoxes in the way that seems to me to be obvious. But perhaps my reaction would be quickly dismissed as naïve by those who have thought more about this.
The “obvious” reaction seems to me to be this:
The winner of the lottery, or Barack Obama for that matter, has no more evidence that he or she is in a holodeck than anyone else has.
Take the lottery winner. We all, including the winner, made the same observation. No one, including the winner, observed anything unusual. What we all saw was this: someone won the lottery that week. This is not an uncommon event. Someone wins the lottery in many weeks.
Perhaps people are confused because the winner will report this shared observation by saying
“I won the lottery this week.”
And, indeed, in that regard, the winner is unique. But, in the very way that I formulated this fact, the referent of “I” is defined to be the winner. Therefore, the above remark is logically equivalent to
“The winner of the lottery this week won the lottery this week.”
That hardly seems the sort of surprising evidence that might lead one to suspect holodecks. Moreover, it’s what we all observed. With regards to evidence for holodecks or what have you, the winner is not in a special position.
Maybe people think, “But the winner predicted what the numbers would be beforehand, and he or she then observed those predictions come true. That gives the winner strong evidence for the false conclusion that he or she can predict lotteries.”
But that conclusion just doesn’t follow. We all observed the same thing: Millions of people tried to guess the numbers, and one (or a few) got them right. That’s all that any of us saw. The number of correct predictions that we all saw was perfectly consistent with chance.
If the winner were unaware of all the other people who tried to guess the numbers, then he or she would be in trouble. Then he or she might validly reason “Just one person tried to guess the numbers, and that person got it right. Therefore, that person must have a special ability to predict the numbers.” That’s the person I pity, someone who had the misfortune to be exposed to extremely misleading observations. But normal lottery winners are not in that position.
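The “perfectly consistent with chance” claim is just a binomial fact; with illustrative ticket counts (both numbers assumed for the example):

```python
# With m independent tickets each having a 1-in-N chance, the odds that
# *somebody* wins in a given week are unremarkable. Numbers are illustrative.
m = 10_000_000       # tickets sold this week (assumed)
N = 100_000_000      # odds against any single ticket (assumed)

p_someone_wins = 1 - (1 - 1 / N) ** m
print(p_someone_wins)  # ≈ 0.095: roughly one week in ten has a winner
```

The shared observation “someone won this week” is thus an ordinary-probability event for everyone, winner included, so long as the winner remembers the size of the pool.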
I also don’t see the asymmetry in the Quantum Theory of Immortality scenario. You and your friend both make the same observation: the version of you in the Everett branch where the gun doesn’t go off doesn’t get shot. Assuming that you both believe Many-Worlds, you both know that there are scads of branches out there where both your friend and your bullet-punctured remains “observed” (i.e., recorded in their physical structure), the gun’s firing. And if you weren’t convinced of Many-Worlds, then you will likely conclude that your model of physics is wrong because of the high probability that it assigned to the gun’s firing. Rather than conclude that Many-Worlds is true, you will probably throw out QM altogether. (You might do this eventually even if you did go in believing Many-Worlds.) But, again, you have no privileged position over your friend here, because you don’t see anything that he doesn’t see.
Am I missing something? Are these paradoxes really this easy to dismiss?
The idea of a holodeck is that it’s a simulated reality centred around you. In fact, many, most, or all of the simulated people in the holodeck may not be conscious observers at all.
So, either I am one of 6 billion conscious people on Earth, or I am the centre of some relatively tiny simulation. Winning the lottery seems like evidence for the latter, because if I am in a holodeck, interesting things are more likely to happen to me.
As you say, when someone wins the lottery, all 6 billion people on Earth get the same information. But that’s assuming they’re real in the first place, and so seems to beg the question.
I’m not yet seeing that other people’s consciousness per se is relevant here. All that matters is that there be a vast pool of potential winners, conscious or otherwise. All that I (the winner, say) observed was that one of the members of this pool won.
If my prior belief had been that every member of the pool had an equal probability of winning, then I have no new evidence for the holodeck hypothesis after I observe my winning as opposed to any other member’s. I would have predicted in advance that some member of the pool would win that week, and that’s what I saw.
However, I take your point to be that it would not be rational to suppose that there were millions and millions of potential winners, each with an equal chance of winning. So, I now concede that initially there is a certain asymmetry between the lottery winner and a non-winner: The non-winner initially has stronger evidence that he or she was among the pool of potential winners, and that the odds of winning were distributed evenly throughout that pool. Of course, the winner has strong evidence for this, too. But I agree that the non-winner’s evidence is initially even stronger.
However, I disagree that these respective bodies of evidence are incommunicable, as Eliezer claimed. If I, the winner, observe you, the non-winner, sufficiently closely, then I will eventually have as much evidence as you have that you were a potential winner who had the same chance that I had. (And if it matters, I will eventually have as much evidence as you have that you are conscious. I side with Dennett in denying you an in-principle privileged access to your own consciousness.)
In the event that you win, you gain the information that a conscious person has won the lottery. When someone else wins, you merely gain the information that a “person” who may or may not be conscious has “won the lottery”.
The holodeck hypothesis predicts that interesting events are more likely to happen to conscious persons. Since you know that you are conscious, if you receive more than your fair share of interesting events, this seems to be (rather weak, but still real) evidence for the holodeck hypothesis.
For as long as you are studying me, yes. And then afterwards I get deleted and what you see of me is again just a few lines in an algorithm using up a couple of CPU cycles every hour.
(This post brought to you by universe.c, line 22,454,398,462,203)
Heh, true. But I confront the same possibility with regards to my observation of my own consciousness.
You believe in p-zombies?
No. But the simulation doesn’t need to run perfect simulations of humans who aren’t currently the focus of the, uh, holodeck customer’s attention.
You are missing something, and would benefit greatly from reading Nick Bostrom’s Anthropic Bias:
http://books.google.com/books?id=TZ5FLwnCTMAC&dq=nick+bostrom+anthropic&printsec=frontcover&source=bn&hl=en&ei=ImbaSe_GO-LVlQfXqLnoDA&sa=X&oi=book_result&ct=result&resnum=4
I found Chapters 1-5 on Bostrom’s website at
http://www.anthropic-principle.com/book/
Will those chapters explain the error in my thinking?
ETA: If someone could summarize the rebuttal contained in Bostrom’s book, I would also appreciate it.
I tentatively agree with all the points you make above. This is a general principle: it shouldn’t matter where or when the mind making a decision is, the decision should come out the same, given the same evidence. In the case of instrumental rationality, it results in the timeless decision theory (at least of my variety), where the mind by its own choice makes the same decision that its past instance would’ve precommited to make. In the case of prisoner’s dilemma, the same applies to the conclusions made by the players running in parallel (as a special case). And in the cases of anthropic hazard, the same conclusions should be made by the target of the paradoxes and by the other agents.
The genuine problems begin when the mind gets directly copied or otherwise modified in such a way that the representation of evidence gets corrupted, becoming incorrect for the target environment. Another source of genuine problems comes from indexical uncertainty, such as in the Sleeping Beauty problem, the case I didn’t carefully think about yet. Which just might invalidate the whole of the above position about anthropics.
This is exactly what I was thinking. That someone won the lottery isn’t improbable at all, and shouldn’t be evidence of something weird going on, even for the person who won. Being the person who won the lottery three weeks in a row seems like something that shouldn’t happen, but being right next to that guy seems like it would provide the same evidence.
Ha ha ha. Classic.
This is one of those stories you can show to would-be rationalists that will make them both laugh and think about probability. Well done.
Most conscious observers? I would think a universe/multiverse containing holodecks would still contain many people not in them. At best, you can conclude that most observers who don’t see a world containing holodecks are in holodecks.
Possibly significant: the friend has some incommunicable evidence of his own – that he is conscious, in a world without holodecks, and didn’t win the lottery – against (the winner)/(most observers) being in (a) holodeck(s).
Just for fun: Not only are we living in someone’s holodeck fantasy, it’s a badly written holodeck fantasy!
(Taken from http://davidbrin.blogspot.com/2005/10/holodeck-scenario-part-i.html)
...
David Brin’s answer is here: http://davidbrin.blogspot.com/2005/10/holodeck-scenario-part-ii.html
That’s the most amusing thing I’ve heard today. :)
What, we’re not allowed to express appreciation here, Mr. Downvoter?
After a lot of improbable things happen the main thing you have evidence for is that the universe is large enough to have improbable things happen. This could happen in MWI, or it could just happen in an ordinary very large universe. Or it could happen in a simulation that focuses on special events, so if your event is special this is also something that gets more support, relative to a small universe.
But I don’t at all see how such events give you evidence about what sort of large universe you live in. And I don’t see how winning the lottery is remotely unlikely enough to kick in such considerations.
I don’t see how the size of the universe makes any difference—isn’t it only the density of weird events that matters?
Unless the hypothesis under consideration is a particularly weird universe, the main way to get more weird events is to get more total events.
But if you get more weird events and more total events, the probability of a given event being weird remains constant.
If it worked the way you said, you could also conclude a large universe based on normal events. This would violate conservation of expected evidence.
As I alluded to in a previous discussion this sort of thing is veering quickly into the territory of the small world phenomenon in human social networks.
With something likely to be remarked on in idle chatter with casual acquaintances, such as winning a lottery, you end up with an unexpectedly large likelihood of becoming aware of a small number of links from yourself to someone who had (Unusual Event X) occur to them.
Wouldn’t any of several multiverse theories predict the survival outcome, and therefore you can’t conclude that the quantum MWI is correct? That is, a world which is single, yet contains you infinitely many times due to being spatially infinite, will also “collapse” every so often and let you survive. Or the simulation masters could be selectively manipulating the quantum events to make people who do weird experiments like this survive (no, I don’t think this is likely at all).
Why would observing your own survival make you extremely confident of quantum MWI as opposed to other multiverse or savior theories?
When you said that, it seemed to me that you were saying that you shouldn’t play the lottery even if the expected payoff—or even the expected utility—were positive, because the payoff would happen so rarely.
Does that mean you have a formulation for rational behavior that maximizes something other than expected utility? Some nonlinear way of summing the utility from all possible worlds?
If someone suggested that everyone in the world should pool their money together, and give it to one person selected at random (pretend for the sake of argument that utility = money), people would think that was crazy. Yet the idea of maximizing expected utility over all possible worlds assumes that an uneven distribution of utility to all your possible future selves is as good as an equitable distribution among them. So there’s something wrong with maximizing expected utility.
Broken intuition pump. The fact that money isn’t utility (has diminishing returns) is actually very important here. I, for one, don’t think I can envision pooling and redistributing actual utility, at least not well enough to draw any conclusions whatsoever.
Also, a utility function might not be defined over selves at particular times, but over 4D universal histories, or even over the entire multiverse. (This is also relevant to your happiness vs. utility distinction, I think.)
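The role diminishing returns plays in breaking the pooling intuition pump can be seen in a toy model with logarithmic utility (all numbers assumed for illustration):

```python
import math

# Toy model: n people with equal wealth and log (diminishing-returns) utility.
# Pooling all the money and giving it to one random person lowers total utility.
n = 1000          # number of people (assumed)
wealth = 100.0    # each person's starting wealth (assumed)
eps = 0.01        # residual left to the losers so log is defined (assumed)

u_equal = n * math.log(wealth)
u_pooled = math.log(n * wealth) + (n - 1) * math.log(eps)

print(u_equal > u_pooled)  # True: the even distribution wins under log utility
```

This is the sense in which “money isn’t utility” matters: if you redistributed actual (linear) utility instead, the two schemes would tie in expectation, and the intuition pump would no longer bite.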
What I’m getting at is that the decision society makes for how to distribute utility across different people, is very similar to the decision you make for how to distribute utility across your possible future selves.
Why do we think it’s reasonable to say that we should maximize average utility across all our possible future selves, when no one I know would say that we should maximize average utility across all living people?
Nothing so exotic. In game theory agents can be risk-averse, risk-neutral or risk-loving. This translates to convexity/concavity of the utility function.
The winning payoff would have to be truly enormous for the expected utility to be positive.
So I guess I’ll just go on posting disclaimers: Phil Goetz has an unusually terrible ability to figure out what I’m saying.
While you appear to be right about Phil’s incorrect interpretation, I don’t think he meant any malice by it; however, you appear to me to have meant malice in return. So, I think your comment borders on unnecessary disrespect, and if it were me who had made the comment, I would edit it to make the same point while sounding less hateful. If people disagree with me, please downvote this comment. (Though admittedly, if you edit your comment now, we won’t get good data, so you probably should leave it as is.)
I admit that I’m not factoring in your entire history with Phil much, so you may have further justification of which I’m unaware, but I would expect my view to be shared even more strongly by casual readers who don’t know either of you well. Maybe in that case, a comment like yours is fine, but only if delivered privately.
Agreed. Also, saying somebody is wrong and then not bothering to explain how does come across as somewhat rude, as it forces the other person to try to guess what they did wrong instead of providing more constructive feedback.
Phil does this a lot, usually in ways which present me with the dilemma of spending a lot of time correcting him, or letting others pick up a poor idea of what my positions are (because people have a poor ability to discount this kind of evidence). I’ve said as much to Phil, and he apparently thinks it’s fine to go on doing this—that it’s good for him to force me to correct him, even though others don’t make similar misinterpretations. Whether or not this is done from conscious malice doesn’t change the fact that it’s a behavior that forces me to expend resources or suffer a penalty, which is game-theoretically a hostile act.
So, to discourage this unpleasant behavior, it seems to me that rather than scratching his itch for his benefit (encouraging repetition), I should make some reply which encourages him not to do it again.
I would like to just reply: “Phil Goetz repeatedly misinterprets what I’m saying in an attempt to force me to correct him, which I consider very annoying behavior and have asked him to stop.” If that’s not what Phil intends… well, see how it feels to be misinterpreted, Phil? Unfortunately this comes too close to lying for my tastes, so I’ll have to figure out some similar standard reply. Maybe even a standard comment to link to each time he does this.
Ok, I soften my critique given your reply which made a point I hadn’t fully considered.
It sounds like the public disrespect is intentional, and it does have a purpose. To be a good thing to do, you need to believe, among other things:
Publicly doing that is more likely to make him stop relative to privately doing it. (Seems plausible).
You’re not losing something greater than the wasted time by other people observing your doing it. (Unclear to me)
It would be better, I think, if you could just privately charge someone for the time wasted; but it does seem unlikely Phil would agree to that. I think your suggestion of linking to a fairly respectful but forceful reply works pretty well for the time being.
Sure. And my standard reply will be, “Eliezer repeatedly claims that I’m misinterpreting him in order to avoid addressing inconsistencies or ambiguities in what he has said.”
You’re doing it again.
Er, did you misparse? I think you read
Eliezer repeatedly claims that I’m (misinterpreting him in order to avoid addressing inconsistencies or ambiguities in what he has said)
I think he meant
Eliezer repeatedly claims (that I’m misinterpreting him) in order to avoid addressing inconsistencies or ambiguities in what he has said
I have to say, I disagree with much of what he says but PhilGoetz has never struck me as one of the site’s ne’er-do-wells.
You may not have noticed that I was accusing you of being insightful.
I’m trying to be sensitive to your issues about this. So how would you have suggested that I phrase my comment? I said, “This is what Eliezer seems to be saying”, and asked if that was what you were saying. I don’t know what you want. You seem to be saying (and I have to say things like this, because in order to have a conversation with someone you have to try to figure out what they mean) “Shut up, Phil.”
In this case, when I said you seemed to be saying that rational decision-making about playing the lottery does not mean maximizing expected utility, I was just being polite. You said it. I quote:
This says that the chance of winning the lottery is so low that you don’t need to do an expected utility calculation. I will not back down and pretend that I might be misinterpreting you in this instance. Maybe you meant to say something different, but this is what you said.
You’re tired of me trying to interpret what you say? Well, I’m tired of you trying to disclaim or ignore the logical consequences of what you say.
Eli tends to say, stylistically, “You will not ___” for what others, when they’re thinking formally, express as “You very probably will not ___.” This is only a language confusion between speakers. There are other related ones here; I’ll link to them later. Telling someone to “win” versus “try to win” is a very similar issue.
To be exact, I say this when human brains undergo the failure mode of being unable to discount small probabilities. Vide: “But there’s still a chance, right?”
That’s not what’s at issue. The statement still says that the chance of winning is so low as not to be worth talking about. That implies that one does not calculate expected utility. My interpretation is correct. Eliezer has written 3 comments in reply, and is still trying to present it as if what is at issue here is that I consistently misrepresent him.
I am not misrepresenting him. My interpretation is correct. As has probably often been the case.
“That implies that one does not calculate expected utility.”
My impression has been that Eliezer means X and writes "Y", where "Y" could be interpreted as meaning either X or Z; you say "Eliezer means Z, which implies this other obviously wrong thing"; then Eliezer becomes upset because you have misinterpreted him, and you become upset because he is ignoring your noting of the ambiguity of "Y". Then hilarity is spawned.
A data point for ya.
Ambiguities can simply be asked about. I might or might not answer, depending on whether I had time. Speaking for a person is a different matter.
The comment that started this now-tedious thread said:
Sounds like asking to me. I clearly was not claiming to know what you were thinking.
Phil, I think you’re interpreting his claim too literally (relative to his intent). He is only trying to help people who have a psychological inability to discount small probabilities appropriately. Certainly, if the lottery award grows high enough, standard decision theory implies you play. This is one of the Pascal’s-mugging variants (similarly, whether to perform hypothetical exotic physics experiments with a small probability of yielding infinite (or just extremely large) utility and a large probability of destroying everything), which is not fully resolved for any of us, I think.
You’re probably right. But I’m still irritated that instead of EY saying, “I didn’t say exactly what I meant”, he is sticking to “Phil is stupid.”
If a gun were put to my head and I had to decide right now, I agree with your irritation. However, he did make an interesting point about public disrespect as a means of deterrence which deserves more thinking about. If that method looks promising after further inspection, we’d probably want to reconsider its application to this situation, though it’s still unclear to me to what extent it applies in this case.
There’s also the consideration of total time expenditures on my part. Since the main reason I don’t respond at length to Goetz is his repeated behaviors that force me to expend large amounts of time or suffer penalties, elaborate time-consuming courtesies aren’t a solution either.
Agreed
Yup.
Phil may be more likely to misinterpret you than the most prolific contributors, but he is probably less likely to misinterpret you than most readers of LW. I understand that this may be beside the point to you. I empathize, and wish I could think of a solution.
OK, I don’t get this at all, but I totally understand the lottery example. I think Tyrrell McAllister raised this question, but only his other question was ever addressed. Are the two cases really the same? If so, how?
It’s true that, as the person next to the gun, you should expect to live with the same probability you give to the truth of the QTI. And that your friend should expect you to live with probability 2^(-n), where n is the number of coinflips. But for each branch where you live, both you and your friend are getting evidence for the truth of QTI. The only difference is that if you die from your POV, that pretty much disproves QTI, but if you die from your friend’s point of view, he only gets n bits of information (number of coinflips before you die). So after a million − 1 flips, both you and your friend are virtually certain of QTI. But if, on the millionth flip, you die, it’s disproved from your perspective, but virtually unchanged from your friend’s.
It’s true that only a small fraction of branches will contain a friend (or a public, for that matter) that becomes convinced of QTI, and that even if QTI isn’t true, and even if MWI isn’t true, that there would still be a very small chance that you would live (and thus be falsely convinced of the truth of QTI—poor you!). But the special distinction in the holodeck case is that winning the lottery personally would be more likely to happen in a sim, whereas someone else winning it would not. In the QTI case, there isn’t any alternate theory that becomes more or less likely just because you’re the one behind the gun.
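The friend's-eye numbers above can be sketched quickly. This is a toy illustration (the function names are mine, not from the thread): from the friend's perspective, surviving n fair quantum coinflips has probability 2^(-n), and each observed survival supplies one bit of evidence.

```python
import math

# From the friend's perspective, surviving n fair quantum coinflips
# has probability 2^(-n) under the ordinary (non-QTI) picture.
def survival_probability(n: int) -> float:
    return 0.5 ** n

# Each observed survival supplies one bit of evidence, so if the
# experimenter dies on flip n, the friend received n bits in total.
def bits_of_evidence(n: int) -> float:
    return -math.log2(survival_probability(n))

print(survival_probability(20))  # 9.5367431640625e-07
print(bits_of_evidence(20))      # 20.0
```

At a million flips the survival probability underflows any ordinary intuition (2^(-1,000,000)), which is why the surviving branch's observers end up so confident.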
Your friend does not predict higher odds of your survival conditional on many-worlds. Thus, your survival does not cause them to update upwards on many-worlds, and a high probability of many-worlds does not lessen the vast improbability of your survival. Hence a “miracle”.
I get the feeling that I missed a lot of prediscussion to this topic. I am new here and new to these types of discussions, so if I am way off target please nudge me in the right direction. :)
If the odds of winning a lottery are almost none, they are not none. As such, the chance of a lottery winner existing increases with each lottery ticket purchased. (The assumption here is that “winner” simply means “holding the right ticket”.)
Furthermore, it seems like the concept of the QTI is only useful if you already know the probability of it being true /and/ find it helpful to consider yourself in the other variations as an extension of your personal identity. Otherwise, you are just killing yourself to prove a point to someone else.
But I really do not understand this:
“If the hypothesis ‘this world is a holodeck’ is normatively assigned a calibrated confidence well above 10^(-8), the lottery winner now has incommunicable good reason to believe they are in a holodeck.”
Why are the probabilities of the world being a holodeck tied to the probability of guessing a number correctly? It seems like this is the same reasoning that leads people to believing in Jesus just because his face showed up on their potato chip. It just sounds like a teleological argument with a different target. Or was that the point and I missed it?
PS) Is it better to post once with three topics, or three times with one topic each?
I interpreted the last statement as follows:
IF you assign a probability higher than 10^(-8) to the hypothesis that you are in a holodeck
AND you win the lottery (which had a probability of 10^(-8) or thereabouts)
THEN you have good reason to believe you’re in a holodeck, because you’ve had such improbable good fortune.
Correct me if I’m wrong on this.
Strictly speaking you need to know the probability that you’ll win the lottery given that you’re on the holodeck to complete the calculation.
The person controlling the holodeck (who presumably designed the simulation) needs to know the probability. But the person being simulated, who experiences winning the lottery, does not need to know anything about the inner workings of his (simulated) world. For the experience to seem real enough, it’d be best, even, not to know every detail of what’s going on.
I mean that if we’re to know the evidential weight of winning the lottery to the theory that we’re on the holodeck, we need to know P(L|H), so that we can calculate P(H|L) = P(L|H)P(H)/(P(L|H)P(H) + P(L|¬H)P(¬H)).
I get your point now. But all we need to know is whether P(L|H) > P(L|~H)*.
If this is the case, then if an extremely unlikely (P(L|~H) → 0) event L happens to you, this evidently increases the chance that you’re in a holodeck simulation. In the formula, P(H|L) goes to (almost) 1 as P(L|~H) approaches zero. The unlikelier the event (amazons on unicorns descending from the heavens to take you to the land of bread and honey), i.e. the larger the gap between P(L|H) and P(L|~H), the larger the probability that you’re experiencing a simulation.
This is true as long as P(L|H) > P(L|~H). If L is a mundane event, P(L|H) = P(L|~H) and the formula reduces to P(H|L) = P(H). If L is so supremely banal that P(L|~H) > P(L|H), the occurrence of L actually decreases the chance that you’re experiencing a holodeck simulation.
Indeed, I believe that was the point of the original post.
The core assumption remains, of course, that you’re more likely to win the lottery when you’re experiencing a holodeck simulation than in the real world (P(L|H) > P(L|~H)).
I’m not well-versed in Bayesian reasoning, so correct me if I’m wrong. Your posts have definitely helped to clarify my thoughts.
*I don’t know how to type the “not”-sign, so I’ll use a tilde.
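The update described in these comments can be sketched numerically. The prior and likelihood values below are made-up illustrations, not figures from the post:

```python
def posterior_holodeck(p_h: float, p_l_given_h: float, p_l_given_not_h: float) -> float:
    """P(H|L) via Bayes: H = 'this world is a holodeck', L = 'I won the lottery'."""
    numerator = p_l_given_h * p_h
    return numerator / (numerator + p_l_given_not_h * (1.0 - p_h))

# Illustrative numbers: a tiny prior on the holodeck hypothesis, a
# holodeck that likes handing out jackpots, and real lottery odds of 10^-8.
print(posterior_holodeck(1e-6, 0.1, 1e-8))  # ≈ 0.909: the win makes H dominant

# A mundane event (equally likely either way) leaves the prior untouched.
print(posterior_holodeck(1e-6, 0.5, 0.5))   # ≈ 1e-6
```

As the commenters note, everything hinges on the assumed ratio between P(L|H) and P(L|~H).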
I’ve pondered a toned-down version of this argument in the context of religious experience and other hallucinations. Also, this is an important consideration for Utilitarian-style Pascalian religion.
I also knew someone whose family won the lottery, though I don’t remember how much.
There are, of course, different degrees of lottery, and lottery winners. I take it that someone who wins (say) £20,000 is a lottery winner, but not really what Eliezer means.
I don’t think it’s an exception to the Agreement Theorem. All you have to do to communicate the evidence is give your friend root access to your brain so he can verify you aren’t lying. Of course Omega could have just rigged your brain so you think you survived a million QTI tests, but that possibility shouldn’t worry your friend any more than it worries you.
Also, BTW, one of my Dad’s ham radio buddies won the lottery recently.
How does this give him any more evidence? I don’t believe lying was ever a hypothesis.