What would convince you you’d won the lottery?
The latest (06 Oct 2017) EuroMillions lottery numbers were 01 − 09 − 15 − 19 − 25, with the “Lucky Stars” being 01 − 07.
Ha! Bet I convinced no-one about those numbers. The odds against 01 − 09 − 15 − 19 − 25 / 01 − 07 being the true lottery numbers are about 140 million to one, so I must have been a fool to think you’d believe me. The probability that I decided to lie is far more than one in 140 million, so it’s very unlikely those are the true numbers.
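(For the curious, here’s a minimal sketch of where that figure comes from, assuming the current EuroMillions format of 5 main numbers drawn from 50 plus 2 Lucky Stars drawn from 12:)

```python
from math import comb

# Jackpot odds under the assumed format: 5 main numbers from 50, 2 Lucky Stars from 12.
main_combinations = comb(50, 5)   # 2,118,760 ways to pick the main numbers
star_combinations = comb(12, 2)   # 66 ways to pick the Lucky Stars
total_draws = main_combinations * star_combinations

print(f"{total_draws:,}")         # 139,838,160 -- the "about 140 million to one"
```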
Wait a moment. Something is wrong here. That argument could be used against any set of numbers; and yet one of them was the real winning set. The issue is that probabilities are not being compared properly.
Let S = (01, 09, 15, 19, 25, 01, 07), let W(S) be the fact that these numbers won the lottery, let L be the fact that I lied about the winning numbers, and L(S) be the fact that I lied and claimed S as the winning numbers.
So though P(L) is indeed much higher than P(W(S)), generally P(W(S)) will be higher than P(L(S)). That’s because, just as W(S) gets penalised by all the other values S could have been, so does L(S).
Note the key word “generally”. The sum of P(L(S’)), across all S’, is P(L); this means that for most S’, P(L(S’)) must be tiny, on the order of 140 million times smaller than P(L), or less. But it’s possible that for some values of S’, it might be much higher. If I’d claimed that the winning numbers were 01 − 02 − 03 − 04 − 05 / 06 − 07, then you might have cause to doubt me (after taking into account selection bias in me reporting it, the number of lotteries going on around the world, and so on). The connection with Kolmogorov complexity should be obvious at this point: the more complex the sequence, the less likely it is that a human being will select it disproportionately often (and you can’t select all sequences disproportionately often).
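To make the comparison concrete, here’s a toy numerical sketch. The prior probability of lying, and how strongly a liar favours a memorable sequence, are numbers I’ve invented purely for illustration; only the shape of the comparison matters.

```python
# Toy comparison of P(W(S)) vs P(L(S)); the lying probabilities are invented.
N = 139_838_160            # possible EuroMillions draws (~140 million)
p_lie = 1e-3               # assumed overall probability that I lie: P(L)

p_win = 1 / N              # P(W(S)): every sequence equally likely to win

# Assume liars pick a memorable sequence like 01-02-03-04-05 / 06-07
# one time in a hundred, and spread the rest of their lies evenly.
p_lie_memorable = p_lie * 1e-2
p_lie_typical   = p_lie * (1 - 1e-2) / (N - 1)

print(f"P(W(S))            ~ {p_win:.1e}")           # ~7.2e-09
print(f"P(L(S)), typical S ~ {p_lie_typical:.1e}")   # ~7.1e-12  -> believe me
print(f"P(L(S)), 1-2-3-4-5 ~ {p_lie_memorable:.1e}") # ~1.0e-05  -> doubt me
```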
What if I’d claimed that I’d won that lottery? That seems like a 01 − 02 − 03 − 04 − 05 / 06 − 07-style claim. I’m saying that 01 − 09 − 15 − 19 − 25 / 01 − 07 did win (fair enough), but that these numbers are special to me because I selected them. Here you should be more sceptical. But what if I linked to a few articles that showed me with a gigantic novelty cheque? It would still seem more likely that I’d hacked and faked those than that I’d actually won… or would it?
Let’s make it more personal: what if you yourself ended up winning the lottery? Or at least, if you’d got/been given a ticket, and a friend told you that you’d won. And then another. And then people started calling you up, you looked up the information on the website, and there were news crews around… Still, odds of 140 million to one. It’s still more likely that people are playing elaborate practical jokes on you, or that you’re in a simulation.
First of all, are you so sure of those odds on practical jokes? There are maybe a few hundred lotteries in the world, drawing regularly (not to mention the other games of chance). If the practical jokes are more likely, that means that every week, there must be thousands of very elaborate fake lottery-win jokes, including multiple people lying (possible), and faked news crews and websites (much less likely). Do you really think that thousands of those happen every week (and that this almost never makes the news)?
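Here’s a back-of-the-envelope version of that estimate; every number in it is a round figure I’ve assumed for illustration, not data.

```python
# Fermi estimate: how many elaborate pranks would "pranks are more likely
# than wins" imply?  All figures below are assumptions, not data.
lotteries_worldwide = 300      # "a few hundred lotteries"
draws_per_week      = 2        # many draw more than once a week
winners_per_draw    = 0.7      # jackpots sometimes roll over with no winner

genuine_winners_per_week = lotteries_worldwide * draws_per_week * winners_per_draw

# For a prank to be, say, 10x more likely than a genuine win for anyone
# who appears to have won, pranks must outnumber genuine wins 10 to 1.
prank_to_win_ratio = 10
implied_pranks_per_week = genuine_winners_per_week * prank_to_win_ratio

print(f"Genuine winners per week ~ {genuine_winners_per_week:.0f}")   # ~420
print(f"Implied elaborate pranks ~ {implied_pranks_per_week:.0f}")    # ~4,200 per week
```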
If not, then you must be prepared to face the fact that you might actually be a winner. But see what this implies: that a few conversations with friends, a minor browse of the internet, and some news crew-looking people are enough to subjectively overturn odds of 140 million to one.
In a sense, this isn’t surprising: 140 million is roughly 2^27, so it should only take 27 bits of information to convince you that you’ve won. But those are 27 bits from a reliable source. Given the possibility of deception and manipulation, it still seems that you can come to believe incredibly unlikely, specific events, given very small amounts of information.
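As a rough sketch of that arithmetic (the 50:1 likelihood ratio per piece of evidence is an invented figure, not a claim about how reliable friends, websites or news crews actually are):

```python
from math import log2

N = 139_838_160                        # jackpot odds, roughly 2**27
print(f"Prior surprise: {log2(N):.1f} bits")      # ~27.1 bits to overcome

# Suppose each piece of evidence (a friend's call, the lottery website,
# a news crew at the door) is 50x more likely if you really won than if
# you're being pranked -- an illustrative number only.
likelihood_ratio = 50
bits_per_item = log2(likelihood_ratio)            # ~5.6 bits each

print(f"Pieces of evidence needed: {log2(N) / bits_per_item:.1f}")   # ~4.8, i.e. about 5
```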
What seems to be happening is that we have a background picture of the universe, built up by life experience, and we think we have a pretty reliable impression about things like how likely our friends are to be honest/pranksters/lazy pranksters/super-inspired super-hacking super-organised vicious pranksters. Our background picture of the universe has way more than 27 bits, so we’re not comparing “I won the lottery” with “someone is pranking me”, but with “someone is pranking me and major things I thought I knew about reality are wrong”.
What about the simulation possibility? Now, I don’t believe that the “probability of being in a simulation” is a meaningful concept. But let’s pretend it is. We already know that we can’t compare “probability of winning the lottery” with “probability of being in a simulation”, but instead with “probability of being in a simulation and winning the lottery”.
Now, that last doesn’t seem too unlikely, given a simulation. But that’s still not the valid comparison. It’s “probability of winning the lottery + background experience of the universe” versus “probability of being in a simulation and winning the lottery + background experience of the universe”. It’s that background experience that weighs heavily here. Sure, a simulation might be more likely to simulate you winning the lottery, but would it be likely to have first constructed a more conventional life for you, and then have you winning the lottery?
Knowing that you are in a simulation only matters if that information is useful to your plans. So we’re not even talking about “being in a simulation and winning the lottery + background experience of the universe”, but all that plus “and the future will be radically different from my current highly materialistic view of reality”. The more it looks like the laws of physics are being followed, the more you should expect them to continue to look as if they are followed. And winning a lottery, though unlikely, is not a radical departure from physics. So the simulation hypothesis doesn’t get all that much of a boost from that fact.
A conclusion?
Tradition maintains that posts must be concluded by conclusions. This post doesn’t really have any strong ones, but here are some thoughts:
Even if the dream/prank/simulation/everything-you-know-is-wrong hypothesis is overall more likely than event E, what matters is the probability of that hypothesis conditional on you seeing the evidence you’ve seen for E, and this can be much, much smaller.
It doesn’t take much evidence for us to (correctly) believe extremely unlikely things.
One of the reasons for that is that we have background knowledge of the universe that provides a lot of bits of knowledge in our favour.
We’re not very good at estimating these kinds of probabilities.
If you have decent-seeming evidence that you won the lottery, you probably did: due to background knowledge, this evidence can be stronger than it seems.
Promoted to Featured for communicating a genuinely confusing problem in probability that’s useful to learn from.
This was fun!
I think we agree on the reality of the situation, but I’d rephrase your conclusions as, “Things that don’t seem like strong/fantastic evidence can often be so, due to how evidence can relate and interact with our background knowledge of reality.”
The way you currently phrase your last bullet point could be confusing, because you use the understanding you’ve developed in your post to say that you would be convinced by the previously mentioned evidence, yet you still refer to said evidence as “decent-but-not-fantastic”, which you would only do if you held the naive perspective that you proposed at the beginning of your post.
Mixing the two in one sentence makes things fuzzier and easier to misinterpret.
Rephrased.
That was really funny!
What I don’t get is how P(L(S)) can be bigger than P(W(S)), since I think the former includes the latter. You can only lie about S if S is true, right? So the chance of that happening must be smaller.
>since I think the former includes the latter.
No. P(W(S)) is that S won the lottery, and P(L(S)) is that S didn’t win the lottery, but I claimed that S did. P(L(S)) can be bigger than P(W(S)) if S is particularly simple, as all sequences are—approximately—equally likely to win, but simple sequences are more likely to be claimed by liars.
By this logic, why do we believe anything is real at all? The odds of any set of atoms coming to rest in the configuration that allows you and me to be communicating over a site called lesserwrong.com are even more astronomically small than you winning the EuroMillions lottery.
One of the points of this post is that “this isn’t real” hypotheses get penalised by more than you’d expect. Yes, the various positions of atoms corresponding to this are incredibly unlikely; but you have a lot of evidence that they are in the right configuration (ie, your entire life experience).
This is called the Boltzmann Brain hypothesis.
And that hypothesis is one that Anthropic decision theory whacks. It ignores Boltzmann brains, not because they’re unlikely (though they are, arguably, less likely than the standard evolution of life on Earth), but because your decision makes no difference if you’re a Boltzmann brain, so you may as well ignore that possibility.