Suppose that I live on a holodeck but don’t know it, such that anything I look at closely follows reductionist laws, but things farther away only follow high-level approximations, with some sort of intelligence checking the approximations to make sure I never notice an inconsistency. Call this the holodeck hypothesis. Suppose I assign this hypothesis probability 10^-4.
Now suppose I buy one lottery ticket, for the first time in my life, costing $1 with a potential payoff of $10^7 with probability 10^-8. If the holodeck hypothesis is false, then the expected value of the ticket is $10^7 * 10^-8 - $1 = -$0.90. However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3. (This only applies to the first ticket, since someone who would rig the lottery in this way would be most likely to do so on my first chance, not a later one.) In that case, the expected payoff is $10^7 * 10^-3 - $1 ≈ $10^4. Combining these two cases, weighted by 1 - 10^-4 and 10^-4, the expected payoff of buying a lottery ticket is about +$0.10.
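To make the arithmetic explicit, here is a minimal sketch of that expected-value calculation in Python, using only the numbers assumed in the post (the 10^-4 prior, the 10^-8 and 10^-3 win probabilities, the $10^7 jackpot, and the $1 ticket price):

    # Expected value of the first lottery ticket under the post's assumed numbers.
    P_HOLODECK = 1e-4       # assumed prior that the holodeck hypothesis is true
    P_WIN_NORMAL = 1e-8     # win probability if physics is as it appears
    P_WIN_HOLODECK = 1e-3   # assumed win probability if someone outside is being nice
    PAYOFF = 1e7            # jackpot in dollars
    COST = 1.0              # ticket price in dollars

    ev_normal = PAYOFF * P_WIN_NORMAL - COST      # -0.90
    ev_holodeck = PAYOFF * P_WIN_HOLODECK - COST  # +9999.00

    ev_combined = (1 - P_HOLODECK) * ev_normal + P_HOLODECK * ev_holodeck
    print(f"EV if not in a holodeck: {ev_normal:+.2f}")
    print(f"EV if in a holodeck:     {ev_holodeck:+.2f}")
    print(f"Combined EV:             {ev_combined:+.2f}")  # about +0.10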
At some point in the future, if there is a singularity, it seems likely that people will be born for whom the holodeck hypothesis is true. If that happens, then the probability estimate will go way up, and so the expected payoff from buying lottery tickets will go up, too. This seems like a strong argument for buying exactly one lottery ticket in your lifetime.
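As a follow-up sketch under the same assumed numbers, it is easy to solve for the break-even prior: the combined expected value crosses zero once P(holodeck) exceeds roughly 9 * 10^-5, so the conclusion rests almost entirely on how seriously you take that 10^-4 figure.

    # Break-even prior on the holodeck hypothesis, same assumptions as above.
    ev_normal = 1e7 * 1e-8 - 1.0    # -0.90
    ev_holodeck = 1e7 * 1e-3 - 1.0  # +9999.00

    # Solve (1 - p) * ev_normal + p * ev_holodeck = 0 for p.
    p_breakeven = -ev_normal / (ev_holodeck - ev_normal)
    print(f"The ticket becomes positive-EV once P(holodeck) exceeds about {p_breakeven:.1e}")  # ~9.0e-05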
so the probability that it will win is more like 10^-3
Not my only objection, but: if the hololords smiled upon you, why would they even need you to buy a lottery ticket? How improbable is it that they not only want to help you, but that they want to help you in this very specific way and in no other obvious way?
I don’t think this argument holds up. Suppose the holodeck hypothesis is true and someone outside the simulation decides to punish irrational choices by killing you if you buy a lottery ticket. Conditional on the holodeck hypothesis, the probability of your being killed is around 10^-3, so you should never risk buying a lottery ticket.
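To illustrate the symmetry, here is a sketch of the same calculation with a hostile simulator substituted for a benevolent one. The 10^-3 intervention probability mirrors the post's assumption, and the -$10^7 dollar value placed on being killed is a made-up number purely for illustration; the point is only that the sign of the correction is unconstrained.

    # Same structure as the original calculation, but the simulator punishes
    # ticket-buying by killing you (valued here, arbitrarily, at -$10^7)
    # with probability 10^-3. The negligible 10^-8 win chance is omitted.
    P_HOLODECK = 1e-4
    P_INTERVENE = 1e-3        # assumed probability of intervention, mirroring the post
    VALUE_OF_DEATH = -1e7     # made-up dollar value of being killed
    ev_normal = 1e7 * 1e-8 - 1.0                     # -0.90 if not in a holodeck

    ev_hostile = P_INTERVENE * VALUE_OF_DEATH - 1.0  # about -10001 if in a hostile holodeck
    ev_combined = (1 - P_HOLODECK) * ev_normal + P_HOLODECK * ev_hostile
    print(f"Combined EV with a hostile simulator: {ev_combined:+.2f}")  # about -1.90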
The problem is that you’ve no reasonable basis for assigning your 10^-3 probability to a good outcome rather than a bad outcome or a batshit insane outcome. You also have no basis for your 10^-4 probability of being in a holodeck. The only rational way to behave is to act as if you’re not in a holodeck (or a world with an occasional interventionist god who does his damnedest never to leave clear proof of his interventions, or a simulation run by aliens, or The Matrix), because you have no basis on which to assign probabilities otherwise. This changes, of course, if you are confronted with evidence that implies a greater likelihood of your holodeck hypothesis.
OK, what if we accept the simulation hypothesis, and we further say that the advanced civilizations are simulating their ancestors? Then we’d expect our simulators to be evolved or derived from humans in some way. It’s unlikely we’ll change our ideas of fun and entertainment too much, as those are terminal values—we value them for their own sake. This gives us some pretty strong priors, based on just what we know of current human players...
I’m not sure how to interpret that, though. Based on players of The Sims, say, we would expect either tons of sadistic deaths by fire in bathrooms and starvation in the kitchen, or people with perfectly lovely lives and great material success. Since we don’t observe many of the former, that’d suggest that if we’re in a simulation, it’s not being run by human-like entities who intervene.
This is exactly like the old joke about the guy who prays fervently for years that God let him win the lottery; finally, a booming voice comes down: “At least meet me halfway: buy a ticket!”
When exactly will the probability estimate go way up? Someone living in the holodeck obviously isn’t aware of whether they are living “in the future” or not. The probability has to be calculated from the inside, so I don’t see how it would ever change.
When exactly will the probability estimate go way up?
The probability of living in a holodeck is P(it is possible to build a holodeck) * P(you are in a holodeck | it is possible to build a holodeck). If you ever see or hear about a holodeck being built, then the first term becomes 1.
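As a sketch of that update: the two factors below are made-up numbers chosen only so that their product matches the post's 10^-4 prior, and the observation is assumed not to change the second factor. Once the first factor goes to 1, the estimate jumps straight to whatever you assign to the second.

    # P(in a holodeck) = P(holodecks can be built) * P(in one | they can be built)
    p_buildable = 1e-2            # assumed prior that holodecks can be built at all
    p_in_given_buildable = 1e-2   # assumed P(you are in one | they can be built)

    prior = p_buildable * p_in_given_buildable  # 1e-4, the post's figure
    posterior = 1.0 * p_in_given_buildable      # after seeing a holodeck get built
    print(f"prior {prior:.0e} -> posterior {posterior:.0e}")  # jumps by a factor of 100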
However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3.
Um, what?