Pascal’s mugging only works if, past some point, your estimated prior for someone’s ability to cause a utilitarian loss of size n falls off more slowly than n grows; otherwise, more extravagant claims of consequences make the mugging less likely to succeed, not more. “Magic powers from outside the Matrix” fill that role in the canonical presentation: while the probability of that sort of magic existing is undoubtedly tiny, we have no good indirect way of estimating it relative to its utilitarian implications, and we can’t calculate it directly for the reasons thomblake gave a few comments up.
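To make that condition explicit (the notation p(n) is mine, not anything from the thread):

```latex
% My notation: p(n) = prior probability that the mugger can really
% inflict a utilitarian loss of size n.
\[
  \mathbb{E}[\text{loss}] \;=\; n \, p(n)
\]
\[
  \text{the mugging can work} \iff \limsup_{n \to \infty} n\,p(n) > 0,
  \quad \text{i.e. } p(n) \text{ decays no faster than } 1/n .
\]
```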
A quadrillion humans, however, don’t fit the bill. We can arrive at a reasonable estimate of what it would take to run a simulation on that scale, and we can calculate probabilities that small by fairly conventional means: there’s a constant factor here that I have no idea how to estimate, but 10^-15 is only about eight sigma from the mean on a standard normal distribution, if my back-of-the-envelope math is right. I’d feel quite comfortable rejecting a mugging of that form as carrying too little expected damage to be worth my time.
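For what it’s worth, the eight-sigma figure checks out; a quick sanity check (using scipy is my choice here, not something from the thread):

```python
# Sanity check on the back-of-the-envelope figure above:
# how many standard deviations out is a one-sided tail
# probability of 1e-15 on a standard normal?
from scipy.stats import norm

p = 1e-15
z = norm.isf(p)  # inverse survival function: the z with P(Z > z) = p
print(f"P(Z > z) = {p:g}  =>  z ~ {z:.2f} sigma")
# Prints roughly 7.94, so "about eight sigma" is right.
```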
I must be missing something. To me, a large number that doesn’t require more processing power/complexity than the universe can provide is still large enough. TBH, even 10^15 looks too large for me to care about; either the mugger can provide reasonable evidence or he can’t, and that’s all that matters.
If the mugger can provide reasonable evidence for his claims, it’s not a decision-theoretically interesting problem; it becomes a straightforward, if exotic, threat. If the claim is modest enough that we can compute its probability by standard means, it becomes perfectly ordinary non-credible rambling and stops being interesting from the other direction. The problem is only interesting because of the particular interaction between our means of updating probabilities and a threat so fantastically huge that the expected loss attached to it can’t be updated into neutral or negative territory by observation.
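One way to see that interaction (the odds-form decomposition is standard Bayes; the error-rate bound ε is my illustrative assumption):

```latex
% Bayes in odds form; \epsilon and k below are illustrative parameters.
\[
  \underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}}
  \cdot
  \underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\]
% If every channel of observation errs with probability at least
% \(\epsilon\), one observation's likelihood ratio lies between
% \(\epsilon\) and \(1/\epsilon\), so k observations can drive the
% probability down by at most a factor of roughly \(\epsilon^{k}\);
% that finite discount is trivially swamped by a loss on the order
% of 3^^^^3 utilons, so the expected loss never goes neutral.
```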
I guess that makes some philosophical sense. Not connected to any real-life decision making, though.
The problem was brought up in the context of writing a computer program that correctly maximizes expected utility in all cases. Yes, in “real life” you can just ignore the mugger, but I don’t know of a rigorous way of proving that doing so is rational: your ability to ignore the mugger might well be a case of you getting the answer wrong, despite it seeming intuitively correct.
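To illustrate the failure mode, here is a toy sketch of such a naive expected-utility maximizer (the heavy-tailed prior is entirely my own invention, chosen to make the exploit visible):

```python
# Toy sketch: a naive expected-utility maximizer facing a mugger.
# The prior function is a made-up stand-in, not a serious model.

def prior_of_claim(claimed_loss: float) -> float:
    """Invented prior: mass shrinks with claim size, but slower
    than 1/claimed_loss (a heavy tail)."""
    return 1e-10 / (claimed_loss ** 0.5)

def expected_loss_of_refusing(claimed_loss: float) -> float:
    return prior_of_claim(claimed_loss) * claimed_loss

WALLET = 5.0  # cost of just paying the mugger

for claim in (1e6, 1e15, 1e30, 1e60):
    ev = expected_loss_of_refusing(claim)
    print(f"claim={claim:.0e}  expected loss if refused={ev:.3e}  "
          f"pay? {ev > WALLET}")
# Because this prior decays like claim**-0.5, the expected loss grows
# without bound, and the naive agent eventually pays any mugger who
# simply names a bigger number.
```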
If you think you have a definitive solution, please show your work, in math.
Irrelevant, because the original thread started with my reply to:
It would seem rational to accept any argument that is not fallacious; but this leads to consideration of problems such as Pascal’s mugging and other exploits.
to which I pointed out that it is not rational to simply accept any argument that does not appear fallacious, not in the sense in which EY defines rationality (as winning). If you apply the maxim “extraordinary claims require extraordinary evidence” (e.g., ask the mugger to show at least a simulated amoeba before you consider his claims of simulating people any further), you win whether the mugger is bluffing or not. WIN!
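As a sketch, that maxim could look like a staged-evidence protocol (the stages and scales here are mine, purely illustrative):

```python
# "Extraordinary claims require extraordinary evidence" as a staged
# protocol: before weighing a claim of simulating 10**15 people,
# demand a cheap demonstration first. Stages are invented for
# illustration.

STAGES = [
    ("simulate an amoeba", 1e2),   # claimed scale of each demo
    ("simulate a mouse", 1e8),
    ("simulate one human", 1e11),
]

def consider_mugger(demonstrations: set) -> str:
    for demo, _scale in STAGES:
        if demo not in demonstrations:
            return f"walk away: no evidence for '{demo}' yet"
    return "now (and only now) start doing expected-utility math"

print(consider_mugger(set()))                   # the typical bluff
print(consider_mugger({"simulate an amoeba"}))  # still not enough
```

Either the mugger produces the cheap demonstration, in which case you update on real evidence, or he doesn’t, in which case you keep your money; you come out ahead in both branches.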