Since simulating 3^^^3 humans is infeasible, suppose the mugger claims to simulate and kill “only” a quadrillion humans. The number is still large enough to overload one’s utility function if you assign any credence to the claim. I am no expert in decision theory, but regardless of the exact claim, if the dude refuses to credibly simulate even an amoeba, your decision is simple: ignore and move on. Please feel free to provide an example of Pascal’s mugging where this approach (extraordinary claims require extraordinary evidence) fails.
Pascal’s mugging only works if, past some point, your estimated prior for someone’s ability to cause utilitarian losses of size n decreases more slowly than n increases; otherwise, claims of extravagant consequences make the mugging less likely to succeed as they grow more extravagant. “Magic powers from outside the Matrix” fill that role in the canonical presentation: while the probability of that sort of magic existing is undoubtedly extremely small, we don’t have any good indirect ways of estimating its probability relative to its utilitarian implications, and we can’t calculate it directly for the reasons thomblake gave a few comments up.
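The shape of that condition can be sketched numerically. The power-law priors below are invented purely for illustration (nothing in the thread specifies a prior); the point is only how the expected loss n·p(n) behaves as n grows:

```python
# Toy sketch: whether the mugging "works" depends on how fast the prior
# p(n) for "the mugger can cause a loss of size n" falls off relative to n.
# The two decay exponents below are made up for demonstration only.

def expected_loss(n, decay):
    """Expected loss n * p(n), with an (unnormalized) prior p(n) ~ n^-decay."""
    prior = n ** -decay
    return n * prior

for n in [10**6, 10**9, 10**12, 10**15]:
    slow = expected_loss(n, 0.9)   # prior shrinks more slowly than n grows
    fast = expected_loss(n, 1.1)   # prior shrinks faster than n grows
    print(f"n = {n:.0e}: slow-decay prior -> {slow:.3g}, fast-decay prior -> {fast:.3g}")
```

With the slow-decaying prior the expected loss grows without bound as the claim gets more extravagant, so bigger threats extract more; with the fast-decaying prior it shrinks toward zero, and escalating the claim backfires on the mugger.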
A quadrillion humans, however, don’t fit the bill. We can arrive at a reasonable estimate for what it’d take to run that kind of simulation, and we can certainly calculate probabilities that small by fairly conventional means: there’s a constant factor here that I have no idea how to estimate, but 1 * 10^-15 is only about eight sigma from the mean on a standard normal distribution if I got some back-of-the-envelope math right. I’d feel quite comfortable rejecting a mugging of that form as having too little expected damage to be worth my time.
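That back-of-the-envelope figure can be checked with the standard library; the eight-sigma number is the one from the comment above, and everything else is just the standard normal tail formula:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# A one-sided tail probability on the order of 1e-15 does indeed sit
# at roughly eight sigma from the mean.
p_8sigma = normal_tail(8.0)
print(f"P(Z > 8) = {p_8sigma:.3g}")
```

The tail at z = 8 comes out a bit under 10^-15, so “about eight sigma” is the right ballpark for a probability of that size.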
I must be missing something. To me, a large number that does not require more processing power/complexity than the universe can provide is still large enough. TBH, even 10^15 looks too large for me to care; either the mugger can provide reasonable evidence or he can’t, and that’s all that matters.
If the mugger can provide reasonable evidence of his claims, it’s not a decision-theoretically interesting problem; instead it becomes a straightforward, if exotic, threat. If the claim’s modest enough that we can compute its probability by standard means, it becomes perfectly normal non-credible rambling and stops being interesting from the other direction. It’s only interesting because of the particular interaction between our means of updating probability values and a threat so fantastically huge that the expected loss attached to it can’t be updated into neutral or negative territory by observation.
I guess that makes some philosophical sense. Not connected to any real-life decision making, though.
The problem was brought up in the context of making a computer program that correctly maximizes expected utility in all cases. Yes, in “real life” you can just ignore the mugger, but I don’t know of a rigorous way of proving that’s rational—your ability to ignore the mugger might well be a case of you getting the answer wrong, despite it seeming intuitively correct.
If you think you have a definitive solution, please show your work, in math.
If you think you have a definitive solution, please show your work, in math.
Irrelevant, because the original thread started with my reply to:
It would seem rational to accept any argument that is not fallacious; but this leads to consideration of problems such as Pascal’s mugging and other exploits.
to which I pointed out that it is not rational to simply accept any argument that does not appear fallacious, not in the way EY defines rationality (as winning). If you apply the maxim “extraordinary claims require extraordinary evidence” (e.g. requesting to show at least a simulated amoeba before you consider the mugger’s claims of simulating people any further), you win whether the mugger bluffs or not. WIN!
You can assign credence to the claim and still assign little enough that a quadrillion humans won’t overload it. I think the claim to be able to simulate a quadrillion humans is a lot more probable than the claim to be able to simulate 3^^^3 (you’d need technology that almost certainly doesn’t exist, but not outside-the-Matrix powers), but I’d still rate it as being so improbable as to only account for a tiny fraction of an expected death.
I’m settling for just one quadrillion to avoid dealing with the contingency of “3^^^3 is impossible because complexity”. The requirement of testability is not affected by the contingency.
If you assign the threat a probability of, say, 10^-20, the mugger is extorting considerably more dead children from you than you should expect to die if you don’t comply.
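The arithmetic behind that comparison, with the 10^-20 probability and the quadrillion victims taken from this exchange (a sketch of the expected-value calculation, not an endorsement of the specific numbers):

```python
# Expected simulated deaths if you ignore the mugger, under the stated prior.
p_threat = 1e-20      # credence assigned to the mugger's claim
victims = 1e15        # a quadrillion simulated humans
expected_deaths = p_threat * victims  # about 1e-05 statistical deaths

# Any cost of complying worth more than ~1e-05 statistical lives makes
# paying the mugger the worse option under expected-utility reasoning.
print(f"expected deaths from ignoring the mugger: {expected_deaths:g}")
```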
I don’t assign a positive probability until I see some evidence. Not in this case, anyway.
Does that mean you assign a negative probability or a probability of 0? The former doesn’t seem to make sense, and the latter means it is impossible to ever update your belief regardless of evidence (or incontrovertible proof). I.e., I think you mean something different than ‘probability’ here.
Indeed, I don’t count unsubstantiated claims as evidence. Neither should you, unless you enjoy being Pascal-mugged.
I take unsubstantiated claims as evidence. I take damn near everything as evidence. Depending on the context, the unsubstantiated claims may count for or against the conclusion they are intended to support.
In fact, sometimes I count substantiated claims as evidence against the conclusion they support (because given the motivation of the persuader I expected them to be able to come up with better evidence if it were available.)
Indeed, I don’t count unsubstantiated claims as evidence. Neither should you, unless you enjoy being Pascal-mugged.
That doesn’t seem to be a response to above. Even in absence of “claims”, probabilities should not equal 0. If you have an algorithm for updating probabilities of 0 that plays nice with everything else about probability, I’d be interested to see it.