The idea behind Pascal’s mugging is that the complexity penalty to a theory of the form “A person’s decision will have an effect on the well-being of N people” grows asymptotically slower than N. So given weak evidence (like a verbal claim) that narrows the decision and effect down to something specific, and a large enough N, the hugeness of the expected payoff will overcome the smallness of the probability.
Hanson’s idea is that, given that “A person’s decision will have an effect on the well-being of N people”, the prior probability that you, and not someone else, are the person who gets to make that decision is 1/N. This gets multiplied by the complexity penalty, and we have the probability of payoff shrinking faster than the payoff grows.
This is all very convenient for us humans, who value things similar to us. If a paperclip-maximizing AGI faced a Pascal’s mugging, with the payoff in non-agenty paperclips, it would assign a much higher probability that it, and not one of the paperclips, makes the crucial decision. (And an FAI that cares about humans faces a less extreme version of that problem.)
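A toy numerical sketch of that asymmetry (all of it illustrative; the 8-bits-per-character “complexity prior” below is just a crude stand-in for a real one, and the scenario names are mine):

```python
import math

# Toy model: the mugger names a payoff N via a short description (a string).
# Crude complexity prior: P(claim) ~ 2^-(8 bits per character of description),
# so the penalty tracks how hard N is to *describe*, not N itself.
cases = [
    ("10**3",        3),       # (description, log10 of N)
    ("10**12",       12),
    ("10**100",      100),
    ("10**(10**10)", 10**10),
]

for desc, log10_n in cases:
    log10_prior = -8 * len(desc) * math.log10(2)  # complexity penalty

    # Plain mugging: EV = P(claim) * N. Description length grows far
    # slower than N, so log10(EV) = log10(N) + log10(prior) explodes.
    log10_ev_mugging = log10_n + log10_prior

    # Hanson's 1/N anthropic prior makes the N's cancel: EV = P(claim),
    # which only shrinks as the mugger's story gets more elaborate.
    log10_ev_hanson = log10_prior

    # Paperclipper variant: the payoff is in non-agent paperclips, so no
    # 1/N symmetry between decider and beneficiaries; divergence is back.
    log10_ev_clippy = log10_n + log10_prior

    print(f"{desc:>14}: log10(EV) mugging={log10_ev_mugging:>12.1f}  "
          f"Hanson={log10_ev_hanson:6.1f}  clippy={log10_ev_clippy:>12.1f}")
```

The constants don’t matter; the point is just that the penalty tracks the length of the mugger’s description while the payoff tracks N itself, and the 1/N factor is the only thing standing between the two.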
Yeah, that was my immediate line of thought too, but… I’ve never seen Eliezer being that blind in his area of expertise. Maybe he sees more to the asymmetry than just the anthropic considerations? Obviously Robin’s solution doesn’t work for Pascal’s mugging in the general case where an FAI would actually encounter it, and yet Eliezer claimed Robin solved an FAI problem. (?!) (And even in the human case it’s silly to assume that the anthropic/symmetry-maintaining update should correlate exactly with how big a number the prankster can think up, and even if it does it’s not obvious that such anthropic/symmetry-maintaining updates are decision-theoretically sane in the first place.) Aghhh. Something is wrong.
I know I’d like to see Eliezer retract the endorsement of the idea. Seems to be very reckless thinking!
I think I remember some very quiet crossover point between like 2007 and 2008 when Eliezer switched from saying ‘the infinite utilities cancel each other out’ to ‘why do you think a superintelligence would use a policy like your artificially neat approximation where all the probabilities just happen to cancel each other out nicely, instead of, say, actually trying to do the math and ending up with tiny differences that nonetheless swamp the calculation?’ with respect to some kind of Pascalian problem or class of Pascalian problems. This was in Overcoming Bias (OB) post comment threads. That’s kind of like an implicit retraction of endorsement of fuzzy Pascalian ‘solutions’ (if I’m actually remembering it correctly), but admittedly it’s not, like, an actual retraction.
I still think I might be missing some detail or intuition that Eliezer isn’t missing that could be charitably extracted from Robin’s argument… but yeah, if I had to bet I’d say it was a (hopefully rare) slip of the brain on Eliezer’s part, and if so it’d be nice to get a clarifying comment from Eliezer, even if I’m not sure it’s at all (socially) reasonable to expect one.
There is no social debt to be paid in humble recompense. Rather, it would be useful to have some form of signal that Eliezer’s current thinking is not broken in areas that are somewhat important.
What is the crucial difference between being 1 distinct person, of N people making N distinct decisions, and being 1 of N distinct people? In other words, why would the ability to make a decision that is inaccessible to other decision makers penalize the prior probability of its realization more than any other feature of a distinct world-state?
I will probably have to grasp anthropic reasoning first. I am just a bit confused about why, if only 1 of N people faces a certain choice, that choice becomes N times less likely to be factual.
That only 1 of N people faces the choice doesn’t make it less likely that the choice exists; it makes less likely the conjunction of the choice existing and of you being the one who makes it.
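To put toy numbers on that (purely illustrative, my framing rather than Hanson’s exact formulation): say the prior that the mugger’s story is true at all is p. Conditional on the story being true, each of the N affected people is, by symmetry, equally likely to be the one holding the decision, so P(story true AND you are the decider) = p × (1/N). The expected payoff is then p × (1/N) × N = p, and the size of N drops out entirely.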
Each of the people in question could be claimed (by the mugger) to be making this exact same choice.