Yeah, that was my immediate line of thought too, but… I’ve never seen Eliezer be that blind in his area of expertise. Maybe he sees more to the asymmetry than just the anthropic considerations? Obviously Robin’s solution doesn’t work for Pascal’s mugging in the general case where an FAI would actually encounter it, and yet Eliezer claimed Robin solved an FAI problem. (?!) (And even in the human case it’s silly to assume that the anthropic/symmetry-maintaining update should correlate exactly with how big a number the prankster can think up, and even if it does, it’s not obvious that such anthropic/symmetry-maintaining updates are decision-theoretically sane in the first place.) Aghhh. Something is wrong.
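To make the parenthetical point concrete, here is a toy calculation (my own illustration; the decay rates and magnitudes are made-up assumptions, not anything from the thread): unless the probability assigned to the mugger's claim falls at least as fast as the number he names grows, the expected-utility calculation is dominated by whatever number he picks.

```python
# Toy Pascal's-mugging arithmetic (illustrative numbers only).

def expected_harm(p_claim_true, n_victims):
    """Expected disutility of refusing the mugger, measured in victims."""
    return p_claim_true * n_victims

# A prior that shrinks slower than the claimed number grows (here ~1/sqrt(N),
# a hypothetical choice) still lets bigger claims dominate the decision:
for n in (10 ** 3, 10 ** 9, 10 ** 27):
    p = n ** -0.5
    print(f"N={n:.0e}  expected harm={expected_harm(p, n):.1e}")

# Only a prior that shrinks at least linearly in N (p <= k/N) keeps the
# expected harm bounded -- i.e. exactly the 'update correlates with how
# big a number the prankster can think up' assumption in question.
```

The point of the sketch is just that a bound on expected harm requires the probability penalty to track the stated stakes exactly, which is the correlation being called into question above.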
I think I remember a very quiet crossover point between roughly 2007 and 2008 when Eliezer switched from saying ‘the infinite utilities cancel each other out’ to ‘why do you think a superintelligence would use a policy like your artificially neat approximation, where all the probabilities just happen to cancel each other out nicely, instead of, say, actually trying to do the math and ending up with tiny differences that nonetheless swamp the calculation?’ with respect to some kind of Pascalian problem or class of Pascalian problems. This was in OB post comment threads. That’s something like an implicit retraction of his endorsement of fuzzy Pascalian ‘solutions’ (if I’m remembering it correctly), but admittedly it’s not an actual retraction.
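That switch can be illustrated with toy numbers (mine, not from any of the posts; the magnitudes are arbitrary assumptions): when two opposed huge-stakes hypotheses are assumed exactly equiprobable the huge terms cancel and the mundane term decides, but perturb one probability by one part in a million and the residual swamps everything else.

```python
# Toy illustration of 'neat cancellation' vs 'tiny residual swamps the
# calculation' (arbitrary magnitudes, purely for illustration).

HUGE = 10.0 ** 30   # stand-in for the astronomical utility at stake

def expected_utility(p_good, p_bad, mundane=1.0):
    """EU of an act: a mundane benefit plus two opposed huge-stakes terms."""
    return mundane + p_good * HUGE - p_bad * HUGE

# Artificially neat approximation: the probabilities are assumed exactly
# equal, the huge terms cancel, and only the mundane term decides.
neat = expected_utility(1e-20, 1e-20)

# Actually doing the math: a one-part-in-a-million asymmetry leaves a
# residual on the order of 10^4 that utterly swamps the mundane term.
messy = expected_utility(1e-20 * (1 + 1e-6), 1e-20)

print(neat, messy)
```

Nothing here depends on the particular constants; any nonzero asymmetry between the huge-stakes probabilities eventually dominates the mundane term.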
I still think I might be missing some detail or intuition that Eliezer isn’t missing that could be charitably extracted from Robin’s argument… but yeah, if I had to bet I’d say it was a (hopefully rare) slip of the brain on Eliezer’s part, and if so it’d be nice to get a clarifying comment from Eliezer, even if I’m not sure it’s at all (socially) reasonable to expect one.
even if I’m not sure it’s at all (socially) reasonable to expect one.
There is no social debt to be paid in humble recompense. Rather, it would be useful to have some form of signal that Eliezer’s current thinking is not broken in areas that are somewhat important.
I know I’d like to see Eliezer retract his endorsement of the idea. It seems like very reckless thinking!