In personal conversation, Nick Bostrom suggested that a division-of-responsibility principle might cancel out the anthropic update—i.e., the paperclip maximizer would have to reason, “If the logical coin came up heads then I am 1/18th responsible for adding +1 paperclip; if the logical coin came up tails then I am 1/2 responsible for destroying 3 paperclips.” I confess that my initial reaction to this suggestion was “Ewwww”, but I’m not exactly comfortable concluding I’m a Boltzmann brain, either.
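To spell out how the responsibility weighting cancels the anthropic update, here is a back-of-the-envelope sketch. It assumes the setup that the 1/18 and 1/2 figures suggest (18 copies wake in green rooms if the logical coin is heads, 2 if tails, and the bet adds 1 paperclip on heads and destroys 3 on tails); the numbers are my reconstruction, not Bostrom's own presentation.

```python
# Prior over the logical coin.
p_heads_prior = 0.5
p_tails_prior = 0.5

# Anthropic update on finding yourself in a green room:
# 18 of the 20 green-room awakenings occur under heads.
p_heads_post = 18 / 20   # 0.9
p_tails_post = 2 / 20    # 0.1

# Naive anthropically-updated expected value of taking the bet (per copy):
naive_ev = p_heads_post * (+1) + p_tails_post * (-3)
print(naive_ev)          # +0.6 -- the updated copy wants the bet

# Division-of-responsibility weighting: 1/18 of the credit under heads,
# 1/2 of the blame under tails.
weighted_ev = p_heads_post * (1 / 18) * (+1) + p_tails_post * (1 / 2) * (-3)
print(weighted_ev)       # -0.1 -- same sign as the updateless answer

# The cancellation: 0.9 * (1/18) == 0.1 * (1/2) == 1/20, so the responsibility
# weights restore the original 50/50 odds, and the bet looks bad again, just
# as it does to a reasoner who never made the anthropic update.
updateless_ev = p_heads_prior * (+1) + p_tails_prior * (-3)
print(updateless_ev)     # -1.0
```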
I would perhaps prefer different language in the description, but this seems to be roughly the answer to the apparent inconsistency. When reasoning anthropically you must decide anthropically. Unfortunately, it is hard to describe such decision-making without sounding either unscientific or outright incomprehensible.
I’m rather looking forward to another Eliezer post on this topic once he has finished dissolving his confusion. I’ve gained plenty from absorbing the posts and discussions, and more from mentally reducing the concepts myself. But this stuff is rather complicated and, to be perfectly honest, I don’t trust myself not to have missed something.