This might be easier to consider as the simpler case of “given that we live in a deterministic universe, do any of the choices I make matter?” I would say that I still have to make decisions about how to act, and choosing not to act is also a choice, so I should do whatever it is that I want to do.
That’s not the same problem, though Egan’s Law is equally applicable to both. An agent might have no confusion over free will, have clear preferences, and act normally on them in a single deterministic world, but not care about quantum measure and thus be a nihilist in many-worlds. (Actually, if such an agent seems to be in MW, it should, by its preferences, proceed under the Pascalian assumption that it lives in a single world and is being deceived.)
http://wiki.lesswrong.com/wiki/Free_will
Nick Bostrom has a couple of papers on this:
Infinite Ethics
Quantity of Experience: Brain-Duplication and Degrees of Consciousness
Could you explain that more? As far as I can see, an agent that doesn’t care about measure would engage in high-rate quantum suicide.
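To make that intuition concrete, here is a toy expected-value sketch (the survival probability and utility numbers are made-up for illustration, not anything from the thread): an agent that weights branches by their quantum measure scores a quantum-suicide gamble as strongly negative, while an agent that only asks whether some surviving branch exists scores it as a pure win, which is why it would keep playing.

```python
# Toy model: value of a quantum-suicide gamble under two valuation rules.
# With measure (Born weight) p_survive the agent survives and gains `win`;
# with measure 1 - p_survive it dies (utility `death`). All numbers below
# are illustrative assumptions.

def measure_weighted_value(p_survive, win, death):
    """Standard expected utility: weight each branch by its quantum measure."""
    return p_survive * win + (1 - p_survive) * death

def measure_indifferent_value(p_survive, win, death):
    """Agent that ignores measure: it only cares that *some* branch contains
    a surviving, enriched copy of itself, so the death branches drop out."""
    return win if p_survive > 0 else death

p, win, death = 1 / 1024, 1_000_000, -1_000_000   # ten rounds of 50/50 quantum suicide
print(measure_weighted_value(p, win, death))       # ~ -998,047: strongly negative
print(measure_indifferent_value(p, win, death))    # 1,000,000: looks like a free win
```

On this toy model the measure-indifferent valuation never penalizes the dead branches, so repeating the gamble looks strictly good to that agent.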