Does that mean that utilitarianism is incompatible with Many Worlds? If everything that is possible for you to do is something you actually do, then utility across the whole multiverse would be constant, under any notion of free will.
Everything is possible, but not everything has the same measure (is equally likely). Killing someone in 10% of “worlds” is worse than killing them in 1% of “worlds”.
In the end, believing in many worlds will give you the same results as believing in collapse. It’s just that, epistemologically, the believer in collapse needs to deal with the problem of luck. Does “having a 10% probability of killing someone, and actually killing them” make you a worse person than “having a 10% probability of killing someone, but not killing them”?
(From the many-worlds perspective, it’s the same. You simply shouldn’t do things that have a 10% risk of killing someone, unless it is to avoid even worse things.)
(And yes, there is the technical problem of how exactly you determine that the probability was exactly 10%, considering that you don’t see the parallel “worlds”.)
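To make the measure comparison concrete, here is a minimal sketch, assuming branch measures can be treated like probabilities and that utility simply adds across branches, weighted by measure. Writing $p$ for the measure of the branches in which the death occurs, and $u_{\text{bad}} < u_{\text{ok}}$ for the utilities of the two outcomes:

$$U(p) = p \, u_{\text{bad}} + (1 - p) \, u_{\text{ok}}$$

This is strictly decreasing in $p$, so $U(0.10) < U(0.01)$: the 10% action really is worse, by exactly the calculation a collapse theorist would run with ordinary probabilities.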
> Everything is possible, but not everything has the same measure (is equally likely). Killing someone in 10% of “worlds” is worse than killing them in 1% of “worlds”.
Apart from the other problem: MWI is deterministic, so you can’t alter the percentages by any kind of free will, despite what people keep asserting.
> Does “having a 10% probability of killing someone, and actually killing them” make you a worse person than “having a 10% probability of killing someone, but not killing them”?
Actually killing them is certainly worse. We place moral weight on actions as well as character.
> MWI is deterministic, so you can’t alter the percentages by any kind of free will, despite what people keep asserting.
Neither most collapse theories nor MWI allow for super-physical free will, so that doesn’t seem relevant to this question. Since the question concerns what one should do, it seems reasonable to assume that some notion of choice is possible.
(FWIW, I’d guess compatibilism is the most popular take on free will on LW.)
Yes, but compatibilism doesn’t suggest that you choose between different actions or between different decision theories.
Wait, what? If compatibilism doesn’t suggest that I’m choosing between actions, what am I choosing between?
Theories, imaginary ideas.
> If everything that is possible for you to do is something you actually do, then utility across the whole multiverse would be constant

No, if 99% of timelines have utility 1, while in 1% of timelines something very improbable happens and you instead cause utility to go to 0, the global utility is still pretty much 1. Some part of the human utility function seems to care about absolute existence or nonexistence, and that component is going to be sort of steamrolled by multiverse theory, but we will mostly just keep on going in pursuit of greater relative measure.
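Spelling out the arithmetic behind “pretty much 1” (again assuming utility averages across branch measure):

$$U = 0.99 \times 1 + 0.01 \times 0 = 0.99$$

The catastrophic 1% branch lowers global utility by only 0.01, which is why it does not dominate the total.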
That amounts to saying that if the conjunction of MWI and utilitarianism is correct, we would or should behave as though it isn’t. That is a major departure from typical rationalism (e.g., the Litany of Tarski).
The question isn’t really whether it’s correct; the question is closer to “is it equivalent to the thing we already believed?”
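For what it’s worth, the claimed equivalence can be written compactly. Assuming a decision leads to outcomes $i$ with Born weights $|\alpha_i|^2$ and utilities $u_i$, and the collapse theorist assigns those same outcomes probabilities $p_i = |\alpha_i|^2$ (as the Born rule dictates), then

$$\mathbb{E}_{\text{collapse}}[U] = \sum_i p_i u_i = \sum_i |\alpha_i|^2 u_i = \mathbb{E}_{\text{MWI}}[U]$$

Both views rank every available action identically; they disagree only about what the weights mean.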