I have trouble understanding how both of these can be true. If situations where it applies are very infrequent, how essential can it really be?
What I should have said is “When discussing or thinking about morality, we very infrequently consider situations where it applies.” When people think about morality and posit moral dilemmas, they typically consider only situations where everyone involved is capable of interacting. When people consider the Trolley Problem, they consider only the six people on the tracks and the one person with the switch.
I suppose that, technically, separability applies to every decision we make. For every action we take there is a possibility that someone, somewhere, does not approve of our taking it and would stop us if they could. This is especially true if the universe is as vast as we now think it is. So we need separability in order to discount the desires of those extremely causally distant people.
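To make that concrete (the formalization here is mine, a rough sketch rather than anything either of us has committed to): if the world splits into a part A that our action a can affect and a causally disconnected part B, separability says the value of the whole is the sum of the values of the parts, so the disconnected part drops out of any comparison between actions:

$$V(A_a \sqcup B) = V(A_a) + V(B) \quad\Longrightarrow\quad \arg\max_a V(A_a \sqcup B) = \arg\max_a V(A_a).$$

That is how the desires of agents we can never interact with get discounted without having to be assigned zero weight: they contribute a constant term that cannot change which action comes out best.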
To avoid paralysis, utilitarians need some way of resolving intersubjective differences in utility calculation for the same shared world-state. Using “separability” to discount the unknowable utility calculations of unknown Matrioshka Brains is a negligible portion of the work that needs to be done here.
You are certainly right that separability isn’t the only thing utilitarianism needs in order to avoid paralysis, and that there are other issues it needs to resolve before it even gets to the stage where separability is needed. I’m merely saying that, at that particular stage, separability is essential. Its absence certainly isn’t the only way utilitarianism could be paralyzed, or otherwise run into problems.
For my own part, I would spend considerably more than a cent to create an identical copy of myself whom I can interact with.
When I refer to identical copies I mean a copy that starts out identical to me and remains identical throughout its entire lifespan, like the copies that exist in parallel universes, or the ones in the matrix scenario Wei Dai describes. You, by contrast, appear to be using “identical” to refer to copies that start out identical but later diverge and have different experiences.
Like you, I would probably pay to create copies I could interact with, but I’m not sure how enthusiastic I would be about it. This is because I find experiences to be much more valuable if I can remember them afterwards and compare them to other experiences. If both mes get net value out of the experience, as you expect, then this isn’t a relevant concern. But I certainly wouldn’t consider having 3650 copies of me exist for one day each and then be deleted equivalent to living an extra ten years, the way Robin Hanson appears to.
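For what it’s worth, the equivalence that view presupposes rests on treating experience-time as linearly additive across copies (the framing here is mine, not Hanson’s):

$$3650 \text{ copies} \times 1 \text{ day each} = 3650 \text{ copy-days} = 10 \times 365 \text{ days} = 10 \text{ years}.$$

The disagreement is over whether those unremembered, never-compared copy-days should be valued at par with days lived in one continuous life.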