Wei, no I don’t think I considered the possibility of discounting people by their algorithmic complexity.
I can see that in the context of Everett it seems plausible to weight each observer by a measure proportional to the squared amplitude of the branch of the wave function on which he is living. Moreover, it seems right to use this measure both to calculate the anthropic probability of me finding myself as that observer and the moral importance of that observer’s well-being.
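To make the Everettian weighting explicit (this is just my gloss on the standard Born-rule measure, nothing beyond it): if observer o lives on a branch with amplitude a(o), he gets measure m(o) proportional to |a(o)|^2, normalized so that the measures of all branches sum to 1; the same m(o) would then serve both as the anthropic probability of finding oneself as o and as the weight on o's well-being in the moral calculus.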
Assigning anthropic probabilities over infinite domains is problematic. I don’t know of a fully satisfactory explanation of how to do this. One natural approach to explore might be to assign some Turing-machine-based measure to each of the infinitely many observers. Perhaps we could assign plausible probabilities by using such an approach (although I’d like to see this worked out in detail before accepting that it would work).
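To give one illustrative construction (offered only as a sketch of the kind of measure I have in mind, not a worked-out proposal): let K(o) be the length of the shortest program that picks out observer o on some fixed universal Turing machine, and set m(o) = 2^-K(o) / SUM_o' 2^-K(o'). Because the shortest programs form a prefix-free set, the Kraft inequality guarantees that the sum in the denominator converges even over infinitely many observers, so the measure can be normalized.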
If I understand your suggestion correctly, you propose that the same anthropic probability measure should also be used as a measure of moral importance. But there seems to me to be a problem. Consider a simple classical universe with two very similar observers. On my reckoning they should each get anthropic probability measure 1⁄2 (rejecting SIA, the Self-Indication Assumption). Yet it appears that they should each have a moral weight of 1. Does your proposal require that one accepts the SIA? Or am I misinterpreting you? Or are you trying to explicate not total utilitarianism but average utilitarianism?
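To spell out the arithmetic behind my worry: if the two observers each have well-being u, total utilitarianism values the world at 1*u + 1*u = 2u, whereas weighting by the non-SIA anthropic measure gives (1⁄2)*u + (1⁄2)*u = u, which is just the population average. Duplicating a happy observer would then add nothing to the total, which is why the proposal looks to me like average rather than total utilitarianism, unless SIA is invoked to give worlds with more observers correspondingly more measure.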