Eliezer, suppose the nature of the catastrophe is such that everyone on the planet dies instantaneously and painlessly. Why should such deaths bother you, given that identical people are still living in adjacent branches? If avoiding death is simply a terminal value for you, then I don’t see why encouraging births shouldn’t be a similar terminal value.
I agree that the worlds in which we survive may not be pleasant, but average utilitarianism implies that we should try to minimize the measure of such unpleasant surviving worlds, rather than existential risk per se, which is still strongly counterintuitive.
I don’t know what you are referring to by “hard to make numbers add up on anthropics without Death events”. If you wrote about that somewhere else, I’ve missed it.
A separate practical problem I see with the combination of MWI and consequentialism is that, due to branching, the measure of worlds a person is responsible for is rapidly and continuously decreasing, so that, for example, I'm now responsible for a much smaller portion of the multiverse than I was just yesterday, or even a few seconds ago. In theory this doesn't matter, because the costs and benefits of every choice I face are reduced by the same factor, so the relative rankings are preserved. But in practice this seems pretty demotivating, since the subjective mental cost of making an effort appears to stay the same, while the objective benefits of that effort decrease rapidly. Eliezer, I'm curious how you've dealt with this problem.
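To spell out the arithmetic behind "relative rankings are preserved" (the notation here is my own, a minimal sketch rather than anything Eliezer has written):

```latex
% m(t) > 0 : measure of my current branch at time t (shrinking as branching proceeds)
% u_i      : within-branch payoff of option i (assumed fixed across the comparison)
V_i(t) = m(t)\, u_i
\quad\Longrightarrow\quad
\arg\max_i V_i(t) \;=\; \arg\max_i u_i \quad \text{for all } t.
% The ordering of options never changes, since m(t) is a common positive factor;
% the demotivation arises only when the shrinking V_i(t) is weighed against a
% subjective effort cost that does not scale down with m(t).
```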