I don’t know how far this generalizes, but in your toy example you seem to be optimizing over 3D world states (“number of people who are having a positive experience”) rather than 4D world states (“number of people who are having, have had, or will have positive experiences”). If my choice is between 1 person having a positive experience today and 10 having an equivalent experience tomorrow, then either way, 2 days from now, 100 people will look back on having had such an experience.
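To make the counting concrete, here’s a quick sketch of what I mean. The branching factor and the `copies_on_day` helper are mine, purely for illustration; I’m assuming (my reading of the toy model, possibly wrong) that each person’s measure splits into 10 copies per day and that every downstream copy remembers the experience:

```python
# Sketch of the 3D vs 4D counting argument.
# Assumption (mine, not necessarily the original post's): measure
# multiplies by 10 per day, and an experience on some day sits in the
# past light cone of every downstream copy.

BRANCHING = 10  # hypothetical copies per person per day

def copies_on_day(start_count, start_day, day):
    """Copies on `day` descending from `start_count` people on `start_day`."""
    return start_count * BRANCHING ** (day - start_day)

# Option A: 1 person has the experience on day 0.
# Option B: 10 people have it on day 1.

# 3D view: count experiences as they happen.
experiences_a = 1
experiences_b = 10

# 4D view: count people on day 2 with the experience in their past light cone.
rememberers_a = copies_on_day(1, 0, 2)   # 1 * 10**2 = 100
rememberers_b = copies_on_day(10, 1, 2)  # 10 * 10**1 = 100

print(experiences_a, experiences_b)   # 1 10   -> the options differ in 3D
print(rememberers_a, rememberers_b)   # 100 100 -> the options tie in 4D
```

On those assumptions, the two options only come apart if you score world states at a single moment; scored over whole histories, they’re identical.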
When you introduce the model, you assume there are no relevant anticipations, which makes sense to me, since anticipation would add measurable value to delaying the experience; but you also assume no fond memories, which doesn’t make sense to me. Assigning no value to memories gives the experience, to my mind, the same moral value as an experience that everyone involved forgets due to being blackout drunk: maybe not zero, but very, very low, since it essentially stops being part of the story of anyone’s life.
What I mean is, both options in the toy model (with or without any kind of conserved measure) result in the same number of people having equivalent experiences in their past and future light cones. Why should the moral value of those experiences depend on when within those light cones they happened? For whom does this change the total utility of their experiences?
(Side note: I’m still confused about the implications of quantum immortality/suicide/Russian roulette, in that my brain rebels against every way I try to think about it.)