Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human: in population A you get utility −10, while in population B you get an expected utility of (1/m)(−n) + ((m−1)/m)(−9.9). These intuitions could correspond to a straightforward “maximize the expected utility of ‘being someone in this world’”, or to something like “suppose all consciousness is experienced by a single entity from multiple perspectives, completing every life and then cycling back to the beginning; maximize this being’s utility”. Either perspective would give the “non-intuitive” result in these sorts of thought experiments.
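To make the comparison concrete, here is a minimal sketch of that expected-utility calculation. The specific values of n and m below are placeholders chosen for illustration, not figures from the original thought experiment.

```python
# Illustrative sketch only: n, m, and the example numbers are assumptions,
# not values taken from the thought experiment itself.

def expected_utility_pop_a() -> float:
    # In population A, every life has utility -10.
    return -10.0

def expected_utility_pop_b(n: float, m: int) -> float:
    # In population B, one life out of m has utility -n,
    # and the remaining m - 1 lives each have utility -9.9.
    return (1 / m) * (-n) + ((m - 1) / m) * (-9.9)

# With a sufficiently large m, the single -n life barely moves the
# expectation, so "being someone in B" looks better than "being someone in A".
print(expected_utility_pop_a())                   # -10.0
print(expected_utility_pop_b(n=1000, m=100_000))  # approx. -9.91
```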
Hm, a downvote. Is my reasoning faulty? Or is someone objecting to my second example of a metaphysical stance that would motivate this type of calculation?
Perhaps people simply objected to the implied selfish motivations.
Perhaps! Though I certainly didn’t intend to imply that this was a selfish calculation—one could totally believe that the best altruistic strategy is to maximize the expected utility of being a person.