As Tim Tyler pointed out, the fact that a singleton government physically could choose a random person and appoint him dictator of the universe is irrelevant; we know very well it isn’t going to. This assumption favored project A because almost all your calculated utility derived from the hope of becoming dictator of the universe; when we accept this is not going to happen, all that fictional utility evaporates.
To take the total utility of the rest of the universe as approximately zero for the purpose of this calculation would require that we value other people in general less than we value ourselves by a factor on the order of 10^32. Some discount factor is reasonable—we do behave as though we value ourselves more highly than random other people. But if you agree that you wouldn’t save your own life at the cost of letting a million other people die, then you agree the discount factor should not be as high as 10^6, let alone 10^32.
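As a rough sketch of the arithmetic here (the 10^32 and one-in-a-million figures come from the argument above; modelling the egoistic discount as a single uniform weight d applied to everyone else's utility is an assumption made purely for illustration):

```python
# Sketch: how heavily you would have to discount other people before the
# rest of the universe "rounds to zero". The ~10^32 population-equivalent
# figure comes from the argument above; valuing each other person at one
# unit (one life) is an assumed normalization.

N_OTHERS = 10**32        # rough person-equivalents in the rest of the universe
MY_LIFE = 1.0            # value of my own life, normalized to 1

def value_of_everyone_else(d, n=N_OTHERS):
    """Total value I assign to everyone else under a uniform discount weight d."""
    return d * n

for d in (1e-6, 1e-9, 1e-30, 1e-40):
    others = value_of_everyone_else(d)
    verdict = ("negligible next to my own life" if others < MY_LIFE
               else "still dominates my own life")
    print(f"discount weight {d:g}: rest of universe worth {others:g} lives ({verdict})")

# The million-person test above: if you would not save your own life at the
# cost of a million other lives, your weight on others is at least ~1e-6,
# nowhere near the ~1e-32 or smaller needed to zero out the rest of the universe.
```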
To answer 1: the reason a singleton government won't choose a random person and make him dictator is that it can do better than that. For example, if people's utilities are less than linear in negentropy, it would do better to give everyone an equal share of negentropy. So why shouldn't I assume that in the singleton scenario my utility would be at least as large as if I had a random chance of becoming dictator?
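A minimal sketch of that concavity point, taking square root as an assumed stand-in for any utility that is less than linear in negentropy: each person's expected utility from an equal split exceeds their expected utility from a fair dictator lottery.

```python
import math

# Compare a fair "dictator lottery" (one random person gets all the
# negentropy) with an equal split, for a concave personal utility function.
# sqrt is an assumed stand-in for "utility less than linear in negentropy".

N = 10**6          # number of people (illustrative stand-in)
TOTAL = 1.0        # total negentropy, normalized

def u(x):
    return math.sqrt(x)   # concave: u(a*x) < a*u(x) for a > 1

# Fair lottery: probability 1/N of getting everything, otherwise nothing.
eu_lottery = (1 / N) * u(TOTAL) + (1 - 1 / N) * u(0.0)

# Equal shares: everyone gets TOTAL / N for certain.
eu_equal = u(TOTAL / N)

print(f"EU per person, dictator lottery: {eu_lottery:.3e}")   # 1.000e-06
print(f"EU per person, equal shares:     {eu_equal:.3e}")     # 1.000e-03
# Jensen's inequality: for concave u, u(TOTAL/N) >= (1/N)*u(TOTAL) + (1-1/N)*u(0),
# so the singleton does better for everyone by splitting than by holding a lottery.
assert eu_equal >= eu_lottery
```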
For 2, I don't think a typical egoist would have a constant discount factor for other people, and certainly not the kind of egoist described in Robin's The Rapacious Hardscrapple Frontier. He might be willing to value the entire rest of the universe combined at, say, a billion times his own life, but that's not nearly enough to make EU(B) > EU(A). An altruist would have a completely different kind of utility function, but I think it would still be the case that EU(A) > EU(B).
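The scale of that claim can be sketched with stand-in numbers; nothing here pins down what projects A and B actually are, so the lottery odds and the size of the dictator's prize below are pure assumptions, and personal utility is taken as linear in resources (exactly the assumption the previous comment questions). Only the billion-fold cap on the value of everyone else is taken from the text.

```python
# Illustrative-only EU comparison for the egoist described above.
# Assumed stand-ins: project A offers a long-shot chance at dictatorship over
# resources worth DICTATOR_PRIZE of one's own lifetimes; project B's payoff to
# the egoist is capped by how much he values everyone else combined.
P_DICTATOR = 1e-10       # assumed odds of becoming dictator under project A
DICTATOR_PRIZE = 1e32    # assumed personal prize, in own-lifetime units
OTHERS_CAP = 1e9         # value of the entire rest of the universe (from the text)

eu_A = P_DICTATOR * DICTATOR_PRIZE   # ~1e22 own-lifetime units
eu_B = OTHERS_CAP                    # at most ~1e9 own-lifetime units

print(f"EU(A) ~ {eu_A:.0e}  vs  EU(B) <= {eu_B:.0e}")
# Under these (assumed) numbers the long-shot dictatorship term swamps even a
# billion-lifetime valuation of everyone else, which is the point being made.
```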
Okay, so now the assumptions seem to be that a singleton government will give you exclusive personal title to a trillion galaxies, that we should otherwise behave as though the future universe were going to imitate a particular work of early 21st century dystopian science fiction, and that one discounts the value of other people compared to oneself by a factor of perhaps 10^23 (the 10^32 from before, reduced by the billion-fold valuation just conceded). I stand by my claim that the only effect of whipping out the calculator here is obfuscation; the real source of the bizarre conclusions is the bizarre set of assumptions.