To answer 1, the reason that a singleton government won’t choose a random person and let him be dictator is that it has an improvement on that. For example, if people’s utilities are less than linear in negentropy, then it would do better to give everyone an equal share of negentropy. So why shouldn’t I assume that in the singleton scenario my utility would be at least as large as if I had a random chance of becoming dictator?
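To make that concrete, here is a minimal sketch with made-up numbers (a square-root utility chosen purely for illustration) of why any strictly concave utility in negentropy prefers an equal split over a 1-in-N shot at the whole pot:

```python
import math

# Toy illustration with made-up numbers: N people, total negentropy S,
# and a concave utility u(x) = sqrt(x).
N = 10**6        # hypothetical population size
S = 1.0          # total negentropy, arbitrary units
u = math.sqrt

# Random-dictator lottery: a 1/N chance of getting everything.
eu_lottery = (1 / N) * u(S)

# Equal division: everyone gets S/N for sure.
eu_equal_share = u(S / N)

print(eu_lottery)      # 1e-06
print(eu_equal_share)  # 0.001 -- larger, as Jensen's inequality guarantees
```

Under this toy utility the gap only widens as N grows: the lottery’s expected utility falls like 1/N while the equal share falls like 1/sqrt(N).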
For 2, I don’t think a typical egoist would have a constant discount factor for other people, and certainly not the kind described in Robin’s The Rapacious Hardscrapple Frontier. He might be willing to value the entire rest of the universe combined at, say, a billion times his own life, but that’s not nearly enough to make EU(B) > EU(A). An altruist would have a completely different kind of utility function, but I think it would still be the case that EU(A) > EU(B).
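Spelling out the back-of-envelope arithmetic (the specific numbers are mine and purely illustrative, borrowing the ~10^23 trillion-galaxies scale that comes up below):

```python
# Back-of-envelope with my own illustrative numbers, not anyone's actual model.
# Units: the value of one ordinary (current) life = 1.

# Hypothetically, suppose the egoist's personal payoff is larger under A than
# under B by something on the trillion-galaxies scale discussed in this thread.
personal_gap = 1e23            # assumed EU_self(A) - EU_self(B)

# Valuing the entire rest of the universe combined at a billion times his own
# life caps how much the fate of everyone else can swing the comparison.
max_swing_from_others = 1e9

# Even if B were maximally better for everyone else, the egoist still has
# EU(A) > EU(B):
print(max_swing_from_others < personal_gap)   # True
```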
Okay, so now the assumptions seem to be that a singleton government will give you exclusive personal title to a trillion galaxies, that we should otherwise behave as though the future universe were going to imitate a particular work of early 21st century dystopian science fiction, and that one discounts the value of other people compared to oneself by a factor of perhaps 10^23. I stand by my claim that the only effect of whipping out the calculator here is obfuscation; the real source of the bizarre conclusions is the bizarre set of assumptions.