I don’t understand the assumption that each rationalist prefers to be a civilian while someone else risks her life. They can be rational and use a completely altruistic utility function that values all people equally a priori. The strongest rationalist society is the rationalist society where everyone has the same terminal values (in an absolute rather than relative sense).
Imagine a community of self-modifying AIs who collectively prefer fighting to surrender, but individually prefer being a civilian to fighting. One solution is to run a lottery, unpredictable to any agent, to select warriors. Before the lottery is run, all the AIs change their code, in advance, so that if selected they will fight as a warrior in the most communally efficient possible way—even if it means calmly marching into their own death...
You can have lotteries for who gets elected as a warrior. Sort of like the example above with AIs changing their own code. Except that if “be reflectively consistent; do that which you would precommit to do” is not sufficient motivation for humans to obey the lottery, then...
...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away. Even considering that we ourselves might be selected in the lottery. Because in advance of the lottery, this is the general policy that gives us the highest expectation of survival.
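The lottery-plus-precommitment scheme quoted above can be sketched in code. This is a minimal illustration, not anything from the original post: the agent class, the payoffs, and the three-warrior lottery are all assumptions chosen for the example.

```python
import random

class Agent:
    """A self-modifying AI that can rewrite its own policy in advance."""

    def __init__(self, name):
        self.name = name
        self.will_fight_if_selected = False  # default, pre-modification policy

    def precommit(self):
        # Before the lottery runs, the agent changes its own code so that,
        # if selected, it fights in the communally efficient way rather
        # than deserting -- even though, after selection, deserting would
        # look individually preferable.
        self.will_fight_if_selected = True

    def act_if_selected(self):
        return "fight" if self.will_fight_if_selected else "desert"


agents = [Agent(f"AI-{i}") for i in range(10)]

# Every agent precommits *before* the lottery, because ex ante this
# general policy gives each of them the highest expectation of survival.
for a in agents:
    a.precommit()

# The lottery is unpredictable to any individual agent.
rng = random.Random()
warriors = rng.sample(agents, 3)

# After selection, the precommitted policy holds: the selectees fight.
print(all(w.act_if_selected() == "fight" for w in warriors))
```

The point the sketch makes concrete is the ordering of events: the self-modification happens strictly before the lottery, so no agent is ever in a position to defect after learning it was selected.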
Eliezer is analyzing the situation as a Prisoner’s Dilemma: different players have different utility functions. This analysis would be completely redundant in a society where everyone has the same utility function (or at least sufficiently similar / non-egocentric utility functions). In such a society there wouldn’t be a need for a lottery: the soldiers would be those most skilled for the job. There would be no need for drugs / shooting deserters: the soldiers would want to fight because the choice to fight would be associated with positive expected utility (even if it means a high likelihood of death).
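The claim that shared utility functions make coercion unnecessary can be illustrated with a toy expected-utility calculation. All numbers here are made-up assumptions for the sketch; the model (a fighter's skill raising the community's win probability) is hypothetical, not from the discussion above.

```python
# Shared, non-egocentric communal payoffs: every agent evaluates
# outcomes with this same utility function.
U_WIN, U_LOSE = 100.0, -1000.0
U_OWN_DEATH = -10.0  # a death also counts, equally, in the shared function

def eu_of_fighting(skill, p_death=0.8):
    """Expected communal utility gain if this candidate fights.

    Hypothetical model: a fighter of the given skill raises the
    community's win probability from 0.5 to 0.5 + 0.4 * skill.
    """
    p_win_if_fights = 0.5 + 0.4 * skill
    p_win_if_civilian = 0.5
    gain_from_fighting = (p_win_if_fights - p_win_if_civilian) * (U_WIN - U_LOSE)
    return gain_from_fighting + p_death * U_OWN_DEATH

# A highly skilled candidate finds that fighting has positive expected
# utility even with an 80% chance of dying -- no lottery, drugs, or
# threat of being shot is needed to motivate them.
print(eu_of_fighting(skill=0.9) > 0)

# And the skilled contribute more than the unskilled, so "the soldiers
# would be those most skilled for the job" falls out of the same math.
print(eu_of_fighting(skill=0.9) > eu_of_fighting(skill=0.1))
```

Under these assumptions the choice to fight maximizes the very utility function the would-be soldier holds, which is the comment's point: the Prisoner's Dilemma structure dissolves when the players' payoffs coincide.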
I don’t understand the assumption that each rationalist prefers to be a civilian while someone else risks her life. They can be rational and use a completely altruistic utility function that values all people equally a priori. The strongest rationalist society is the rationalist society where everyone has the same terminal values (in an absolute rather than relative sense).
That isn’t an assumption Eliezer is making, it’s an assumption he’s attacking.
It doesn’t look like it to me:
Eliezer is analyzing the situation as a Prisoner’s Dilemma: different players have different utility functions. This analysis would be completely redundant in a society where everyone has the same utility function (or at least sufficiently similar / non-egocentric utility functions). In such a society there wouldn’t be a need for a lottery: the soldiers would be those most skilled for the job. There would be no need for drugs / shooting deserters: the soldiers would want to fight because the choice to fight would be associated with positive expected utility (even if it means a high likelihood of death).