So on the one hand you have values that are easily, trivially compatible, such as “I want to spend 1000 years climbing the mountains of Mars” or “I want to host blood-sports with my uncoerced friends with the holodeck safety on”.
On the other hand you have insoluble, or at least apparently insoluble, conflicts: B wants to torture people; C wants there to be no torture anywhere at all. C wants to monitor everyone everywhere forever to check that they aren’t torturing anyone or plotting to torture anyone; D wants privacy. E and F both want to be the best in the universe at quantum soccer, even if they have to kneecap everyone else to get that. Etc.
It’s simply false that you can just put someone on the throne as emperor of the universe and they’ll justly compromise about all conflicts. Or even do anything remotely like that.
How many people have conflictual values that they, effectively, value lexicographically more than their other values? Does decision theory imply that compromise will be chosen by sufficiently well-informed agents who do not have lexicographically valued conflictual values?
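For concreteness, here is a minimal toy sketch (purely illustrative; the agents, values, and numbers are invented, not taken from any particular decision theory) of why lexicographic preferences block the usual compromise argument: an agent that ranks its conflictual value lexicographically above everything else will reject any deal that concedes even slightly on that value, no matter how much it gains elsewhere, whereas an agent that merely weights its values against each other will accept some compromise bundles.

```python
from dataclasses import dataclass

# Hypothetical toy model: each outcome scores two values,
#   v1 = satisfaction of the agent's top (conflictual) value
#   v2 = satisfaction of everything else the agent cares about
# All names and numbers are made up for illustration only.

@dataclass
class Outcome:
    name: str
    v1: float  # e.g. "no torture anywhere, ever"
    v2: float  # all other values combined

def lexicographic_prefers(a: Outcome, b: Outcome) -> bool:
    """True if `a` is preferred to `b` under a lexicographic ordering:
    v1 decides outright; v2 only breaks exact ties in v1."""
    if a.v1 != b.v1:
        return a.v1 > b.v1
    return a.v2 > b.v2

def weighted_prefers(a: Outcome, b: Outcome, w1: float = 0.9) -> bool:
    """True if `a` is preferred to `b` when the two values trade off
    at some finite rate (here a simple weighted sum)."""
    score = lambda o: w1 * o.v1 + (1 - w1) * o.v2
    return score(a) > score(b)

status_quo = Outcome("hold out, gain nothing else", v1=1.0, v2=0.0)
compromise = Outcome("concede slightly on v1, gain a lot elsewhere", v1=0.99, v2=1.0)

# The lexicographic agent refuses the compromise; the weighted agent takes it.
print(lexicographic_prefers(compromise, status_quo))  # False
print(weighted_prefers(compromise, status_quo))       # True
```

So to the extent decision theory predicts that well-informed agents compromise, it does so only for agents whose conflictual values trade off against their other values at some finite rate; for a genuinely lexicographic holdout, no side payment is large enough, which is why the headcount question above matters.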