In other words, if my opponent begins to make choices that better optimize their goals, do I gain or lose?
It seems clear that the answer depends on how many of their goals I share, how many I oppose, and how much I value the shared goals relative to the opposed goals.
Suppose we are Swift’s Big-Endians and Little-Endians, who agree on pretty much everything that matters (even by their own standards!) and are bitterly divided over a single relatively trivial issue. If one side is suddenly optimized, everybody wins. That is, the vast majority of everyone’s current goals are more effectively and efficiently met, including those of the opposition.
Sure, the optimized party gets all of that plus the value of having everyone open their eggs on the side they endorse… which means their opponents suffer in turn the value-loss of everyone opening their eggs on the side they reject. But they will be suffering that value-loss in the context of an overall increase in their value. I’m not saying everyone wins equally, just that everybody wins. Whether they are happy about this or not depends on other factors, but they seem pretty clearly to be better off.
In that scenario, upgrading my opponents means I win, although upgrading my allies means I win more.
(Of course, it’s possible that both kinds of Endian conclude that they get more of what they want by self-modifying to stop caring so much about peeling eggs, and then work out ways to do so. One person’s “value” is another person’s “bias.” But that’s another discussion.)
By contrast, suppose that instead of Endians we have more fundamentally opposed opponents… say, aliens who want to modify planets like Earth to have cyanide-rich atmospheres so they can colonize them, whereas we would prefer more oxygen-rich atmospheres, which are toxic to the aliens.
In a case like that, optimizing our opponents means they get a larger share of the available worlds (either through better negotiations, or winning wars, or more efficient exploration, or whatever) and in the long run dominate the galaxy. If we’re at a point where planetary surfaces really are the most valuable thing in play, then they win and we lose.
(Of course, it’s possible we both conclude that we get more of what we want by self-modifying to breathe whatever atmosphere the planet happens to have. But again, that’s another discussion.)
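To make the contrast concrete, here is a toy sketch (the weights are invented for illustration; nothing above commits to specific numbers): treat my payoff from my opponent's optimization as their gain on our shared goals minus their gain on our opposed goals, weighted by how much I care about each.

```python
# Toy model: what happens to me when the other side gets optimized,
# as a function of how much of what they pursue I share versus oppose.
# All weights here are invented for illustration.

def my_payoff_change(shared_weight, opposed_weight, optimization_gain=1.0):
    """Net change in my utility when my opponent optimizes.

    shared_weight:  how much I value the goals we have in common
    opposed_weight: how much I value the goals we directly contest
    """
    return optimization_gain * (shared_weight - opposed_weight)

# "Endian" conflict: shared goals dwarf the single contested one.
print(my_payoff_change(shared_weight=0.95, opposed_weight=0.05))  # ~ +0.9: I win too

# "Oxygen-cyanide" conflict: the contested goals dwarf the shared ones.
print(my_payoff_change(shared_weight=0.05, opposed_weight=0.95))  # ~ -0.9: I lose
```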
Coming back down to Earth, then: I guess the question is, how many existing group-level conflicts among humans are primarily superficial conflicts among groups whose shared goals dwarf their opposed ones (“Endian” conflicts), and how many really are deep conflicts among groups whose opposed goals dwarf their shared ones (“oxygen-cyanide” conflicts)?
I don’t know, but I would be surprised if a significant number were non-Endian.
If that’s true, then in general optimizing everyone, even my opponents, leads to everyone being better off, even me. Not because everyone immediately realizes that I’m right and they’re wrong, but because most of us already agree on the overwhelming majority of our values.
I’m not saying everyone wins equally, just that everybody wins. OTOH, that might be false.
I really hope that this is the case, but I don’t think that it is. I think the difference between a hypothetical socialist and a hypothetical libertarian is more dramatic than the difference between a Big-Endian and a Little-Endian. Consider this situation:
All of humanity consists of 100 people, each starting at utility 10, and a random one of them is given this choice: either keep things the way they are (everyone has 10 utilons, for a total of 1000), or one person, chosen at random, is given 990 utilons while everyone else loses 9, so that one person has 1000 and everyone else has 1, for a total of 1099, or about 11 per person. The expected utility of the latter option is higher than that of the former, so every rational being must pick the latter, right?

Though I’ve learned a lot since that conversation and would no longer make the same points, I still think that an equitable distribution of utility is better than an unequal one. Many people genuinely think it is a wonderful thing to make the world highly stratified, with a whole lot of people who lose so that a few people can really, really win. There are also a whole lot of people who genuinely think it is worth sacrificing some amount of “progress” (by which I mean technological innovation, cheapness of consumer goods, whatever) in order to make people’s lives more equitable.

I lie closer to the second camp, but I haven’t pounded my tent stakes into the ground, and even if I have, I certainly haven’t laid a brick-and-mortar foundation, so I can uproot fairly quickly. I understand the logic that leads to the former conclusion; I think it just starts from different premises than the reasoning that leads to the latter (though of course there are crazies in both camps, but that goes without saying). It does seem to me, however, that the two actually are fundamentally irreconcilable in very important ways. I hope I’m wrong about that, but it really seems like I’m not...
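For concreteness, here is that arithmetic spelled out, a minimal sketch using exactly the utilon numbers as stated:

```python
# The 100-person example, with the utilon numbers as stated above.
N = 100
status_quo    = [10] * N                  # everyone stays at 10
redistributed = [1000] + [1] * (N - 1)    # one winner at 1000, the other 99 at 1

print(sum(status_quo))                    # 1000 total, i.e. 10 per person
print(sum(redistributed))                 # 1099 total, i.e. ~11 per person

# Expected utility for a randomly chosen person under each option:
print(sum(status_quo) / N)                # 10.0
print(sum(redistributed) / N)             # 10.99 -- higher, hence the rhetorical "right?"
```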
edit: Certainly arguments like “ought gay people/mixed-race couples be allowed to get married?” seem more like arguments about egg-peeling, so your strategy hopefully would work there.
Absolutely agreed that the difference between “I’m worse off than I was, and you’re better off” (as in your example) and “I’m better off than I was, and you’re much better off than I am” (e.g., we start off at 10 utilons each, a randomly chosen person gets +1009 utilons, and everyone else gets +10 utilons) matters here.
I’m talking about the second case… that is, I’m not making the “maximize global utility” argument.
This has nothing to do with inequity. The second case is just as unequal as the first: at the end of the day one person has 999 utility more than his neighbors. The difference is that in the second case his neighbors are better off than they were at the start, and in the first case they are worse off.
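For concreteness, here are the two cases side by side (the +1009 figure is chosen so the final gap matches the first case exactly; otherwise the numbers are as given above):

```python
# The two cases from this exchange, side by side.
N = 100
start = [10] * N                           # everyone begins at 10

case1 = [10 + 990] + [10 - 9] * (N - 1)    # winner ends at 1000, everyone else at 1
case2 = [10 + 1009] + [10 + 10] * (N - 1)  # winner ends at 1019, everyone else at 20

# Equally unequal: the winner ends 999 utilons ahead of his neighbors in both.
print(case1[0] - case1[1], case2[0] - case2[1])  # 999 999

# But only in the second case is everyone better off than they started.
print(all(x > 10 for x in case1))                # False
print(all(x > 10 for x in case2))                # True
```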
As for whether any given real-world case (e.g., socialist vs. libertarian) is more like the first or the second: I don’t really know.