As I commented on What Would You Do Without Morality?:
I expect I’ll keep on doing what I’m doing, which is trying to work out what I actually want. [...] So far I haven’t lapsed into nihilist catatonia, killed everyone, or destroyed the economy. This suggests that assuming a morality is not a requirement for not behaving like a sociopath. I have friends and it pleases me to be nice to them, and I have a lovely girlfriend and a lovely three-year-old daughter on whom I spend most of my life’s efforts, bringing her up and securing the prerequisites for that.
Without an intrinsic point to the universe, it seems likely to me that people would go on behaving with the same sort of observable morality they had before. I consider this supported by the observed phenomenon that Christians who turn atheist seem to still behave as ethically as they did before, without a perception of God to direct them.
This may or may not directly answer your question about the correct moral engine to have in one’s mind (if there even is a single correct one, and assuming that what’s in one’s mind has a tremendous effect on one’s observed ethical behaviour, rather than that behaviour being largely evolved behaviour going back millions of years before the mind), but I don’t actually care about that except insofar as it affects the observed behaviour.
So in other words you agree with Lovecraft that only egotism exists?
Wha? There’s no law of nature forcing all my goals to be egotistical. If I saw a kitten about to get run over by a train, I’d try to save it. The fact that insectoid aliens may not adore kittens doesn’t change my values one bit.
That’s certainly true, but from the regular human perspective, the real trouble is that in a conflict of values and interests there is no “right,” only naked power. (Which, of course, depending on the game-theoretic aspects of the concrete situation, may or may not escalate into warfare.) This has some unpleasant implications not just when it comes to insectoid aliens, but also in ordinary human conflicts.
In fact, I think there is a persistent thread of biased thinking on LW in this regard. People here often write as if sufficiently rational individuals would surely be able to achieve harmony among themselves (this often-cited post, for example, seems to take it for granted). Whereas in reality, even if they are so rational as to leave no possibility of factual disagreement, if their values and interests differ (and they often will), it must be either “good fences make good neighbors” or “who-whom.” Indeed, I find it quite plausible that a no-holds-barred dissolving of socially important beliefs and concepts would exacerbate conflict, since the underlying clash of interests would only become more obvious.
Negative-sum conflicts happen due to factual disagreements (mostly inaccurate assessments of relative power), not value disagreements. If two parties have accurate beliefs but different values, bargaining will be more beneficial to both than making war, because bargaining can avoid destroying wealth but still take into account the “correct” counterfactual outcome of war.
Though bargaining may still look like “who whom” if one party is much more powerful than the other.
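To make the bargaining-versus-war argument above concrete, here is a minimal numerical sketch; the prize value, win probability, and war costs are purely illustrative assumptions, not figures from the discussion. Given beliefs that both sides share, it computes each side’s expected payoff from fighting and then lists the peaceful divisions of the prize that leave both sides at least as well off as war; the width of that range is exactly the wealth that fighting would have destroyed.

```python
# Minimal sketch: with shared, accurate beliefs, bargaining beats war
# because it avoids destroying wealth. All numbers are illustrative.

def war_payoffs(prize, p_a_wins, cost_a, cost_b):
    """Expected payoffs if the two sides fight over the prize."""
    payoff_a = p_a_wins * prize - cost_a        # A wins with probability p_a_wins
    payoff_b = (1 - p_a_wins) * prize - cost_b  # B wins otherwise; both pay costs
    return payoff_a, payoff_b

def acceptable_splits(prize, war_a, war_b):
    """Peaceful divisions of the prize that are at least as good as war for both."""
    return [(a_share, prize - a_share)
            for a_share in range(prize + 1)
            if a_share >= war_a and (prize - a_share) >= war_b]

if __name__ == "__main__":
    prize = 100              # value of the contested resource
    p_a_wins = 0.8           # both sides agree on this (accurate beliefs)
    cost_a, cost_b = 10, 10  # wealth destroyed by fighting

    war_a, war_b = war_payoffs(prize, p_a_wins, cost_a, cost_b)
    print(f"Expected war payoffs: A={war_a:.0f}, B={war_b:.0f}")  # roughly 70 and 10

    splits = acceptable_splits(prize, war_a, war_b)
    print(f"Splits at least as good as war for both: A gets {splits[0][0]}..{splits[-1][0]}")
    # The ~20 units between those bounds are the avoided war costs; where the
    # split lands inside that range is decided by bargaining power, not by "right".
```

Pushing p_a_wins toward 1 in the same sketch still produces mutually acceptable splits, but they all leave B with very little, which is the sense in which even successful bargaining can look like “who whom” when power is lopsided.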
How strong do the perfect-information assumptions have to be to guarantee that rational decision-making can never lead both sides in a conflict to precommit to escalation, even when their behavior has signaling implications for other conflicts in the future? (I don’t know the answer, but my hunch is that even if such a guarantee is possible, the assumptions would have to be unrealistic for anything conceivable in reality.)
And of course, as you note, even if every conflict is resolved by perfect Coasian bargaining, when there is a significant asymmetry of power the practical outcome for the weaker side can still be little different from defeat and subjugation (or even obliteration) in a war.
By ‘negative-sum’ do you really mean ‘negative for all parties’? Because, taking ‘negative-sum’ literally, we can imagine a variant of the Prisoner’s Dilemma where A defecting gains 1 and costs B 2, and where B defecting gains 3 and costs A 10.
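To spell that construction out, here is a quick sketch that tabulates the four pure outcomes; the baseline payoff of 0 for mutual cooperation is an added assumption, since none was given. It shows that several outcomes are negative-sum in the literal total-payoff sense while still being strictly positive for one of the players, which is the distinction the question is pointing at.

```python
# Payoff matrix for the Prisoner's Dilemma variant described above.
# A baseline payoff of 0 for mutual cooperation is assumed for concreteness.
# A defecting gains A 1 and costs B 2; B defecting gains B 3 and costs A 10.

GAIN_A, COST_TO_B = 1, 2    # effects of A's defection
GAIN_B, COST_TO_A = 3, 10   # effects of B's defection

def payoffs(a_defects, b_defects):
    """Return (payoff_A, payoff_B) for one pure-strategy outcome."""
    payoff_a = (GAIN_A if a_defects else 0) - (COST_TO_A if b_defects else 0)
    payoff_b = (GAIN_B if b_defects else 0) - (COST_TO_B if a_defects else 0)
    return payoff_a, payoff_b

if __name__ == "__main__":
    for a_defects in (False, True):
        for b_defects in (False, True):
            a, b = payoffs(a_defects, b_defects)
            label = f"A {'defects' if a_defects else 'cooperates'}, B {'defects' if b_defects else 'cooperates'}"
            print(f"{label}: A={a:+d}, B={b:+d}, total={a + b:+d}")
    # A cooperates, B cooperates: A=+0,  B=+0, total=+0
    # A cooperates, B defects:    A=-10, B=+3, total=-7  (negative-sum, positive for B)
    # A defects, B cooperates:    A=+1,  B=-2, total=-1  (negative-sum, positive for A)
    # A defects, B defects:       A=-9,  B=+1, total=-8  (negative-sum, positive for B)
```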
I suppose I meant “Pareto-suboptimal”. Sorry.
How does that make sense? You are correct that under sufficiently generous Coasian assumptions, any attempt at predation will be negotiated into a zero-sum transfer, thus avoiding a negative-sum conflict. But that is still a violation of Pareto optimality, which requires that nobody ends up worse off.
I don’t understand your comment. There can be many Pareto optimal outcomes. For example, “Alice gives Bob a million dollars” is Pareto optimal, even though it makes Alice worse off than the other Pareto optimal outcome where everyone keeps their money.
Yes, this was a confusion on my part. You are right that starting from a Pareto-optimal state, a pure transfer results in another Pareto-optimal state.
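To pin down the point being conceded here, a small sketch under an illustrative two-person setup with made-up dollar figures: a Pareto-dominance check confirms that both “everyone keeps their money” and “Alice gives Bob a million dollars” are Pareto optimal, while a wealth-destroying fight is not.

```python
# Sketch: a pure transfer is still Pareto optimal, because making Alice worse
# off to make Bob better off is not a Pareto improvement. Figures are made up.

def pareto_dominates(x, y):
    """True if x makes everyone at least as well off as y, and someone strictly better off."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def is_pareto_optimal(outcome, feasible):
    """Pareto optimal = no feasible outcome Pareto-dominates it."""
    return not any(pareto_dominates(other, outcome)
                   for other in feasible if other != outcome)

if __name__ == "__main__":
    # Payoff vectors are (Alice, Bob).
    keep_money = (1_000_000, 0)       # everyone keeps their money
    transfer   = (0, 1_000_000)       # Alice gives Bob a million dollars
    fight      = (600_000, -50_000)   # Bob tries to take it by force: wealth
                                      # is destroyed and Bob also bears costs

    feasible = [keep_money, transfer, fight]
    for outcome in feasible:
        print(outcome, "Pareto optimal:", is_pareto_optimal(outcome, feasible))
    # keep_money and transfer are both Pareto optimal: neither dominates the
    # other, since moving between them helps one party and hurts the other.
    # fight is not: keep_money makes both parties strictly better off than it.
```

The check only rules out outcomes that some alternative improves for everyone at once, which is why a pure transfer, however lopsided, survives it.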