This question of what fairness/morality is seems a lot easier (to me) than the other posters here appear to find it.
Isn’t the answer: You start with purely selfish desires. These sometimes conflict over limited resources. Then you apply Rawls’s Veil of Ignorance and come up with social rules (like “don’t murder”) that result in a net positive outcome for society. It’s not a zero-sum game: cooperation can yield greater returns for everybody than constant conflict.
Individuals who break the agreed morality are shunned, in much the same way as someone who defects in a Prisoner’s Dilemma, or a herder who grazes extra sheep on the common field and brings on the Tragedy of the Commons.
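The "cooperation beats constant conflict" point can be made concrete with a one-shot Prisoner's Dilemma. The payoff numbers below are the standard textbook shape but otherwise made up; the only thing that matters is their ordering.

```python
# Hypothetical payoffs to (row, column) player: C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: constant conflict
}

def total_welfare(a, b):
    """Sum of both players' payoffs for a pair of moves."""
    return sum(PAYOFFS[(a, b)])

# The game is not zero-sum: the pie is bigger when both cooperate.
print(total_welfare("C", "C"))  # 6
print(total_welfare("D", "D"))  # 2
```

Each player is individually tempted to defect (5 beats 3), which is exactly why the shunning mechanism matters: it changes the payoffs for defectors.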
Yes, any of us could break the common morality—that’s easy. The whole point is that if you didn’t know which of the individuals you were going to be, you wouldn’t be so eager to propose some particularly non-moral solution.
Meanwhile, moral dilemmas that actually are zero-sum, like two monkeys and a banana that can’t be divided, don’t have consensus solutions in society.
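Here is a toy illustration (with made-up utility numbers) of why a truly zero-sum dilemma resists a veil-of-ignorance consensus: every allocation yields the same total welfare, so no rule grows the pie, and behind the veil every split looks alike.

```python
# One indivisible banana, two monkeys: each possible allocation gives
# the whole banana (utility 1) to one monkey and nothing to the other.
allocations = [
    {"monkey_a": 1, "monkey_b": 0},
    {"monkey_a": 0, "monkey_b": 1},
]

# Total welfare is identical under every allocation...
totals = [sum(a.values()) for a in allocations]
print(totals)  # [1, 1]

# ...and with a 50/50 chance of being either monkey, expected utility
# behind the veil is also identical, so the veil picks no winner.
expected = [0.5 * a["monkey_a"] + 0.5 * a["monkey_b"] for a in allocations]
print(expected)  # [0.5, 0.5]
```

Contrast this with the non-zero-sum cases, where one rule strictly beats another in expectation and the veil does single out a winner.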
Finally, this formulation doesn’t completely resolve all scenarios, because it matters a lot which group of people/things you put in the class that “you” “might have been”. In the morality of a few centuries ago, “you” were a white slaveowner, and it didn’t occur to you that “you” “might have been” a black slave; so owning slaves was not immoral then. Just as, today, you might imagine yourself as any citizen (of your country? of the world?), but not, say, a cow. So the conflict becomes one over which population the Veil of Ignorance is drawn from.
(Of course, all this imagining skirts the objection that it is meaningless to say “you” “might have been” someone else. But you can still do the computation even though the scenario is not physically plausible.)
But the basic structure seems pretty clear. It’s not “right” for strong people to beat up weak people, because if you don’t know whether you would have been born strong or weak, you’d much rather live in a society where nobody does it than in one where the strong dominate the weak. In other words, the gains from beating people up are far less than the losses from being beaten up.
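That "computation" from the earlier parenthetical can actually be written down. This is a minimal sketch with invented utility numbers; the only assumption doing any work is that the gain from beating someone up is much smaller than the loss from being beaten.

```python
# Behind the veil you don't know whether you'll be born strong or weak,
# so you score each candidate social rule by its expected utility over
# the population you "might have been".

P_STRONG = 0.5  # assumed chance of being born strong

# Hypothetical utilities per rule and per birth outcome: under
# "strong beat weak", the gain from beating people up (+1) is far
# smaller than the loss from being beaten up (-10).
RULES = {
    "strong beat weak": {"strong": 1, "weak": -10},
    "nobody beats anyone": {"strong": 0, "weak": 0},
}

def expected_utility(rule):
    u = RULES[rule]
    return P_STRONG * u["strong"] + (1 - P_STRONG) * u["weak"]

for rule in RULES:
    print(rule, expected_utility(rule))
# "nobody beats anyone" wins behind the veil: 0.0 > -4.5
```

Note that this picks a winner precisely because the game is not zero-sum: banning violence raises total welfare, not just redistributes it.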
(...we do what we must, because we can. For the good of all of us. Except the ones who are dead.)