Honestly, it would just be much better to open up “shared-value game theory” as a formal subject and then see how well that elaborated field actually matches our normal conceptions of ethics.
Largely because, in my opinion, it explains the real world much, much better than a “selfish” game theory.
Using selfish game theories, “generous” or “altruistic” strategies can evolve to dominate in iterated games and evolved populations (there’s a link somewhere upthread to the paper). You’re still then left with the question of: if they do, why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?
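(A rough sketch of the kind of setup that claim is about. The strategy names, payoff numbers, noise level and round count below are generic textbook choices, not the specifics of the paper linked upthread.)

```python
import random

# Minimal round-robin iterated Prisoner's Dilemma tournament (illustrative only).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def all_d(my_hist, their_hist):
    return "D"

def all_c(my_hist, their_hist):
    return "C"

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"

def generous_tft(my_hist, their_hist, forgive=1/3):
    # Like tit-for-tat, but forgives a defection with probability `forgive`.
    if their_hist and their_hist[-1] == "D":
        return "C" if random.random() < forgive else "D"
    return "C"

def play_match(strat_a, strat_b, rounds=200, noise=0.02):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        # Implementation noise: each intended move flips with small probability.
        if random.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if random.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a / rounds, score_b / rounds

strategies = {"ALLD": all_d, "ALLC": all_c, "TFT": tit_for_tat, "GTFT": generous_tft}
totals = {name: 0.0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        avg_a, avg_b = play_match(strat_a, strat_b)
        totals[name_a] += avg_a

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: average payoff per round, summed over opponents = {total:.2f}")
```

Which strategy comes out on top depends on the noise level, the number of rounds, and which strategies are in the population; the point is only that “nice but forgiving” rules can hold their own even though every agent is scored purely on its own payoffs.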
Using theories in which agents share some of their values, “generous” or “altruistic” strategies become the natural, obvious result: shared values are nonrivalrous in the first place. Evolution builds us to feel Good and Moral about creatures who share our values because that’s a sign they probably have similar genes (though I just made that up now, so it’s probably totally wrong; also, nothing had time to evolve to fake human moral behavior, so the kin-signal remained reasonably strong).
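To make “shared values” concrete in the simplest way I can think of (a toy transform chosen purely for illustration, not anyone’s agreed formalization): give each agent a utility equal to its own material payoff plus a caring weight times the other agent’s payoff, and watch how best responses in a one-shot Prisoner’s Dilemma change.

```python
# Material payoff to me, indexed by (my move, their move). Illustrative numbers.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def utility(my_move, their_move, lam):
    """My utility = my material payoff + lam * the other player's payoff."""
    mine = PAYOFF[(my_move, their_move)]
    theirs = PAYOFF[(their_move, my_move)]
    return mine + lam * theirs

def best_response(their_move, lam):
    return max(["C", "D"], key=lambda m: utility(m, their_move, lam))

for lam in (0.0, 0.8):  # 0.0 = purely selfish, 0.8 = substantial shared values
    print(f"caring weight lam={lam}:")
    for their_move in ("C", "D"):
        print(f"  best response to {their_move}: {best_response(their_move, lam)}")
```

With the caring weight at zero, defection strictly dominates; with these payoff numbers, once the weight exceeds 2/3, cooperation strictly dominates, which is the sense in which generosity stops being a puzzle and becomes the obvious move.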
Using selfish game theories, “generous” or “altruistic” strategies can evolve to dominate in iterated games and evolved populations (there’s a link somewhere upthread to the paper). You’re still then left with the question of: if they do, why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?
Because we’re adaptation executors, not fitness maximizers. Evolution gets us to do useful things by having us derive emotional value directly from doing those things, not by introducing the extra indirect step of moulding us into rational calculators who first have to consciously compute what’s most useful.
why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?
If you’re running some calculation involving a lot of logarithms, and portable electronics haven’t been invented yet, would you rather take a week to derive the exact answer with an abacus, and another three weeks hunting down a boneheaded sign error, or ten seconds for the first two or three decimal places on a slide rule?
Rational selfishness is expensive to set up, expensive to run, and can break down catastrophically at the worst possible times. Evolution tends to prefer error-tolerant systems.
If ethics is game-theoretic, it is not so to the extent that we could calculate exact outcomes.
It may still be game-theoretic in some fuzzy or intractable way.
The claim that ethics is game-theoretic could therefore be a philosophy-grade truth even if it is not a science-grade truth.
Honestly, it would just be much better to open up “shared-value game theory” as a formal subject and then see how well that elaborated field actually matches our normal conceptions of ethics.
Why assume some values have to be shared? If decision-theoretic ethics can be made to work without shared values, that would be interesting.
And decision-theoretic ethics is already extant.