Irrationality game: Humanity’s concept of morality (fairness, justice, etc.) is just a collection of adaptations or adaptive behaviours that have grown out of game theory; specifically, out of trying to get to symmetrical cooperation in the iterated Prisoner’s Dilemma. 85% confident.
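For concreteness, here is a minimal sketch of the iterated Prisoner’s Dilemma dynamic the claim refers to (illustrative only, with the standard textbook payoffs; the strategies and numbers are my own choices, not from the post). Tit-for-tat against itself settles into exactly the symmetric cooperation at issue, while a pure defector can exploit it only once:

```python
# Minimal iterated Prisoner's Dilemma sketch (hypothetical strategies and
# standard payoffs, for illustration). (my move, their move) -> my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Defect unconditionally."""
    return "D"

def play(a, b, rounds=10):
    """Total payoffs for strategies a and b over `rounds` rounds."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        hist_a.append((move_a, move_b))  # each player sees (own, opponent)
        hist_b.append((move_b, move_a))
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mirrors
```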
I’m unsure what you mean by the “just”. Should it be something more, and what difference does morality’s origin make to how we value it?
There’s no other source of morality and there’s no other criterion to evaluate a behaviour’s moral worth by. (Theorised sources such as “God” or “innate human goodness” or “empathy” are incorrect; criteria like “the golden rule” or “the Kantian imperative” or “utility maximisation” are only correct to the extent that they mirror the game theory evaluation.)
Of course we claim to have other sources, and we act according to them; the claim is that behaviours which are moral-according-to-X, but which diverge from the game-theoretic evaluation, are in fact immoral.
Evolution, either genetic or cultural, doesn’t have infinite search capacity. We can evaluate which of our adaptations actually are promoting or enforcing symmetric cooperation in the IPD, and which are still climbing that hill, or are harmless extraneous adaptations generated by the search but not yet optimised away by selection pressures.
But we are our adaptations. Are you claiming morality should be defined by evolutionary fitness? (So we should tile the universe with our DNA?) How is that better than other external sources of morality? We already have a morality; it doesn’t matter (for the purpose of being moral) where it came from, be it God or evolution.
Also, saying that morality comes from solving the PD doesn’t help, since the PD already assumes the agents have utility functions. Game theory is only directly relevant to rationality, not morality. If you and I are playing a non-zero-sum game, then we had better cooperate for our own good. But the fact that my utility function already includes your well-being is completely independent of that.
I agree that evolutionary thinking can be helpful to figure out what our morality is (since moral intuition is low bandwidth and noisy), but I’m against imaginary extrapolations of evolution.
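The point that the PD presupposes utility functions can be made concrete (a sketch with hypothetical numbers, not from the comment): the dilemma only exists relative to a given utility function, and if my utility already weighs your payoff, the transformed game may not be a dilemma at all.

```python
# The raw game (hypothetical standard PD payoffs): moves -> (my, your) payoff.
RAW = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
       ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def transform(payoffs, w):
    """Each player's utility = own payoff + w * other's payoff."""
    return {moves: (a + w * b, b + w * a) for moves, (a, b) in payoffs.items()}

def defection_dominates(payoffs):
    """True if 'D' is strictly dominant for player 1 (the mark of a PD)."""
    return (payoffs[("D", "C")][0] > payoffs[("C", "C")][0]
            and payoffs[("D", "D")][0] > payoffs[("C", "D")][0])

print(defection_dominates(RAW))                # True: a genuine dilemma
print(defection_dominates(transform(RAW, 1)))  # False: full altruism dissolves it
```

With full weight on the other’s payoff (w=1), mutual cooperation becomes the best response, so the game-theoretic “problem” the adaptations supposedly solve depends on what the utility functions already contain.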
What makes the game theory evaluation correct?
By “concept of morality”, do you mean moral intuitions or the output of ethical theories?
Sorry, I was trying to get at ‘moral intuitions’ by saying fairness, justice, etc. In this view, ethical theories are basically attempts to fit a line to the collection of moral intuitions—to come up with a parsimonious theory that would have produced these behaviours—and then the outputs are right or interesting only as far as they approximate game-theoretically good actions or maxims.
What do you mean “just”?