This description bothers me, because it pattern matches to bad reductionisms, which tend to have the form:
X (which is hard to understand) is really just Y (which we already understand).
A stock criticism of things reduced in this way is this:
If we understand Y so well, why are we still in the dark about X?
So, if ethics is just game theory between agents who share values (which reads to me as ‘ethics is game theory’), then why doesn’t game theory produce really good answers to otherwise really hard ethical questions? Or does it, and I just haven’t noticed? Or am I overestimating how much we understand game theory?
http://pnas.org/content/early/2013/08/28/1306246110
Game theory has been applied to some problems related to morality. In a strict sense we cannot prove such conclusions, because universal laws are uncertain.
Well, as I said: we don’t have maths for this so-called reduction, so its trustworthiness is questionable. We know about game theory, but I don’t know of a game-theoretic formalism allowing for agents to win something other than generic “dollars” or “points”, such that we can encode in the formalism that agents share some values but not others, and have tradeoffs among their different values.
I suspect this isn’t the main obstacle to reducing ethics to game theory. Once I’m willing to represent agents’ preferences with utility functions in the first place, I can operationalize “agents share some values” as some features of the world contributing positively to the utility functions of multiple agents, while an agent having “tradeoffs among their different values” is encoded in the same way as any other tradeoff they face between two things — as a ratio of marginal utilities arising from a marginal change in either of the two things.
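To make that concrete, here is a minimal sketch of the operationalization described above (my own toy example in Python; the feature names, weights, and numbers are all made up for illustration): utilities are functions of world features, a “shared value” is a feature that enters several agents’ utility functions positively, and a tradeoff between two values is just the ratio of the corresponding marginal utilities.

```python
# Minimal sketch (toy example, made-up names and numbers): utilities as
# functions of world features, with a feature both agents value positively.

# Hypothetical world features: how much rainforest is preserved, and how
# much money each agent holds.
world = {"rainforest": 10.0, "money_alice": 5.0, "money_bob": 5.0}

# Linear utility functions: weights on world features.
weights = {
    "alice": {"rainforest": 2.0, "money_alice": 1.0},  # values the rainforest strongly
    "bob":   {"rainforest": 0.5, "money_bob": 1.0},    # shares that value, but weakly
}

def utility(agent, state):
    return sum(w * state.get(feature, 0.0) for feature, w in weights[agent].items())

# A "shared value": rainforest contributes positively to both utilities.
# A tradeoff: Alice's rate of exchange between rainforest and money is the
# ratio of her marginal utilities (for linear utilities, just the weights).
mrs_alice = weights["alice"]["rainforest"] / weights["alice"]["money_alice"]

print(utility("alice", world), utility("bob", world))  # 25.0 10.0
print("Alice trades 1 unit of rainforest for", mrs_alice, "units of money")  # 2.0
```

Nothing deep here; the point is only that the standard utility-function machinery already has room for values that are shared but weighted differently.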
Well yes, of course. It’s the “share some values but not others” that’s currently not formalized, since in current game theory agents are (to my knowledge) only paid in “money”: a single scalar dimension measuring utility as a function of the agent’s experiences of game outcomes (rather than as a function of states of the game construed as an external world the agent cares about).
So yeah.
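If it helps, here is one way to see what the scalar-payoff picture loses (again my own toy example, with made-up numbers): a purely selfish preference structure and a partially shared one can induce exactly the same payoff matrix, so the matrix alone cannot tell you whether the agents share any values.

```python
# Toy illustration (mine, not from the thread): two different preference
# structures induce the SAME prisoner's-dilemma payoff matrix, so the scalar
# payoffs alone cannot encode whether the agents share any values.

# Model A: purely selfish agents, paid in money only.
money_a = {("C","C"): (3,3), ("C","D"): (0,5), ("D","C"): (5,0), ("D","D"): (1,1)}
def u_selfish(i, outcome):
    return money_a[outcome][i]

# Model B: agents also care about a shared world feature ("park"),
# which only gets built under mutual cooperation.
money_b = {("C","C"): (1,1), ("C","D"): (0,5), ("D","C"): (5,0), ("D","D"): (1,1)}
park    = {("C","C"): 2,     ("C","D"): 0,     ("D","C"): 0,     ("D","D"): 0}
def u_shared(i, outcome):
    return money_b[outcome][i] + park[outcome]  # both agents value the park

matrix_a = {o: (u_selfish(0, o), u_selfish(1, o)) for o in money_a}
matrix_b = {o: (u_shared(0, o),  u_shared(1, o))  for o in money_b}
assert matrix_a == matrix_b  # identical matrices, different value structures
print(matrix_a)
```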
A useful concept here (which I picked up from a pro player of Magic: The Gathering, but which exists in many other environments) is “board state.” A lot of the research I’ve seen in game theory deals with very simple games: only a handful of decision points followed by a payout. How much research has there been on games where there are variables (like capital investments, or troop positions, or land which can be sown with different plants or left fallow) which can be manipulated by the players and whose values affect the relative payoffs of different strategies?
Altruism can be more than just directly aiding someone you personally like; there’s also manipulating the environment to favor your preferred strategy in the long term, which costs you resources in the short term but benefits everyone who uses the same strategy as you, including your natural allies.
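For what it’s worth, games with a manipulable state are studied under the heading of stochastic (or Markov) games, though the textbook examples most people meet are indeed one-shot matrices. Here is a toy sketch of the dynamic described above (my own, with made-up payoffs and a hypothetical “soil” variable): one player pays a short-term cost to improve a shared state variable, and everyone who plays the same strategy in later rounds benefits.

```python
# Toy sketch (mine, made-up numbers): a repeated game with a "board state".
# Player p1 pays a short-term cost each round to improve a shared state
# variable ("soil"), which raises the payoff of the "farm" strategy for
# everyone who plays it in later rounds, including p1's natural allies.

def payoff(strategy, soil):
    if strategy == "farm":
        return 1 + soil      # farming pays more on improved soil
    return 2                 # "raid" pays a flat amount and ignores the soil

def simulate(rounds=10):
    strategies = {"p1": "farm", "p2": "farm", "p3": "raid"}
    totals = {p: 0.0 for p in strategies}
    soil = 0.0
    for _ in range(rounds):
        for player, strategy in strategies.items():
            totals[player] += payoff(strategy, soil)
        totals["p1"] -= 0.5  # p1's short-term cost of improving the soil
        soil += 0.3          # long-term benefit to every farmer
    return totals

print(simulate())
# p2, a fellow farmer who never paid the cost, ends up ahead of the raider p3:
# the investment benefits everyone using p1's strategy, not just p1.
```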
If ethics is game-theoretic, it is not so to an extent that we could calculate exact outcomes.
It may still be game-theoretic in some fuzzy or intractable way.
The claim that ethics is game-theoretic could therefore be a philosophy-grade truth even if it is not a science-grade truth.
Honestly, it would just be much better to open up “shared-value game theory” as a formal subject and then see how well that elaborated field actually matches our normal conceptions of ethics.
Why assume some values have to be shared? If decision-theoretic ethics can be made to work without shared values, that would be interesting.
And decision-theoretic ethics is already extant.
Largely because, in my opinion, it explains the real world much, much better than a “selfish” game theory.
Using selfish game theories, “generous” or “altruistic” strategies can evolve to dominate in iterated games and evolved populations (there’s a link somewhere upthread to the paper). You’re still then left with the question of: if they do, why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?
Using theories in which agents share some of their values, “generous” or “altruistic” strategies become the natural, obvious result: shared values are nonrivalrous in the first place. Evolution builds us to feel Good and Moral about creatures who share our values because that’s a sign they probably have similar genes (though I just made that up now, so it’s probably totally wrong; also, because nothing had time to evolve to fake human moral behavior, the kin-signal remained reasonably strong).
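A minimal sketch of why that falls out so directly (my own toy model, not anything from the linked paper; the weight alpha is a crude stand-in for “agents share some of their values”): give each player a utility that puts weight alpha on the other player’s material payoff, and check when cooperating becomes the dominant choice in a one-shot prisoner’s dilemma, with no iteration or population structure needed.

```python
# Toy model (mine, not from the thread): a one-shot prisoner's dilemma where
# each agent's utility also puts weight `alpha` on the other's material
# payoff, a crude stand-in for "agents share some of their values".

MATERIAL = {("C","C"): (3,3), ("C","D"): (0,5), ("D","C"): (5,0), ("D","D"): (1,1)}

def utility(i, outcome, alpha):
    mine, theirs = MATERIAL[outcome][i], MATERIAL[outcome][1 - i]
    return mine + alpha * theirs

def cooperation_dominates(alpha):
    # Cooperating must be at least as good as defecting against BOTH replies.
    for other in ("C", "D"):
        if utility(0, ("C", other), alpha) < utility(0, ("D", other), alpha):
            return False
    return True

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(alpha, cooperation_dominates(alpha))
# With these standard payoffs, cooperation becomes the dominant choice once
# alpha >= 2/3; below that, cooperation has to be rescued by iteration,
# reputation, or population structure, as in the "selfish" story above.
```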
Because we’re adaptation executors, not fitness maximizers. Evolution gets us to do useful things by having us derive emotional value directly from doing those things, not by introducing the extra indirect step of moulding us into rational calculators who first have to consciously compute what’s most useful.
If you’re running some calculation involving a lot of logarithms, and portable electronics haven’t been invented yet, would you rather take a week to derive the exact answer with an abacus, and another three weeks hunting down a boneheaded sign error, or ten seconds for the first two or three decimal places on a slide rule?
Rational selfishness is expensive to set up, expensive to run, and can break down catastrophically at the worst possible times. Evolution tends to prefer error-tolerant systems.
Isn’t that what’s usually known as “trade”?