Observation: game theory is not uniquely human, and does not inherently cater to important human values.
Immediate consequence: game theory, taken to extremes already found in human history, is inhuman.
Immediate consequence the second: Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman.
Conjecture: if you attempt to optimize by taking your own use of game theory and similar arts to similar extremes, you will become a monster of a similar type.
Observation: a refusal to use game theory in your considerations results in a strictly worse life than otherwise, and possibly using it more often, more intensely, and with less puny human mercy will result in a better life for you alone.
Conjecture: this really, really looks like the scary and horrifying spawn of a Red Queen race, defecting on PD, and being a jerk in the style of Cthulhu.
Thoughts?
Continue laying siege to me; I’m done here.
Sorry, how did you go from “non-human agents use X” (a statement about commonality) to “X is inhuman” (a value judgement) to “if you use X you become a monster” (an even stronger value judgement), to “being a jerk in the style of Cthulhu” (!!!???).
Does this then mean you think using eyesight is monstrous because cephalopods also have eyes that they independently evolved?
Or that maximizing functions is a bad idea because ants maximize a different function than humans do?
Nonhuman agents use X → X does not necessarily, and quite likely does not, preserve human values → your overuse of X will cause you not to preserve human values. By “being a jerk in the style of Cthulhu” I mean being a jerk incidentally. Eyesight is not a means of interacting with people, and maximization is not a bad thing if you maximize for the right things, which game theory does not necessarily do.
Try replacing “game theory” with “science” or “rationality” in your rant. Do you still agree with it?
The appeal to probability doesn’t work here, since you’re not drawing at random from X.
Immediate consequence the second: Austrian school economics, in its reliance on allowing markets to come to equilibrium on their own, is inhuman.
I suspect all economics is inhuman. I suspect that any complex economy that connects millions or billions of people is going to be incomprehensible and inhuman. By far the best explanation I’ve heard of this thought is by Cosma Shalizi.
The key bit here is the conclusion:
There is a fundamental level at which Marx’s nightmare vision is right: capitalism, the market system, whatever you want to call it, is a product of humanity, but each and every one of us confronts it as an autonomous and deeply alien force. Its ends, to the limited and debatable extent that it can even be understood as having them, are simply inhuman. The ideology of the market tell us that we face not something inhuman but superhuman, tells us to embrace our inner zombie cyborg and lose ourselves in the dance. One doesn’t know whether to laugh or cry or run screaming.
But, and this is I think something Marx did not sufficiently appreciate, human beings confront all the structures which emerge from our massed interactions in this way. A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market. We have no choice but to live among these alien powers which we create, and to try to direct them to human ends. It is beyond us, it is even beyond all of us, to find “a human measure, intelligible to all, chosen by all”, which says how everyone should go.
I suspect this sub-thread implicitly defined “human” as “generating warm fuzzies”. There are, um, problems with this definition.
A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market.
This is a great way to express it. I was thinking about something similar, but could not express it like this.
The essence of the problem is that “systems of human interaction” are not humans. A market is not a human. An election is not a human. An organization is not a human. And so on. Complaining that we are governed by non-humans is essentially complaining that there is more than one human, and that the interaction between humans is not itself a human. Yes, it is true. Yes, it can (and probably will) have horrible consequences. But it does not depend on any specific school of economics, or anything like that.
“Not uniquely human” does not imply “inhuman”. Lungs are not uniquely human, but they are hardly inhuman.
Generally, using loaded, non-factual words like “inhuman” and “monster” and “Cthulhu” and “horrifying” and “puny” in a pseudo-logical format is worthy of a preacher exhorting illiterates. But is it helpful here? I’d like to think it isn’t, and yet I’d rather discuss game theory in a visible thread than downvote your post.
“Inhuman” has strong connotations of being inimical to human values. Your argument would look different if it started with something like “game theory is non-human: it’s a simplified version of some aspects of human behavior”. In that case, altruism is non-human in the same sense.
I guess I’m mostly reacting to RAND and its ilk, having read the article about Schelling’s book (which I intend to buy), and am thinking of market failures, as well.
OK Mr Bayeslisk, I am one-boxing you. I am upvoting this post now, knowing that you predicted I would upvote it and intended all along to include or add some links to the above post, so I don’t have to do a lot of extra work to figure out what RAND is and what book you are talking about.
That is actually not true at all; I was planning on abandoning this trainwreck of an attempt at dissent. But since you’re so nice:
http://en.wikipedia.org/wiki/RAND_Corporation
http://en.wikipedia.org/wiki/Thomas_Schelling#The_Strategy_of_Conflict_.281960.29
Apparently I was right to one-box all along! Thanks!
Are you thinking of failures of market alternatives as well?
What you’re referring to is a problem I’ve been thinking about and chipping away at for some time; I’ve even had some discussions about it here and people have generally been receptive. Maybe the reason you’re being downvoted is that you’re using the word ‘human’ to mean ‘good’.
The core issue is that humans have empathy, and by this we mean that other people’s utility functions matter to us. More precisely, our perception of other people’s utility forms a part of our utility which is conditionally independent of the direct benefits to us.
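One way to make that precise is an additive decomposition (an illustrative formalization only; the symbols V, Û, and λ below are my own notation, not anything from this thread): U_{\text{self}}(x) = V_{\text{self}}(x) + \sum_{j \neq \text{self}} \lambda_j \,\hat{U}_j(x), with every \lambda_j \ge 0. Here V_{\text{self}} captures the direct benefits to us, \hat{U}_j is our perception (our model) of person j’s utility, and the empathy weights \lambda_j say how much each other person matters to us. The empathy-free agent discussed just below is then the special case in which every \lambda_j is zero.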
Our empathy extends not only to other humans, but also to animals and perhaps even robots.
So what are examples of human beings who lack empathy? Lacking empathy is basically the definition of psychopathy. And, indeed, some psychopaths (not all, but some) have been violent criminals who, for example, killed babies for money or tortured people for amusement.
So you’re essentially right that a game theory in which the players do not have models of each other’s utility functions shows aspects of psychopathy and ‘inhumanity’.
But that doesn’t mean game theory is wrong or ‘inhuman’! All it means is that you’re missing the ‘empathy’ ingredient. It also means that it would not be a good idea to build an AI without empathy. That’s exactly what CEV attempts to solve: CEV is basically a crude attempt to instill empathy in a machine.
Yes, that was what I was getting at. Like I said elsewhere: game theory is not evil, it’s just horrifyingly neutral. I am not using “inhuman” to mean “bad”; I am using “inhuman” to mean “unFriendly”.
Then you must be horrified by all science.
Game theory is about strategies, not about values. It tells you which strategy you should use if your goal is to maximize X. It does not tell you what X is. (Although some X’s, such as survival, are instrumental goals for many different terminal goals, so they will be supported by many strategies.)
There is a risk of maximizing some X that looks like a good approximation of human values but whose actual maximization is unFriendly.
Connotational objection: so is any school of anything; at least unless the problem of Friendliness is solved.
OK, I think I was misunderstood, and I was also tired and phrased things poorly. Game theory itself is not a bad thing; it is somewhat like a knife, or a nuke. It has no intrinsic morality, but the things it tends to get used for, for several reasons, wind up ejecting negative externalities like crazy.
Yes, but this seems to be most egregious when you advocate letting millions of people starve because the precious Market might be upset.
Who precisely are you thinking of, who advocated allowing mass starvation for this reason?
Millions of people did starve for reasons completely opposed to free markets.
Besides the fact that maximizing a non-Friendly function leads to horrible results (whether the system being maximized is the Market, the Party, the Church, or… whatever), what exactly are you trying to say? Do you think that markets create more horrible results than those other options? Do you have any specific evidence for that? In that case it would be probably better to discuss the specific thing, before moving to a wide generalization.
I have no idea how the Holodomor is germane to this discussion.
The observation being made, I believe, is that the most prominent examples in the 20th century of mass death due to famine were caused by economic and political systems very far from Austrian school economics. There’s a longish list of mass starvation due to Communist governments.
Is there an example of Austrian economists giving advice that led to a major famine, or that would have led to famine? I cannot offhand think of an example of anybody advocating “letting millions of people starve because the precious Market might be upset.”
You said “letting millions of people starve”.
There were not that many cases of millions of people starving during the last hundred years.
Yes.
I suspect you’re looking at it with a rather biased view.
Sigh. You made a cobman—one constructed of mud and straw. Congratulations.
Game theory is not like calculus or evolutionary theory—something any alien race smart enough to develop space travel is likely to formulate. It does represent human values.
Can you explain this? I always thought of game theory as being like calculus, and not about human values (like this comment says).
You solve games by applying solution criteria. Unfortunately, for any reasonable list of solution criteria you will always be able to find games where the result doesn’t seem to make sense. Also, there is no set of obviously correct and complete solution concepts. Consider the following game:
Two rational people simultaneously and secretly write down a real number in [0,100]. The person who writes down the highest number gets a payoff of zero, and the person who writes down the lowest number gets that number as his payoff. If there is a tie they each get zero. What happens?
The only “Nash equilibrium” (the most important solution concept in all of game theory) is for both players to write down 0, but this is a crazy result, because picking 0 is weakly dominated by picking any other number (except 100).
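Both claims are easy to spot-check mechanically. The following is a minimal Python sketch, my own illustration rather than anything from the comment above: it uses a 0.5-step grid in place of the real interval, checks that no unilateral deviation from (0, 0) is profitable, and checks that writing 0 is weakly dominated by every number strictly between 0 and 100. (Uniqueness of the equilibrium is a property of the continuous game and is not checked here.)

    def payoff(mine, theirs):
        """Payoff to the player who wrote `mine` when the opponent wrote `theirs`."""
        if mine < theirs:
            return mine   # the lowest number earns itself as the payoff
        return 0.0        # the highest number, or a tie, earns zero

    grid = [x * 0.5 for x in range(201)]   # 0.0, 0.5, ..., 100.0

    # (0, 0) is a Nash equilibrium: no unilateral deviation from 0 earns more than 0.
    assert all(payoff(dev, 0.0) <= payoff(0.0, 0.0) for dev in grid)

    # Writing 0 is weakly dominated by any c strictly between 0 and 100: c never does
    # worse than 0 against any opponent choice, and does strictly better against some.
    for c in grid:
        if 0.0 < c < 100.0:
            assert all(payoff(c, b) >= payoff(0.0, b) for b in grid)
            assert any(payoff(c, b) > payoff(0.0, b) for b in grid)

    # 100 is the exception: like 0, it earns zero against every opponent choice.
    assert all(payoff(100.0, b) == 0.0 for b in grid)

    print("Spot-checks passed.")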
Game theory also has trouble solving many games where (a) Player Two only gets to move if Player One does a certain thing, (b) Player One’s strategy is determined by what he expects Player Two would do if Player Two gets to move, and (c) in equilibrium Player Two never moves.
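To make that concrete, here is a small, entirely made-up game of exactly that shape, with a short Python check of its Nash equilibria (the payoffs are hypothetical, chosen only to exhibit the structure): Player One chooses In or Out, and Player Two gets to choose Fight or Accommodate only if One plays In.

    # Hypothetical payoffs (Player One, Player Two). Player Two's "plan" only matters
    # at the node she reaches if Player One plays "In".
    PAYOFFS = {
        ("In", "Fight"): (0, 0),
        ("In", "Accommodate"): (2, 1),
        ("Out", "Fight"): (1, 3),        # Player Two never actually moves here...
        ("Out", "Accommodate"): (1, 3),  # ...so her plan costs her nothing.
    }
    P1_MOVES = ["In", "Out"]
    P2_PLANS = ["Fight", "Accommodate"]

    def is_nash(m1, m2):
        u1, u2 = PAYOFFS[(m1, m2)]
        return (all(PAYOFFS[(alt, m2)][0] <= u1 for alt in P1_MOVES) and
                all(PAYOFFS[(m1, alt)][1] <= u2 for alt in P2_PLANS))

    print([(m1, m2) for m1 in P1_MOVES for m2 in P2_PLANS if is_nash(m1, m2)])
    # Prints both ('In', 'Accommodate') and ('Out', 'Fight'). In the second profile,
    # Player One stays Out because of a "Fight" threat that is never tested in
    # equilibrium, which is the kind of game Nash equilibrium alone cannot settle.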
I’m not understanding you; the things you describe in this post seem to be the kind of maths a smart alien race might discover just like we did.
Many games don’t have solutions, or the solutions depend on arbitrary criteria.
… and?
Are you agreeing or disagreeing with “the things you describe in this post seem to be the kind of maths a smart alien race might discover just like we did”?
It depends on what you mean by “might” and “discover” (as opposed to invent). I predict that smart aliens’ theories of physics, chemistry, and evolution would be much more similar to ours than their theories of how rational people play games would be.
How so? Game theory basically studies interactions between two (or more) agents whose choices lead to outcomes that depend on what the other agents do. You can use game theory to model the interaction between two pieces of software, for example.
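As a minimal sketch of that point, here are two toy software agents playing an iterated prisoner’s dilemma with standard textbook payoffs (the agents, strategies, and numbers are illustrative assumptions, not anything from this thread); game theory describes their interaction even though no humans or human values appear anywhere:

    # Two pieces of software repeatedly decide whether to cooperate ("C") or defect
    # ("D"), e.g. whether to honor a resource-sharing protocol with an untrusted peer.
    PAYOFFS = {            # (my_move, their_move) -> my payoff; standard PD ordering
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        """Cooperate first, then mirror the counterparty's previous move."""
        return "C" if not history else history[-1][1]

    def always_defect(history):
        return "D"

    def play(agent_a, agent_b, rounds=10):
        history_a, history_b = [], []   # each entry: (my_move, their_move)
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = agent_a(history_a), agent_b(history_b)
            history_a.append((move_a, move_b))
            history_b.append((move_b, move_a))
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
        return score_a, score_b

    print(play(tit_for_tat, always_defect))   # -> (9, 14)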
Please see my answer to PECOS-9.
I still don’t see what all this has to do with human values.
I am talking about game theory as a field of inquiry. You’re talking about the current state of the art in this field and pointing out that it has unsolved issues. So? Physics has unsolved issues, too.
There are proofs showing that game theory can never be solved.
I still don’t see what all this has to do with human values.
I also don’t understand what it means for game theory to “be solved”. If you mean that in certain specific situations you don’t get an answer, that’s true for physics as well.
Game theory would be solved if there were a set of reasonable criteria which, if applied to every possible game between rational players, would tell you what the players would do.
To continue with physics: physics would be solved if there were a set of reasonable criteria which, if applied to every possible interaction of particles, would tell you what the particles would do.
Consider a situation in which, using physics, you could prove both that (1) X won’t happen and (2) X will happen. If this situation existed, physics would not be capable of being solved; but my understanding of science is that such a situation is unlikely to exist. Alas, this kind of situation does come up in game theory.
Well, it’s math but...
Whether you get an answer depends on the criteria you choose, but these criteria must have arbitrariness in them even for rational people. Consider the solution concept “never play a weakly dominated strategy.” This is neither right nor wrong but an arbitrary criterion that reflects human values.
Saying “the game theory solution is A,Y” is closer to “this picture is pretty” than to “the electron will...”
Also, assuming someone is rational and wants to maximize his payoff isn’t enough to fully specify him, and consequently you need to bring in human values to figure out how this person will behave.
You seem to be talking about forecasting human behavior and giving advice to humans about how to behave.
That, of course, depends on human values. But that is related to game theory in the same way engineering is related to mathematics. If you are building a bridge you need to know the properties of materials you’re building it out of. Doesn’t change the equations, though.
You know that a race of aliens is rational. Do you need to know more about their values to predict how they will build bridges? Yes. Do you need to know more about their values to predict how they will play games? Yes.
Game theory is (basically) the study of how rational people behave. Unfortunately, there will always exist relatively simple games for which you cannot use the tools of game theory to determine how the players will behave.
Ah. We have a terminology difference. I defined my understanding of game theory a bit upthread and it’s not about people at all. For example, consider software agents operating in a network with distributed resources and untrusted counterparties.