Oh, dear. A paper in PNAS says that the usual psychological experiments which show that people have a tendency to cooperate at the cost of not maximizing their own welfare are flawed. People are not cooperative, people are stupid and cooperate just because they can’t figure out how the game works X-D
Abstract:
Economic experiments are often used to study if humans altruistically value the welfare of others. A canonical result from public-good games is that humans vary in how they value the welfare of others, dividing into fair-minded conditional cooperators, who match the cooperation of others, and selfish noncooperators. However, an alternative explanation for the data are that individuals vary in their understanding of how to maximize income, with misunderstanding leading to the appearance of cooperation. We show that (i) individuals divide into the same behavioral types when playing with computers, whom they cannot be concerned with the welfare of; (ii) behavior across games with computers and humans is correlated and can be explained by variation in understanding of how to maximize income; (iii) misunderstanding correlates with higher levels of cooperation; and (iv) standard control questions do not guarantee understanding. These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.
That sounds like it would contradict the results on IQ correlating positively with cooperation:

A series of experiments performed in (of all places) a truck driving school investigated a Window Game. Two players are seated at a desk with a partition between them; there is a small window in the partition. Player A gets $5 and may pass as much of that as she wants through the window to Player B. Player B may then pass as much as she wants back through the window to Player A, after which the game ends. All money that passes through the window is tripled; e.g., if Player A passes the entire $5 through, it becomes $15, and if Player B passes the $15 back it becomes $45, making passing a lucrative strategy but one requiring lots of trust in the other player. I got briefly nerd-sniped trying to figure out the best (morally correct?) strategy here, but getting back to the point: high-IQ players were more likely to pass money through the window. They were also more likely to reciprocate, i.e., repay good for good and bad for bad.

In a Public Goods Game (each of N players starts with $10 and can put as much or as little as they like into a pot; afterwards the pot is tripled and redistributed to all players evenly, contributors and noncontributors alike), high-IQ players put more into the pot. They were also more likely to vote for rules penalizing noncontributors. They were also more likely to cooperate and more likely to play closer to traditional tit-for-tat in iterated prisoners’ dilemmas.

The longer and more complicated the game, the more clearly a pattern emerged: having one high-IQ player was moderately good, but having all the players be high-IQ was amazing: they all caught on quickly, cooperated with one another, and built stable systems to enforce that cooperation. In a ten-round series run by Jones himself, games made entirely of high-IQ players had five times as much cooperation as average.
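For concreteness, here is a minimal sketch of the payoff arithmetic in the two games described above. The tripling rules and dollar amounts come from the quoted description; the function names and the example plays are just illustrative, not anything from the paper.

```python
# Payoff arithmetic for the two games described above.
# The tripling rules and dollar amounts come from the quoted description;
# function names and the example plays below are only illustrative.

def window_game(a_sends, b_returns_fraction, endowment=5.0):
    """Player A passes a_sends (up to the endowment); it triples in transit.
    Player B then passes back a fraction of what she received, and that
    amount triples on the way back as well."""
    received_by_b = 3 * a_sends
    sent_back = b_returns_fraction * received_by_b
    payoff_a = (endowment - a_sends) + 3 * sent_back
    payoff_b = received_by_b - sent_back
    return payoff_a, payoff_b

def public_goods_game(contributions, endowment=10.0):
    """Each player keeps (endowment - contribution); the pot is tripled and
    split evenly among all players, contributors and non-contributors alike."""
    n = len(contributions)
    share = 3 * sum(contributions) / n
    return [endowment - c + share for c in contributions]

print(window_game(5.0, 1.0))               # full trust both ways: A ends with $45, B with $0
print(window_game(5.0, 0.5))               # A passes all $5, B returns half of the $15
print(public_goods_game([10, 10, 10, 0]))  # the lone free-rider does best individually
```

The joint payout is maximized by full contribution in both games, while the individual incentive points the other way, which is what makes the cooperation results interesting.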
Only if you assume that IQ is independent of altruism. Given that IQ covaries with altruism, patience, willingness to invest, willingness to trust strangers, etc., I don’t see why you would make that assumption. I’m fine with believing that greater IQ also causes more cooperation and altruism, so that high-IQ players understand better how to exploit others but don’t want to. If anything, the results suggest that the relationships may have been underestimated, because lower-IQ subjects’ responses will be a mix of incompetence & selfishness, adding measurement error.

Good point.
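A rough simulation of that measurement-error point, with entirely made-up numbers: if lower-IQ subjects’ choices are a noisier readout of their actual preferences, the correlation the experiment measures between IQ and cooperation is pulled toward zero relative to the underlying IQ–altruism relationship.

```python
# Rough simulation of the attenuation argument above; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
iq = rng.normal(100, 15, n)
altruism = 0.5 * (iq - 100) / 15 + rng.normal(0, 1, n)      # "true" preference

# Execution noise shrinks as IQ rises: low-IQ choices only loosely track preference.
noise_sd = np.clip(2.5 - 0.02 * (iq - 70), 0.2, None)
observed_cooperation = altruism + rng.normal(0, 1, n) * noise_sd

print(np.corrcoef(iq, altruism)[0, 1])              # the underlying relationship (~0.45)
print(np.corrcoef(iq, observed_cooperation)[0, 1])  # what the experiment measures (smaller)
```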
(ii) They may also be anthropomorphizing the computers. (iii) This just means that the sort of person who cooperates in this sort of game also treats humans and computers equally, right?

I would count it as supporting evidence for the “they’re just stoopid” hypothesis X-)
Does “value the welfare of others” necessarily mean “consciously value the welfare of others”? Is it wrong to say “I know how to interpret human sounds into language and meaning” just because I can do it? Or do I have to demonstrate that I know how by deconstructing the process to the point that I can write an algorithm (or computer code) to do it?
The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretative. If I can value the welfare of a stranger, then clearly the class of things whose welfare I can value is not defined too tightly. If a computer (running the right program) displays some of the features that signal to me that a human is something I should value, why couldn’t I value the computer? We watch animated shows and value and have empathy for all sorts of animated entities. In all sorts of stories we have empathy for robots or other mechanical things. The idea that we cannot value the welfare of a computer flies in the face of the evidence that we can empathize with all sorts of non-human things, fictional and real. In real life, we value and have human-like empathy for animals, fish, and even plants in many cases.
I think the interpretations or assumptions behind this paper are bad ones. Certainly, they are not brought out explicitly and argued for.
I actually read the paper.

“It might also be argued that people playing with computers cannot help behaving as if they were playing with humans. However, this interpretation would: (i) be inconsistent with other studies showing that people discriminate behaviorally, neurologically, and physiologically between humans and computers when playing simpler games (19, 56–58), (ii) not explain why behavior significantly correlated with understanding (Fig. 2B and Tables S3 and S4)...”
((iii) and (iv) apply to the general case of “people behave as if they are playing with humans”, but not to the specific case of “people behave as if they are playing with humans, because of empathy with the computer”).
The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretative.
I am always up for being ludicrous :-P
So, what is the welfare of a computer? Does it involve a well-regulated power supply? Good ventilation in its case? Is overclocking an example of inhumane treatment?
Or maybe you want to talk about software and the awful assault on its dignity by an invasive debugger...