The game- and decision-theoretic models being discussed here and elsewhere are often overly simplistic. Take, for example, the prisoner's dilemma you mentioned. In such a game-theoretic setting, where all agents care solely about reducing their own prison sentence, it is perfectly rational to defect. This isn't the case in real life, where the situation is often much more complex and people care about more than the sentence they will receive.
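To make the "rational to defect" point concrete, here is a minimal sketch of the standard prisoner's dilemma payoff structure. The specific sentence lengths are my own illustrative assumptions, not taken from the discussion; the only claim is that when an agent's utility is nothing but its own sentence, defection is the dominant strategy.

```python
# Standard prisoner's dilemma, payoffs as years in prison (lower is better).
# The numbers are illustrative assumptions; only their ordering matters.
# Each entry maps (my_move, their_move) -> my sentence.
SENTENCE = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    5,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    3,
}

def best_response(their_move):
    """Return the move that minimizes my sentence, given the other agent's move."""
    return min(("cooperate", "defect"), key=lambda my: SENTENCE[(my, their_move)])

# Defection is dominant: it is the best response no matter what the other does.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
print("Defect is dominant when only the sentence matters.")
```

Of course, the moment people care about anything beyond the sentence (reputation, loyalty, not being an asshole) this payoff table no longer describes them, which is exactly the point.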
The same goes for Newcomb-style problems, where one agent gives and the other receives. Either you care about the prize, in which case it is perfectly rational to adopt the decision theory that makes you predictably precommit to one-boxing, or you don't, in which case you'll simply ignore the game.
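For anyone who does care about the prize, a rough expected-value sketch shows why predictable one-boxing pays. I'm assuming the usual $1,000,000 / $1,000 payoffs and a 99%-accurate predictor; both numbers are my assumptions, not from the original comment.

```python
# Rough expected payoffs for one-boxing vs. two-boxing in Newcomb's problem,
# under assumed prizes of $1,000,000 / $1,000 and a 99%-accurate predictor.
BIG, SMALL, ACCURACY = 1_000_000, 1_000, 0.99

# If you one-box, the predictor most likely foresaw it and filled the opaque box.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
# If you two-box, the predictor most likely foresaw that too and left it empty.
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(f"one-box: {ev_one_box:>11,.0f}")  # 990,000
print(f"two-box: {ev_two_box:>11,.0f}")  #  11,000
```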
The problem I see is that it is a mistake to adopt such models universally just because they work for simple thought experiments. Doing so leads to all kinds of idiotic decisions, like walking into death camps if it decreases the chance of being blackmailed. This is idiotic because it means losing by preemptively turning yourself into something you don't want to be and doing something you don't want to do, just because some abstract mathematical model suggests that by doing so you'll reach an equilibrium between yourself and other agents. That is not what humans want. Humans want to win in a certain way, or die trying, whatever the consequences.
Many theories ignore that humans discount arbitrarily, are not consistent, and can assign arbitrary amounts of utility to certain decisions. Just because we would die for our family does not mean you can extrapolate that we would die for a trillion humans, or that we would precommit to becoming an asshole to win an amount of money that supposedly outweighs being an asshole. Some things can't be outweighed by an even bigger amount of utility; that's not how humans work.
We do not want to be blackmailed, but we also do not want to become assholes. If avoiding blackmail means becoming an asshole, it is perfectly rational to choose to be blackmailed, even if that means you'll be turned into an even bigger asshole when you don't give in to the blackmail. That's human!
I think some of the points you make here are valid, but they seem oblique to the thrust of my post, which is about (hypothetically) why humans evolved to be the way they are.