Finite life does not imply that you know up front which iteration will be the last.
Yes, but the older you are, the higher the probability. Could we design an experiment to check for this (are old people more likely to defect in Prisoners' Dilemmas)?
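To make the horizon argument concrete, here is a minimal sketch of the standard repeated-game result: under a grim-trigger strategy, cooperation is rational only while the probability of another interaction stays above a threshold set by the payoffs. The payoff values are the conventional Axelrod ones; the function name and the specific probabilities are my own illustrative choices, not anything from the thread.

```python
# Standard one-shot Prisoner's Dilemma payoffs:
# T = temptation, R = mutual cooperation, P = mutual defection, S = sucker.
T, R, P, S = 5, 3, 1, 0

def cooperation_sustainable(delta):
    """Grim trigger supports cooperation iff the one-shot gain from
    defecting (T - R) is outweighed by the discounted loss of all
    future cooperation, i.e. delta >= (T - R) / (T - P).
    Here `delta` plays the role of the probability of another round,
    which mortality risk pushes down with age."""
    return delta >= (T - R) / (T - P)

# With these payoffs the threshold is (5-3)/(5-1) = 0.5.
for delta in (0.9, 0.6, 0.5, 0.4, 0.2):
    print(delta, cooperation_sustainable(delta))
```

On this toy model, an older player whose continuation probability has dropped below 0.5 should start defecting, which is exactly the prediction an experiment could test.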
I also suggested to look for such a phenomenon in vampire bats, and other reciprocating species. Do bats stop co-operating after a certain age? (Or do other bats stop co-operating with them?)
In my experience, old people are LESS likely to defect in the Prisoner's Dilemma, judging by real-life instances. And other people are less likely to defect when interacting with them. This fact is worth explaining, since it's not what the basic theory of reciprocal altruism would predict.
The best explanation I’ve heard so far on the thread is that it is because of reputation post-mortem affecting relatives. This requires a social context where the “sins of the father are visited on the son” (to quote Randaly’s example).
One potential confound is that the rewards may not scale right: the older you are, often the wealthier you are. A kindergartner might be thrilled to defect for $1, while an old person can barely be troubled to stoop for a $1 bill.
I was responding to the particular one-factor argument claiming that defection was the rational strategy, which wasn't correct even with that factor in isolation.
As for your point: as your probability of dying increases, so does your need for cooperation to avoid it. The closer you are to the risk of dying, the more likely you are to need help avoiding it, and so the more you would want to encourage cooperation. Again, the argument that it is rational to defect does not hold from this factor alone either.
But it isn't that it's necessarily rational to cooperate either. The trade-off between defection and cooperation is an empirical matter of all the factors in the situation, and arguments based on one factor alone aren't decisive, even when they are correct.
As for an experiment, it wouldn’t show what it is rational to do, only what people in fact do. If you had lived a life of cooperation, encouraging others to cooperate, and denouncing those who don’t, the consistency bias would make it less likely that you would change that behavior despite any mistakenly perceived benefit.
There would be a billion and one factors involved, not the least of which would be the particulars of the experiment chosen. Maybe you found in the lab, in your experiment, that age correlated with defection. It’s quite a leap to generalize that to a propensity to defect in real life.