@Marcello:
I assumed you'd agree that increasing the babyeating problem tenfold isn't something you'd expect to be reciprocated, not without the other side knowing something they presently don't, and so the issue should be dismissed on that ground for the time being. It seems you didn't start from that premise. Where you do expect to profit, sure, that's ordinary trade at that point.
The trick with cooperating in the prisoner's dilemma lives primarily in the decision-theoretic setting, where you've only got one decision that's evaluated over everything. The thesis is that cooperation is not something you get as an instrumental strategy from the structure of the game; it's what you start from as a terminal choice (and can lose within the structure of the game). It doesn't translate well to bounded rationality: sometimes you have to do what looks like defecting because you don't know the consequences.
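A minimal sketch of that contrast (the payoff numbers are my assumption, chosen only to satisfy the usual prisoner's dilemma ordering): under the independent-moves reading defection dominates whatever you believe about the other player, while under the mirrored-decision reading the comparison is only ever between mutual cooperation and mutual defection.

```python
# Illustrative prisoner's dilemma payoffs for the row player; the numbers are an
# assumption, chosen only to satisfy the standard PD ordering T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # I cooperate, they defect (S)
    ("D", "C"): 5,  # I defect, they cooperate (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def expected_payoff(my_move, p_other_cooperates):
    """My expected utility when the other player's move is independent of mine."""
    return (p_other_cooperates * PAYOFF[(my_move, "C")]
            + (1 - p_other_cooperates) * PAYOFF[(my_move, "D")])

# Independent-moves reading: defection dominates for any belief about the other player.
for p in (0.0, 0.5, 1.0):
    assert expected_payoff("D", p) > expected_payoff("C", p)

# Mirrored-decision reading (the other player reliably ends up choosing whatever I
# choose, e.g. via Omega-grade prediction): mutual cooperation beats mutual defection.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```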
For example, the cooperation result should extend to a setting where one player observes the decision of the other. Should I cooperate, knowing that the other player will observe my decision before making his? It looks like I shouldn't, unless I have a way of knowing that he will cooperate; merely expecting him to do so in order to be in a position to receive my cooperation doesn't work (unless he really makes a commitment or changes his utility, and presents evidence of it). But if I have the predictive power of Omega, then sure, I'd expect cooperation to be the right decision in that setting.
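To make the observed-decision variant concrete, here is a small sketch (same illustrative payoff table as above; the "matcher" policy is my stand-in for the verifiable commitment or Omega-grade prediction mentioned in the comment): a best-responding observer defects no matter what it saw, so cooperating first only pays off against a policy that actually matches the observed move.

```python
# Same illustrative payoff table as in the sketch above (row player's utility).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_responder(observed):
    """A purely self-interested observer: defection dominates whatever was observed."""
    return "D"

def matcher(observed):
    """A committed reciprocator, i.e. the policy a predictor or commitment could verify."""
    return observed

def first_player_payoff(my_move, observer_policy):
    """My payoff when the observer sees my move and then responds with its policy."""
    return PAYOFF[(my_move, observer_policy(my_move))]

# Against a best-responding observer, cooperating first only sets me up to be exploited.
assert first_player_payoff("D", best_responder) > first_player_payoff("C", best_responder)

# Only against a verifiably matching policy does cooperating first come out ahead.
assert first_player_payoff("C", matcher) > first_player_payoff("D", matcher)
```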