Vladimir says:
"""Every decision to give a gift on your side corresponds to a decision to abstain from accepting your gift on the other side. Thus, decisions to give must be made on case-to-case basis, cooperation in true prisoner's dilemma doesn't mean unconditional charity."""
Agreed. Obviously (for example) the human ship shouldn't self-destruct. But I wasn't talking about all gifts; I was talking about the specific class of gifts called "helpful advice." And I did specify: "provided that, on the whole, situations in which helpful advice is given freely are better."
I was comparing the two strategies "don't give away any helpful advice of the level the other party is likely to be able to reciprocate" and "give away all helpful advice of the level the other party is likely to be able to reciprocate," and pointing out that maybe they form another prisoner's dilemma (a payoff sketch follows below). Of course, there may be more fine-grained strategies that work even better, strategies that actually take into account the relative amount of good and bad each piece of advice brings to the two parties. But remember that you must also consider how your strategy is going to be chronophoned over to the baby eaters. If we make the first gift, what exchange rate of baby-eater utilons for human utilons do we tolerate? (If the gifts are made of information, it may be impossible to authenticate a trade without the possibility of the other party simply taking the gift and using it, though of course it might be that the equilibrium includes an honor system....) It looks like it gets really complicated. Worth thinking about? Yes, but right now I'm busy.
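To make the "maybe they form another prisoner's dilemma" point concrete, here is a minimal sketch in Python. The payoff numbers are purely illustrative assumptions, not anything from the story or this thread; the check is just the standard prisoner's-dilemma signature: withholding strictly dominates sharing for each side, yet mutual sharing beats mutual withholding.

# Purely hypothetical payoffs (human utilons, other party's utilons).
# None of these numbers come from the original discussion.
PAYOFFS = {
    ("share", "share"):       (3, 3),  # both sides trade advice freely
    ("share", "withhold"):    (0, 5),  # we gift, they free-ride
    ("withhold", "share"):    (5, 0),  # they gift, we free-ride
    ("withhold", "withhold"): (1, 1),  # nobody learns anything new
}

def is_prisoners_dilemma(p):
    """True iff withholding strictly dominates sharing for each side
    while mutual sharing still beats mutual withholding."""
    cc, cd = p[("share", "share")], p[("share", "withhold")]
    dc, dd = p[("withhold", "share")], p[("withhold", "withhold")]
    we_prefer_withholding    = dc[0] > cc[0] and dd[0] > cd[0]
    they_prefer_withholding  = cd[1] > cc[1] and dd[1] > dc[1]
    mutual_sharing_is_better = cc[0] > dd[0] and cc[1] > dd[1]
    return (we_prefer_withholding and they_prefer_withholding
            and mutual_sharing_is_better)

print(is_prisoners_dilemma(PAYOFFS))  # True under these assumed numbers

Whether the real exchange rate of baby-eater utilons for human utilons actually satisfies those inequalities is exactly the open question.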