I will take minor exception to your exceptions. One of the big lessons of LessWrong for me is how differently decision processes fare in the iterated prisoner’s dilemma. In your exceptions, you don’t condition your behaviour on the expected behaviour of your trading partner. The greatest lesson I took away from LessWrong was Don’t Be CooperateBot. I would, however, endorse FairBot versions of your statements:
“I am the kind of person who keeps promises to the kind of person who keeps promises,” and “I am a person who can be relied upon to cooperate with people who can be relied upon to cooperate.”
(You’ll notice that I cut the loyalty part out of that second one. I am undecided here. A lot of social technology at least vaguely pattern-matches to CliqueBot, which is how I generally map loyalty onto the prisoner’s dilemma. However, I’m not going to endorse it as optimal.)
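For concreteness, here is a minimal Python sketch of the distinction. The bot names follow the usual LessWrong usage, but the `play` harness and the reading of FairBot as tit-for-tat in the iterated game are my own illustrative assumptions, not anything from your post:

```python
C, D = "cooperate", "defect"

def cooperate_bot(history):
    """Unconditional cooperation: plays C regardless of the partner's record."""
    return C

def fair_bot(history):
    """Conditional cooperation: cooperate iff the partner cooperated last
    round; roughly tit-for-tat, the iterated-game analogue of FairBot."""
    return C if not history or history[-1] == C else D

def make_clique_bot(clique, partner_id):
    """Loyalty-style play: cooperate only with recognized group members,
    ignoring what the partner actually does."""
    def clique_bot(history):
        return C if partner_id in clique else D
    return clique_bot

def play(strategy_a, strategy_b, rounds=10):
    """Iterated PD harness: each strategy sees the other's past moves."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # A conditions on B's history
        move_b = strategy_b(hist_a)  # B conditions on A's history
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

if __name__ == "__main__":
    always_defect = lambda history: D
    # CooperateBot is exploited every round; FairBot loses only round one.
    print(play(cooperate_bot, always_defect, rounds=3))
    print(play(fair_bot, always_defect, rounds=3))
```

Against a defector, CooperateBot gets exploited every round while FairBot loses at most the first; that one-round gap is the whole content of Don’t Be CooperateBot. CliqueBot, by contrast, conditions on identity rather than behaviour, which is why I hesitate to endorse it.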