Well, one would actually take into account the degree of dependence when doing the relevant computation.
Yes, and here’s what it would look like: I anticipate a 1/2 + e probability of the other person doing the same thing as me in the true PD. I’ll use the payoff matrix of

        C        D
C    (3,3)    (0,5)
D    (5,0)    (1,1)

where the first value is my utility. The expected payoffs are then (after a little algebra):

If I cooperate: 3/2 + 3e; if I defect: 3 − 4e

Defection has a higher payoff as long as e is less than 3/14 (total probability of the other person doing what I do = 10/14). So you should cooperate as long as you have over 0.137 bits of evidence that they will do what you do. Does the assumption that other people’s algorithm has a minor resemblance to mine get me that?
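Here’s a minimal Python sketch checking that algebra. One caveat: the dialogue doesn’t say how the 0.137-bit figure is being measured, so the sketch assumes it means the reduction in entropy from a uniform prior, 1 − H(5/7), which does reproduce the number:

```python
from fractions import Fraction
from math import log2

# Payoff matrix from the dialogue (first entry of each pair is my utility):
#           C        D
#   C    (3,3)    (0,5)
#   D    (5,0)    (1,1)
# The other player does whatever I do with probability 1/2 + e.

def ev_cooperate(e: Fraction) -> Fraction:
    # They match me (cooperate) with prob 1/2 + e -> payoff 3;
    # they mismatch (defect) with prob 1/2 - e -> payoff 0.
    return 3 * (Fraction(1, 2) + e)                         # = 3/2 + 3e

def ev_defect(e: Fraction) -> Fraction:
    # They match me (defect) with prob 1/2 + e -> payoff 1;
    # they mismatch (cooperate) with prob 1/2 - e -> payoff 5.
    return (Fraction(1, 2) + e) + 5 * (Fraction(1, 2) - e)  # = 3 - 4e

# Indifference point: 3/2 + 3e = 3 - 4e  =>  7e = 3/2  =>  e = 3/14.
e_star = Fraction(3, 14)
assert ev_cooperate(e_star) == ev_defect(e_star)

p_match = Fraction(1, 2) + e_star  # = 10/14 = 5/7, as in the dialogue

def binary_entropy(p: float) -> float:
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# Assumed reading of "bits of evidence": entropy reduction from the
# uniform prior, 1 - H(5/7), which comes out to about 0.137 bits.
bits = 1 - binary_entropy(float(p_match))
print(f"e* = {e_star}, P(match) = {p_match}, evidence = {bits:.3f} bits")
```

Exact Fraction arithmetic confirms the 3/14 threshold and the 10/14 match probability directly; the 0.137 figure only falls out under the entropy-reduction reading above.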
And your decision to be more charitable would correlate to others being so to the extent that they’re using related methods to come to their own decision.
Yes, and that’s the tough bullet to bite: me being more charitable, irrespective of the impact of my charitable action, causes (me to observe) other people being more charitable.