The trouble is that cooperating is highly contingent on the other agent having heard of superrationality, or being smart enough to come up with the idea within five minutes, and it’s highly contingent on the information available to both sides: if you don’t think THEY think you know about superrationality (or are smart enough to think of it in five minutes), you shouldn’t cooperate.
So, in most situations and against most opponents, I’d defect. Probably against the paperclip maximizer too, since “simple approximation of decision theory” doesn’t sound too promisingly clever, particularly when it comes to modeling beings like me.
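A minimal sketch of the decision rule I’m gesturing at, assuming we compress “do they know about superrationality, and do they think I do” into two subjective probabilities; the function name, the probabilities, and the 0.9 threshold are mine, purely illustrative:

```python
def should_cooperate(p_opponent_superrational: float,
                     p_opponent_believes_i_am: float,
                     threshold: float = 0.9) -> bool:
    """Cooperate in a one-shot PD only under (approximate) common knowledge
    of superrationality; otherwise defect. Threshold is made up."""
    return (p_opponent_superrational >= threshold
            and p_opponent_believes_i_am >= threshold)

# Against "most opponents" both probabilities are low, so this defects:
print(should_cooperate(0.2, 0.1))    # False -> defect
# Against an agent known to be superrational that also models me as one:
print(should_cooperate(0.95, 0.95))  # True -> cooperate
```

The point is just that cooperation requires BOTH conditions at once; a high estimate on one side with a low estimate on the other still gets you defection.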
I agree with this assessment of the situation.