I admit I took "rational people playing a true one-shot PD against beings as rational as themselves co-operate" for granted. I didn't expect this to be an issue, and since I'm building on it as an axiom, my argument may look strange if you think that foundation is false. For that reason, I'm unsure whether this discussion should continue here. If you're alone in that opinion, I think it should move elsewhere, but if many people disagree with me at that basic level, then I suppose it should happen here.
The trouble is that cooperating is highly contingent on the other agent having heard of superrationality, or being smart enough to work it out within five minutes, and on the information available to both sides: if you don't think THEY think you know about superrationality (or could work it out in five minutes), you shouldn't cooperate.
So in most situations, against most opponents, I'd defect. Probably against the paperclip maximizer too, since "simple approximation of decision theory" doesn't sound promisingly clever, particularly when it comes to evaluating beings like me.
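To make "highly contingent" concrete, here is a minimal sketch, entirely my own illustration rather than anything from the discussion above: it assumes hypothetical standard PD payoffs and models the opponent as either a superrational mirror (they have worked all of this out and believe you have too, so they end up making the same choice you do) with credence p, or an unconditional defector otherwise. Under those assumptions, cooperating only beats defecting once p clears a threshold set by the payoffs.

```python
# Toy model, not from the original comment: the opponent is either a
# "mirror" (a superrational reasoner who ends up choosing whatever I
# choose) with credence p, or an unconditional defector with credence 1 - p.

# Hypothetical one-shot PD payoffs, with T > R > P > S.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def should_cooperate(p: float) -> bool:
    """Cooperate iff its expected payoff beats defecting, given credence p."""
    ev_cooperate = p * R + (1 - p) * S  # mirrored cooperation, or I get exploited
    ev_defect = p * P + (1 - p) * P     # mirrored defection, or mutual defection
    return ev_cooperate > ev_defect

# Break-even credence with these payoffs: (P - S) / (R - S) = 1/3.
for p in (0.2, 0.34, 0.9):
    print(f"p = {p}: cooperate? {should_cooperate(p)}")
```

With these particular payoffs the break-even credence is 1/3; different payoffs move the threshold, but the dependence on what you believe about the other agent's reasoning and information stays the same.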
I agree with this assessment of the situation.