Conflict is expensive. If you have a cheaper alternative (e.g., performing a values handshake), you’d probably take it? (Humans can’t do that, for reasons outlined in Decision theory does not imply that we get to have nice things.)
Of course humans can cooperate with AGI for a variety of reasons, just as we cooperate with other humans. I don’t think decision theory philosophy explains humans well, and the evidence required to convince me that humans can’t cooperate with AGI would be enormous, so I don’t see the relevance of that post.