Of course humans can cooperate with AGI for a variety of reasons, just as we cooperate with other humans. I don’t think decision-theoretic philosophy explains human behavior well, and it would take enormous evidence to convince me that humans can’t cooperate with AGI, so I don’t see the relevance of that post.