Would it be correct to say you mean “should” in the wishful thinking sense of “we really want this outcome,” rather than something normative or probabilistic?
Good question. The answer’s yes, but now I’m wondering whether we really should expect alien-built AIs to be cooperators. I know Eliezer thinks we should.
That is not the impression I got from the story.
The baby-eaters were cooperators, yes; they were also stated to be relatively similar to humanity except for their unfortunate tendency to eat preteens.
The other ones, though? I didn’t see them do anything obviously cooperative, but I did see a few events that’d argue against it. The overall impression I got was that we really can’t be sure, except that it might be unlikely for both sides of a first contact to come out unscathed.