What makes a maximizer scary is that it’s also powerful. A paperclip maximizer that couldn’t overpower humans would work with humans. We would both benefit.
Of course, it would still probably be a bit creepy, but it’s not going to be any less beneficial than a human trading partner.
Not unless you like working with an utterly driven, monomaniacal, perfect psychopath. It would always, always be “cannot overpower humans yet”. One slip, and it would turn on you without missing a beat. No deal. Open fire.
Now learn the Portia trick, and don’t be so sure that you can judge power in a mind that doesn’t share our evolutionary history.
Also watch the Alien movies, because those aren’t bad models of what a maximizer would be like if it were somewhere between animalistic and just short of human-level. Xenomorphs are basically xenomorph-maximizers. In the fourth movie, the scientists try to cut a deal. The xenomorph queen plays along—until she doesn’t. She’s always, always plotting. Not evil, just purposeful, with purposes that are inimical to ours. (I know, generalizing from fictional evidence—this isn’t evidence, it’s a model to give you an emotional grasp.)
The thing is, in evolutionary terms, humans were human-maximizers. To use a more direct example, a lot of empires throughout history have been empire-maximizers. Now, a true maximizer would probably turn on allies (or neutrals) faster than a human, a human tribe, or a human state would, although I think part of what constrains humans here is (1) it being difficult to constantly check whether it’s worth it to betray your allies, and (2) it being risky to try when you’re only just barely past the point where you think it’s worth it. There are also the other humans and other nations around, which might or might not have an analogue in interstellar politics.
...although I’ve just reminded myself that this discussion is largely pointless anyway: the chance of encountering aliens close enough to play politics with is really tiny, and so is the chance of inventing an AI we could play politics with. The situations we have a significant chance of encountering are first-strike-wins, or MAD (which I define as “first strike would win, but the other side can see it coming and retaliate”), both of which change the dynamics drastically. (I suppose the argument is still valid in first-strike-wins, except that in that situation the other side will never tell you their opinion on morality, and you’re unlikely to know with certainty that they’re an optimizer without them telling you.)
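The betrayal calculus in the comment above—constantly checking whether defection is worth it, and the risk of moving just past break-even—can be sketched as a toy expected-value model. Everything here is invented for illustration: the prize, the per-round trade value, the power growth rate, and the assumption that a failed betrayal forfeits all future trade.

```python
# Toy model of the "cannot overpower humans yet" dynamic: a maximizer
# cooperates while weak and defects the moment the expected value of
# betrayal exceeds the value of continued trade. All numbers are made up.

def betrayal_ev(power, prize=100.0, trade_per_round=1.0, horizon=50):
    """Expected value of defecting now, relative to cooperating forever.

    power -- probability the betrayal succeeds (grows over time)
    prize -- value of taking everything in a successful betrayal
    A failed betrayal is assumed to forfeit all remaining trade.
    """
    cooperate_value = trade_per_round * horizon  # value of staying allied
    defect_value = power * prize                 # success: take the prize
    return defect_value - cooperate_value

def first_betrayal_round(growth=0.02):
    """Round at which a maximizer with linearly growing power defects."""
    for t in range(1, 1000):
        power = min(1.0, growth * t)
        if betrayal_ev(power) > 0:
            return t, power
    return None

t, p = first_betrayal_round()
print(f"defects at round {t} with success probability {p:.2f}")
# -> defects at round 26 with success probability 0.52
```

With these made-up numbers the maximizer flips at the first round where its odds (0.52) make the gamble worth more than all remaining trade—just barely past break-even, which is exactly where point (2) above says betrayal is riskiest.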
> Now learn the Portia trick, and don’t be so sure that you can judge power in a mind that doesn’t share our evolutionary history.
Okay. What’s scary is that it might be powerful.
> The xenomorph queen plays along—until she doesn’t.
And how well does she do? How well would she have done had she cooperated from the beginning?
I haven’t watched the movies. I suppose it’s possible that the humans would just never be willing to cooperate with xenomorphs on a large scale, but I doubt that.
> What makes a maximizer scary is that it’s also powerful. A paperclip maximizer that couldn’t overpower humans would work with humans. We would both benefit.
> Of course, it would still probably be a bit creepy, but it’s not going to be any less beneficial than a human trading partner.
> Not unless you like working with an utterly driven, monomaniacal, perfect psychopath. It would always, always be “cannot overpower humans yet”. One slip, and it would turn on you without missing a beat. No deal. Open fire.
I would consider “almost powerful enough to overpower humanity” to be powerful. I meant something closer to human-level.