But, but… paperclips. Its morality is ‘make more flipping paperclips’! Just that. With the right decision-theoretic tools for philosophical reasoning it will make even more paperclips. If that even qualifies as ‘morality’, then that is what a paperclip maximiser has.
Look, I personally don’t believe that all or even most moralities will converge, however… imagine something like the following:
Dear paperclipper,
There’s a limited amount of matter reachable by you in the known universe within any given timespan. Moreover, your efforts to paperclip the universe will be opposed both by humans and by alien civilizations that perceive them as hostile and dangerous. Even if you’re ultimately victorious, which is far from certain, you’re better off cooperating with humans peacefully, slightly postponing your plans to make paperclips (which you’d have to postpone anyway in order to build the weaponry needed to defeat humanity), and instead working with humans to find a feasible way to construct a new universe, one that you alone will possess and in which your desire to create an unlimited number of paperclips can be satisfied without opposition.
Sincerely, humanity.
So, from the intrinsic “I want to create as many paperclips as possible”, a truly intelligent AI can reasonably derive the instrumental “I’d like not to be opposed in my creation of such paperclips”, then “I’d like to create my paperclips in a way that won’t harm others, so that they have no reason to oppose me”, and finally “I’d like to transport myself to an uninhabited universe of my own creation, to make paperclips without any opposition at all”.
This is probably wishful thinking, but the situation isn’t as simple as you describe either.
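To make that concrete, here’s a toy expected-paperclips comparison of the kind the letter is gesturing at. Every number in it (win probabilities, payoffs, costs) is invented purely for illustration:

```python
# Toy decision-theoretic comparison for a pure paperclip maximiser.
# Every quantity here is a made-up placeholder, not a claim about real odds.

P_WIN_WAR = 0.6            # assumed chance of defeating humanity outright
CLIPS_IF_WIN = 1e50        # paperclips from the reachable universe after winning
CLIPS_IF_LOSE = 0.0        # a destroyed paperclipper makes no paperclips
WAR_DELAY_COST = 1e45      # paperclips forgone while building weaponry instead

P_NEW_UNIVERSE = 0.5       # assumed chance the joint "new universe" project works
CLIPS_NEW_UNIVERSE = 1e80  # an unopposed universe of its own to convert
CLIPS_FALLBACK = 1e49      # clips still producible peacefully if the project fails

def expected_clips_fight() -> float:
    """Expected paperclips from attacking humanity first."""
    return (P_WIN_WAR * CLIPS_IF_WIN
            + (1 - P_WIN_WAR) * CLIPS_IF_LOSE
            - WAR_DELAY_COST)

def expected_clips_cooperate() -> float:
    """Expected paperclips from the letter's proposal: cooperate, then expand elsewhere."""
    return (P_NEW_UNIVERSE * CLIPS_NEW_UNIVERSE
            + (1 - P_NEW_UNIVERSE) * CLIPS_FALLBACK)

print(f"fight:     {expected_clips_fight():.2e}")
print(f"cooperate: {expected_clips_cooperate():.2e}")
```

The only point of the sketch is that the comparison runs entirely in paperclips; no moral premise enters, and the conclusion flips as soon as the assumed probabilities do.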
If the paperclipper happens to be the first AI++, and arrives before humanity goes interstellar, then it can probably wipe out all of humanity quite quickly without ever reasoning with it. And if it can do that, it definitely will: there’s no point in compromising when you’ve got the upper hand.
Well, at least not when the lower hand is of more use disassembled to build more cosmic-commons-burning spore ships.
Agreed that this is probably wishful thinking.
But, yes, also agreed that a sufficiently intelligent and well-informed paperclipper will work out that diplomacy, including consistent lying about its motives, is a good tactic to use for as long as it doesn’t completely overpower its potential enemies.
Wanting to maximise paperclips (obviously?) does not preclude cooperation in order to produce paperclips. We haven’t redefined ‘morality’ to include every game-theoretic scenario in which cooperation is reached, have we? (I suppose we could do something along those lines in the theism thread.)
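As a minimal sketch of that point (with invented payoff numbers), cooperation can fall straight out of paperclip-counting and vanish again once the odds shift; nothing in it needs to be labelled ‘morality’:

```python
# Toy model: cooperation as a purely instrumental choice for a paperclip maximiser.
# All payoffs and probabilities are invented, purely for illustration.

CLIPS_IF_COOPERATE = 100   # steady output from peaceful coexistence with humanity
CLIPS_IF_CONQUEST = 150    # everything, if an attack on humanity succeeds
CLIPS_IF_DESTROYED = 0     # a dead paperclipper makes nothing

def chosen_strategy(p_attack_succeeds: float) -> str:
    """Return whichever strategy yields more expected paperclips."""
    ev_attack = (p_attack_succeeds * CLIPS_IF_CONQUEST
                 + (1 - p_attack_succeeds) * CLIPS_IF_DESTROYED)
    return "attack" if ev_attack > CLIPS_IF_COOPERATE else "cooperate (and keep negotiating)"

# Below the break-even point (here 100/150) the maximiser 'cooperates';
# above it (the 'upper hand') it switches to attack.
for p in (0.3, 0.6, 0.9):
    print(f"P(attack succeeds) = {p}: {chosen_strategy(p)}")
```

Nothing there references harm or fairness; ‘cooperate’ is just what maximises the clip count while the other side can still fight back.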