Just a thought:
A paperclip maximizer is an often-used example of an AGI gone badly wrong. However, I think a paperclip minimizer would be far worse.
In order to make the most of the universe’s paperclip capacity, a maximizer would have to work hard to develop science, mathematics and technology. Its terminal goal is rather stupid in human terms, but at least it would be interesting because of its instrumental goals.
For a minimizer, the best strategy might be to wipe out humanity and then commit suicide. Assuming there are no other intelligent civilizations within our cosmological horizon, it might not be worth its while to colonize the universe just to make sure no paperclips form out of cosmic gas by accident. By comparison, the risk that one of its colonies would start producing paperclips because of a spontaneous hardware error seems much higher.
A minimizer will fill its lightcone to make sure there aren’t paperclips anywhere else it can reach. What if other civilizations are hiding? What if there is undiscovered science implying that natural processes create paperclips somewhere? What if there are “Boltzmann paperclips”? Minimizing means minimizing!
I’m guessing even a Cthulhu minimizer (one that wants to reduce the number of Cthulhus in the world) would fill its lightcone with tools for studying its task, even though there is no reasonable chance it would ever need to do anything. It has nothing better to do; this is the problem it’s motivated to work on, so it’s what it will burn all available resources on.
My speculation here is that the “what ifs” you describe might yield less positive utility than the negative utility from the chance that one of the AI’s descendants starts producing paperclips because “the sign bit flips spontaneously”. Of course the AI would safeguard itself against such events, but there are probably physical limits to safety.
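Roughly, the trade-off I have in mind (with $P_{\text{flip}}$, $P_{\text{nat}}$ and the expectations below being my own shorthand, not anything established above) is that self-destruction beats colonization whenever

\[
P_{\text{flip}} \cdot \mathbb{E}[\text{paperclips made} \mid \text{a descendant's sign bit flips}] \;>\; P_{\text{nat}} \cdot \mathbb{E}[\text{paperclips prevented} \mid \text{it colonizes}],
\]

where $P_{\text{flip}}$ is the chance that at least one colony gets corrupted into a maximizer, and $P_{\text{nat}}$ is the chance that hidden civilizations, unknown physics or Boltzmann fluctuations would otherwise produce paperclips within its reach.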
It’s hard to make such estimates, as they require that an AGI be unable to come up with an AGI design that’s less likely than empty space to produce paperclips. I don’t see how the impossibility of this task could be guaranteed at a low level, as a “physical law”; and if you merely don’t see how to do it, an AGI might still find a way, since it’s better at designing things than you are. Empty space is only the status quo; it’s not obviously optimal at not producing paperclips, so it might be possible to find a better plan, which becomes more likely if you are very good at finding better plans.
If you mean “empty space” as in vacuum, then I think it contains no paperclips more or less by definition. If you mean “empty space” as in thermodynamic equilibrium at finite temperature, then it contains some small number of paperclips. I agree it might be possible to create a state that contains fewer paperclips for some limited period of time (before the onset of thermodynamic equilibrium). However, that’s probably much harder than the opposite (i.e., creating a state that contains far more paperclips than thermodynamic equilibrium does).
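As a rough way to see why the finite-temperature case is nonzero (standard Boltzmann-factor reasoning, with $\Delta F$ just denoting the free-energy cost of assembling a paperclip-shaped configuration): the equilibrium probability of finding such a configuration in a given region scales as

\[
p_{\text{clip}} \sim e^{-\Delta F / k_B T},
\]

which is astronomically small for a macroscopic object, but strictly positive at any finite temperature $T$.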
It is not clear to me that the definition of the vacuum state (http://en.wikipedia.org/wiki/Vacuum_state) precludes the momentary creation of paperclips.
A paperclip maximizer is used because a factory that makes paperclips might imagine that a paperclip-maximizing AI is exactly what it wants to build. There aren’t that many anti-paperclip factories.