If you take the binary view that you're either smart enough to achieve your goals or not, then you might well want to stop improving once you have the minimum intelligence necessary to meet them. Among other things, that means AIs with goals requiring only human-level or lower intelligence won't become superhuman, which lowers the probability of the Clippie scenario. It doesn't require huge intelligence to make paperclips, so an AI with a goal to make paperclips, but not to make any specific amount, wouldn't grow into a threatening monster.
The probability of the Clippie scenario is also lowered by the consideration that fine-grained goals might shift during the self-improvement phase, so the Clippie scenario (arbitrary goals combined with superintelligence) is whittled away from both ends.