If you can “specifically preprogram” goals into an AI with greater-than-human intelligence, then you have presumably cracked the complexity-of-value problem: you can explicitly state all of human morality, since aiming such an AI at any lesser goal would be insanely dangerous. But in that case you have also built an AI smarter than a human, and therefore one presumably able to write another AI smarter than itself. And as soon as you create a smarter-than-human machine, you have the potential for an intelligence explosion.