One proposed solution to the problem of friendliness is to build a self-improving but unfriendly AI, confine it in a box, and ask it to design a friendly AI for us. This sidesteps the enormous difficulty of friendliness, but it creates a new and apparently equally hard problem: how do you design a box strong enough to hold a superintelligence?
Two problems. (You still need to verify friendliness.)
Quite true. I slightly edited my claim to be less wrong.