I prefer this briefer formalization, since it avoids some of the vagueness of “adequate preparations” and makes premise (6) clearer.
1. At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving from a non-dangerous level to a level of superintelligence. (Fast take-off)
2. This AI will maximize a goal function.
3. Given fast take-off and goal maximization, the superintelligent AI will have a decisive advantage unless adequate controls are used.
4. Adequate controls will not be used. (E.g. the AI won't be boxed, or boxing won't work.)
5. Therefore, the superintelligent AI will have a decisive advantage. (From 1–4)
6. Unless that AI is designed with goals that stably and extremely closely align with ours, then, if the superintelligent AI has a decisive advantage, civilization will be ruined. (Friendliness is necessary)
7. The AI will not be designed with goals that stably and extremely closely align with ours.
8. Therefore, civilization will be ruined shortly after fast take-off. (From 5–7)
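To make the deductive structure explicit, here is a minimal sketch of the argument in Lean 4. The propositional letters are my own labels, not part of the original formalization: F for fast take-off, M for goal maximization, C for adequate controls, D for decisive advantage, A for stable close alignment, and R for ruin. Lean accepting the proof term confirms the argument is valid as a matter of propositional logic; whether the premises are true is, of course, the whole debate.

```lean
-- Propositional sketch of the argument (letter names are hypothetical).
example (F M C D A R : Prop)
    (h1 : F)               -- 1. Fast take-off
    (h2 : M)               -- 2. Goal maximization
    (h3 : F → M → ¬C → D)  -- 3. Advantage, unless adequate controls
    (h4 : ¬C)              -- 4. No adequate controls
    (h6 : ¬A → D → R)      -- 6. Friendliness is necessary
    (h7 : ¬A)              -- 7. No stable close alignment
    : R :=                 -- 8. Civilization is ruined
  -- Conclusions 5 and 8 both fall out by modus ponens:
  -- h3 h1 h2 h4 derives D (conclusion 5), and h6 h7 turns D into R.
  h6 h7 (h3 h1 h2 h4)
```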