When making a choice of actions, “do nothing” is often a valid option. But like the other options, it has costs and risks of its own.
Here, cost is a loss of value, and risk is the probability of losing a great deal of value.
If the cost of “no action” is near zero, then it is preferable to all risky alternatives.
But if all options are risky, then we have to figure out which option is the “lesser evil”.
So, if a sufficiently aligned AI is executing some simple action that is not critical for the survival of humanity, it will exclude risky methods, such as creating an army of nanobots.
The problem is, we DO want to use AI for complex actions that ARE critical for the survival of humanity, and the AI will have to use risky methods for that.
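The argument above can be sketched as a toy expected-loss calculation. All numbers, option names, and the loss model here are illustrative assumptions, not anything from the original; the point is only to show how “do nothing” wins when it is near free, and how a risky method can become the lesser evil when inaction itself risks catastrophe:

```python
def expected_loss(cost, risk_probability, loss_if_bad):
    """Sure cost plus probability-weighted catastrophic loss (toy model)."""
    return cost + risk_probability * loss_if_bad

def lesser_evil(options):
    """Return the option name minimizing expected loss: the 'lesser evil'."""
    return min(options, key=lambda name: expected_loss(*options[name]))

# Hypothetical simple task: (cost, risk probability, loss if the risk fires).
# Inaction is near free, so it beats every risky method.
simple_task = {
    "do nothing":        (0.0, 0.0,  0.0),
    "safe slow method":  (1.0, 0.01, 10.0),
    "risky fast method": (0.5, 0.2,  1000.0),
}
print(lesser_evil(simple_task))  # → "do nothing"

# Hypothetical survival-critical task: here inaction itself is risky,
# so a risky method can come out as the lesser evil.
critical_task = {
    "do nothing":        (0.0, 0.5,  1000.0),
    "risky fast method": (0.5, 0.05, 1000.0),
}
print(lesser_evil(critical_task))  # → "risky fast method"
```

The same comparison rule produces opposite answers in the two scenarios: nothing about the risky method changed, only the cost of inaction did.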