I think it’s more about unknowns and probabilities. The AI may not know exactly either the value of the specific world configuration or whether specific actions would lead to that world with 100% certainty.
So, the AI has to consider the probabilities of the outcomes of its different choices, including the choice of inaction. Normally, the cost of inaction is reliably low, which means that even a small chance of the other choice leading to something VERY BAD, because of a failure either in measuring the value or in predicting the future, would lead to choosing inaction over that action.
But if the cost of inaction is also likely to be VERY BAD, because of some looming or ongoing catastrophe, then yes, the AI (and we) will have to take the chance. And, if possible, inform the surviving people about the nature of the crisis and the actions the AI is taking. Hopefully that will not have to happen often.
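To make the argument concrete, here is a minimal expected-value sketch with purely illustrative, made-up probabilities and utilities (nothing here is calibrated; it just shows how the comparison flips between normal times and a crisis):

```python
def expected_value(outcomes):
    """Expected value of a choice given (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

VERY_BAD = -1_000_000  # catastrophic outcome (illustrative scale)

# Normal times: inaction reliably costs little, while the action carries
# a small chance that a measurement or prediction failure ends VERY BAD.
inaction = [(1.0, -1)]
action = [(0.99, +10), (0.01, VERY_BAD)]
print(expected_value(inaction))  # -1.0
print(expected_value(action))    # -9990.1 -> inaction wins

# Looming catastrophe: inaction itself is likely VERY BAD, so even a
# risky action becomes the better gamble.
inaction_crisis = [(0.9, VERY_BAD), (0.1, -1)]
action_crisis = [(0.5, +10), (0.5, VERY_BAD)]
print(expected_value(inaction_crisis))  # -900000.1
print(expected_value(action_crisis))    # -499995.0 -> action wins
```

The point isn’t the specific numbers; it’s that a tiny probability of catastrophe dominates the comparison when the alternative is cheap, and stops dominating once the alternative is itself likely catastrophic.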