But it seems to me that if you were able to calculate the utility of
world-outcomes modularly, then you wouldn’t need an AI in the first
place; you would instead build an Oracle, give it your possible actions
as input, and select the action with the greatest utility.
That sounds like just an intelligent machine crippled by being forced to act through a human body.
You suggest that would be better—but how?