Why should I use the word “should” to describe this, when “will” serves exactly as well?
‘Will’ does not serve exactly as well when considering agents with limited optimisation power (that is, any actual agent). Consider, for example, a Paperclip Maximiser that happens to be less intelligent than I am. I may be able to predict that Clippy will colonise Mars before he invades Earth, while also being quite sure that more paperclips would be produced if Clippy invaded Earth first. In that case I want a word that means “would better serve to maximise the agent’s expected utility, even if the agent does not end up doing it”.
One option is to make ‘should’ generic, relative to whatever goals the agent in question has. I’m not saying that you should (in our usual sense of ‘should’) use the word to describe the action Clippy would take if he had sufficient optimisation power; I am only saying that ‘will’ does not serve exactly as well.
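A minimal sketch (not from the original comments, all names hypothetical) of the distinction being drawn above: predicted_action stands for what a bounded agent *will* do given its limited search, while optimal_action is the expected-utility argmax, i.e. what it “should” do in the purely goal-relative sense.

```python
import random

# Hypothetical illustration: the gap between what a bounded agent *will* do
# and what would best serve its utility function ("should", goal-relative).

ACTIONS = ["invade_earth", "colonise_mars"]

def expected_utility(action):
    # Stipulated payoffs in paperclips; invading Earth first yields more.
    return {"invade_earth": 1_000_000, "colonise_mars": 600_000}[action]

def optimal_action(actions):
    # What Clippy "should" do: the action that maximises expected utility.
    return max(actions, key=expected_utility)

def predicted_action(actions, search_budget=1):
    # What Clippy "will" do: a limited search that may miss the optimum.
    considered = random.sample(actions, k=min(search_budget, len(actions)))
    return max(considered, key=expected_utility)

random.seed(0)
print("should (utility argmax):", optimal_action(ACTIONS))
print("will (bounded search):  ", predicted_action(ACTIONS))
```

With a search budget smaller than the action set, the two functions can disagree, which is exactly why a word for the second notion is wanted alongside ‘will’.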
I use “would-want” to indicate extrapolation. I.e., A wants X but would-want Y. This helps to indicate the implicit sensitivity to the exact extrapolation method, and that A does not actually represent a desire for Y at the current moment, etc. Similarly, A does X but would-do Y, A chooses X but would-choose Y, etc.
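A small hedged sketch of the “would-want” idea: the agent currently wants X, and two invented extrapolation procedures disagree about what it would-want, which is the point about sensitivity to the exact extrapolation method. The agent record and both extrapolation functions are made up for illustration only.

```python
# Hypothetical illustration of "wants" vs "would-want": the extrapolated
# preference depends on which extrapolation method you apply.

agent = {
    "wants": "X",                     # A's current, explicitly held desire
    "beliefs": {"X_leads_to": "Y"},   # information A has not fully reflected on
}

def extrapolate_by_consequences(a):
    # One possible extrapolation: follow the agent's beliefs one step forward.
    return a["beliefs"].get(f"{a['wants']}_leads_to", a["wants"])

def extrapolate_conservatively(a):
    # Another possible extrapolation: change nothing without more reflection.
    return a["wants"]

print("A wants:      ", agent["wants"])                        # X
print("A would-want: ", extrapolate_by_consequences(agent))    # Y
print("A would-want: ", extrapolate_conservatively(agent))     # X (different method, different answer)
```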
“Should” is a standard word for indicating moral obligation—it seems only sensible to use it in the context of other moral systems.