Agreed that we need real-valued utilities to make clear recommendations in the case of uncertainty.
I don’t understand how it is related to a decision theory; it’s just world counting and EV calculation. I must be missing something, I assume.
For all of the consequentialist decision theories, I think you can describe what they’re doing as attempting to argmax a probability-weighted sum of utilities across possible worlds; they differ in how they think actions influence probabilities, i.e. their underlying theory of how they specify ‘possible worlds’ and thus which universe they think they’re in. [That is, I think the interesting bit is the part you seem to be handling as an implementation detail.]
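To make that concrete, here’s a minimal Python sketch of the schema I have in mind. The names (`best_action`, `prob_given_action`, the toy probability table) are mine and purely illustrative, not anyone’s canonical formalism; the point is that the differences between theories would live entirely in the `prob_given_action` slot.

```python
# Sketch: every consequentialist decision theory is roughly
#   argmax_a  sum_w  P_theory(w | a) * U(w)
# and the theories differ only in how P_theory conditions worlds on actions.

from typing import Callable, Dict, List

World = str
Action = str

def best_action(
    actions: List[Action],
    worlds: List[World],
    utility: Dict[World, float],
    prob_given_action: Callable[[World, Action], float],
) -> Action:
    """Argmax of a probability-weighted sum of utilities across possible worlds.

    `prob_given_action` is the slot where a decision theory lives: e.g. ordinary
    conditioning P(w | a) versus a causal intervention P(w | do(a)). Those are
    illustrative stand-ins, not a precise statement of any particular theory.
    """
    def expected_utility(a: Action) -> float:
        return sum(prob_given_action(w, a) * utility[w] for w in worlds)
    return max(actions, key=expected_utility)

# Toy usage: two actions, two worlds, probabilities depend on the action.
if __name__ == "__main__":
    worlds = ["good", "bad"]
    utility = {"good": 10.0, "bad": 0.0}

    def p(w: World, a: Action) -> float:
        # Made-up conditional distribution standing in for "how this theory
        # thinks the action influences which world you are in".
        table = {"cooperate": {"good": 0.8, "bad": 0.2},
                 "defect":    {"good": 0.3, "bad": 0.7}}
        return table[a][w]

    print(best_action(["cooperate", "defect"], worlds, utility, p))  # -> cooperate
```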