I think the “deontological preferences are isomorphic to utility functions” claim is wrong as presented.
First, the formula has issues with dividing by zero and not summing probabilities to one (and with re-using the variable x as a local variable in the sum). So you probably meant something like
$$P(x) = \frac{e^{u(x)}}{\sum_{y \in X} e^{u(y)}}.$$
Even then, I don't think this describes an isomorphism between deontological preferences and utility functions.
Utility functions are invariant when multiplied by a positive constant. This is not reflected in the formula.
Utility maximizers usually take the action with the best utility with probability 1, rather than assigning different probabilities to actions with different utilities (see the sketch at the end of this comment).
Modelling deontological constraints as probability distributions doesn't seem right to me. Let's say I decide between drinking green tea and black tea, and neither of those violates any deontological constraints; then assigning some values (which ones?) to P(“I drink green tea”) or P(“I drink black tea”) doesn't describe these deontological constraints well.
Any behavior can be encoded as a utility function, so finding some isomorphism to utility functions is usually possible, but not always meaningful.
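As a minimal sketch of the corrected softmax formula and the tea example (the action names and utility values here are made up purely for illustration):

```python
import math

def softmax_policy(utilities):
    """Turn utilities over actions into the distribution
    P(x) = exp(u(x)) / sum_y exp(u(y))."""
    normalizer = sum(math.exp(u) for u in utilities.values())
    return {action: math.exp(u) / normalizer for action, u in utilities.items()}

# Made-up utilities for the tea example: neither action violates a
# deontological constraint, so both get the same utility.
utilities = {"drink green tea": 1.0, "drink black tea": 1.0}

print(softmax_policy(utilities))
# {'drink green tea': 0.5, 'drink black tea': 0.5}

# A utility maximizer instead puts probability 1 on an argmax action
# (ties broken arbitrarily), rather than spreading probability around.
print(max(utilities, key=utilities.get))
```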
the formula has issues with dividing by zero and not summing probabilities to one
Well, that was one embarrassing typo. Fixed, and thanks for pointing it out.
Utility functions are invariant when multiplied by a positive constant. This is not reflected in the formula.
It is. Utility functions are invariant under ordering-preserving transformations. Exponentiation is order-preserving (it rises monotonically), and so is dividing by the positive constant $\sum_{y \in X} e^{u(y)}$.
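Spelled out, the order-preservation claim here is just that for any two actions $a, b \in X$,
$$u(a) \ge u(b) \;\Longleftrightarrow\; e^{u(a)} \ge e^{u(b)} \;\Longleftrightarrow\; \frac{e^{u(a)}}{\sum_{y \in X} e^{u(y)}} \ge \frac{e^{u(b)}}{\sum_{y \in X} e^{u(y)}},$$
so the induced distribution ranks actions in the same order as $u$.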
Let's say I decide between drinking green tea and black tea, and neither of those violates any deontological constraints; then assigning some values (which ones?) to P(“I drink green tea”) or P(“I drink black tea”) doesn't describe these deontological constraints well.
Interpreted as a probability distribution, it assigns the same probability to both actions. In practice, you can imagine some sort of infrabayesianism-style imprecise probabilities being involved: the “preference” being indifferent between the vast majority of actions (and so providing no advice one way or another) and only expressing specific for vs. against preferences in a limited set of situations.
Utility functions are invariant under ordering-preserving transformations.
Utility functions in the sense of VNM, Savage, de Finetti, Jeffrey-Bolker, etc. are not invariant under all ordering-preserving transformations, only affine ones. Exponentiation is not affine.
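To make the distinction concrete (a standard example, not one from the thread): a positive affine transformation is $u'(x) = a\,u(x) + b$ with $a > 0$, and only those leave preferences over lotteries unchanged. Take outcomes with utilities $0$, $1$, $2$: a sure outcome of utility $1$ and a 50/50 lottery over the other two are equally good under $u$, but after applying $e^u$ the lottery comes out strictly better,
$$e^{1} \approx 2.72 \qquad \text{vs.} \qquad \tfrac{1}{2}\,e^{0} + \tfrac{1}{2}\,e^{2} \approx 4.19.$$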
What sort of utility function do you have in mind?
Oops, you’re right. I clearly took too many mental shortcuts when formulating that response.
What sort of utility function do you have in mind?
The reason this still works is that, in the actual formulation I had in mind, we then plug the utility-function-transformed-into-a-probability-distribution into a logarithm, canceling out the exponentiation. Indeed, that was the actual core statement in my post: that maximizing expected utility is equivalent to minimizing the cross-entropy between some target distribution and the real distribution.
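A minimal sketch of that cancellation, using the corrected distribution from above with normalizer $Z = \sum_{y \in X} e^{u(y)}$: for the target $P(x) = e^{u(x)}/Z$ and any real distribution $Q$,
$$H(Q, P) = -\sum_{x} Q(x) \log P(x) = -\sum_{x} Q(x)\bigl(u(x) - \log Z\bigr) = -\,\mathbb{E}_{Q}[u(x)] + \log Z,$$
and since $\log Z$ doesn't depend on $Q$, minimizing the cross-entropy over $Q$ is the same as maximizing expected utility.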
But evidently I decided to skip some steps and claim that the utility function is directly equivalent to the target distribution. That was, indeed, unambiguously incorrect.