Trying to find a normalization technique with desirable properties does not seem like a useless endeavor to me.
Right, but finding a normalization scheme is essentially just defining a few particular indifference equations, and the desirable properties are “consistent with my preferences”, not the elegance of the normalization scheme, so you might as well just admit that you’re searching for information about your intertheoretic preferences. If you try to shoehorn that process into a “normalization scheme”, it will just obscure what is actually going on, and constrain you in ways you may not want.
I see it the other way: a normalization scheme is a cop-out; just pulling a utility function out of a hat to avoid having to do any real moral philosophy. It will work, but it won’t have anything to do with what you want.
EDIT: If you truly don’t have preferences in some case, then any scaling will do, and any normalization scheme (even one that does nothing) will work. If you are uneasy with that, it’s because you actually do have intertheoretic preferences.
If on the other hand you have some reason to think that utility function U_a should not be weighted more or less than U_b on some problem, then that is a preference, and you should use it directly without calling it a normalization scheme.
Also keep in mind that it is bordering on an error to define preference-relevant stuff from some source other than information about preferences. So saying “these should be equal in this way” without reference to the preferences it produces, and why those are desirable, is a mistake, IMO.
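To make that concrete, here is a minimal sketch of how fixing a relative weighting between U_a and U_b is just a restatement of a particular indifference equation. The utility functions, outcomes, and numbers are all hypothetical, only for illustration, and not from this discussion.

```python
# Minimal sketch: choosing scale factors for two utility functions amounts to
# asserting a particular indifference between outcomes. All names and numbers
# here are hypothetical.

def normalize_by_indifference(U_a, U_b, x, y):
    """Return weights (w_a, w_b), summing to 1, such that the combined utility
    w_a*U_a + w_b*U_b is indifferent between outcomes x and y.
    Assumes U_a and U_b disagree about x vs. y, so a positive mixture exists."""
    delta_a = U_a(x) - U_a(y)
    delta_b = U_b(x) - U_b(y)
    # Solve w_a*delta_a + w_b*delta_b = 0 subject to w_a + w_b = 1.
    w_a = delta_b / (delta_b - delta_a)
    return w_a, 1.0 - w_a

# Toy utility functions over labelled outcomes.
U_a = {"x": 10.0, "y": 0.0, "z": 5.0}.get
U_b = {"x": -1.0, "y": 3.0, "z": 0.0}.get

w_a, w_b = normalize_by_indifference(U_a, U_b, "x", "y")

def combined(o):
    return w_a * U_a(o) + w_b * U_b(o)

print(w_a, w_b)                      # the "normalization"
print(combined("x"), combined("y"))  # equal (up to float error) by construction
```

The weights are pinned down entirely by the indifference statement, so the “scheme” does no work beyond encoding that intertheoretic preference.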
Are we just trying to model our preferences for the purposes of making predictions, or are we also trying to figure out how to make recommendations to ourselves that we think would improve our actions with respect to what we think we would want if we understood ourselves better? If the former, then we shouldn’t assume VNM anyway, since humans do not obey the VNM axioms. If the latter, then we can’t make any progress if we do not use some information other than revealed preferences.
And just to be clear, I am not suggesting that we should use only the structure of the utility functions themselves to come up with the normalization, without taking how we think about them into account. I think your example of normalizing the meat/vegetarian utility functions to a variance of 1 is a good example of what tends to go wrong when you insist on something so restrictive, and I seem to recall someone posting on LW a while ago about a theorem saying that no such normalization scheme has all of some set of desirable properties. Anyway, I am merely suggesting that when we have only a vague idea of what we want (which humans tend to have, and which is the motivation for the problem in the first place), it is not as simple as declaring that each k_m should be exactly what we want it to be.
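For concreteness, here is a toy version of the variance-1 normalization referred to above; the outcome set, utility values, and credences are invented for the sketch, not taken from the original example.

```python
# Toy sketch of variance normalization under moral uncertainty: rescale each
# candidate utility function to standard deviation 1 over some outcome set,
# then mix with credences k_m. All values below are made up.
from statistics import pstdev

outcomes = ["eat_meat", "eat_tofu", "skip_meal"]

# Two hypothetical theories' utilities over the outcomes.
U_meat = {"eat_meat": 1.0, "eat_tofu": 0.9, "skip_meal": 0.0}
U_veg  = {"eat_meat": -100.0, "eat_tofu": 1.0, "skip_meal": 0.0}

def normalize(U):
    """Rescale U to have standard deviation 1 over the outcome set."""
    sd = pstdev(U[o] for o in outcomes)
    return {o: U[o] / sd for o in outcomes}

credences = {"meat": 0.5, "veg": 0.5}  # hypothetical k_m values
scaled = {"meat": normalize(U_meat), "veg": normalize(U_veg)}

def mixed_utility(o):
    return sum(credences[m] * scaled[m][o] for m in credences)

for o in outcomes:
    print(o, round(mixed_utility(o), 3))
# The rescaling alone decides how much the vegetarian theory's strong aversion
# counts, which is a substantive intertheoretic judgement made implicitly.
```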
Are we just trying to model our preferences for the purposes of making predictions, or are we also trying to figure out how to make recommendations to ourselves… CEV
The latter.
If the latter, then we can’t make any progress if we do not use some information other than revealed preferences.
Yes. I may have been unclear. I don’t mean to refer to revealed preference; I mean that refinements of the possible utility function are to be judged based on the preferences they entail, not on anything else. For example, utilitarianism should be judged by its (repugnant) conclusions, not by the elegance of linear aggregation or whatever.
I think that other information takes a variety of forms: stuff like revealed preference, what philosophers think, neuroscience, etc. The trick is to define a prior that relates these things to the desired preferences, and then work out what our preferences in a state of partial knowledge are.
The OP work has a few other problems as well that have me now leaning towards building this thing up from that (indirect normativity) base instead of going in with this “set of utility functions with probabilities” business.
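As a rough sketch of what “define a prior that relates these things to the desired preferences, then work out preferences under partial knowledge” could look like, under my own assumptions: the candidate theories, likelihoods, and numbers below are toy stand-ins, not a proposal from the thread.

```python
# Rough sketch (toy numbers, my own framing): keep a credence over candidate
# utility functions, update it on evidence such as revealed preference or
# expert opinion, and evaluate options by posterior-expected utility.

candidates = {
    # Hypothetical candidate utility functions over two options.
    "theory_1": {"option_a": 1.0, "option_b": 0.0},
    "theory_2": {"option_a": 0.0, "option_b": 1.0},
}
prior = {"theory_1": 0.5, "theory_2": 0.5}

# likelihood[evidence][theory]: how strongly each kind of evidence supports
# each candidate theory; made-up numbers standing in for a real model.
likelihood = {
    "revealed_preference_favors_a": {"theory_1": 0.8, "theory_2": 0.3},
}

def update(prior, evidence):
    """Bayesian update of the credence over candidate theories."""
    unnorm = {t: prior[t] * likelihood[evidence][t] for t in prior}
    total = sum(unnorm.values())
    return {t: p / total for t, p in unnorm.items()}

posterior = update(prior, "revealed_preference_favors_a")

def expected_utility(option):
    return sum(posterior[t] * candidates[t][option] for t in posterior)

best = max(["option_a", "option_b"], key=expected_utility)
print(posterior, best)
```

This is essentially the “set of utility functions with probabilities” picture; the hard part, which the sketch assumes away, is where the candidates, likelihoods, and prior come from.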
Anyway, I am merely suggesting that when we have only a vague idea of what we want (which humans tend to have, and which is the motivation for the problem in the first place), it is not as simple as declaring that each k_m should be exactly what we want it to be.
Ok, because we don’t know what we want it to be.
Ok. It sounds like we mostly agree at this point.