Unfortunately, it requires a bit of moral philosophy to pin down the relative weights of the utility functions. That is, the candidate utility functions and their respective probabilities are not enough to uniquely identify the combined utility function.
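To make that concrete, here is a toy illustration (the two candidate utility functions, the actions, and all the numbers are made up for the example): VNM utilities are only defined up to positive scaling, so probabilities over candidates don't by themselves fix how the candidates trade off against each other.

```python
# Toy illustration (hypothetical numbers): two candidate utility functions
# over two actions, each held with some probability of being the "true" one.
p1, p2 = 0.5, 0.5

u1 = {"A": 1.0, "B": 0.0}   # candidate 1 prefers action A
u2 = {"A": 0.0, "B": 1.0}   # candidate 2 prefers action B

def combined(action, k1=1.0, k2=1.0):
    # VNM utilities are only defined up to positive affine transformation,
    # so any positive scale factors k1, k2 give an equally "valid" mixture.
    return p1 * k1 * u1[action] + p2 * k2 * u2[action]

print(combined("A"), combined("B"))                 # 0.5 0.5  -- a tie
print(combined("A", k2=10), combined("B", k2=10))   # 0.5 5.0  -- B now wins,
                                                    # though no probability changed
```

The scale factors k1 and k2 are exactly the "relative weights" that the probabilities don't determine.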
Right, to get that answer you need to look inside your utility function… which you’re uncertain about. Stated differently, your utility function tells you how to deal with uncertainty about your utility function, but that’s another thing you’re uncertain about. But luckily your utility function tells you how to deal with uncertainty about uncertainty about your utility function… I think you can see where this is going.
Naively, my intuition is that simply adding uncertainty about preferences as part of your ontology isn’t enough because of this regress—you still don’t even know in principle how to choose between actions without more precise knowledge of your utility function. However, this regress sounds suspiciously like the sort of thing that once formalized precisely isn’t really a problem at all—just “take the limit” as it were.
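For what it's worth, one hedged way to spell out the "take the limit" reading (the operator $M$ is purely hypothetical notation for this sketch): let $D_0$ be your object-level decision rule and let $M$ apply the next level of meta-preferences to whatever rule it is given, so the regress is the sequence

$$D_{n+1} = M(D_n),$$

and "taking the limit" amounts to asking for a fixed point $D^{*} = M(D^{*})$. Whether $M$ is even well-defined, and whether such a fixed point exists or is unique, is exactly the part that would need to be formalized.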
That’s not the issue we ran into. Your (partial) utility functions do not contain enough information to resolve uncertainty between them. As far as I can tell, utility functions can’t contain meta-preferences.
You can’t just pull a correct utility function out of thin air, though. You got the utility function from somewhere; it is the output of a moral-philosophy process. You resolve the uncertainty with the same information source from which you constructed the partial utility functions in the first place.
No need to take the limit or do any extrapolation (though something like that does seem to show up inside the moral-philosophy process).
I think we’re using “utility function” differently here. I take it to mean the function containing all information about your preferences, preferences about preferences, and higher level meta-preferences. I think you’re using the term to refer to the function containing just object-level preference information. Is that correct?
Now that I make this distinction, I’m not sure VNM utility applies to meta-preferences.
It doesn’t, AFAIK, which is why I said your utility function does not contain meta-preferences or the whole moral dynamic. “Utility function” is only a thing in VNM. Using it as a shorthand for “my whole reflective decision system” is an incorrect use of the term, IMO.
I am not entirely sure that your utility function can’t contain meta-preferences, though. I could be convinced by some well-placed mathematics.
My current understanding is that you put the preference uncertainty into your ontology, extend your utility function to deal with those extra dimensions, and lift the actual moral updating to epistemological work over those extra ontology-variables. This still requires some level of preliminary moral philosophy to shoehorn your current incoherent godshatter-soup into that formal framework.
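To be slightly more concrete about what I have in mind, here is a minimal sketch (the candidate utilities, actions, beliefs, and numbers are all invented for illustration, and the scale-fixing between candidates is assumed to have already been done by that preliminary moral philosophy):

```python
# Minimal sketch: preference uncertainty as an extra ontology variable m.
actions = ["A", "B"]

# The extra ontology variable m: which candidate preference structure is correct.
candidate_utilities = {
    "m1": {"A": 1.0, "B": 0.0},
    "m2": {"A": 0.2, "B": 1.0},
}

# Epistemic state over m. "Moral updating" is lifted to ordinary belief
# updating over this variable.
belief = {"m1": 0.7, "m2": 0.3}

def extended_utility(action, m):
    # The utility function is extended to take the ontology variable as input.
    return candidate_utilities[m][action]

def expected_utility(action):
    # Choosing an action marginalizes the extended utility over beliefs about m.
    return sum(p * extended_utility(action, m) for m, p in belief.items())

print({a: expected_utility(a) for a in actions})   # {'A': 0.76, 'B': 0.3}
print(max(actions, key=expected_utility))          # 'A'
```

On this picture, a change of moral view is just an update to the beliefs over the ontology variable, not a change to the extended utility function itself.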
I’ll hopefully formalize this some day soon to something coherent enough to be criticized.
I look forward to it!