I think I have updated slightly in the direction of requiring my utility function to conform to VNM and away from being inclined to throw it out if my preferences aren’t consistent. This is probably mostly due to smart people being asked to give an example of a circular preference and my not finding any answer compelling.
Expectation. VNM isn’t really useful without uncertainty. Without uncertainty, transitive preferences are enough.
I think I see the point you’re trying to make, which is that we want to have a normalized scale of utility to apply probability to. This directly contradicts the prohibition against “looking at the sign or magnitude”. You are comparing 1⁄400 EU and 1⁄500 EU using their magnitudes, and jumping headfirst into the radiation. Am I missing something?
If you don’t conform to VNM, you don’t have a utility function.
I think you mean to refer to your decision algorithms.
No, I mean if my utility function violates transitivity or other axioms of VNM, I more want to fix it than to throw out VNM as being invalid.
then it’s not a utility function in the standard sense of the term.
I think what you mean to tell me is: “say ‘my preferences’ instead of ‘my utility function’”. I acknowledge that I was incorrectly using these interchangeably.
I do think it was clear what I meant when I called it “my” function and talked about it not conforming to VNM rules, so this response felt tautological to me.
You are allowed to compare. Comparison is one of the defined operations. Comparison is how you decide which is best.
I’m uneasy with this “normalized”. Can you unpack what you mean here?
What I mean by “normalized” is that you’re compressing the utility values into the range between 0 and 1. I am not aware of another definition that would apply here.
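To spell that out (just the standard min-max rescaling, in my notation, not anything from the original post): u'(o) = (u(o) - u_min) / (u_max - u_min), where u_min and u_max are the utilities of my least- and most-preferred outcomes. The worst outcome lands on 0, the best on 1, and everything else in between.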
Your rule says you’re allowed to compare, but your other rule says you’re not allowed to compare by magnitude. You were serious enough about this second rule to equate it with radiation death.
You can’t apply probabilities to utilities and be left with anything meaningful unless you’re allowed to compare by magnitude. This is a fatal contradiction in your thesis. Using your own example, you assign a value of 1 to whaling and 1⁄500 to the sandwich. If you’re not allowed to compare the two using their magnitude, then you can’t compare the utility of 1⁄400 chance of the whale day with the sandwich, because you’re not allowed to think about how much better it is to be a whale.
There’s something missing here, which is that “1/400 chance of a whale day” means “1/400 chance of whale + 399⁄400 chance of normal day”. To calculate the value of “1/400 chance of a whale day” you need to assign a utility to both a whale day and a normal day. Then you can compare the resulting expectation of utility to the utility of a sandwich = 1⁄500 (by which we mean a sandwich day, I guess?), no sweat.
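Running the numbers (with U(normal day) = 0 picked purely for illustration; the rest are the thread’s own values): the gamble is worth (1⁄400)·1 + (399⁄400)·0 = 1⁄400 = 0.0025, versus 1⁄500 = 0.002 for the sandwich day, so the gamble wins. But assign U(normal day) = -0.01 instead and the gamble is worth 0.0025 - 0.009975 ≈ -0.0075, and the sandwich wins. The comparison genuinely turns on the normal-day utility, which is why it has to be assigned.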
The absolute magnitudes of the utilities don’t make any difference. If you add N to all utility values, that just adds N to both sides of the comparison. (And you’re not allowed to compare utilities to magic numbers like 0, since that would be numerology.)
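Spelled out: if the comparison was between A and xB + (1-x)C, then after adding N to every utility it is between A + N and x(B+N) + (1-x)(C+N) = xB + (1-x)C + N. Both sides move by the same N, so every comparison of expected utilities comes out the same.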
I notice we’re not understanding each other, but I don’t know why. Let’s step back a bit. What problem is “radiation poisoning for looking at magnitude of utility” supposed to be solving?
We’re not talking about adding N to both sides of a comparison. We’re talking about taking a relation where we are only allowed to know that A < B, multiplying B by some probability factor, and then trying to make some judgment about the new relationship between A and xB. The rule against looking at magnitudes prevents that. So we can’t give an answer to the question: “Is the sandwich day better than the expected value of 1⁄400 chance of a whale day?”
If we’re allowed to compare A to xB, then we have to do that before the magnitude rule goes into effect. I don’t see how this model is supposed to account for that.
You can’t just multiply B by some probability factor. For the situation where you have p(B) = x, p(C) = 1 - x, your expected utility would be xB + (1-x)C. But xB by itself is meaningless, or equivalent to the assumption that the utility of the alternative (which has probability 1 - x) is the magic number 0. “1/400 chance of a whale day” is meaningless until you define the alternative that happens with probability 399⁄400.
For the purpose of calculating xB + (1-x)C you obviously need to know the actual values, and hence magnitudes, of x, B and C. Similarly you need to know the actual values in order to calculate whether A < B or not. “Radiation poisoning for looking at magnitude of utility” really means that you’re not allowed to compare utilities to magic numbers like 0 or 1. It means that the only things you’re allowed to do with utility values are a) compare them to each other, and b) obtain expected utilities by multiplying by a probability distribution.
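A minimal sketch of that discipline in code (my own illustrative example, not anything from the post): represent a gamble only as a full probability distribution over outcomes, and have the expected-utility calculation refuse anything that doesn’t sum to 1.

```python
def expected_utility(lottery, utility):
    """Expected utility of a lottery, given as {outcome: probability}.

    Refuses partial lotteries: the probabilities must sum to 1, so
    "1/400 chance of a whale day" alone is rejected until the other
    399/400 is accounted for.
    """
    total = sum(lottery.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"probabilities sum to {total}, not 1")
    return sum(p * utility[o] for o, p in lottery.items())

# Illustrative utilities (the whale/sandwich numbers from the thread;
# the normal-day value of 0 is an assumption made up for this example).
u = {"whale day": 1.0, "sandwich day": 1 / 500, "normal day": 0.0}

gamble = {"whale day": 1 / 400, "normal day": 399 / 400}
print(expected_utility(gamble, u))                 # 0.0025
print(expected_utility({"sandwich day": 1.0}, u))  # 0.002
```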
If you can’t multiply B by a probability factor, then it’s meaningless in the context of xB + (1-x)C as well. xB by itself isn’t meaningless; it roughly means “the expected utility on a normalized scale between the utility of the outcome I least prefer and the outcome I most prefer”. nyan_sandwich even agrees that 0 and 1 aren’t magic numbers; they’re just rescaled utility values.
I’m 99% confident that that’s not what nyan_sandwich means by radiation poisoning in the original post, considering the fact that comparing utilities to 0 and 1 is exactly what he does in the hell example. If you’re not allowed to compare utilities by magnitude, then you can’t obtain an expected utility by multiplying by a probability distribution. Show the math if you think you can prove otherwise.
It’s getting hard to reference back to the original post because it keeps changing with no annotations to highlight the edits, but I think the only useful argument in the radiation poisoning section is: “don’t use units of sandwiches, whales, or orgasms because you’ll get confused by trying to experience them”. However, I don’t see any good argument for not even using Utils as a unit for a single person’s preferences. In fact, using units of Awesomes seems to me even worse than Utils, because it’s easier to accidentally experience an Awesome than a Util. Converting from Utils to unitless measurement may avoid some infinitesimal amount of radiation poisoning, but it’s no magic bullet for anything.
Oh, I was going to reply to this, and I forgot.
All this business with radiation poisoning is just a roundabout way of saying the only things you’re allowed to do with utilities are “compare two utilities” and “calculate expected utility over some probability distribution” (and rescale the whole utility function with a positive affine transformation, since positive affine transformations happen to be isomorphisms of the above two calculations).
Looking at utility values for any other purpose than comparison or calculating expected utilities is a bad idea, because your brain will think things like “positive number is good” and “negative number is bad” which don’t make any sense in a situation where you can arbitrarily rescale the utility function with any positive affine transformation.
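For the record, the math behind that parenthetical: take U' = aU + b with a > 0. For any lottery, the expected utility transforms as x·U'(B) + (1-x)·U'(C) = a·(x·U(B) + (1-x)·U(C)) + b, so every comparison between expected utilities (and between plain utilities) comes out the same before and after rescaling. Signs, on the other hand, are not preserved: b can push every value above or below zero, which is why “positive is good, negative is bad” is meaningless here.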
“xB + (1-x)0”, which is formally equivalent to “xB”, means “the expected utility of B with probability x and the outcome I least prefer on a normalized scale with probability (1-x)”, yes. The point I’m trying to make here, though, is that probability distributions have to add up to 1. “Probability x of outcome B” — where x < 1 — is a type error, plain and simple, since you haven’t specified the alternative that happens with probability (1-x). “Probability x of outcome B, and probability (1-x) of the outcome I least prefer” is the closest thing that is meaningful, but if you mean that, you need to say it.
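Using the expected_utility sketch from earlier (again my illustrative code, with the same assumed utility table u), the type error is concrete:

```python
# "Probability 1/400 of a whale day", with no alternative specified:
expected_utility({"whale day": 1 / 400}, u)
# -> ValueError: probabilities sum to 0.0025, not 1

# The closest meaningful version, filling in the least-preferred
# outcome ("normal day" in the assumed table) for the remaining mass:
expected_utility({"whale day": 1 / 400, "normal day": 399 / 400}, u)
# -> 0.0025
```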
Unless you rescale everything so that magic numbers like 0 and 1 are actually utilities of possibilities under consideration.
But that’s like cutting corners in the lab; dangerous if you don’t know what you are doing, but useful if you do.