That was one of the major points. Do not play with naked utilities. For any decision, find the 0 anchor and the 1 anchor, and rank other stuff relative to them.
I understood your major point about the radioactivity of the single real number for each utility, but I got confused by what you intended the process to look like with your hell example. I think you need to be a little more explicit about your algorithm when you say “find the 0 anchor and the 1 anchor”. I defaulted to a generic idea of moral intuition about best and worst, then only made it as far as thinking it required naked utilities to find the anchors in the first place. Is your process something like: “compare each option against the next until you find the worst and best?”
It is becoming clear from this and other comments that you consider at least the transitivity property of VNM to be axiomatic. Without it, you couldn’t find your best option if the only operation you’re allowed to do is compare one option against another. If VNM is required, it seems sort of hard to throw it out after the fact if it causes too much trouble.
What is the point of ranking other stuff relative to the 0 and 1 anchor if you already know the 1 anchor is your optimal choice? Am I misunderstanding the meaning of the 0 and 1 anchor, and it’s possible to go less than 0 or greater than 1?
Is your process something like: “compare each option against the next until you find the worst and best?”
Yes, approximately.
It is becoming clear from this and other comments that you consider at least the transitivity property of VNM to be axiomatic.
I consider all the axioms of VNM to be totally reasonable. I don’t think the human decision system follows the VNM axioms. Hence the project of defining and switching to this VNM thing; it’s not what we already use, but we think it should be.
If VNM is required, it seems sort of hard to throw it out after the fact if it causes too much trouble.
VNM is required to use VNM, but if you encounter a circular preference and decide you value running in circles more than the benefits of VNM, then you throw out VNM. You can’t throw it out from the inside, only decide whether it’s right from outside.
What is the point of ranking other stuff relative to the 0 and 1 anchor if you already know the 1 anchor is your optimal choice?
Expectation. VNM isn’t really useful without uncertainty. Without uncertainty, transitive preferences are enough.
If being a whale has utility 1, and getting nothing has utility 0, and getting a sandwich has utility 1⁄500, but the whale-deal only has a probability of 1⁄400 with nothing otherwise, then I don’t know until I do expectation that the 1⁄400 EU from the whale is better than the 1⁄500 EU from the sandwich.
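To make the expectation explicit with the numbers already given there:

EU(whale deal) = (1⁄400)×1 + (399⁄400)×0 = 1⁄400 = 0.0025
EU(certain sandwich) = 1×(1⁄500) = 1⁄500 = 0.0020

The whale deal comes out ahead, but only the expectation step shows it; the comparison isn’t readable from the raw utility assignments alone.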
I think I have updated slightly in the direction of requiring my utility function to conform to VNM and away from being inclined to throw it out if my preferences aren’t consistent. This is probably mostly due to smart people being asked to give an example of a circular preference and my not finding any answer compelling.
Expectation. VNM isn’t really useful without uncertainty. Without uncertainty, transitive preferences are enough.
I think I see the point you’re trying to make, which is that we want to have a normalized scale of utility to apply probability to. This directly contradicts the prohibition against “looking at the sign or magnitude”. You are comparing 1⁄400 EU and 1⁄500 EU using their magnitudes, and jumping headfirst into the radiation. Am I missing something?
If you don’t conform to VNM, you don’t have a utility function.
I think you mean to refer to your decision algorithms.
No, I mean if my utility function violates transitivity or other axioms of VNM, I more want to fix it than to throw out VNM as being invalid.
then it’s not a utility function in the standard sense of the term.
I think what you mean to tell me is: “say ‘my preferences’ instead of ‘my utility function’”. I acknowledge that I was incorrectly using these interchangeably.
I do think it was clear what I meant when I called it “my” function and talked about it not conforming to VNM rules, so this response felt tautological to me.
This directly contradicts the prohibition against “looking at the sign or magnitude”. You are comparing 1⁄400 EU and 1⁄500 EU using their magnitudes, and jumping headfirst into the radiation.
You are allowed to compare. Comparison is one of the defined operations. Comparison is how you decide which is best.
I’m uneasy with this “normalized”. Can you unpack what you mean here?
What I mean by “normalized” is that you’re compressing the utility values into the range between 0 and 1. I am not aware of another definition that would apply here.
Your rule says you’re allowed to compare, but your other rule says you’re not allowed to compare by magnitude. You were serious enough about this second rule to equate it with radiation death.
You can’t apply probabilities to utilities and be left with anything meaningful unless you’re allowed to compare by magnitude. This is a fatal contradiction in your thesis. Using your own example, you assign a value of 1 to whaling and 1⁄500 to the sandwich. If you’re not allowed to compare the two using their magnitude, then you can’t compare the utility of 1⁄400 chance of the whale day with the sandwich, because you’re not allowed to think about how much better it is to be a whale.
There’s something missing here, which is that “1/400 chance of a whale day” means “1/400 chance of whale + 399⁄400 chance of normal day”. To calculate the value of “1/400 chance of a whale day” you need to assign a utility to both a whale day and a normal day. Then you can compare the resulting expectation of utility to the utility of a sandwich = 1⁄500 (by which we mean a sandwich day, I guess?), no sweat.
The absolute magnitudes of the utilities don’t make any difference. If you add N to all utility values, that just adds N to both sides of the comparison. (And you’re not allowed to compare utilities to magic numbers like 0, since that would be numerology.)
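To spell out the one-line check behind “that just adds N to both sides”: for any lottery whose probabilities sum to 1,

x(B+N) + (1-x)(C+N) = xB + (1-x)C + N

so every expected utility shifts by exactly N and every comparison between options comes out the same as before. The only thing that would change is a comparison against a fixed number like 0, which is exactly the “numerology” being warned against.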
I notice we’re not understanding each other, but I don’t know why. Let’s step back a bit. What problem is “radiation poisoning for looking at magnitude of utility” supposed to be solving?
We’re not talking about adding N to both sides of a comparison. We’re talking about taking a relation where we are only allowed to know that A < B, multiplying B by some probability factor, and then trying to make some judgment about the new relationship between A and xB. The rule against looking at magnitudes prevents that. So we can’t give an answer to the question: “Is the sandwich day better than the expected value of 1⁄400 chance of a whale day?”
If we’re allowed to compare A to xB, then we have to do that before the magnitude rule goes into effect. I don’t see how this model is supposed to account for that.
You can’t just multiply B by some probability factor. For the situation where you have p(B) = x, p(C) = 1 - x, your expected utility would be xB + (1-x)C. But xB by itself is meaningless, or equivalent to the assumption that the utility of the alternative (which has probability 1 - x) is the magic number 0. “1/400 chance of a whale day” is meaningless until you define the alternative that happens with probability 399⁄400.
For the purpose of calculating xB + (1-x)C you obviously need to know the actual values, and hence magnitudes, of x, B and C. Similarly you need to know the actual values in order to calculate whether A < B or not. “Radiation poisoning for looking at magnitude of utility” really means that you’re not allowed to compare utilities to magic numbers like 0 or 1. It means that the only things you’re allowed to do with utility values are a) compare them to each other, and b) obtain expected utilities by multiplying by a probability distribution.
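To restate that with the whale numbers used earlier in the thread:

EU(1/400 chance of a whale day) = (1⁄400)×u(whale day) + (399⁄400)×u(normal day)

Writing this as (1⁄400)×u(whale day) alone is only legitimate if u(normal day) = 0, i.e. if you have already decided that a normal day is your 0 anchor. That is the hidden assumption “xB” by itself smuggles in.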
If you can’t multiply B by a probability factor, then it’s meaningless in the context of xB + (1-x)C as well. xB by itself isn’t meaningless; it roughly means “the expected utility on a normalized scale between the utility of the outcome I least prefer and the outcome I most prefer”. nyan_sandwich even agrees that 0 and 1 aren’t magic numbers; they’re just rescaled utility values.
I’m 99% confident that that’s not what nyan_sandwich means by radiation poisoning in the original post, considering the fact that comparing utilities to 0 and 1 is exactly what he does in the hell example. If you’re not allowed to compare utilities by magnitude, then you can’t obtain an expected utility by multiplying by a probability distribution. Show the math if you think you can prove otherwise.
It’s getting hard to reference back to the original post because it keeps changing with no annotations to highlight the edits, but I think the only useful argument in the radiation poisoning section is: “don’t use units of sandwiches, whales, or orgasms because you’ll get confused by trying to experience them”. However, I don’t see any good argument for not even using Utils as a unit for a single person’s preferences. In fact, using units of Awesomes seems to me even worse than Utils, because it’s easier to accidentally experience an Awesome than a Util. Converting from Utils to a unitless measurement may avoid some infinitesimal amount of radiation poisoning, but it’s no magic bullet for anything.
Oh, I was going to reply to this, and I forgot.
All this business with radiation poisoning is just a roundabout way of saying the only things you’re allowed to do with utilities are “compare two utilities” and “calculate expected utility over some probability distribution” (and rescale the whole utility function with a positive affine transformation, since positive affine transformations happen to be isomorphisms of the above two calculations).
Looking at utility values for any other purpose than comparison or calculating expected utilities is a bad idea, because your brain will think things like “positive number is good” and “negative number is bad” which don’t make any sense in a situation where you can arbitrarily rescale the utility function with any positive affine transformation.
xB by itself isn’t meaningless; it roughly means “the expected utility on a normalized scale between the utility of the outcome I least prefer and the outcome I most prefer”
“xB + (1-x)0”, which is formally equivalent to “xB”, means “the expected utility of B with probability x and, on the normalized scale, the outcome I least prefer with probability (1-x)”, yes. The point I’m trying to make here, though, is that probability distributions have to add up to 1. “Probability x of outcome B” (where x < 1) is a type error, plain and simple, since you haven’t specified the alternative that happens with probability (1-x). “Probability x of outcome B, and probability (1-x) of the outcome I least prefer” is the closest thing that is meaningful, but if you mean that, you need to say it.
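To spell out why a positive affine rescaling is harmless (the “isomorphism” point a few comments up): if U′(o) = aU(o) + b with a > 0, then for a lottery with probabilities x and (1-x),

xU′(B) + (1-x)U′(C) = a(xU(B) + (1-x)U(C)) + b

Since a > 0 and the same b is added to every expected utility, every comparison of utilities or of expected utilities comes out the same under U′ as under U. Note the b only collects into a single term because the probabilities sum to 1, which is the same sum-to-1 point made just above.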
Unless you rescale everything so that magic numbers like 0 and 1 are actually utilities of possibilities under consideration.
But that’s like cutting corners in the lab; dangerous if you don’t know what you are doing, but useful if you do.
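A sketch of the corner-cutting in question, using only quantities already on the table: among the options actually under consideration, call the least preferred one worst and the most preferred one best (assuming you’re not indifferent between them), and rescale to

U′(o) = (U(o) - U(worst)) / (U(best) - U(worst))

This is a positive affine transformation, so it changes no comparisons and no expected-utility rankings; it just makes 0 and 1 coincide with the worst and best options under consideration, which is what “find the 0 anchor and the 1 anchor, and rank other stuff relative to them” amounts to.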