I mean an outcome where there is a 1 − epsilon chance of A.
It is permissible to assign utils arbitrarily, such that flipping a coin to decide between A and B has more utils than selecting A and more utils than selecting B. In that case, the outcome is “Flip a coin and allow the coin to decide”, which has a different utility from half the utility of A plus half the utility of B.
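A minimal worked example, with purely illustrative numbers not taken from the thread: assign
$$U(A) = U(B) = 1, \qquad U(\text{flip a coin and let it decide}) = 2.$$
Here the flip, treated as its own outcome, gets utility 2, which is not the mixture $\tfrac{1}{2}U(A) + \tfrac{1}{2}U(B) = 1$ that the VNM representation would assign to a 50/50 lottery over A and B.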
Perhaps if you count “I flipped a coin and got A” > A.
You can always define some utility function such that it is rational to shoot yourself in the foot, but at that point, you are just doing a bunch of work to describe stupid behavior that you could just do anyway. You don’t have to follow the VNM axioms either.
The point of VNM and such is to constrain your behavior. And if you input sensible things, it does. You don’t have to let it constrain your behavior, but if you don’t, it is doing no work for you.
Right. If you think “I flipped a coin to decide” is worth more than half the difference in utility between the two possible results (perhaps because those results are very close to equal, but you fear that systemic bias is a large negative, or perhaps because you demand that you be provably fair), then you flip a coin to decide.
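As a sketch, with $F$ a hypothetical symbol (my notation, not anything from the thread) for the extra value placed on having decided by a provably fair flip: if
$$U(\text{flip}) = \tfrac{1}{2}U(A) + \tfrac{1}{2}U(B) + F,$$
then flipping beats simply picking the better option (say A, with $U(A) \ge U(B)$) exactly when $F > \tfrac{1}{2}\,(U(A) - U(B))$, i.e. when the fairness term outweighs half the difference between the two results.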
The utility function, however, is not something to be defined. It is something to be determined and discovered: I already want things, and while what I want is time-variant, it isn’t arbitrarily alterable.
Unless your utility function assigns positive utility to being altered, in which case you’d have to seek to optimize your meta-utility. A desire to change one’s desires reflects an inconsistency, however, so one who desires to be consistent should desire not to desire to change one’s desires. (My apologies if this sounds confusing.)
One level deeper: One who is not consistent but desires to be consistent desires to change their desires to desires that they will not then desire to change.
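One hypothetical way to write down that regress (the map $c$ is my notation, nothing from the thread): let $c(D)$ be the desire set an agent with desires $D$ would want to change to. Consistency is the fixed-point condition
$$c(D) = D,$$
and the claim above is that an inconsistent agent who desires consistency is aiming for some $D^{*}$ with $c(D^{*}) = D^{*}$, not for any intermediate set of desires it would immediately want to revise again.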
If you don’t like not liking where you are, and you don’t like where you are, move to somewhere where you will like where you are.
Ah, so true. Ultimately, I think that’s exactly the point this article tries to make: if you don’t want to do A, but you don’t want to be the kind of person who doesn’t want to do A (or you don’t want to be the kind of person who doesn’t do A), do A. If that doesn’t work, change who you are.