One non-contradictory way this could happen is that I pick n=1, and then Omega says: “The mere knowledge that comes with this outcome is worth 10 utils to you. I will therefore subject you to five seconds of torture to bring your total utility gained down to 1 util.”
If the game only happens once, we might not want to accept that trade. However, if this game is repeated, or if other people are likely to face the same decision, then it makes sense to accept it the first time and learn what a util is worth. Then we could try to figure out what the optimal value of n is.
To continue with the same example: suppose I found out that this knowledge is worth 10 utils to me, and then I get a second chance at the bet. Since I’ll never meet Omega again (and presumably never again need to use these units), this knowledge must boost my expected outcome from the bet by 10 utils. We already know that my action in a state of ignorance is to pick n=1, which has an expected value of 1 util. So my optimal actions ought to be such that my expected outcome is 11 utils, which happens at approximately n=6 (if we can pick non-integer values of n, we can get this result more exactly).
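As a sketch of that search (the bet itself isn’t restated here, so the payoff function is left as a hypothetical parameter rather than the actual rule Omega uses):

```python
def smallest_n_reaching(expected_utils, target, max_n=100):
    """Smallest integer n whose expected payoff reaches `target` utils.

    `expected_utils` stands in for whatever function maps a choice of n
    to the bet's expected payoff; the actual bet isn't restated here.
    """
    for n in range(1, max_n + 1):
        if expected_utils(n) >= target:
            return n
    return None

# Knowing the exchange rate is worth 10 utils, and the ignorant n=1 choice is
# worth 1 util in expectation, so the informed choice should be worth about
# 11 utils -- which the paragraph above puts at roughly n=6.
```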
I’m not really sure what’s being calculated in that paragraph. Knowing the measurement of a single util seems to be valuable OUTSIDE of this problem. Inside the problem, the optimal actions (which is to say the actions with highest expected value) continue to be writing the busy beaver function as fast as possible, &c.
Also, if Omega balances out utils with positive and negative utils, why is e more likely to torture you for five seconds and tell you “this is −9 utils” than to, say, torture you for 300 years and then grant you an additional 300 years of life in which you have a safe nanofactory and an Iron Man suit?
It seems to me that the vast majority of actions Omega could take would be completely inscrutable, and give us very little knowledge about the actual value of utils.
A better example might be a case in which waiting for one second at a traffic light is worth one util, and after your encounter Omega disappears without a word. Omega then begins circulating a picture of a kitten on the internet. Three years later, a friend of yours links you the picture just before you leave for work. Having to tell them to stop sending you adorable pictures when you’re about to leave exactly cancels the value of seeing the adorable picture, and the one second later that you get out the door is a second you do not have to spend waiting at a traffic light.
If this is how utils work, then I begin to understand why we have to break out the busy beaver function… in order to get an outcome akin to $1000 out of this game, you would need to win around 2^20 utils (by my rough and highly subjective estimate). A 5% chance of $1000 is MUCH MUCH better than a guaranteed one second less of waiting at a traffic light.
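Putting rough numbers on that comparison (the 2^20 exchange rate is my own guess, not anything given in the problem):

```python
# My rough, subjective exchange rate: ~2**20 utils feels comparable to $1000,
# with one util defined as one second not spent waiting at a traffic light.
UTILS_PER_1000_DOLLARS = 2 ** 20

# Expected value of a 5% shot at $1000, converted into utils.
lottery_expected_utils = 0.05 * UTILS_PER_1000_DOLLARS  # ~52,429 utils

# The guaranteed alternative: one second saved at the light.
guaranteed_utils = 1

print(f"lottery is ~{lottery_expected_utils / guaranteed_utils:,.0f}x better in expectation")
```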
I seem to have digressed.