Eliezer, I have a question about this: “There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever. This is a sufficient condition to imply that my utility function is unbounded.”
I can see that this preference implies an unbounded utility function, given that a longer life has a greater utility. However, simply stated in that way, most people might agree with the preference. But consider this gamble instead:
A: Live 500 years and then die, with certainty.
B: Live forever, with probability 0.000000001%; die within the next ten seconds, with probability 99.999999999%
Do you choose A or B? Is it possible to choose A and have an unbounded utility function with respect to life? It seems to me that an unbounded utility function implies the choice of B. But then what if the probability of living forever becomes one in a googolplex, or whatever? Of course, this is a kind of Pascal’s Wager; but it seems to me that your utility function implies that you should accept the Wager.
It also seems to me that the intuitions suggesting to you and others that Pascal’s Mugging should similarly be rejected are based on an intuition of a bounded utility function. Emotions can’t react infinitely to anything; as one commenter put it, “I can only feel so much horror.” So to the degree that people’s preferences reflect their emotions, they have bounded utility functions. In the abstract, not emotionally but mentally, it is possible to have an unbounded function. But if you do, and act on it, others will think you a fanatic. For a fanatic cares infinitely for what he perceives to be an infinite good, whereas normal people do not care infinitely about anything.
This isn’t necessarily against an unbounded function; I’m simply trying to draw out the implications.
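The expected-utility comparison above can be made concrete with a small sketch. Assume, purely for illustration, one particular unbounded utility function, u(n) = n (utility linear in years lived); the original argument does not commit to this form, but any unbounded choice behaves the same way: a fixed tiny probability times a long enough finite lifespan eventually exceeds the certain 500 years.

```python
# A hedged sketch, assuming u(n) = n as one unbounded utility function
# among many. Option A: live 500 years with certainty. Option B: live
# n years with probability p = 0.000000001% (1e-11), else die at once.

def expected_utility_A(u):
    """Expected utility of the certain 500-year life."""
    return u(500)

def expected_utility_B(u, n, p=1e-11):
    """Expected utility of the gamble; the ~10 remaining seconds in the
    losing branch contribute negligible utility and are ignored."""
    return p * u(n)

u = lambda years: years  # illustrative unbounded utility function

# With u(n) = n, B overtakes A once n > 500 / 1e-11 = 5e13 years --
# no infinite lifespan is needed, only a long enough finite one.
assert expected_utility_B(u, 5e13 + 1) > expected_utility_A(u)
assert expected_utility_B(u, 1e10) < expected_utility_A(u)
```

The same crossover exists for any unbounded u, though the threshold lifespan depends on how fast u grows; a bounded u, by contrast, can keep p·u(n) below u(500) for every n, which is the mathematical content of the "choose A" intuition.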
A: Live 500 years and then die, with certainty.
B: Live forever, with probability 0.000000001%; die within the next ten seconds, with probability 99.999999999%
If this were the only chance you ever get to determine your lifespan—then choose B.
In the real world, it would probably be a better idea to discard both options and use your natural lifespan to search for alternative paths to immortality.
I disagree, not surprisingly, since I was the author of the comment to which you are responding. I would choose A, and I think anyone sensible would choose A. There’s not much one can say here in the way of argument, but it is obvious to me that choosing B here is following your ideals off a cliff. Especially since I can add a few hundred 9s there, and by your argument you should still choose B.