If you believe that there is something with arbitrarily high utility, then, by definition, you will accept an arbitrarily small probability of obtaining it.
Assume my life has a utility of 10 right now. My preferences are such that there is absolutely nothing I would take a 99% chance of dying for. Then, by definition, there’s nothing with a utility of 1000 or more. The problem comes from assuming that there is such a thing when there isn’t. I don’t see how this is scope insensitivity; it’s just how my preferences are.
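To spell out the arithmetic (a sketch, assuming death is worth 0 and using the numbers above): taking a gamble with a 1% chance of some prize X and a 99% chance of dying beats keeping a sure life worth 10 only if

\[
0.01 \cdot U(X) + 0.99 \cdot 0 > 10 \quad\Longleftrightarrow\quad U(X) > 1000,
\]

so refusing every such gamble is just another way of saying that nothing has utility 1000 or more.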
Someone who really had an unbounded utility function would really take as many steps down the Lifespan Dilemma path as Omega allowed. That’s really what they’d prefer. Most of us just don’t have a utility function like that.
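As a minimal sketch of that claim (with purely illustrative numbers, not the figures from the original Lifespan Dilemma post, and assuming a utility function that is linear and hence unbounded in lifespan): each offer that multiplies the promised lifespan by 1000 while multiplying the survival probability by 0.999 raises expected utility by a factor of 999, so such an agent accepts every step even as its chance of surviving at all dwindles.

    # Toy model: Omega-style offers against an unbounded, linear-in-lifespan utility.
    # All numbers are illustrative assumptions, not the original dilemma's figures.
    def expected_utility(prob_survive, lifespan_years):
        # Assumed utility: linear in lifespan, with death worth 0.
        return prob_survive * lifespan_years

    prob, lifespan = 0.8, 1e10  # hypothetical starting offer
    for step in range(1, 11):
        new_prob, new_lifespan = prob * 0.999, lifespan * 1000
        # 0.999 * 1000 = 999 > 1, so each trade raises expected utility
        if expected_utility(new_prob, new_lifespan) > expected_utility(prob, lifespan):
            prob, lifespan = new_prob, new_lifespan
        print(f"offer {step:2d}: p(survive) = {prob:.4f}, lifespan = {lifespan:.1e} years")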
So you wouldn’t die to save the world? Or do you mean hypothetically if you had those preferences?
I agree with the basic argument; it is the same thing I said. But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.
If the world is doomed immediately unless I die for it, I have a 100% chance of dying immediately, so I might as well die to save the world. But if it’s a choice between living another 50 years and then the world ending, or dying right now and saving the world, and no one would know, I wouldn’t die to save the world. I’m too selfish for that.
Then he should keep taking Omega’s offers, and any discomfort he has with that is faulty intuition, like the discomfort from choosing TORTURE over SPECKS.
I would die right now to prevent the world from ending 50 years from now. It’s actually hard for me to even imagine that you’re as selfish as you say. If the situation actually came up you might find out differently. But I guess it’s possible.
You might be right that Eliezer should simply accept the Lifespan Dilemma as the necessary consequence of his utility function (at least as he defines it).
Really? Why? I can’t imagine myself dying to save the world; it’s completely implausible to me and I have a hard time understanding what it would feel like to be willing to do so. But people often die for much less.
Are you married? If so, would you die to save your wife’s life?
Or if you’re not married, what about your mother?
Do you find it hard to imagine those things too?
It’s simple. The ‘selfish’ terminology is just obscuring matters. Just take your feelings about one thing (your life) and substitute something else (someone else’s life) for it.
Unknowns’ utility function is of a type that assigns infinitely high utility to saving the world. Not saving the world is simply not an option. That’s what Unknowns wants.
Edit: Forget what I said about Unknowns previously.
Blueberry was the one who introduced the “selfish” terminology. He said, “I wouldn’t die to save the world. I’m too selfish for that.”
I’m really sorry. I confused you with someone else I talked to yesterday. My mistake; I edited my comment and will take more care in the future.
Thank you.