The question presupposes that by continuing to live you fulfill your values better. It might be that after a couple of millennia, additional millennia don’t really add that much.
I am presuming that if immortality is possible then its value is transfinite, and thus any finite chance of attaining it (infinitesimals might still lose) overrides all other considerations.
A translation to a more human-scale problem is: “Are there acts you should take even if taking them would cost your life, no matter how well you think you could use your future life?” The disanalogy is that human lives are assumed to be finite (note that if you genuinely think there is a chance that a particular human is immortal, it is just the original question again). This can lead to a stance where you estimate what a human life in good conditions could achieve, without regard to your particular condition, and if your particular conditions allow an even better option, you take it.

That could justify risking your life for relatively minor advantages in the Middle Ages, when death was looming very relevantly anyway. In those times the relevant question might have been “What can I achieve before I cause my own death?”; since then, trying to die of old age (i.e. not actively causing your own death) has become a live option that breaks the old framing of the question. But if you take seriously the imperative to shoot for old age, then a street where you estimate a 1% risk of ending up in a mugging, with a 1% chance of the mugging ending with you getting shot, is ruled out as a way to get around.
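To make the arithmetic in that street example concrete, here is a minimal sketch; the 1% figures are the hypothetical numbers from above, and the trip count is just an assumption for illustration:

```python
# Illustrative arithmetic for the street example above; all numbers are the
# hypothetical ones from the comment, not real statistics.

p_mugging = 0.01              # chance a single trip turns into a mugging
p_shot_given_mugging = 0.01   # chance a mugging ends with you getting shot

p_death_per_trip = p_mugging * p_shot_given_mugging   # 1e-4, i.e. 1 in 10,000

# Assumed trip count: roughly daily use of the street for about three years.
trips = 1000
p_survive_all_trips = (1 - p_death_per_trip) ** trips

print(f"Death risk per trip:         {p_death_per_trip:.4%}")
print(f"Survival over {trips} trips: {p_survive_all_trips:.2%}")
```

If continued survival is assigned transfinite value as above, even that roughly 10% cumulative death risk, or indeed any nonzero risk, outweighs whatever finite convenience the street provides.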
In analogy, as long as there is heat there will be computational uncertainty, which means there will always be ambient risk of things going wrong. That is, you might have high certainty of functioning in some way indefinitely, but functioning in a sane way is much less certain. And all options for acting and thinking involve energy use, and thus carry an increasing risk of insanity.
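A minimal sketch of that ambient-risk point, assuming for simplicity a constant per-step corruption probability (the comment only claims the risk is nonzero; the figure below is made up):

```python
# If every computation step carries some nonzero chance of a corrupting error
# (thermal noise, bit flips, ...), the probability of still operating sanely
# decays geometrically with the number of steps and tends to zero over an
# unbounded future, even though "functioning in some way" may persist.

p_corruption_per_step = 1e-12   # hypothetical per-step chance of going "insane"

for steps in (1e9, 1e12, 1e15):
    p_still_sane = (1 - p_corruption_per_step) ** steps
    print(f"After {steps:.0e} steps: P(still sane) ~ {p_still_sane:.3f}")
```

So even with very high confidence in each individual step, indefinitely sane operation becomes arbitrarily unlikely over a long enough future, which is what makes the transfinite-value framing above so demanding.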
It is easy to think of that as a “utility function”, but that doesn’t mean the utility function is doing any real work: we could always write down utility functions under which people come out as perfect utility maximizers.
Scope insensitivity might play out, to us, as something like the agent’s marginal utility going to zero at large scales, with the only real thing being the world itself. But if a “limited utility function” can always be invoked to explain that, then we can never really say anything negative about utility functions. And the “limited utility function” doesn’t really exist in the agent; it is a description that is possible for some purposes but not universal for every purpose we can consider.
I’m not sure that this is true, but it seems like in many situations having a limited utility function can make people behave less ethically; I don’t think one has to worry much about this particular scenario, though.
This is a good post, but it isn’t the kind of thing that would save a person. Is the point just that utility functions are always zero?
It might be worth looking into this, because I don’t think it makes sense to rely only on the inside view of the utility function; and if the claim is true, the underlying view is worth examining as well.
I think those questions are interesting to argue about, but I’m not sure how to resolve problems like these in a way that avoids a bad outcome.
I think humans are a very commonly used model of the environment, and I like the terminology, but I worry that the examples given are just straw men. What should really be done is to establish a good set of terms with clear definitions, and to settle on good names first, before trying to judge what is “really” going on.
I think people should be able to use existing terms more broadly. It makes sense to talk about utilities over possible worlds and about why we should want common words for them, so I’d be interested to understand better what those terms mean.
If you’re interested in how people actually work and which of these supposed advantages are real, I’d be especially interested in seeing a variety of explanations for why human utility functions aren’t the way they would be under similar circumstances.
If you’re interested in this post, see http://philpapers.org/surveys/results.pl.Abstract.