It is easy to describe that as a “utility function”, but describing behavior that way doesn’t mean the function is trivial or always zero. In principle, there could be utility functions under which people would behave like perfect utility maximizers.
Scope insensitivity might look (to us) like an agent’s utility function flattening out, with the world itself being the only real thing. But if we allow a “limited utility function” to explain away every such case, then we can never say anything negative about utility functions at all. In fact, the “limited utility function” may not exist as a distinct object: it is a possible model, but not a universal one for every purpose we might consider.
I’m not sure this is true, but it seems that in many situations a limited utility function can make people behave less ethically. Still, I don’t think one has to worry much about this particular scenario.
This is a good post, but it’s not something that would save a person by itself. Is the claim simply that utility functions are always zero?
It might be worth looking into this, because I don’t think it makes sense to rely only on the inside view of the utility function; and if the claim is true, the underlying view is worth examining as well.
I think those questions are interesting to argue about, but I’m not sure how to resolve them in a way that avoids a bad outcome.
I think humans very commonly model their environment, and I like the terminology, but I worry that the examples given are straw men. What should really be done is to establish a good set of terms with clear definitions, and settle on good names first, before trying to judge what is “really” going on.
I think people should be able to use existing terms more broadly. It makes sense to talk about utilities over possible worlds and why we should want common words for them, so I’d be interested to better understand what they mean.
If you’re interested in how people actually work and which of these supposed advantages are real, I’d be especially interested in seeing a variety of explanations for why utility functions aren’t what they would be under similar circumstances.
If you’re interested in this post, see http://philpapers.org/surveys/results.pl.Abstract.