Can anyone explain what goes wrong if you say something like, “The marginal utility of my terminal values diminishes, so their total utility increases only asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)” (or indeed whether it goes wrong at all)?
That’s been done in this paper, section VI, “The Asymptotic Gambit”.
Thank you. I had expected the bottom to drop out of it somehow.
EDIT: Although come to think of it, I’m not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e., there would not be a difference between “kill 1 person, prevent 1000 mutilations” + “kill 1 person, prevent 1000 mutilations” and “kill 2 people, prevent 2000 mutilations”). Will have to think on it some more.
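To make the worry concrete (the bounded form below is just an illustration I’m assuming, not anything taken from the paper): suppose the value of preventing mutilations saturates, say

$$u(n) = M\left(1 - e^{-n/c}\right).$$

Then $u(1000) + u(1000) = 2M(1-x) > M(1-x^2) = u(2000)$ for $x = e^{-1000/c}$, since $2(1-x) - (1-x^2) = (1-x)^2 > 0$. Evaluated one decision at a time, two “kill 1, prevent 1000” choices look better than the single “kill 2, prevent 2000” choice, even though the outcomes are identical. The TDT-like move is to evaluate the whole policy at once, so both framings collapse into the same comparison and the discrepancy disappears.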
Can anyone explain what goes wrong if you say something like, “The marginal utility of my terminal values diminishes, so their total utility increases only asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)” (or indeed whether it goes wrong at all)?

Nothing, if that happens to be what your actual preferences are. If your preferences are not as you describe, and you are instead confused by an inconsistency in your intuitions, then you will make incorrect decisions.
The challenge is not to construct a utility function such that you can justify it to others in the face of opposition. The challenge is to work out what your actual preferences are and implement them.
Ayup. Also, it may be worth saying explicitly that a lot of the difficulty comes in working out a model of my actual preferences that is internally consistent and can be extended to apply to novel situations. If I give up those constraints, it’s easier to come up with propositions that seem to model my preferences, because they approximate particular aspects of my preferences well enough that in certain situations I can’t tell the difference. And if I don’t ever try to make decisions outside of that narrow band of situations, that can be enough to satisfy me.
The challenge is to work out what your actual preferences are and implement them.

[Edited to separate from quote]

But doesn’t that beg the question? Don’t you have to ask the meta-question “what kinds of preferences are reasonable to have?” Why should we shape ethics around the way evolution happened to set up our values? That’s why I favor hedonistic utilitarianism, which is about actual states of the world that can be bad in themselves (i.e., suffering).
Note that markup requires a blank line between your quote and the rest of the topic.
It does beg a question: specifically, the question of whether I ought to implement my preferences (or some approximation of them) in the first place. If, for example, my preferences are instead irrelevant to what I ought to do, then time spent working out my preferences is time that could better have been spent doing something else.
All of that said, it sounds like you’re suggesting that suffering is somehow unrelated to the way evolution set up our values. If that is what you’re suggesting, then I’m completely at a loss to understand either your model of what suffering is, or how evolution works.
The fact that suffering feels awful is about the experience itself, and nothing else. There’s no valuing required; no being asks itself “should I dislike this experience?” while it is suffering. It wouldn’t be suffering otherwise.

My position implies that in a world without suffering (or happiness, if I were not a negative utilitarian), nothing would matter.
Depends on what I’m trying to do.
If I make that assumption, then it follows that given enough Torture to approach its limit, I choose any number of Dust Specks rather than that amount of Torture.
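Concretely (the functional forms here are an illustrative assumption on my part, not anything the thought experiment specifies): take

$$u(n \text{ specks}) = -S\left(1 - e^{-n/a}\right), \qquad u(m \text{ units of torture}) = -T\left(1 - e^{-m/b}\right),$$

with $T \gg S$. No number of specks can ever be worse than $-S$, while enough torture drives its term arbitrarily close to $-T$; once it drops below $-S$, the specks are preferred for every $n$.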
If my goal is to come up with an algorithm that leads to that choice, then I’ve succeeded.
(I think talking about Torture and Dust Specks as terminal values is silly, but it isn’t necessary for what I think you’re trying to get at.)