If a decision decreases utility, is it not irrational?
I don’t see how you could go about proving this.
As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?
Well, wait. Are we dealing with the happiness that results in the aftermath, or are we dealing with the moral value of the actions themselves? Surely these two are discrete. Don’t the intentions behind an action factor into the morality of the action? Or are the results all that matter? If intentions are irrelevant, does that mean that inanimate objects (entities without intentions, good or bad) can do morally good things? If a tornado diverts from a city at the last minute, was that a morally good action?
I think intentions matter. It might be the case that, 100 years later, the next generation will be happier. That doesn’t mean that the decision to sacrifice those children was the morally good decision—in the same way that, despite the tornado-free city being a happier city, it doesn’t mean the tornado’s diversion was a morally good thing.
Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning”.
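As an aside for readers who want that definition made concrete, here is a minimal sketch of "choosing actions that steer the future toward outcomes ranked higher in your preferences" for a toy agent. Every name, probability, and score in it is an invented placeholder for illustration, not anything specified on LW:

```python
# Toy sketch of "instrumental rationality": choose the action whose predicted
# outcomes rank highest under the agent's own preferences ("winning").
# All outcomes, probabilities, and scores are invented placeholders.

preferences = {"outcome_A": 3, "outcome_B": 1, "outcome_C": -5}  # higher = more preferred

world_model = {  # assumed P(outcome | action) for two candidate actions
    "action_1": {"outcome_A": 0.6, "outcome_C": 0.4},
    "action_2": {"outcome_B": 1.0},
}

def preference_weighted_score(action):
    """Probability-weighted preference rank of an action's predicted outcomes."""
    return sum(p * preferences[outcome] for outcome, p in world_model[action].items())

best_action = max(world_model, key=preference_weighted_score)
print(best_action)  # "action_2": it steers toward higher-ranked outcomes here
```

In this picture, "winning" just means the chosen action's predicted outcomes sit higher in the agent's own ranking than the alternatives' do; nothing about the ranking has to be selfish.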
Couldn’t these people care about not sacrificing autonomy, and therefore this would be a value that they’re successfully fulfilling?
I should have said “decreases personal utility.” When I say rationality, I mean rationality. Decreasing personal utility is the opposite of “winning”.
Yes, they could care about either outcome. The question is whether they did, whether their true hidden preferences said that a given outcome is preferable.
What would be an example of a hidden preference? The post to which you linked didn’t explicitly mention that concept at all.
All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.
Okay… so again, I’ll ask… why is it irrational to NOT sacrifice the children? How does it go against hidden preference (which, perhaps, it would be prudent to define)?
I understand your frustration, since we don’t seem to be saying much to support our claims here. We’ve discussed relevant issues of metaethics quite heavily on Less Wrong, but we should be willing to enter the debate again as new readers arrive and raise their points.
However, there’s a lot of material that’s already been said elsewhere, so I hope you’ll pardon me for pointing you towards a few early posts of interest right now instead of trying to summarize it in one go.
Torture vs. Dust Specks kicked off the arguing; Eliezer began arguing for his own position in Circular Altruism and The “Intuitions” Behind “Utilitarianism”. Searching LW for keywords like “specks” or “utilitarian” should bring up more recent posts as well, but these three sum up more or less what I’d say in response to your question.
(There’s a whole metaethics sequence later on (see the whole list of Eliezer’s posts from Overcoming Bias), but that’s less germane to your immediate question.)
Oh, it’s no problem if you point me elsewhere. I should’ve specified that that would be fine. I just wanted some definition. The only link that was given, I believe, was one defining rationality. Thanks for the links, I’ll check them out.
All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.
It’s especially hard if you use models based on utility maximization rather than on prediction error minimization, or if you assume that human values are coherent even within a given individual, let alone across humanity as a whole.
That being said, it is certainly possible to map a subset of one’s preferences as they pertain to some specific subject, and to do a fair amount of pruning and tuning. One’s preferences are not necessarily opaque to reflection; they’re mostly just nonobvious.
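To make "mapping a subset of one's preferences" slightly more concrete, here is a minimal sketch that treats a handful of reflectively stated pairwise judgments as a directed graph and checks whether they are mutually consistent. The items and judgments are invented for illustration, and nothing here assumes any particular model (utility-maximizing or otherwise) of where the judgments come from:

```python
# Sketch: represent a small subset of stated preferences as a directed graph
# ("a" -> "b" meaning "a" is preferred to "b") and check it for cycles.
# A cycle means the stated judgments cannot all stand and need pruning.
# The items and judgments below are invented placeholders.

from collections import defaultdict

judgments = [("honesty", "comfort"), ("comfort", "status"), ("status", "honesty")]

graph = defaultdict(list)
for better, worse in judgments:
    graph[better].append(worse)

def has_preference_cycle(graph):
    """Return True if the stated pairwise preferences contain a loop."""
    UNSEEN, IN_PROGRESS, DONE = 0, 1, 2
    state = defaultdict(int)

    def visit(node):
        state[node] = IN_PROGRESS
        for nxt in graph[node]:
            if state[nxt] == IN_PROGRESS:
                return True                      # back edge: a preference loop
            if state[nxt] == UNSEEN and visit(nxt):
                return True
        state[node] = DONE
        return False

    return any(state[n] == UNSEEN and visit(n) for n in list(graph))

print(has_preference_cycle(graph))  # True: these three judgments conflict
```

A detected cycle is one concrete sense in which stated preferences can turn out to need the "pruning and tuning" mentioned above, even while the underlying values stay mostly nonobvious.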