Haha, I was not factoring that in. I assumed they were evil. Perhaps that was close-minded of me, though.
The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don’t lose their dignity, but what does dignity mean to the dead?
Some people would say that dying honorably is better than living dishonorably. I’m not endorsing this view; I’m just trying to figure out why it’s irrational, while the utilitarian sacrifice of children is more rational.
To put it in another light, what if this situation happened a hundred years ago? Would you be upset that the people alive at the time caved in to the aliens’ demands, or would you prefer the human race had been wiped out?
There are plenty of variables you can slide up and down to make one feel more or less comfortable with the scenario. But we already knew that, didn’t we? That’s what the original trolley problem tells us: that pushing someone off a bridge feels morally different than switching the tracks of a trolley. My concern is that I can’t figure out how to call one impulse (the discomfort at destroying autonomy) an objectively irrelevant mere impulse, and another impulse (the comfort at preserving life) an objectively good fact. It seems difficult to throw just the bathwater out here, but I’d really like to preserve the baby. (See my other post above, in response to Nesov.)
Some people would say that dying honorably is better than living dishonorably. I’m not endorsing this view; I’m just trying to figure out why it’s irrational, while the utilitarian sacrifice of children is more rational.
Utilitarian calculation is a more rational process for arriving at a decision, though for a specific question you can argue that its output (a decision) is inferior to the output of some other process, such as free-running deliberation or random guessing. When you compare the decisions of sacrificing the children and fighting a war to the death, the first isn’t “intrinsically utilitarian” and the second isn’t “intrinsically emotional”.
Which of the decisions is (actually) the better one depends on the preferences of the one who decides, and preferences are not necessarily reflected well in actions and choices. It’s instrumentally irrational for the agent to choose poorly according to its preferences. Systematic processes for decision-making allow agents to explicitly encode their preferences, and thus avoid some of the mistakes made with ad-hoc decision-making. Such systematic processes may be constructed in a preference-independent fashion, and then given preferences as parameters.
Utilitarian calculation is a systematic process for computing a decision in situations that are expected to break intuitive decision-making. The output of a utilitarian calculation is expected to be better than an intuitive decision, but there are situations when utilitarian calculation goes wrong. For example, the extent to which you value things could be specified incorrectly, or a transformation that computes how much you value N things based on how much you value one thing may be wrong. In other cases, the problem could be reduced to a calculation incorrectly, losing important context.
However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs “war to the death” as the right decision.
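(A minimal sketch of the “parameters” point above, to make it concrete: the same utilitarian machinery outputs different decisions depending on how the value parameters are specified. Everything in it, the function names, the probabilities, and the numbers, is invented for illustration and isn’t taken from the thread or the show.)

```python
# Illustrative only: a utilitarian calculation as a parameterized procedure.
# The decision it outputs depends on the supplied value parameters.

def expected_value(outcomes, value_of_outcome):
    """Probability-weighted sum of outcome values for one decision."""
    return sum(p * value_of_outcome(o) for p, o in outcomes)

def best_decision(decisions, value_of_outcome):
    """Return the decision whose expected value is highest under these parameters."""
    return max(decisions, key=lambda d: expected_value(decisions[d], value_of_outcome))

# Each decision maps to (probability, outcome) pairs; outcomes are
# (lives_preserved, autonomy_violated) tuples. All numbers are made up.
decisions = {
    "hand over the children": [(1.0, (6_700_000_000, True))],
    "war to the death": [(0.1, (6_700_000_000, False)),   # slim chance of victory
                         (0.9, (0, False))],              # likely extinction
}

def value_lives_only(outcome):
    # Parameter set A: value is linear in lives; autonomy carries no weight.
    lives, autonomy_violated = outcome
    return lives

def value_with_autonomy_penalty(outcome):
    # Parameter set B: violating autonomy carries an enormous (made-up) penalty.
    lives, autonomy_violated = outcome
    return lives - (10_000_000_000 if autonomy_violated else 0)

print(best_decision(decisions, value_lives_only))             # -> hand over the children
print(best_decision(decisions, value_with_autonomy_penalty))  # -> war to the death
```

Under parameter set B the same calculation outputs “war to the death”, which is the sense in which the parameters, rather than the machinery, carry the moral content.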
I’m not convinced utilitarian reasoning can always be applied to situations where two preferences come into conflict: calculating “secondary” uncertain factors which could influence the value of each decision ruins the possibility of exactness. Even in the trolley problem, in all its simplicity, each decision has repercussions whose values have some uncertainty. Thus a decision doesn’t always have a strict value, but a probable value distribution! We make a trolley decision by 1) considering only so many iterations in trying to get a value distribution, and 2) seeing if there is a satisfying lack of overlap between the two. When the two distributions overlap too much (and you know that they are approximate, due to the intractability of getting a perfect distribution), it’s really a wild guess to say one decision is best.
Utilitarian calculation helps the process by providing a means of deciding when each value probability distribution is sharply enough defined, and whether the overlap meets your internal maximum-overlap criterion (presuming that’s sharply defined!), but no amount of reasoning can solve every moral dilemma a person might face.
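(A rough sketch of the overlap idea, under assumptions of my own choosing: each decision’s value is estimated from a limited number of noisy samples, and one decision is only called “best” when it wins in a large enough fraction of them. The distributions and the 95% threshold are arbitrary illustrations, not anything specified in the comment above.)

```python
# Illustrative only: comparing two decisions via sampled value distributions.
import random

def sample_values(base_value, uncertainty, iterations=1000):
    """Crude value distribution: a base value plus noisy secondary repercussions."""
    return [random.gauss(base_value, uncertainty) for _ in range(iterations)]

def fraction_a_beats_b(samples_a, samples_b):
    """How often decision A outranks decision B across paired samples."""
    return sum(a > b for a, b in zip(samples_a, samples_b)) / len(samples_a)

# Invented numbers for a trolley-style choice with uncertain side effects.
switch_track = sample_values(base_value=5.0, uncertainty=4.0)
do_nothing = sample_values(base_value=1.0, uncertainty=4.0)

confidence = fraction_a_beats_b(switch_track, do_nothing)
if confidence > 0.95:  # an arbitrary internal overlap criterion
    print(f"switching looks clearly better ({confidence:.0%} of samples)")
else:
    print(f"the distributions overlap too much ({confidence:.0%}); calling it is a guess")
```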
Which of the decisions is (actually) the better one depends on the preferences of the one who decides
So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?
However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs “war to the death” as the right decision.
I don’t see why I should agree with this statement. I was understanding a utilitarian calculation as either a) the greatest happiness for the greatest number of people or b) the greatest preferences satisfied for the greatest number of people. If a), then it seems like it might predictably give you answers that are at odds with moral intuitions, and have no way of justifying itself against these intuitions. If b), then there’s nothing irrational about deciding to go to war with the aliens.
So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?
You can’t decide your preference. Preference is not what you actually do; it is what you should do, and it’s encoded in your decision-making capabilities in a nontrivial way, so that you aren’t necessarily capable of seeing what it is.
Compare preference to a solution to an equation: you can see the equation, you can take it apart into its constituent terms, but its solution is nowhere to be found explicitly. Yet this solution is (say) uniquely defined by the equation, and approximate methods for solving the equation (analogized to the actual decisions) tend to give their results in the general ballpark of the exact solution.
You can’t decide your preference. Preference is not what you actually do; it is what you should do, and it’s encoded in your decision-making capabilities in a nontrivial way, so that you aren’t necessarily capable of seeing what it is.
You’ve lost me.
The analogy in the next paragraph was meant to clarify. Do you see the analogy?
A person in this analogy is an equation together with an algorithm for approximately solving that equation. Decisions that the person makes are the approximate solutions, while preference is the exact solution hidden in the equation, which the person can’t solve exactly. The decision algorithm tries to make decisions as close to the exact solution as it can. The exact solution is what the person should do, while the output of the approximate algorithm is what the person actually does.
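(To make the analogy concrete with a toy example of my own, not anything specified above: take an equation whose exact root plays the role of the hidden preference, and a fixed-budget approximation procedure that plays the role of the decision algorithm.)

```python
# Illustrative only: exact solution = "preference", approximate solver = "decisions".

def equation(x):
    return x**2 - 2          # the "equation" the person embodies; its exact root is sqrt(2)

def decide(steps):
    """What the person actually does: a fixed budget of Newton steps from a crude start."""
    x = 1.0
    for _ in range(steps):
        x = x - equation(x) / (2 * x)   # Newton's update for this particular equation
    return x

exact_preference = 2 ** 0.5             # what the person "should" do, never written out explicitly
print(decide(1), decide(3), exact_preference)
# The approximations land near, but not exactly on, the exact solution.
```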
I suppose I’m questioning the validity of the analogy: equations are by nature descriptive, while what one ought to do is prescriptive. Are you familiar with the Is-Ought problem?
jwdink, I don’t think Vladimir Nesov is making an Is-Ought error. Think of this: You have values (preferences, desired ends, emotional “impulses” or whatever) which are a physical part of your nature. Everything you decide to do, you do because you Want to. If you refuse to acknowledge any criteria for behavior as valuable to you, you’re saying that what feels valuable to you isn’t valuable to you. This is a contradiction!
An Is-Ought problem arises when you attempt to derive a Then without an If. Here, the If is given: If you value what you value, then you should do what is right in accordance with your values.
But there seemed to be some suggestion that an avoidance of sacrificing the children, even at the risk of everyone’s lives, was a “less rational” value. If it’s a value, it’s a value… how do you call certain values invalid, or not “real” preferences?
I missed where Vladimir made that suggestion, though I’m sure others have. You can have an irrational value if it’s really a means and not an end (the end being another value), but you don’t recognize that and call the means a value itself. Means to an end can of course be evaluated as rational. If anyone made the suggestion you mention, they probably presumed a single “basic” value of preserving lives, and considered the method of deciding to be a means, though they denoted it as a value.
(Of course, a value can be both a means and an end, which presents fun new complications...)
I agree generally that this is what an irrational value would mean. However, the presiding implicit assumption was that the utilitarian ends were the correct ones, and therefore the presiding explicit assumption (or at least, I thought it was presiding… now I can’t seem to get anyone to defend it, so maybe not) was that the most efficient means to these particular ends were the most rational.
Maybe I was misunderstanding the presiding assumption, though. It was just stuff like this:
Lesswrongers will be encouraged to learn that the Torchwood characters were rationalists to a man and woman—there was little hesitation in agreeing to the 456′s demands.
Or this, in response to a call to “dignity”:
How many lives is your dignity worth? Would you be willing to actually kill people for your dignity, or are you only willing to make that transaction if someone else is holding the knife?
I think I hear you, but this comment is way confusing.
Haha, we must have very different criteria for “confusing.” I found that post very clear, and I’ve struggled quite a bit with most of your posts. No offense meant, of course: I’m just not very versed in the LW vernacular.
My comments can be confusing, or difficult to get over the wider inferential gaps. In this case I meant that nickernst’s comment could just be expressed much more clearly.
The problem is a confusion. Human preference is something implemented in the very real human brain.
That’s not a particularly helpful or elucidating response. Can you flesh out your position? It’s impossible to tell what it is based on the paltry statements you’ve provided. Are you asserting that the “equation” or “hidden preference” is the same for all humans, or ought to be the same, and therefore is something objective/rational?
Preference of a given human is defined by their brain, and can be somewhat different from person to person, but not too much. There is nothing “objective” about this preference, but for each person there is one true preference that is their own, and the same could be said for humanity as a whole, with the whole planet defining its preference instead of just one brain. The focus on the brain isn’t very accurate, though, since the environment plays its part as well.
I can’t do justice to the centuries-old problem with a few words, but the idea is more or less this. Whatever the concept of “preference” means, when human philosophers talk about it, their words are caused by something in the world: “preference” must be either a mechanism in their brain, a name for their confusion, or something else. It’s not epiphenomenal. Searching for the “ought” in the world outside human minds is more or less a guaranteed failure, especially if the answer is expected to be found explicitly, as an exemplar of perfection rather than evidence about what perfection is, to be interpreted in a nontrivial way. But the history of failure to find an answer while looking in the wrong place doesn’t prove that the answer is nowhere to be found, or that there is now positive knowledge about the absence of the answer in the world.
Okay, so I’ll ask again: why couldn’t the humans’ real preference be to not sacrifice the children? Remember, you said:
You can’t decide your preference. Preference is not what you actually do; it is what you should do
You haven’t really elucidated this. You’re either pulling an ought out of nowhere, or you’re saying “preference is what you should do if you want to win”. In the latter case, you still haven’t explained why giving up the children is winning, and not doing so is not winning.
And the link you gave doesn’t help at all, since, if we’re going to be looking at moral impulses common to all cultures and humans, I’m pretty sure not sacrificing children is one of them. See: Jonathan Haidt
Okay, so I’ll ask again: why couldn’t the humans’ real preference be to not sacrifice the children? [...] In the latter case, you still haven’t explained why giving up the children is winning, and not doing so is not winning.
It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I’m merely giving opinion-neutral meta-comments about the semantics of such opinions. (I’m not sure I’m reading this right.)
You can’t decide your preference. Preference is not what you actually do; it is what you should do.
You haven’t really elucidated this. You’re either pulling an ought out of nowhere, or you’re saying “preference is what you should do if you want to win”.
Preference defines what constitutes winning: your actions rank high in the preference order if they determine a world that ranks high in the preference order. Preference can’t be reduced to winning or to actions, as these are all sides of the same structure.
It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I’m merely giving opinion-neutral meta-comments about the semantics of such opinions. (I’m not sure I’m reading this right.)
...so you’re NOT attempting to respond to my original question? My original question was “what’s irrational about not sacrificing the children?”
There is nothing intrinsically irrational about any action; rationality or irrationality depends on preference, which is the point I was trying to communicate. Any question about the “rationality” of a decision is a question about the correctness of preference-optimization. So my reply to your original question is that the question is ill-posed, and the content of the reply was an explanation as to why.
Okay, that’s fine. So you’ll agree that the various people—who were saying that the decision made in the show was the rational route—these people were speaking (at least somewhat) improperly?
Some people would say that dying honorably is better than living dishonorably. I’m not endorsing this view; I’m just trying to figure out why it’s irrational, while the utilitarian sacrifice of children is more rational.
If a decision decreases [personal] utility, is it not irrational?
Some people would say that it is dishonourable to hand over your wallet to a crackhead with a knife. When I was actually in that situation, though (hint: not as the crackhead), I didn’t think about my dignity. I just thought that refusing would be the dumbest, least rational possible decision. The only time I’ve ever been in a fight is when I couldn’t run away. If behaving honourably is rational then being rational is a good way to get killed. I’m not saying that being rational always leads to morally satisfactory decisions. I am saying that sometimes you have to choose moral satisfaction over rationality… or the reverse.
As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?
If a decision decreases utility, is it not irrational?
I don’t see how you could go about proving this.
As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?
Well, wait. Are we dealing with the happiness that results in the aftermath, or are we dealing with the moral value of the actions themselves? Surely these two are discrete. Don’t the intentions behind an action factor into the morality of the action? Or are the results all that matter? If intentions are irrelevant, does that mean that inanimate objects (entities without intentions, good or bad) can do morally good things? If a tornado diverts from a city at the last minute, was that a morally good action?
I think intentions matter. It might be the case that, 100 years later, the next generation will be happier. That doesn’t mean that the decision to sacrifice those children was the morally good decision—in the same way that, despite the tornado-free city being a happier city, it doesn’t mean the tornado’s diversion was a morally good thing.
I should have said “decreases personal utility.” When I say rationality, I mean rationality. Decreasing personal utility is the opposite of “winning”.
Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning”.
Couldn’t these people care about not sacrificing autonomy, and therefore this would be a value that they’re successfully fulfilling?
Yes, they could care about either outcome. The question is whether they did, whether their true hidden preferences said that a given outcome is preferable.
What would be an example of a hidden preference? The post to which you linked didn’t explicitly mention that concept at all.
All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.
All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.
Okay… so again, I’ll ask… why is it irrational to NOT sacrifice the children? How does it go against hidden preference (which, perhaps, it would be prudent to define)?
I understand your frustration, since we don’t seem to be saying much to support our claims here. We’ve discussed relevant issues of metaethics quite heavily on Less Wrong, but we should be willing to enter the debate again as new readers arrive and raise their points.
However, there’s a lot of material that’s already been said elsewhere, so I hope you’ll pardon me for pointing you towards a few early posts of interest right now instead of trying to summarize it in one go.
Torture vs. Dust Specks kicked off the arguing; Eliezer began arguing for his own position in Circular Altruism and The “Intuitions” Behind “Utilitarianism”. Searching LW for keywords like “specks” or “utilitarian” should bring up more recent posts as well, but these three sum up more or less what I’d say in response to your question.
(There’s a whole metaethics sequence later on (see the whole list of Eliezer’s posts from Overcoming Bias), but that’s less germane to your immediate question.)
Oh, it’s no problem if you point me elsewhere. I should’ve specified that that would be fine. I just wanted some definition. The only link that was given, I believe, was one defining rationality. Thanks for the links, I’ll check them out.
All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.
It’s especially hard if you use models based on utility maximizing rather than on predicted error minimization, or if you assume that human values are coherent even within a given individual, let alone humanity as a whole.
That being said, it is certainly possible to map a subset of one’s preferences as they pertain to some specific subject, and to do a fair amount of pruning and tuning. One’s preferences are not necessarily opaque to reflection; they’re mostly just nonobvious.