How does one define “bad” without “pain” or “suffering”? Seems rather difficult. Or: the question doesn’t seem difficult so much as (almost) tautological. It’s like asking “What, if anything, is hot about atoms moving more quickly?”
Oh, it’s no problem if you point me elsewhere. I should’ve specified that that would be fine. I just wanted some definition. The only link that was given, I believe, was one defining rationality. Thanks for the links, I’ll check them out.
All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.
Okay… so again, I’ll ask… why is it irrational to NOT sacrifice the children? How does it go against the hidden preference (which, perhaps, it would be prudent to define)?
That’s not a particularly helpful or elucidating response. Can you flesh out your position? It’s impossible to tell what it is based on the paltry statements you’ve provided. Are you asserting that the “equation” or “hidden preference” is the same for all humans, or ought to be the same, and therefore is something objective/rational?
What would be an example of a hidden preference? The post to which you linked didn’t explicitly mention that concept at all.
I suppose I’m questioning the validity of the analogy: equations are by nature descriptive, while what one ought to do is prescriptive. Are you familiar with the Is-Ought problem?
Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning”.
Couldn’t these people care about not sacrificing autonomy, in which case refusing would be a value they’re successfully fulfilling?
You can’t decide your preference; preference is not what you actually do, it is what you should do, and it’s encoded in your decision-making capabilities in a nontrivial way, so that you aren’t necessarily capable of seeing what it is.
You’ve lost me.
Which of the decisions is (actually) the better one depends on the preferences of the one who decides.
So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?
However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs “war to the death” as the right decision.
I don’t see why I should agree with this statement. I was understanding a utilitarian calculation as either a) the greatest happiness for the greatest number of people or b) the greatest preferences satisfied for the greatest number of people. If a), then it seems like it might predictably give you answers that are at odds with moral intuitions, and have no way of justifying itself against these intuitions. If b), then there’s nothing irrational about deciding to go to war with the aliens.
Well then… I’d say a morality that treats the dignity of a few people (the decision makers) as more important than, well, the lives and well-being of the majority of the human race is not a very good morality.
Okay. Would you say this statement is based on reason?
If a decision decreases utility, is it not irrational?
I don’t see how you could go about proving this.
As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?
Well, wait. Are we dealing with the happiness that results in the aftermath, or are we dealing with the moral value of the actions themselves? Surely these two are distinct. Don’t the intentions behind an action factor into the morality of the action? Or are the results all that matter? If intentions are irrelevant, does that mean that inanimate objects (entities without intentions, good or bad) can do morally good things? If a tornado diverts from a city at the last minute, was that a morally good action?
I think intentions matter. It might be the case that, 100 years later, the next generation will be happier. That doesn’t mean the decision to sacrifice those children was the morally good decision, just as the fact that the tornado-free city is a happier city doesn’t mean the tornado’s diversion was a morally good thing.
Ah, then I misunderstood. A better way of phrasing my challenge might be: it sounds like we might have different algorithms, so prove to me that your algorithm is more rational.
No one has answered this challenge.
Well, sure, when you phrase it like that. But your language begs the question: it assumes that the desire for dignity/autonomy is just an impulse/fuzzy feeling, while the desire for preserving human life is an objective good that is the proper aim for all (see my other posts above). This sounds probable to me, but it doesn’t sound obvious / rationally derived / etc.
I could, after all, phrase it in the reverse manner. IF I assume that dignity/autonomy is objectively good:
then the question becomes “everyone preserves their objectively good dignity” vs. “just about everyone loses their dignity by destroying human autonomy, but we get that warm fuzzy feeling of saving some people.” In this situation, “Everyone loses their dignity, but at least they get to survive—in the way that any other undignified organism (an amoeba) survives” would actually seem to be a highly immoral decision.
I’m not endorsing either view, necessarily. I’m just trying to figure out how you can claim one of these views is more rational or logical than the other.
What do the space monsters deserve?
Haha, I was not factoring that in. I assumed they were evil. Perhaps that was closed-minded of me, though.
The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don’t lose their dignity, but what does dignity mean to the dead?
Some people would say that dying honorably is better than living dishonorably. I’m not endorsing this view, I’m just trying to figure out why it’s irrational, while the utilitarian sacrifice of children is more rational.
To put it in another light, what if this situation happened a hundred years ago? Would you be upset that the people alive at the time caved in to the aliens’ demands, or would you prefer the human race had been wiped out?
There are plenty of variables you can slide up and down to make one feel more or less comfortable with the scenario. But we already knew that, didn’t we? That’s what the original trolley problem tells us: that pushing someone off a bridge feels morally different than switching the tracks of a trolley. My concern is that I can’t figure out how to call one impulse (the discomfort at destroying autonomy) an objectively irrelevant mere impulse, and another impulse (the comfort at preserving life) an objectively good fact. It seems difficult to throw just the bathwater out here, but I’d really like to preserve the baby. (See my other post above, in response to Nesov.)
Yeah, the sentiment expressed in that post is usually my instinct too.
But then again, that’s the problem: it’s an instinct. If my utilitarian impulse is just another impulse, then why does it automatically outweigh any other moral impulses I have, such as a value of human autonomy? If my utilitarian impulse is NOT just an impulse, but somehow is objectively more rational and outranks other moral impulses, then I have yet to see a proof of this.
I don’t quite understand how your rhetorical question is analogous here. Can you flesh it out a bit?
I don’t think the notion of dignity is completely meaningless. After all, we don’t just want the maximum number of people to be happy, we also want people to get what they deserve—in other words, we want people to deserve their happiness. If only 10% of the world were decent people, and everyone else were immoral, which scenario would seem more morally agreeable: the one in which the decent 10% were ensured perennial happiness at the expense of the other 90%’s misery, or the reverse?
I’m just seeing something parallel here: it’s not brute number of people living that matters, so much as those people having worthwhile existences. After sacrificing their children on a gamble, do these people really deserve the peace they get?
(Would you also assert that Ozymandias’ decision in Watchmen was morally good?)
I’m surprised that was so downvoted too.
Perhaps I should rephrase it: I don’t want to assert that it would’ve been objectively better for them to not give up the children. But can someone explain to me why it’s MORE rational to give up in this situation?
That’s horrible. They should’ve fought the space monsters in an all-out war. Better to die like that than to give up your dignity. I’m surprised they took that route on the show.
A good example of this (I think) is The Dark Knight, with the two ferries.
Excellent response.
As a side note, I do suspect that there’s a big functional difference between an entity that feels a small voice in the back of the head and an entity that feels pain like we do.