I think no, and I think that if you’d say yes, you also ought to have the nerve to explain to victims like those quoted above how you think that their fate was better than the entirely counterfactual alternative.
Regardless of whether or not I agree with his position here, I think this is an unfair standard to set.
If you chose a 90% chance of saving 500 people over a 100% chance of saving 400, got unlucky, and those 500 died, how forgiving do you think their families would be? Do you think it would be easy to face them?
I don’t think this sort of moral lever is very useful for separating good choices from bad ones.
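For the arithmetic behind that example, the expected number of lives saved under each option is

$$0.9 \times 500 = 450 \qquad \text{versus} \qquad 1.0 \times 400 = 400,$$

which is the sense in which the gamble is treated below as “the right choice”.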
No. But I still would. And I’d let them take it all out on me. I’d hate to live in a world where anything less could be expected of me. Some things ought not to be easy to live with.
“The man who passes the sentence should swing the sword.”
If we make the right choice as difficult to live with as the wrong one, or more so, we’re not doing a very good job of incentivizing people to take it.
Moreover, if we insist that good, moral* people think about making decisions in this way, this leads to more of the decisions being made by evil, immoral people.
*for all values of “good” and “moral”.
Given the way real-world humans behave, incentives work as a blunt instrument. You can’t incentivize only rational decisions without incentivizing irrational decisions that are somewhat similar in form. Incentivizing the 90% chance of saving 500 over the 100% chance of saving 400 would make the right choice more likely in that specific situation, but would also incentivize wrong choices (for instance, taking a 10% chance of 500 people dying in order to implement something that you are really certain would have good effects, when that certainty is unwarranted). You can’t change human psychology to make the incentive work only on rational choices, so we’re overall better off without the incentive.
From the outside view, a randomly picked choice to kill or hurt a large number of people, when made by actual humans, will turn out all wrong and unjustifiable in retrospect, say, 90% of the time. If we’re talking about torture as opposed to just killing enemies, it’s literally only there to create a lasting climate of terror and alienation (in the society being “reshaped” and “reformed”) while giving an outlet to the kind of psychopaths who end up running the repressive machine. So it would make sense to have a very very strong prior against this kind of thing, AKA moral injunction.
Again, if we’re considering counterfactuals along great timespans, we ARE considering counterfactuals along great timespans. Equally. If the counterfactual to a world where Pinochet didn’t take power is a long and bloody civil war, the counterfactual to a world where Pinochets are hated and considered indefensible… is a lot more Pinochets. (Whom we also just served with a much more widely accepted excuse for their horrific acts.)
To work at all, moral injunctions need to rely on blanket statements. Would you rather have “Thou shalt not kill”, or “Thou shalt not kill unless thou sees a really good reason to and it’s totally for the greater good”?
As a rule, which is to say as a rule with exceptions. Rules are generally needed because it is not generally possible to accurately figure out consequences. But sometimes it is, in which case it is OK to suspend the rule. As a rule.
Exactly what I’ve been thinking of. But, as a meta-meta-rule, no-one should generally be the judge in one’s own case, i.e. to simply assert that it’s OK to suspend some particular rules for some particular act just because one has predicted some particular consequences.
There’s the problem of enforcement mechanisms, of course.
If someone approves of Pinochet, this is unlikely to be a convincing argument to them. Especially if they view warlord types as inevitably occurring during social evolution or something like that.
You’ve not argued for this; most of us can imagine situations in which it’s acceptable to kill, but we still have a reasonably strong disinclination to kill people.
You walk into the lobby of a hotel during a major political convention. There’s a gun lying on the table next to you, apparently left by the only other occupant of the lobby, who hasn’t noticed you—a guy who is now assembling a gun from a backpack and readying magazines; he’s muttering rather loudly to himself about how many bullets he can put into a senator who is giving a keynote speech this afternoon. “Thou shalt not kill” or “Greater good”?
What would you want somebody else to do in that position?
Could you have at least thought of a scenario that would deserve a response?
Because for this one to even be a dilemma, you’d have to assume that I’m some mute, non-English speaking killer android who can’t: 1) take the gun from the table and tell the guy to turn around, hands in the air, etc; 2) run outside and yell “TERRORISTS!”; 3) hit a fire alarm on the wall; 4) shoot him in the leg...
And anyways, to be even a remote parallel with Allende, this story would need to have two guys arguing and one pushing away the gun that the other offers him, launching into a tirade about how he’s a pacifist/a Christian/whatever, and would never resort to crime even to oppose tyranny. Then he pushes the other one out of the door, throws the gun after him, turns to you and tries to hand you a protest flyer. (But no, even this doesn’t quite get it across.)
Shooting people in the leg is difficult because they’re small targets that move quickly. Aiming for the torso is much more reliable.
If you’re not willing to kill him, you have no business doing #1. #2 would, at best, result in -somebody else- killing him—you’re just outsourcing your moral faults. #3 might just bring more targets to him. And #4 has a pretty high chance of being fatal—femoral artery and all. (Also, a leg is -hard- to shoot. I take it you’ve never shot a gun before. In that case, you have no business shooting the gun at anything but his center of mass.)
I’m not drawing a parallel with Allende, never mind that your parallel whitewashes Allende’s history (Allende would be the senator, or rather president, in this parallel, and there’d be a -crowd- of guys with guns in the lobby, guns and grenades and body armor and aerial support in case they need to bomb the hotel just to be sure, and they wouldn’t be crazy so much as enacting the last-ditch and reluctant wishes of the judiciary after the president has repeatedly broken the constitution and ignored the Supreme Court’s orders, and so on and so forth). I’m taking this to the root of our disagreement—about whether or not consequences should be considered in moral theory.
I’m not a utilitarian, incidentally. I’m somewhere between a deontologist and a virtue ethicist. (Arguments like this are the reason I’ve been drifting away from deontology towards virtue ethics. Entirely different arguments are the reason I’ll never be a utilitarian.) If you don’t think consequences matter, you need some new rules in your deontology.
“Not willing to kill him as a first resort” isn’t the same thing as “not willing to kill him”. Holding a gun on a criminal rather than immediately shooting him doesn’t mean that I’m not willing to kill him, it means that I’m not willing to kill him if he just sits there and waits for the police to arrive. It doesn’t mean that I’m not willing to kill him if he ignores me and continues aiming at the senator, nor does it mean that I’m not willing to have the police kill him if they try to arrest him and he doesn’t cooperate.
Since the rule under consideration is “Thou shalt not kill” and the person I’m arguing with is arguing that “moral injunctions need to rely on blanket statements”, the issue isn’t “Not willing to kill him as a first resort” so much as not willing to kill him, period.
If you’re -willing- to kill him, pointing the gun at him and telling him to halt might actually be a good move. It’s the one I would likely take. If he doesn’t stop, however, and you’re unwilling to kill him, you’ve sacrificed any other alternatives in doing so. Essentially it’s a statement that you’re willing to let some number of people be killed (on average) in order to satisfy your morality.
As is said here on decision theory, you should never be in a position of wishing your morality were different.
Fair point. But “moral injunctions must be blanket statements” doesn’t imply “Any blanket statement is a workable moral injunction.” And I’m not sure if you recognize that Multiheaded is not required by consistency-of-argument to assert “Any blanket statement is workable.”
The example under discussion is a great example—“Don’t kill” is an unworkable rule given any significant amount of conflict at all. By contrast, the original Hebrew of the commandments translates better as “Don’t murder”, which is both a blanket statement and incredibly nuanced at the same time.
To the extent that Multiheaded argues that the blanket-statement rule requires endorsement of “Don’t kill,” I think you are right and he is wrong. But if that is his actual position, I don’t think he is defending the most defensible variation of that family of arguments.
Taboo murder. If it means ‘kill someone you shouldn’t kill’, then it’s tautological that you shouldn’t murder.
:-)
The point is that murder != killing because there are some killings that aren’t murder (i.e. are not wrongful).
Describing that distinction can’t really be done briefly (e.g. what is and is not self-defense). But one doesn’t need to describe the distinction to notice that the distinction exists.
Yes, but just because it’s tautological doesn’t mean it’s necessarily psychologically compelling. I can easily imagine a human for whom “don’t kill someone you shouldn’t kill” does a much worse job of deterring them from killing someone they shouldn’t kill than “don’t murder” does. If my goal is to deter such humans from killing people they shouldn’t kill, “don’t murder” is much more effective at achieving my goal.
:-)
You might think the injunction ‘don’t murder’ is really just a way of saying ‘there is such a thing as murder, which is to say, killing immorally or illegally’ or ‘we have a law about killing’.
Considering people have brought up killing people when sanctioned by a democratic government with appropriate checks and balances, perhaps it refers to “unlawful killing”? Where “lawful” requires democracy or maybe some other supposedly superhumanly ethical authority.
Not necessarily—it depends on how convincing your bluff is to the other guy.
I would say, rather, that it depends on how convincing you’re justified in expecting your bluff to be to them.
Really? I had assumed you were a utilitarian from your … well, probably because you were the one shutting up and multiplying in this argument, to be honest.
I must say, I’m curious; what arguments persuade you to avoid utilitarianism in favour of virtue ethics?
“Utility”, more or less. Utilitarianism is entirely theoretical; I don’t see an actual application for it in my day-to-day life. The closest I could get would be “Well, if I actually put the work into doing the calculations, this is probably what I’d do”—and given that I know what I’d want to do anyways, the “If I actually put the work into it” part seems irrelevant.
Utilitarianism is also kind of one-dimensional; sure, you could construct a multidimensional utilitarian ethics system, but you lose out on any of the potential benefits of a hierarchical value system. Virtue ethics promotes a multidimensional approach to ethics, which is more intuitive to me, and more explicitly acknowledges the subjectiveness not only of valuation, but also of trade-offs.
Well, technically OrphanWilde merely said that Pinochet increased utility on net; he didn’t say he approved of him.
He described himself as an “immense fan” of Pinochet. Smells like approval. Don’t ask me why a virtue ethicist would be an immense fan of Pinochet, though. Even if it is true that his regime represented a net utility gain over most plausible counterfactuals, it’s hard to argue that the man himself was virtuous in any ordinary sense. He was a slime.
When I first hastily glanced at your comment, I thought it meant that you wished the assassin had believed in the “Thou shalt not kill” principle, and that it was the “Greater good” concept that was motivating him.
Likewise any desire to stop the assassin without actually knowing anything about the politics of the senator in question will have to originate more directly from the “Thou shalt not kill” principle, not from the “Greater Good” principle. To not have the former principle at all would have to mean that I’d need to calculate at that exact moment what the “greater good” in the situation actually is, and by the time the calculation is complete, the assassin would have gone about his business and I’d be unable to stop him.
Hence rule utilitarianism, the thing to do when possessing a mind of finite capabilities...
I want to stop the assassin because I don’t want to live in a world where people can just assassinate those they don’t like. As I have no practical way of creating a world where “good” assassins are permitted but “bad” ones are not, the only choice is all assassinations or none. The only way that the politics of the senator would matter is if the senator is so bad that assassinating him is overall a good thing even considering that this increases the overall acceptability of assassination. This scenario is impossible barring very unlikely scenarios (which I will ignore, because of Pascal’s Mugging). So I don’t need to do any calculations at the time.
Digressing somewhat… how confident are you of that?
Or, put another way… how much less plausible is this than creating a society where “good” armed-agents-patrolling-residential-areas-to-punish-rulebreakers are permitted but “bad” ones are not, or where “good” armed-groups-capable-of-large-scale-interventions are permitted but “bad” ones are not?
Because a lot of people seem confident that police forces and armies in the real world are practical approximate implementations of those targets. And, sure, I probably can’t go out and start my own police force or army, but it’s clear that such things do get started somehow or other. Similarly, a society where “good” assassins are permitted but “bad” ones are not doesn’t seem unachievable.
I meant, of course, a world where “good” assassins resembling the type described in the post exist and “bad” ones resembling the type in the post exist. I wasn’t intending to rule out killing enemy leaders in war.
I’m not sure that changes my question. Does the situation change if the guy in the lobby identifies with a population with which the senator’s nation is at war?
I’m not Jiro, but I think the best answer involves creating a scale of conflict intensity, and then drawing a line such that rational debate is never an intense enough conflict to justify violence (by definition of rational debate).
In other words, Jiro is implicitly defining assassination as violence that improperly escalates a conflict from one where violence is not justified to one where violence is permissible. Under such a definition, the US didn’t assassinate Yamamoto, it simply targeted him specifically for killing.
It seems plausible to me that this definition cuts the world at its joints, but there could be edge cases I haven’t considered.
That’s not my answer. My answer is that the checks and balances inherent in having a democratic government make it permissible for the government to decide to kill people under circumstances where I would not want to let random individuals go around killing people. (This doesn’t mean that I approve of all government killing—just that I approve of a wider range of government killing than killing by individuals.)
Whether you want to say that for the government to kill someone in a war counts as assassination is just a question of semantics.
If the guy in the lobby identifies with a population with which the senator’s nation is at war, and he is aiming at the senator as part of a campaign orchestrated by that population’s government, then yes, the situation does change. (That doesn’t mean I’d approve of the killing, just that the specific reason I gave above for not approving doesn’t apply. There might still be other reasons.)
...and we’re implicitly assuming that ArisKatsaris’ example is of an individual engaging in improper escalation… e.g., that the senator being targeted is not herself engaging in violence (in which case shooting her might be OK), but rather in some less-intense form of conflict (such as rational debate, on your account) to which violence is not a justifiable response?
OK, fair enough.
I’m not really on board with your definitions of “rational debate” or “assassin”, but I’m not sure it matters, so I’m happy to leave that to one side.
And I endorse some notion of proportional response, certainly, though the details are tricky.
This looks as if it’s in agreement with my own position above—but the tone of your comment felt like a disagreement, so has one of us misunderstood something, or did I simply suffer from momentary tone-deafness?
I would want them to alert hotel security and/or call the police.
Why does the guy need to assemble a second gun if he already had one, and how do you make one out of a backpack?
He needs to have a second gun ready so that he can get as many shots off as possible before having to reload.
He isn’t assembling the gun out of a backpack, but from a backpack: specifically, from gun parts which are inside the backpack.
Apparently at least one of my questions was a stupid question, but thank you anyway.
That rule literally makes sense only because of scope insensitivity or similar bias. There’s no reason to expect a rationalist to adopt it within a community of rationality.
In other words, maybe instrumentally useful, not terminal value.
VALIS help me, this whole… conversation just feels so surreal to me somehow.
That’s a statement primarily about yourself, only secondarily about the conversation.
Can you please cool it down with attempting to use outrage as an argument? There’s all the rest of the internet if we want to see that, LessWrong is one place where outrage-as-argument should not fly.
I don’t see the grandparent as an attempt at argument at all. Elsewhere, I see Multiheaded expressing arguments with outrage, but this is substantially different from using outrage as an argument. I agree with you that the latter shouldn’t fly on LW, but I have nothing against the former.
Presumably when we’re talking about killing and torturing people, the context cannot be a “community of rationality”.
I’m not sure that follows. “Rationality” isn’t a generic applause light. It doesn’t mean ‘nice’.
In the real world, you are probably right. In the least convenient possible world, torture is an effective interrogation technique and ticking-time-bombs are realistic scenarios, not ridiculous movie plot devices.
In short, I don’t need to be a deontologist to think the overthrow of Allende was a net negative. Please don’t act like the arguments against overthrowing Allende are arguments in favor of bright-line rules. If for no other reason than you are creating the perception that deontologists never consider consequences. Which is a stupid position that no deontologist should accept.
Someone should have told Kant that.
Kant thinks this argument should work?
Ladies and Gentlemen of the jury, I am legally and morally innocent of the crime. Yes, I wanted to kill John. Yes, I pointed the gun at him. Yes, I pulled the trigger. Yes, John is dead. But we are all deontologists, and thus we don’t think about consequences when we do moral reasoning—so you must find me not guilty of murdering John.
Because that argument is stupid, and I don’t think a deontologist needs to accept it.
Only if “pointing the gun at people and pulling the trigger” is replaced with an applause light.
???
Kant would say something like this: “You treated the victim as a means to your end, killing him because you wanted to. You very likely also broke my other version of the categorical imperative (since I expect you wouldn’t want to live in a world where everyone shot other people whenever they wanted to). It’s consistent with the categorical imperative to send folks like you to prison, since I’d prefer to live in a world like that than one where murderers go free. Guilty as charged!”
As you say, the defendant is guilty of causing the victim’s death for his own benefit.
Moral reasoning without causation just makes no sense. How do we have a coherent discussion of causation without some reference to consequences?
Edit: In other words, consequentialists say “you should always consider consequences,” while I take Kant to say that one should sometimes consider consequences, and sometimes not.
Well, a Straw-man Kantian might conceivably argue that it was the intent to kill that was really wrong, not the killing itself. Mr Straw Kant might conceivably impose almost the same sentence for attempted murder as for actual murder, though he’d want to think carefully about whether he’d really want to live in a world where that was the usual sentence.
However, leaving aside the straw stuffing, yes all real Kantians (and other deontologists) do think about the consequences of actions. Mostly about the consequences if lots of people performed the same actions.
Kant, and deontologists generally, are deontologists because they take the intention (or something like it) to be what determines the moral value of an action. In some sense, a Kantian would always think about the consequences of the action, but just wouldn’t take the consequences to determine the moral value of an action. So for example, if I leap into a river to save a drowning baby, then Kant is going to say that my act is to be morally evaluated independently of whether or not I managed (despite my best efforts) to save the drowning baby. I’m not morally responsible for an overly swift current, after all.
However, Kant would say that understanding my intention means understanding what I was trying to bring about: you can’t evaluate my action’s intentions without understanding the consequences I sought. What doesn’t matter to the deontologist is the actual consequence.
Consequentialists and deontologists don’t really differ much in this. Consequentialists, after all, have to draw certain boundaries around ‘consequences’, having to do with what the agent can be called a cause of, as an agent. If I take my ailing brother to the hospital, only to be hit by a meteor on the way, I didn’t therefore act badly, even though he’d have lived through the day had I left him at home. Finally, consequentialists will evaluate courses of action based on expected utility, if only because actual utility is unavailable prior to the action. No consequentialist will say that moral judgements can only be made after the fact.
To put it another way, the more you fix the problems in C-ism, the more it looks like D-ology and vice versa.
The convergence is probably due to (and converging to) whatever we use to judge, in both cases, that what we’re doing is “fixing the problems”.
I don’t see why that should itself be a moral judgement, if that is what you were getting at?
Well, whenever you say something like “this system of deciding whether an action is right or wrong is flawed; here is a better system,” this doesn’t make sense unless the two systems differ somehow. But then, the meta level can be collapsed to “these acts (which the former system considered right) are actually wrong; these other acts (which the former system considered wrong) are right.” Sounds like a moral judgement to me (or possibly a family of infinitely many moral judgements).
Systems can differ in their “outputs”—the sets of acts which they label “right” or “wrong”—or in their implementation, or both. If system A is contradictory, and system B isn’t, then system B is better. And that’s not a moral judgement.
They do seem to converge. Kant himself laid down a sort of hardcore deontology in the Groundwork, and then spent the rest of his career sort of regressing toward the mean on all kinds of issues.
Yes, the conversation with drnickbone below is how my response would have gone as well, and you’re right in that sometimes consequences matter to Deontologists and sometimes they don’t. I also think we’ve had this conversation before, because I remember that example. :D
Yes, but so what? You’re asking here whether social rules that have been optimised for the real world will behave well in highly inconvenient possible worlds where torture is actually effective, and ticking nuclear-time-bombs are a routine hazard. And no, they probably won’t work very well in such worlds. Does that somehow make them the wrong rules in the real world?
Multiheaded’s argument style is that OrphanWilde is obviously wrong. I think OrphanWilde is wrong, but I disapprove of a debate style that asserts his wrongness is obvious when I think the historical facts are more ambiguous.
Incidentally, I -also- regard the overthrow of Allende, as it happened, as a net negative. I think the situation would have been better if the coup hadn’t happened. But I don’t think Pinochet was responsible for the coup; I think he simply took charge of it (see, for example, contemporary judicial opinions of the coup). That is, given the political situation in Chile, I regard the coup as inevitable, with or without Pinochet; examining what happened in other countries (such as Argentina, whose junta was a series of deaths and coups—I have no idea how Argentina stayed as stable as it did through that mess), Pinochet made things better, rather than worse.
If you blame Pinochet for the coup, yes, I expect Pinochet did more harm than good. That’s an extremely simplistic view of the situation in Chile, however. (Indeed, senior military officials involved in the matter suggested, contrary to the initial public story, that Pinochet was actually a reluctant participant in the coup.)
As far as I can tell, that’s only true if you take the entire Cold War context as a given. If the US wasn’t actively trying to constrain Allende’s freedom to act, is the coup still inevitable? (Since we are reaching the end of my knowledge of Chilean politics, I don’t know the answer to that question).
Presumably, Pinochet thought the repression was necessary for government stability. If Pinochet (or someone similar) had been able to take power without a coup, is the repression necessary for government stability?
More generally, I’m skeptical about the ability to draw lessons about right behavior and right governance by looking only at the internals of countries that we already know have had significant external intervention in how they are governed.
Strongly agree. It takes some chutzpah to condemn “Pinochet caused the coup” as naive whilst ignoring external influences.
I hate to agree with you, but I do, in some ways. It’s all fine and dandy to talk about Pinochet being good for Chile, but if he thought so, he should have been doing a fair chunk of the executions and tortures himself.
Alas, I have no reason to think Pinochet would have treated this like a deterrent. Except that he would likely have thought it a waste of his time because he had more important things to do.
Um, I take it that shminux meant OW and not Pinochet by “him”? Grammar confusion?
No, I meant Pinochet. It would have been a good way for him to gauge his resolve in staying the course and avoiding the wetware bugs if he had committed to performing at least one tenth of the executions with his own hand. Also applies to other dictators.
...
...
...well, this went downhill pretty quick. Seriously, your view of human behavior and psychology appears to be rather unconventional.
By the way. Were you aware that Nazi Germany’s switch from Einsatzgruppen to gas chambers as the preferred instrument of genocide was caused at least partly by Himmler visiting a mass execution by the SS in Belarus, becoming all sick at the sight of prisoners being gunned down, and immediately issuing a policy memo calling for a more “humane”, “clean” and automated method of mass slaughter? Historians confirm the veracity of this episode. (http://en.wikipedia.org/wiki/Heinrich_Himmler#The_Holocaust)
This is the mirror image of the thought experiment of Gandhi and the murder pill. Gandhi (hypothetically) would not take the pill that would remove his repugnance to murder. Himmler (actually) refused the pill that would weaken his resolve to exterminate the Jews.
On a more trivial level, it is standard advice, here and elsewhere, to avoid distractions when trying to get work done, and, if it helps, using artificial blocks on one’s internet access to facilitate this. Is this also a reprehensible attempt to avoid “gauging one’s resolve in staying the course”? Or a sensible way of achieving one’s purposes?
Of course, we would like Himmler to have turned against the extermination project, so it is easy to say that he should have done the wet work himself, because that might have led to the result that we prefer. But that is idle talk. Himmler was in charge and organised things according to his aims, not ours, and he took steps to eliminate what he regarded as a useless distraction from the task. His fault was in undertaking the task at all.
Well, yes.
I guess you are confirming what I was saying. An out like the one you describe should not be available. If Himmler was giving extermination orders, he should have participated in executions personally, not just given orders. This is a pretty high threshold for most “normal” people, though not for psychopaths.
Have you read the Bean Cycle by Orson Scott Card?
Of all fictional treatments of this question, the one that stood out to me the most is the one in Three Worlds Collide, because of its restraint from turning a psychological question into a moral question.
“Once upon a time,” said the Kiritsugu, “there were people who dropped a U-235 fission bomb, on a place called Hiroshima. They killed perhaps seventy thousand people, and ended a war. And if the good and decent officer who pressed that button had needed to walk up to a man, a woman, a child, and slit their throats one at a time, he would have broken long before he killed seventy thousand people.”
“But pressing a button is different,” the Kiritsugu said. “You don’t see the results, then. Stabbing someone with a knife has an impact on you. The first time, anyway. Shooting someone with a gun is easier. Being a few meters further away makes a surprising difference. Only needing to pull a trigger changes it a lot. As for pressing a button on a spaceship—that’s the easiest of all. Then the part about ‘fifteen billion’ just gets flushed away. And more importantly—you think it was the right thing to do. The noble, the moral, the honorable thing to do. For the safety of your tribe. You’re proud of it—”
“Are you saying,” the Lord Pilot said, “that it was not the right thing to do?”
“No,” the Kiritsugu said. “I’m saying that, right or wrong, the belief is all it takes.”
And you are asking this why? (Achilles was a psychopath, in case this is your point.)
Bean’s style of leadership was similar to the above expectation—I assumed your opinion had been influenced by the book, and want to confirm or correct my perception.
Oh, now I remember the musings about it. No, I was simply agreeing with Multiheaded’s link to the Game of Thrones. It’s not a counter-intuitive idea, really. If you’re going to do something that can be reasonably construed as evil, you’d better do it yourself to test your resolve and experience the negative impact first-hand. Anyway, I thought I was clear in my replies to Multiheaded, but apparently not. Eh, who cares.
That’s an appealing enough system, intuitively—but it also implies that the system is selecting for amorality, provided that relatively amoral actions are sometimes adaptive in the ordinary course of rulership. I have no idea whether or not this would erode away the gains from making scope more salient, but to run with the Game of Thrones metaphor, it would be a shame if you were trying to select for people like Ned Stark and ended up in a local minimum at Ramsay Bolton.
...Wow. Faith in the common decency of average LW user suddenly resurging! Seriously, thank you, dude.
You know I’ve clashed with you over this before; I’ve more or less written you off as impossible to persuade on this issue (not as in “inhuman monster”, more like “committed ideological enemy”)… and yet you try to share at least part of my moral sentiment here. I am grateful.