Fuck every cause that ends in murder and children crying.
-- Iain M. Banks
I suppose I somewhat appreciate the sentiment. I note that labelling the killing ‘murder’ has already amounted to significant discretion. Killings that are approved of get to be labelled something nicer sounding.
Does this pay rent in policy changes? It seems probable that existing policy positions will already determine the contexts in which we choose to apply this quote, so that the quote only generates the appearance of additional evidential weight. Using its applicability as evidence for or against a proposal then amounts to double-counting, because we only reached for the quote in the first place because we disagreed with the proposal. For example: ‘This imperialist intervention is wrong—Fuck every cause that ends in murder and children crying.’ Is the latter clause doing any work?
(First version of this comment:
Does this pay rent in suggested policies? It feels like under all plausible interpretations, it’s at best ‘I’m so righteous!’ and possibly other things.)
Yes. It rules out all sorts of policies, including good ones. It likely rules out murdering Hitler to prevent a war, especially if that requires killing guards in order to get to him.
Upvoted; wording was bad. Edited.
I agree entirely with your new wording. This quote seems to be the sort of claim to bring out conditionally against causes we oppose but conveniently ignore when we support the cause.
As much as I love Banks, this sounds like a massive set of applause lights, complete with sparkling Catherine wheels. Sometimes, you have to do shitty things to improve the world, and sometimes the shitty things are really shitty, because we’re not smart enough to find a better option fast enough to avoid the awful things resulting from not improving at all. “The perfect must not be the enemy of the good” and so on.
And sometimes you do shitty things because you think they will improve the world, but hey, even though the road to hell is very well-paved already, there’s always a place for another cobblestone...
The heuristic of this quote is that it is a firewall against a runaway utility function. If you convince yourself that something will generate gazillions of utilons, you’d be willing to pay a very high price to reach this, even though your estimates might be in error. This heuristic puts a cap on the price.
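To make the heuristic concrete, here is a minimal sketch of the “cap on the price” idea; the cap value, function name, and numbers below are invented for illustration, not anything from the quote or the thread:

```python
# Illustrative sketch of a hard cap on the price an expected-utility
# agent will pay, regardless of the promised payoff. All values are
# hypothetical.

PRICE_CAP = 100.0  # maximum cost, in utilons, we will ever accept

def accept_plan(promised_utilons: float, price_in_utilons: float) -> bool:
    """Expected-utility agent with a hard cap on acceptable cost.

    Without the cap, any sufficiently large promised payoff justifies
    any finite price, which is exactly the failure mode being guarded
    against when the promise itself may be in error.
    """
    if price_in_utilons > PRICE_CAP:
        return False  # the injunction fires, no matter the promise
    return promised_utilons > price_in_utilons

# A plan promising gazillions of utilons is still rejected if it
# demands more than the cap:
assert accept_plan(promised_utilons=1e30, price_in_utilons=1e6) is False
assert accept_plan(promised_utilons=10.0, price_in_utilons=5.0) is True
```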
It’s good as an exhortation to build a Schelling fence, but without that sentiment, it’s pretty hollow. Reading the context, though, I agree with you: it’s a reminder that feeling really sure about something and being willing to sacrifice a lot of you and other (possibly unwilling) people to create a putative utopia probably means you’re wrong.
“Sorrow be damned, and all your plans. Fuck the faithful, fuck the committed, the dedicated, the true believers; fuck all the sure and certain people prepared to maim and kill whoever got in their way; fuck every cause that ended in murder and a child screaming. She turned and ran...”
(As an aside, I now have the perfect line for if I ever become an evil mastermind and someone quotes that at me: “But you see, murder and children screaming is only the beginning!”)
The problem is that there are better heuristics out there. Look up “just war theory” for starters.
This seems better-suited for MoreEmotional than LessWrong.
I think this is a useful heuristic because humans are just not good at calculating this stuff. Ethical Injunctions suggests that you do in fact check with your emotions when the numbers say something novel. (This is why I’m sceptical about deciding on numbers pulled out of your arse rather than pulling the decision directly out of your arse.)
I don’t think Banks even believed that, though. Several of his books certainly seem to be evidence to the contrary.
Is that both, or either/or? Because if it is either/or, it may include such atrocities as going to bed on time and eating vegetables. If it is both, it seems to imply that killing those not as beloved by children may be acceptable.
I wonder if people here realize how anti-utilitarian this quote is :-)
“Murder and children crying” aren’t allowed to have negative weight in a utility function?
It’s not about weight, it’s about an absolute, discontinuous, hard limit—regardless of how many utilons you can pile up on the other end of the scale.
Well, no. It’s against the promise of how many utilons you can pile up on the other arm of the scale, which may well not pay off at all. I’m reminded of a post here at some point whose gist was “if your model tells you that your chances of being wrong are 3^^^3:1 against, it is more likely that your model is wrong than that you are right.”
Yes, but the quote in no way concerns itself with the probability that such a plan will go wrong; rather, it explicitly includes even those with a wide margin of error, including “every” plan which ends in murder and children crying.
If your plan ends in murder and children crying, what happens if your plan goes wrong?
The murder and children crying fail to occur in the intended quantity?
If your plan requires you to get into a car with your family, what happens if you crash?
Well, getting into a car with your family is not inherently bad, so it’s not a very good parallel… but if your overall point is that “expected value calculations do not retroactively lose mathematical validity because the world turned out a certain way”, then that’s definitely true.
I think that the “what if it all goes wrong” sort of comment is meant to trigger the response of “oh god… it was all for nothing! Nothing!!!”. Which is silly, of course. We murdered all those people and made those children cry for the expected value of the plan. Complaining that the expected value of an action is not equal to the actual value of the outcome is a pretty elementary mistake.
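A toy numerical illustration of that distinction (all numbers invented):

```python
# Expected value at decision time versus the value actually realized.
p_success, v_success = 0.75, +80.0   # plan works: large payoff
p_failure, v_failure = 0.25, -200.0  # plan fails: murder, crying, etc.

expected_value = p_success * v_success + p_failure * v_failure  # +10.0

# If the 25% failure branch actually occurs, the realized value is
# -200.0, but that does not retroactively change the +10.0 expected
# value that the decision was (correctly) based on.
realized_value = v_failure
print(expected_value, realized_value)  # 10.0 -200.0
```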
The features of my plan which mitigate the result of the plan going wrong kick in, and the damage is mitigated. I don’t go on vacation, despite the nonrefundable expenses incurred. The plan didn’t end in death and sadness, even if a particular implementation did.
When the plan ends in murder and children crying, every failure of the plan results in a worse outcome.
This does not seem to follow. Failure of the plan could easily involve failure to cause the murder or crying to happen for a start. Then there is the consideration that an unspecified failure has completely undefined behaviour. Anything could happen, from extinction or species-wide endless torture to the outright creation of a utopia.
For most people, murder and children crying are a bad outcome for a plan, but if they’re what the planner has selected as the intended outcome, the other probable outcomes are presumably worse. Theoretically, the plan could “fail” and end in an outcome with more utilons than murder and children crying, but those failures are obviously improbable: if they weren’t, the planner would presumably have selected them as the desired plan outcome.
Or at least have the foresight to see that they have become likely and alter the plan such that it now results in utopia instead of murder.
I think we need to examine what we mean by ‘fail’.
A plan does not fail simply because the actual outcome is different from the outcome judged most likely; a plan fails when a contingency not prepared for occurs which prevents the intended outcome from being realized, or when an explicit failure state of the plan is reached.
If I plan to go on a vacation and prepare for a major illness by deciding that I will cancel the vacation, then experiencing a major illness might cause the plan to fail, because I have identified that as a failure state. The more important the object of the plan, the harder I will work in the planning stage to minimize the likelihood of ending up in a failure state. (When sending a probe to Mars, for example, I want to be prepared such that everything I can think of that might go wrong along the way still yields a success condition.)
It’s not a matter of “the plan might go wrong”, it’s a matter of “the plan might be wrong”, and the universal part comes from “no, really, yours too, because you aren’t remotely special.”
Seems like one of those rules that apply to humans but not to a perfect rationalist, then.
Sounds about right to me.
You seem to be implying that people here should care about things being anti-utilitarian. They shouldn’t. Utilitarianism refers to a group of largely abhorrent and arbitrary value systems.
It is also contrary to virtually all consequentialist value systems of the kind actually held by people here or extrapolatable from humans. All consequentialist systems that match the quote’s criteria for not being ‘Fucked’ are abhorrent.
It is not. “Murder and children crying” here are not means to an end, they are consequences as well. Maybe not intended consequences, maybe side effects (“collateral damage”), but still consequences.
I see no self-contradiction in a consequentialist approach which just declares certain consequences (e.g. “murder and children crying”) to be unacceptable.
There is nothing about consequentialism which distinguishes means from ends. Anything that happens is an “end” of the series of actions which produced it, even if it is not a terminal step, even if it is not intended.
When wedrifid says that the quote is “anti-consequentialism”, they are saying that it refuses to weigh all of the consequences—including the good ones. The negativity of children made to cry does not obliterate the positivity of children prevented from crying, but rather must be weighed against it, to produce a sum which can be negative or positive.
To declare a consequence “unacceptable” is to say that you refuse to be consequentialist where that particular outcome is involved; you are saying that such a consequence crashes your computation of value, as if it were infinitely negative and demanded some other method of valuation, which did not use such finicky things as numbers.
But even if there is a value which is negative, and 3^^^3 times greater in magnitude than any other value, positive or negative, its negation will always be of equal and opposite value, allowing things to be weighed against each other once again. In this example, a murder might be worth −3^^^3 utilons—but preventing two murders by committing one results in a net sum of +3^^^3 utilons.
The only possible world in which one could reject every possible cause which ends in murder or children crying is one in which it is conveniently impossible for such a cause to lead to positive consequences which outweigh the negative ones. And frankly, the world we live in is not so convenient as to divide itself perfectly into positive and negative acts in such a way.
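For concreteness, the arithmetic behind the murder example above, written out with Knuth’s up-arrow notation for 3^^^3:

$$\underbrace{2\cdot(3\uparrow\uparrow\uparrow 3)}_{\text{two murders prevented}} \;-\; \underbrace{3\uparrow\uparrow\uparrow 3}_{\text{one murder committed}} \;=\; +\,3\uparrow\uparrow\uparrow 3$$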
Wikipedia: Consequentialism is the class of normative ethical theories holding that the consequences of one’s conduct are the ultimate basis for any judgment about the rightness of that conduct. … Consequentialism is usually distinguished from deontological ethics (or deontology), in that deontology derives the rightness or wrongness of one’s conduct from the character of the behaviour itself rather than the outcomes of the conduct.
The “character of the behaviour” is means.
Consequentialism does not demand “computation of value”. It only says that what matters is outcomes; it does not require that the outcomes be comparable or summable. I don’t see that saying that certain outcomes are unacceptable, full stop (= have negative infinity value) contradicts consequentialism.
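One way to cash out “unacceptable, full stop” without literal infinities is a lexicographic ordering; the following is a hypothetical sketch of such a construction (names invented), not anything proposed in the thread:

```python
# Hypothetical sketch: a consequentialist ranking in which certain
# outcomes are unacceptable "full stop", implemented without infinite
# values. Outcomes are compared lexicographically: first by whether an
# unacceptable consequence occurred, then by ordinary utilons.
from dataclasses import dataclass

@dataclass
class Outcome:
    unacceptable: bool  # e.g. "murder and children crying" occurred
    utilons: float

def better(a: Outcome, b: Outcome) -> bool:
    """Return True if outcome a is strictly preferred to outcome b."""
    if a.unacceptable != b.unacceptable:
        # any acceptable outcome beats any unacceptable one
        return not a.unacceptable
    return a.utilons > b.utilons  # otherwise compare utilons as usual

# No finite pile of utilons on the unacceptable side can win:
assert better(Outcome(False, 1.0), Outcome(True, 1e30))
```

On this construction outcomes are still comparable, so plans can be ranked, but the two tiers are never traded off against each other, which is the “not summable” part of the claim.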
You have a point, there are means and ends. I was using the term “means” as synonymous with “methods used to achieve instrumental ends”, which I realize was vague and misleading. I suppose it would be better to say that consequentialism does not concern itself with means at all, and rather considers every outcome, including those which are the result of means, to be an end.
As for your other point, I’m afraid that I find it rather odd. Consequentialism does not need to be implemented as having implicitly summable values, much as rational assessment does not require the computation of exact probabilities, but any moral system must be able to implement comparisons of some kind. Even the simplest deontologies must be able to distinguish “good” from “bad” moral actions, even if all “good” actions are equal, and all “bad” actions likewise.
Without the ability to compare outcomes, there is no way to compare the goodness of choices and select a good plan of action, regardless of how one defines “good”. And if a given outcome has infinitely negative value, then its negation must have infinitely positive value—which means that the negation is just as desirable as the original outcome is undesirable.
Pardon me. I left off the technical qualifier for the sake of terseness. I have previously observed that all deontological value systems can be emulated by (suitably contrived) consequentialist value systems and vice versa, so I certainly don’t intend to imply that it is impossible to construct a consequentialist morality implementing this particular injunction. Edited to fix.
Your point is perfectly valid, I think. Every action-guiding set of principles is ultimately all about consequences. Deontologies can be “consequentialized”, i.e. expressed only through a maximization (or minimization) rule of some goal-function, by a mere semantic transformation. The reason why this is rarely done is, I suspect, because people get confused by words, and perhaps also because consequentializing some deontologies makes it more obvious that the goals are arbitrary or silly.
The traditional distinction between consequentialism and non-consequentialism does not come down to the former only counting consequences—both do! The difference is rather about what sort of consequences count. Deontology also counts how consequences are brought about; that becomes part of the “consequences” that matter, part of whatever you’re trying to minimize. “Me murdering someone” gets a different weight than “someone else murdering someone”, which in turn gets a different weight from “letting someone else die through ‘natural causes’ when it could be easily prevented”.
And sometimes it gets even weirder: the doctrine of double effect, for instance, draws a morally significant line between a harmful consequence being necessary for the execution of your (well-intended) aim and a “mere” foreseen (but still necessary!) side-effect of it. So sometimes certain intentions, when acted upon, are flagged with negative value as well.
And as you note below, deontologies sometimes attribute infinite negative value to certain consequences.
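A minimal sketch of the “consequentializing” move being described (event names and weights are invented for illustration): the goal-function scores complete histories, including how each consequence came about, rather than bare end-states.

```python
# Hypothetical sketch: a "consequentialized" deontology whose
# goal-function scores how a death came about, not just that it did.
WEIGHTS = {
    "i_murder": -100.0,                 # me murdering someone
    "other_murders": -10.0,             # someone else murdering someone
    "preventable_death_allowed": -1.0,  # letting someone die
}

def history_value(events: list[str]) -> float:
    """Sum the deontically weighted disvalue of everything that happened."""
    return sum(WEIGHTS.get(event, 0.0) for event in events)

# The same death counts differently depending on how it was brought about:
print(history_value(["i_murder"]))                   # -100.0
print(history_value(["other_murders"]))              # -10.0
print(history_value(["preventable_death_allowed"]))  # -1.0
```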
That’s kind of a good point, but I seriously doubt that the quote would be that effective at making people get it who don’t already.
I, too, support the cause of opposing every such cause.
This seems like a poor strategy just from considering temper tantrums, to say nothing of all the other holes in it. (The first half of the comment, though, I can at least appreciate.)