I wonder if people here realize how anti-utilitarianism this quote is :-)
“Murder and children crying” aren’t allowed to have negative weight in a utility function?
It’s not about weight, it’s about an absolute, discontinuous, hard limit—regardless of how many utilons you can pile up on the other end of the scale.
Well, no. It’s against the promise of how many utilons you can pile up on the other arm of the scale, which may well not pay off at all. I’m reminded of a post here at some point whose gist was “if your model tells you that your chances of being wrong are 3^^^3:1 against, it is more likely that your model is wrong than that you are right.”
Yes, but the quote in no way concerns itself with the probability that such a plan will go wrong; rather, it explicitly includes even those with a wide margin of error, including “every” plan which ends in murder and children crying.
If your plan ends in murder and children crying, what happens if your plan goes wrong?
The murder and children crying fail to occur in the intended quantity?
If your plan requires you to get into a car with your family, what happens if you crash?
Well, getting into a car with your family is not inherently bad, so it’s not a very good parallel… but if your overall point is that “expected value calculations do not retroactively lose mathematical validity because the world turned out a certain way”, then that’s definitely true.
I think that the “what if it all goes wrong” sort of comment is meant to trigger the response of “oh god… it was all for nothing! Nothing!!!”. Which is silly, of course. We murdered all those people and made those children cry for the expected value of the plan. Complaining that the expected value of an action is not equal to the actual value of the outcome is a pretty elementary mistake.
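To make that distinction concrete, here is a toy expected-value calculation with entirely made-up probabilities and utilon figures; the only point is that the expected value is fixed at the moment of choice, whatever the world later does.

```python
# Toy expected-value calculation (all numbers hypothetical).
# Each possible outcome of the plan is a (probability, value-in-utilons) pair.
outcomes = [
    (0.9, +100),   # the plan works as intended
    (0.1, -500),   # the plan goes wrong
]

expected_value = sum(p * v for p, v in outcomes)
print(expected_value)  # 0.9*100 + 0.1*(-500) = 40.0

# If the 10% branch is the one that actually happens, the realized value
# is -500, but the decision was made at an expected value of +40;
# the outcome does not retroactively change that calculation.
```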
The features of my plan designed to handle things going wrong kick in, and the damage is contained. I don’t go on vacation, despite the nonrefundable expenses incurred. The plan didn’t end in death and sadness, even if a particular implementation did.
When the plan ends in murder and children crying, every failure of the plan results in a worse outcome.
This does not seem to follow. Failure of the plan could easily involve failure to cause the murder or crying to happen for a start. Then there is the consideration that an unspecified failure has completely undefined behaviour. Anything could happen, from extinction or species-wide endless torture to the outright creation of a utopia.
For most people, murder and children crying are a bad outcome for a plan, but if they’re what the planner has selected as the intended outcome, the other probable outcomes are presumably worse. Theoretically, the plan could “fail” and end in an outcome with more utilons than murder and children crying, but such failures are presumably improbable: if they weren’t, the planner would have selected one of them as the desired plan outcome.
Or at least have the foresight to see that they have become likely and alter the plan such that it now results in utopia instead of murder.
I think we need to examine what we mean by ‘fail’.
A plan does not fail simply because the actual outcome is different from the outcome judged most likely; a plan fails when a contingency not prepared for occurs which prevents the intended outcome from being realized, or when an explicit failure state of the plan is reached.
If I plan to go on a vacation and prepare for a major illness by deciding that I will cancel the vacation, then experiencing a major illness might cause the plan to fail, because I have identified that as a failure state. The more important the object of the plan, the harder I will work in the planning stage to minimize the likelihood of ending up in a failure state. (When sending a probe to Mars, for example, I want to be prepared such that everything I can think of that might go wrong along the way still yields a success condition.)
It’s not a matter of “the plan might go wrong”, it’s a matter of “the plan might be wrong”, and the universal part comes from “no, really, yours too, because you aren’t remotely special.”
Seems like one of those rules that apply to humans but not to a perfect rationalist, then.
Sounds about right to me.
You seem to be implying that people here should care about things being anti-utilitarianism. They shouldn’t. Utilitarianism refers to a group of largely abhorrent and arbitrary value systems.
It is also contrary to virtually all consequentialist value systems of the kind actually held by people here or extrapolatable from humans. All consequentialist systems that match the quote’s criteria for not being ‘Fucked’ are abhorrent.
It is not. “Murder and children crying” here are not means to an end, they are consequences as well. Maybe not intended consequences, maybe side effects (“collateral damage”), but still consequences.
I see no self-contradiction in a consequentialist approach which just declares certain consequences (e.g. “murder and children crying”) to be unacceptable.
There is nothing about consequentialism which distinguishes means from ends. Anything that happens is an “end” of the series of actions which produced it, even if it is not a terminal step, even if it is not intended.
When wedrifid says that the quote is “anti-consequentialism”, they are saying that it refuses to weigh all of the consequences—including the good ones. The negativity of children made to cry does not obliterate the positivity of children prevented from crying, but rather must be weighed against it, to produce a sum which can be negative or positive.
To declare a consequence “unacceptable” is to say that you refuse to be consequentialist where that particular outcome is involved; you are saying that such a consequence crashes your computation of value, as if it were infinitely negative and demanded some other method of valuation, which did not use such finicky things as numbers.
But even if there is a value which is negative, and 3^^^3 times greater in magnitude than any other value, positive or negative, its negation will always be of equal and opposite value, allowing things to be weighed against each other once again. In this example, a murder might be worth −3^^^3 utilons—but preventing two murders by committing one results in a net sum of +3^^^3 utilons.
The only possible world in which one could reject every possible cause which ends in murder or children crying is one in which it is conveniently impossible for such a cause to lead to positive consequences which outweigh the negative ones. And frankly, the world we live in is not so convenient as to divide itself perfectly into positive and negative acts in such a way.
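A quick sanity check of the utilon arithmetic in the comment above, using a finite stand-in constant (3^^^3 itself is far too large to represent, so the figure below is purely illustrative):

```python
# Hypothetical bookkeeping for the one-murder-prevents-two example.
MURDER_VALUE = -1e100         # stand-in for "-3^^^3 utilons"; not the real number

committed = MURDER_VALUE           # the murder the plan commits
prevented = -2 * MURDER_VALUE      # two murders that now do not happen
net = committed + prevented
print(net)                         # +1e100: same magnitude as one murder, opposite sign
```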
Wikipedia: Consequentialism is the class of normative ethical theories holding that the consequences of one’s conduct are the ultimate basis for any judgment about the rightness of that conduct. … Consequentialism is usually distinguished from deontological ethics (or deontology), in that deontology derives the rightness or wrongness of one’s conduct from the character of the behaviour itself rather than the outcomes of the conduct.
The “character of the behaviour” is means.
Consequentialism does not demand “computation of value”. It only says that what matters is outcomes; it does not require that the outcomes be comparable or summable. I don’t see that saying that certain outcomes are unacceptable, full stop (= have infinitely negative value), contradicts consequentialism.
You have a point, there are means and ends. I was using the term “means” as synonymous with “methods used to achieve instrumental ends”, which I realize was vague and misleading. I suppose it would be better to say that consequentialism does not concern itself with means at all, and rather considers every outcome, including those which are the result of means, to be an end.
As for your other point, I’m afraid that I find it rather odd. Consequentialism does not need to be implemented as having implicitly summable values, much as rational assessment does not require the computation of exact probabilities, but any moral system must be able to implement comparisons of some kind. Even the simplest deontologies must be able to distinguish “good” from “bad” moral actions, even if all “good” actions are equal, and all “bad” actions likewise.
Without the ability to compare outcomes, there is no way to compare the goodness of choices and select a good plan of action, regardless of how one defines “good”. And if a given outcome has infinitely negative value, then its negation must have infinitely positive value—which means that the negation is just as desirable as the original outcome is undesirable.
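A toy illustration of why the “infinitely negative value” move breaks the comparisons the paragraph above relies on, using IEEE-754 infinities as a stand-in:

```python
import math

# Treat the "unacceptable" outcome as infinitely negative.
murder = -math.inf
prevent_murder = -murder          # its negation is infinitely positive

# In isolation the comparison still behaves as intended:
print(prevent_murder > 10**100)   # True: outweighs any finite amount of good

# But once two such outcomes sit on opposite sides of one trade-off,
# the total is undefined and can no longer rank plans at all.
print(murder + prevent_murder)    # nan
```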
Pardon me. I left off the technical qualifier for the sake of terseness. I have previously observed that all deontological value systems can be emulated by (suitably contrived) consequentialist value systems and vice versa, so I certainly don’t intend to imply that it is impossible to construct a consequentialist morality implementing this particular injunction. Edited to fix.
Your point is perfectly valid, I think. Every action-guiding set of principles is ultimately all about consequences. Deontologies can be “consequentialized”, i.e. expressed only through a maximization (or minimization) rule of some goal-function, by a mere semantic transformation. The reason why this is rarely done is, I suspect, because people get confused by words, and perhaps also because consequentializing some deontologies makes it more obvious that the goals are arbitrary or silly.
The traditional distinction between consequentialism and non-consequentialism does not come down to the former only counting consequences—both do! The difference is rather about what sort of consequences count. Deontology also counts how consequences are brought about; that becomes part of the “consequences” that matter, part of whatever you’re trying to minimize. “Me murdering someone” gets a different weight than “someone else murdering someone”, which in turn gets a different weight from “letting someone else die through ‘natural causes’ when it could be easily prevented”.
And sometimes it gets even weirder: the doctrine of double effect, for instance, draws a morally significant line between a harmful consequence being necessary for the execution of your (well-intended) aim, and a “mere” foreseen (but still necessary!) side-effect of it. So sometimes certain intentions, when acted upon, are flagged with negative value as well.
And as you note below, deontologies sometimes attribute infinite negative value to certain consequences.
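A minimal sketch of the “consequentializing” move described in this comment, with made-up event names and weights: the deontological distinctions reappear as different penalties attached to differently-described consequences in the goal function.

```python
# Hypothetical weights over how an outcome came about, not just the end state.
weights = {
    "i_murder_someone":           -100,  # harm done by my own hand
    "someone_else_murders":        -10,  # the same harm, done by another agent
    "preventable_natural_death":    -1,  # harm I merely fail to prevent
}

def score(history):
    """Sum the (dis)value of every weighted event in a world-history."""
    return sum(weights.get(event, 0) for event in history)

# The same death is weighted very differently depending on its history.
print(score(["i_murder_someone"]))           # -100
print(score(["preventable_natural_death"]))  # -1
```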