Well, my point is that listing all the horrible things that Christians should do to (hypothetically) save people from eternal torment is not a good argument against ‘hard-core’ utilitarianism. These acts are only horrible because Christianity isn’t true. Therefore the antidote for these horrors is not “don’t bite the bullet”; it’s “don’t believe stuff without good evidence”.
These acts are only horrible because Christianity isn’t true.
Is that so?
Would real-life Christians who sincerely and wholeheartedly believe that Christianity is true agree that such acts are not horrible at all and, in fact, desirable and highly moral?
Therefore the antidote for these horrors is not “don’t bite the bullet”; it’s “don’t believe stuff without good evidence”.
So once you think you have good evidence, all the horrors stop being horrors and become justified?
So once you think you have good evidence, all the horrors stop being horrors and become justified?
If your evidence is good enough, then you must choose the lesser horror. “Better they burn in this life than in the next.”
Various arguments have been made that it’s impossible to be sure to the degree required. I don’t accept them, but I don’t think you’re advancing one of them either.
but I don’t think you’re advancing one of them either
I haven’t been advancing anything so far. I was just marveling at the readiness, nay, enthusiasm with which people declare themselves to be hard-headed fanatics ready and willing to do anything in the pursuit of the One True Goal.
If your evidence is good enough, then you must choose the lesser horror.
There are… complications here. First let me mention in passing two side issues. One is capability: even if you believe the “lesser horror” is the right way, you may find yourself unable to actually do that horror. The other one is change: you are not immutable. What you do changes you, the abyss gazes back, and after committing enough lesser horrors you may find that your ethics have shifted.
Getting back to the central point, there are also two strands here. First, you are basically saying that evil can become good by virtue of being the lesser evil. Everything is comparable and relative; there are no absolute baselines. This is a major fork where consequentialists and deontologists part ways, right?
Second is the utilitarian insistence that everything must be boiled down to, basically, a single number which determines everything. One function to rule them all.
I find pure utilitarianism to be very fragile.
Consider a memetic plague (major examples: communism and fascism in the first half of the 20th century; minor example: ISIS now). Imagine a utilitarian infected by such a memetic virus which hijacks his One True Goal. Is there something which would stop him from committing all sorts of horrors in the service of his new, somewhat modified “utility”? Nope. He has no failsafes and no risk management; once he falls, he falls to the very bottom. If he’s unlucky enough to survive till the fever passes and the virus retreats, he will look at his hands and find them covered with blood.
I prefer more resilient systems, less susceptible to corruption, ones which fail more gracefully. Even at the price of inefficiency and occasional inconsistency.
I was just marveling at the readiness, nay, enthusiasm with which people declare themselves to be hard-headed fanatics ready and willing to do anything in the pursuit of the One True Goal.
Conditional on being sufficiently convinced such a goal is true, which I am not, and I assign negligible probability to ever being so convinced.
First let me mention in passing two side issues. One is capability: even if you believe the “lesser horror” is the right way, you may find yourself unable to actually do that horror. The other one is change: you are not immutable. What you do changes you, the abyss gazes back, and after committing enough lesser horrors you may find that your ethics have shifted.
Both are issues that must be addressed, but they don’t imply one should abandon the attempt. Also, they aren’t exclusive to doing extremely horrible instrumental things in pursuit of even-more-extremely good outcomes.
Getting back to the central point, there are also two strands here. First, you are basically saying that evil can become good by virtue of being the lesser evil. Everything is comparable and relative; there are no absolute baselines. This is a major fork where consequentialists and deontologists part ways, right?
I’m saying that whether or not you embrace a notion of the absolute magnitude of good and evil—that is, of a moral true zero—an evil can be the least evil of all available options.
More importantly, deontology is completely compatible with theology. Many people believe(d) in the truth of a religion, and also that that religion commands them to either convert or kill non-believers. This is where the example used in this thread comes from: “burn their bodies—save their souls”. So I’m not sure if you’re proposing deontology as a solution, and if so, how.
I find pure utilitarianism to be very fragile.
I’m not a utilitarian, for a better reason than that: utilitarianism doesn’t describe my actual moral feelings (or those of almost all other people, as far as I can tell), so I see no reason to wish to be more utilitarian. In particular, I assign very different weights to the wellbeing of different people.
Imagine a utilitarian infected by such a memetic virus which hijacks his One True Goal.
That is not very different from imagining a meme that infects any other kind of consequentialist and hijacks the moral weight of a particular outcome. Or one that infects deontologists with new rules (like religions sometimes do).
Conditional on being sufficiently convinced such a goal is true
Kinda? The interesting thing about utilitarians is that their One True Goal is whatever scores the highest on the utility-meter. Whatever it is.
an evil can be the least evil of all available options
This is conditional on two evils being comparable (think about generic sorting functions in programming). Not every moral system accepts that all evils can be compared and ranked.
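To make the sorting-function analogy concrete, here is a minimal Python sketch (the categories and “badness” numbers are invented purely for illustration, not a claim about any actual moral system): a comparator that defines a total order always yields a lesser evil, while one that treats some pairs as incomparable cannot rank them, so a generic sort has nothing to work with.

```python
def compare_total(a, b):
    """Utilitarian-style comparator: every pair of outcomes can be ranked."""
    return (a["badness"] > b["badness"]) - (a["badness"] < b["badness"])  # -1, 0, or 1

def compare_partial(a, b):
    """Comparator for a system that refuses to rank evils of different kinds."""
    if a["kind"] != b["kind"]:
        return None  # incomparable: "the lesser evil" is undefined for this pair
    return compare_total(a, b)

lie   = {"kind": "deception", "badness": 3}  # made-up numbers
theft = {"kind": "harm",      "badness": 5}

print(compare_total(lie, theft))    # -1: a lesser evil always exists by construction
print(compare_partial(lie, theft))  # None: no ranking, hence no lesser evil to choose
```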
deontology is completely compatible with theology
Again, kinda? It depends. Even in Christianity, true love for Christ overrides any rules. Formulated differently: if you have a sufficient amount of grace, deontological rules no longer apply to you; they are just a crutch.
I assign very different weights to the wellbeing of different people
That’s perfectly compatible with utilitarianism.
My understanding of utilitarianism is that it’s a variety of consequentialism where you arrange all the consequences on a single axis called “utility” and rank them. There are subspecies which specify particular ways of aggregating utility (e.g. by saying that every individual’s utility gets the same weight), but utilitarianism in general does not require that.
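A toy sketch of that description, with invented people and numbers: every variant collapses an outcome onto the single “utility” axis via some aggregation; classical equal-weight utilitarianism is just the special case where everyone’s weight is 1, and changing the weights can change which outcome gets ranked first.

```python
# Two hypothetical outcomes: how well off each (made-up) person would be.
outcome_a = {"me": 10, "friend": 4, "stranger": 0}
outcome_b = {"me": 2,  "friend": 6, "stranger": 9}

def utility(outcome, weights):
    """Collapse an outcome onto the single 'utility' axis as a weighted sum."""
    return sum(weights[p] * wellbeing for p, wellbeing in outcome.items())

equal_weights  = {"me": 1.0, "friend": 1.0, "stranger": 1.0}  # classical special case
skewed_weights = {"me": 1.0, "friend": 0.5, "stranger": 0.1}  # "very different weights"

for name, weights in [("equal", equal_weights), ("skewed", skewed_weights)]:
    a, b = utility(outcome_a, weights), utility(outcome_b, weights)
    print(name, a, b, "-> prefer", "A" if a > b else "B")
# equal:  14.0 vs 17.0 -> prefer B
# skewed: 12.0 vs  5.9 -> prefer A
```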
Kinda? The interesting thing about utilitarians is that their One True Goal is whatever scores the highest on the utility-meter. Whatever it is.
But they still need to take into account the probabilities of their factual beliefs. Getting everyone into Heaven may be the One True Goal, but they need to also be certain that Heaven really exists and that they’re right about how to get there.
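In expected-value terms (all payoffs and probabilities below are invented for illustration), the same goal recommends the “horror” only when the agent’s probability that the doctrine is actually true is high enough, which is exactly where bad epistemics does the damage:

```python
def expected_utility(p_true, u_if_true, u_if_false):
    """Expected value of an act, split on whether the factual belief holds."""
    return p_true * u_if_true + (1 - p_true) * u_if_false

# Invented payoffs: the coercive "saving" act is hugely good if the doctrine is
# true (eternal torment averted) and plainly bad if it is false (pointless harm).
act_true, act_false   = 1000.0, -10.0
wait_true, wait_false = -1000.0, 0.0

for p in (0.9, 0.001):
    act = expected_utility(p, act_true, act_false)
    wait = expected_utility(p, wait_true, wait_false)
    print(f"P(doctrine true)={p}: act={act:.1f}, refrain={wait:.1f} ->",
          "act" if act > wait else "refrain")
```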
This is conditional on two evils being comparable (think about generic sorting functions in programming). Not every moral system accepts that all evils can be compared and ranked.
Yes. That’s why I said “an evil can be” and not “some evil must be”. But usually, given a concrete choice, one outcome will be judged best. It’s unlikely, to put it mildly, that someone would believe they can determine whether another person goes to Heaven or Hell, and be morally indifferent between the choices.
Even in Christianity, true love for Christ overrides any rules. Formulated differently: if you have a sufficient amount of grace, deontological rules no longer apply to you; they are just a crutch.
That appears to be true for many Protestant denominations. In the Catholic and Orthodox churches, though, salvation is only possible through the church and its ministers and sacraments. And even most Protestants would agree that some (deontological) sins are incompatible with a state of grace unless repented, so at most a past sinner can be in a state of grace, not an ongoing one.
My understanding of utilitarianism is that it’s a variety of consequentialism where you arrange all the consequences on a single axis called “utility” and rank them. There are subspecies which specify particular ways of aggregating utility (e.g. by saying that every individual’s utility gets the same weight), but utilitarianism in general does not require that.
It’s good to be precise about the meaning of words. I’ve talked to some people (here on LW) who didn’t accept the label “utilitarianism” for philosophies that assign near-zero value to large groups of people.
True, but there are no absolute thresholds. Whatever gets ranked first is it.
What’s wrong with that? Other than Pascal’s mugging, which everyone needs to avoid.
There are moral philosophies which would refuse to kill an innocent even if this act saves a hundred lives.
True, but very few people actually follow them, especially if you replace ‘a hundred’ with a much larger arbitrary constant. The ‘everyone knows it’s wrong’ metric that was mentioned at the start of this thread doesn’t hold here.
Other than Pascal’s mugging, which everyone needs to avoid.
Other than that, Mrs. Lincoln, how was the play? :-)
What’s wrong with that is, for example, the existence of a single point of failure and the lack of failsafes.
very few people actually follow them
I don’t know about that. Very few people find themselves in a situation where they have to make this choice, to start with.
if you replace ‘a hundred’ with a much larger arbitrary constant
We’re back in Pascal’s Mugging territory, aren’t we? So which is it: is utilitarianism OK as long as it avoids Pascal’s Mugging, or is the “all evil is evil” position untenable because it falls prey to Pascal’s Mugging?
What’s wrong with that is, for example, the existence of a single point of failure and the lack of failsafes.
Why do you think other moral systems are more resilient?
You gave communism, fascism and ISIS (Islamism) as examples of “a utilitarian infected by such a memetic virus which hijacks his One True Goal”. Islamism, unlike the first two, seems to be deontological, like Christianity. Isn’t it?
Deontological Christianity has also been ‘hijacked’ several times by millennialist movements that sparked, for example, several crusades. Nationalism and tribal solidarity have started and maintained many wars where consequentialists would have made peace because they kept losing.
Very few people find themselves in a situation where they have to make this choice, to start with.
That’s true. But do many people endorse such actions in a hypothetical scenario? I think not, but I’m not very sure about this.
We’re back in Pascal’s Mugging territory, aren’t we? So which is it: is utilitarianism OK as long as it avoids Pascal’s Mugging, or is the “all evil is evil” position untenable because it falls prey to Pascal’s Mugging?
Good point :-)
It’s clear that one’s decision theory (and by extension, one’s morals) would benefit from being able to solve PM. But I don’t know how to do it. You have a good point elsewhere that consequentialism has a single failure point, so it would be more vulnerable to PM and fail more catastrophically, although deontology isn’t totally invulnerable to PM either. It may just be harder to construct a PM attack on deontology without knowing the particular set of deontological rules being used, whereas we can reason about the consequentialist utility function without actually knowing what it is.
I’m not sure if this should count as a reason not to be a consequentialist (as far as one can), because one can’t derive an ought from an is, so we can’t just choose our moral system on the basis of unlikely thought experiments. But it is a reason for consequentialists to be more careful and more uncertain.
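For readers who haven’t run into it, the structure of a Pascal’s Mugging against a naive expected-value maximizer fits in a few lines (the numbers, and the assumed rate at which credence falls, are arbitrary): as long as the claimed payoff can be inflated faster than the listener’s credence shrinks, the “pay up” branch keeps winning, which is the sense in which the utility calculation can be attacked without knowing the utility function in detail.

```python
# A naive expected-value check that a mugger wins just by inflating the claim.
cost_of_paying = 5
for claimed_payoff in (10**6, 10**12, 10**30):
    credence = claimed_payoff ** -0.5          # credence falls as claims get wilder...
    ev_of_paying = credence * claimed_payoff - cost_of_paying
    print(f"{claimed_payoff:.0e}: EV of paying = {ev_of_paying:.3g}")  # ...but not fast enough
# Unless credence shrinks at least as fast as the payoff grows, paying always "wins".
```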
Why do you think other moral systems are more resilient?
I think a mix of moral systems is more resilient. Some consequentialism, some deontology, some gut feeling.
Islamism, unlike the first two, seems to be deontological, like Christianity. Isn’t it?
No, I don’t think so. Mainstream Islam is deontological, but fundamentalist movements, just like in Christianity, shift to less deontology and more utilitarianism (of course, with a very particular notion of “utility”).
although deontology isn’t totally invulnerable to PM either
Yes, deontology is corruptible as well, but one of the reasons it’s more robust is that it’s simpler. To be a consequentialist you first need the ability to figure out the consequences and that’s a complicated and error-prone process, vulnerable to attack. To be a deontologist you don’t need to figure out anything except which rule to apply.
To corrupt a consequentialist it might be sufficient to mess with his estimation of probabilities. To corrupt a deontologist you need to replace at least some of his rules. Maybe if you find a pair of contradictory rules you could get somewhere by changing which to apply when, but in practice this doesn’t seem to be a promising attack vector.
And yes, I’m not arguing that this is a sufficient reason to avoid being a consequentialist. But, as you say, it’s a good reason to be more wary.
I think a mix of moral systems is more resilient. Some consequentialism, some deontology, some gut feeling.
I completely agree, also because this describes how humans (including myself) actually act: according to different moral systems depending on which is more convenient, plus some heuristics and gut feeling.
Would real-life Christians who sincerely and wholeheartedly believe that Christianity is true agree that such acts are not horrible at all and, in fact, desirable and highly moral?
Yes? Of course? With the caveats that the concept of ‘Christianity’ is the medieval one you mentioned above, that these Christians really have no doubts about their beliefs, and that they bite the bullet.
So once you think you have good evidence, all the horrors stop being horrors and become justified?
Are you trolling? Is the notion that the morality of actions is dependent on reality really that surprising to you?
With the caveats that the concept of ‘Christianity’ is the medieval one you mentioned above
Huh? The “concept” of Christianity hasn’t changed since the Middle Ages. The relevant part is that you either get saved and achieve eternal life or you are doomed to eternal torment. Of course I don’t mean people like Unitarian Universalists, but rather “standard” Christians who believe in heaven and hell.
Is the notion that the morality of actions is dependent on reality really that surprising to you?
Morality certainly depends on the perception of reality, but the point here is different. We are talking about what you can, should, or must sacrifice to get closer to the One True Goal (which in Christianity is salvation). Your answer is “everything”. Why? Because the One True Goal justifies everything, including things people call “horrors”. Am I reading you wrong?
I mentioned three crucial caveats. I think it would be difficult to find Christians in 2016 who have no doubts and bite the bullet on the implications of Christianity. It would have been a lot easier a few hundred years ago.
Huh? The “concept” of Christianity hasn’t changed since the Middle Ages
What I mean is that the religious beliefs of the majority of people who call themselves Christians have changed a lot since medieval times.
We are talking about what you can, should, or must sacrifice to get closer to the One True Goal (which in Christianity is salvation). Your answer is “everything”. Why? Because the One True Goal justifies everything, including things people call “horrors”. Am I reading you wrong?
I don’t see the relevance of what you call a “One True Goal”. I mean, One True Goal as opposed to what? Several Sorta True Goals? Ultimately, no matter what your goals are, you will necessarily be willing to sacrifice things that are less important to you in order to achieve them. Actions are justified as they relate to the accomplishment of a goal, or a set of goals.
If I were convinced that Roger was going to detonate a nuclear bomb in New York, I would feel justified (and obliged) to murder him, because, like most people I know, I have the goal of preventing millions of innocents from dying. And yet, if that belief about Roger rested on bad or non-existent evidence, the odds are that I would be killing an innocent man for no good reason. There would be nothing wrong with my goal (One True or not), only with my rationality. I don’t see any fundamental difference between this scenario and the one we’ve been discussing.
One True Goal as opposed to what? Several Sorta True Goals?
Yes. Multiple systems, somewhat inconsistent but serving as a check and a constraint on each other, not letting a single one dominate.
Ultimately, no matter what your goals are, you will necessarily be willing to sacrifice things that are less important to you in order to achieve them.
Not in all ethical systems.
Actions are justified as they relate to the accomplishment of a goal, or a set of goals.
In consequentialism yes, but not all ethics are consequentialist.
There would be nothing wrong with my goal
How do you know that? Not in this specific example, but in general—how do you know there is nothing wrong with your One True Goal?
Yes, I do. Well, since I’m not actually religious, my understanding is hypothetical. But yes, this is precisely the point I’m making.
Would real-life Christians who sincerely and wholeheartedly believe that Christianity is true agree that such acts are not horrible at all and, in fact, desirable and highly moral?
Why don’t you go ask some.
Are you trying to be funny? Note that not all of the 70% would agree that belief or its lack sends people to Hell. See also.
ETA: If you doubt what I said about beliefs regarding those “doomed to eternal torment,” see “Many religions can lead to eternal life,” in this sizeable PDF.