Conditional on being sufficiently convinced such a goal is true
Kinda? The interesting thing about utilitarians is that their One True Goal is whatever scores the highest on the utility-meter. Whatever it is.
an evil can be the least evil of all available options
This is conditional on two evils being comparable (think about generic sorting functions in programming). Not every moral system accepts that all evils can be compared and ranked.
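The sorting analogy can be made concrete: a generic sort presupposes a total order, i.e. a comparator that ranks every pair. A partial order breaks that assumption. A minimal sketch in Python (the scales and severity numbers are invented for illustration):

```python
def compare(a, b):
    """Partial comparison of (scale, severity) pairs.

    Returns -1/0/1 when the two evils share a scale; returns None
    when they live on different scales and no ranking exists.
    """
    scale_a, severity_a = a
    scale_b, severity_b = b
    if scale_a != scale_b:
        return None  # incomparable: neither is "the least evil"
    return (severity_a > severity_b) - (severity_a < severity_b)

# Comparable pair: same scale, so one can be ranked as worse.
print(compare(("harm", 3), ("harm", 1)))      # 1
# Incomparable pair: a generic sort over these would be ill-defined.
print(compare(("harm", 3), ("betrayal", 1)))  # None
```

A moral system that rejects full comparability is in the second situation: for some pairs of evils, "pick the lesser one" is simply not a well-posed instruction.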
deontology is completely compatible with theology
Again, kinda? It depends. Even in Christianity, true love for Christ overrides any rules. Formulated differently: if you have a sufficient amount of grace, deontological rules no longer apply to you; they are just a crutch.
I assign very different weights to the wellbeing of different people
That’s perfectly compatible with utilitarianism.
My understanding of utilitarianism is that it’s a variety of consequentialism where you arrange all the consequences on a single axis called “utility” and rank them. There are subspecies which specify particular ways of aggregating utility (e.g. by saying that the weights of utility of all individuals are all the same), but utilitarianism in general does not require that.
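The point about weights can be shown with a toy aggregation: the ranking of outcomes depends on the weight vector, and nothing in the general scheme forces equal weights (all numbers here are invented for illustration):

```python
def aggregate(utilities, weights):
    """Collapse per-person utilities onto the single 'utility' axis."""
    return sum(w * u for w, u in zip(weights, utilities))

outcome_a = [10, 0, 0]   # one person benefits a lot
outcome_b = [4, 4, 4]    # everyone benefits a little

equal_weights   = [1.0, 1.0, 1.0]
partial_weights = [1.0, 0.1, 0.1]  # near-zero value on some people

# Equal weights rank b first (10 < 12); partial weights flip it (10 > 4.8).
print(aggregate(outcome_a, equal_weights) < aggregate(outcome_b, equal_weights))
print(aggregate(outcome_a, partial_weights) > aggregate(outcome_b, partial_weights))
```

Both weight vectors produce a perfectly well-defined single-axis ranking; the second just embodies a very different (and to most people, repugnant) valuation.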
Kinda? The interesting thing about utilitarians is that their One True Goal is whatever scores the highest on the utility-meter. Whatever it is.
But they still need to take into account the probabilities of their factual beliefs. Getting everyone into Heaven may be the One True Goal, but they also need to be confident that Heaven really exists and that they’re right about how to get there.
This is conditional on two evils being comparable (think about generic sorting functions in programming). Not every moral system accepts that all evils can be compared and ranked.
Yes. That’s why I said “an evil can be” and not “some evil must be”. But usually, given a concrete choice, one outcome will be judged best. It’s unlikely, to put it mildly, that someone would believe they can determine whether another person goes to Heaven or Hell, and be morally indifferent between the choices.
Even in Christianity, true love for Christ overrides any rules. Formulated differently: if you have a sufficient amount of grace, deontological rules no longer apply to you; they are just a crutch.
That appears to be true for many Protestant denominations. In the Catholic and Orthodox churches, though, salvation is only possible through the church and its ministers and sacraments. And even most Protestants would agree that some (deontological) sins are incompatible with a state of grace unless repented, so at most a past sinner can be in a state of grace, not an ongoing one.
My understanding of utilitarianism is that it’s a variety of consequentialism where you arrange all the consequences on a single axis called “utility” and rank them. There are subspecies which specify particular ways of aggregating utility (e.g. by saying that the weights of utility of all individuals are all the same), but utilitarianism in general does not require that.
It’s good to be precise about the meaning of words. I’ve talked to some people (here on LW) who didn’t accept the label “utilitarianism” for philosophies that assign near-zero value to large groups of people.
True, but there are no absolute thresholds. Whatever gets ranked first is it.
What’s wrong with that? Other than Pascal’s mugging, which everyone needs to avoid.
There are moral philosophies which would refuse to kill an innocent even if this act saves a hundred lives.
True, but very few people actually follow them, especially if you replace ‘a hundred’ with a much larger arbitrary constant. The ‘everyone knows it’s wrong’ metric that was mentioned at the start of this thread doesn’t hold here.
Other than Pascal’s mugging, which everyone needs to avoid.
Other than that, Mrs. Lincoln, how was the play? :-)
What’s wrong with that is, for example, the existence of a single point of failure and the lack of failsafes.
very few people actually follow them
I don’t know about that. Very few people find themselves in a situation where they have to make this choice, to start with.
if you replace ‘a hundred’ with a much larger arbitrary constant
We’re back in Pascal’s Mugging territory, aren’t we? So what is it, is utilitarianism OK as long as it avoids Pascal’s Mugging, or is “all evil is evil” position untenable because it falls prey to Pascal’s Mugging?
What’s wrong with that is, for example, the existence of a single point of failure and the lack of failsafes.
Why do you think other moral systems are more resilient?
You gave communism, fascism and ISIS (Islamism) as examples of “a utilitarian infected by such a memetic virus which hijacks his One True Goal”. Islamism, unlike the first two, seems to be deontological, like Christianity. Isn’t it?
Deontological Christianity has also been ‘hijacked’ several times by millennialist movements, which sparked, for example, several crusades. Nationalism and tribal solidarity have started and sustained many wars where consequentialists would have made peace because they kept losing.
Very few people find themselves in a situation where they have to make this choice, to start with.
That’s true. But do many people endorse such actions in a hypothetical scenario? I think not, but I’m not very sure about this.
We’re back in Pascal’s Mugging territory, aren’t we? So what is it, is utilitarianism OK as long as it avoids Pascal’s Mugging, or is “all evil is evil” position untenable because it falls prey to Pascal’s Mugging?
Good point :-)
It’s clear that one’s decision theory (and by extension, one’s morals) would benefit from being able to solve PM. But I don’t know how to do it. You have a good point elsewhere that consequentialism has a single failure point, so it would be more vulnerable to PM and fail more catastrophically, although deontology isn’t totally invulnerable to PM either. It may just be harder to construct a PM attack on deontology without knowing the particular set of deontological rules being used, whereas we can reason about the consequentialist utility function without actually knowing what it is.
I’m not sure if this should count as a reason not to be a consequentialist (as far as one can), because one can’t derive an ought from an is, so we can’t just choose our moral system on the basis of unlikely thought experiments. But it is a reason for consequentialists to be more careful and more uncertain.
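The structure of the attack on a naive expected-utility maximizer fits in two lines of arithmetic (the probability, payoff, and cost are of course made up):

```python
# A naive expected-utility maximizer compares p * u across options.
# The mugger's claimed payoff is chosen so large that even an absurdly
# small credence in the claim dominates the comparison.
p_claim_true = 1e-12          # credence assigned to the mugger's story
claimed_payoff = 10**20       # utility the mugger promises
cost_of_paying = 5            # what the mugger demands now

ev_refuse = 0.0
ev_pay = p_claim_true * claimed_payoff - cost_of_paying

print(ev_pay > ev_refuse)  # True: the tiny probability is swamped
```

Since the mugger controls the claimed payoff, they can always outbid whatever small probability the victim assigns, which is why patching this requires changing the decision theory rather than just the numbers.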
Why do you think other moral systems are more resilient?
I think a mix of moral systems is more resilient. Some consequentialism, some deontology, some gut feeling.
Islamism, unlike the first two, seems to be deontological, like Christianity. Isn’t it?
No, I don’t think so. Mainstream Islam is deontological, but fundamentalist movements, just like in Christianity, shift to less deontology and more utilitarianism (of course, with a very particular notion of “utility”).
although deontology isn’t totally invulnerable to PM either
Yes, deontology is corruptible as well, but one of the reasons it’s more robust is that it’s simpler. To be a consequentialist you first need the ability to figure out the consequences and that’s a complicated and error-prone process, vulnerable to attack. To be a deontologist you don’t need to figure out anything except which rule to apply.
To corrupt a consequentialist it might be sufficient to mess with his estimation of probabilities. To corrupt a deontologist you need to replace at least some of his rules. Maybe if you find a pair of contradictory rules you could get somewhere by changing which to apply when, but in practice this doesn’t seem to be a promising attack vector.
And yes, I’m not arguing that this is a sufficient reason to avoid being a consequentialist. But, as you say, it’s a good reason to be more wary.
I think a mix of moral systems is more resilient. Some consequentialism, some deontology, some gut feeling.
I completely agree, also because this describes how humans (including myself) actually act: according to different moral systems (whichever is more convenient), heuristics, and gut feeling.