True, but there are no absolute thresholds. Whatever gets ranked first is what you pick.
What’s wrong with that? Other than Pascal’s mugging, which everyone needs to avoid.
There are moral philosophies which would refuse to kill an innocent even if this act saves a hundred lives.
True, but very few people actually follow them, especially if you replace ‘a hundred’ with a much larger arbitrary constant. The ‘everyone knows it’s wrong’ metric that was mentioned at the start of this thread doesn’t hold here.
> Other than Pascal’s mugging, which everyone needs to avoid.
Other than that Mrs. Lincoln, how was the play? :-)
What’s wrong with that is, for example, the existence of the single point of failure and lack of failsafes.
> very few people actually follow them
I don’t know about that. Very few people find themselves in a situation where they have to make this choice, to start with.
> if you replace ‘a hundred’ with a much larger arbitrary constant
We’re back in Pascal’s Mugging territory, aren’t we? So what is it: is utilitarianism OK as long as it avoids Pascal’s Mugging, or is the “all evil is evil” position untenable because it falls prey to Pascal’s Mugging?
> What’s wrong with that is, for example, the existence of the single point of failure and lack of failsafes.
Why do you think other moral systems are more resilient?
You gave communism, fascism and ISIS (Islamism) as examples of “a utilitarian infected by such a memetic virus which hijacks his One True Goal”. Islamism, unlike the first two, seems to be deontological, like Christianity. Isn’t it?
Deontological Christianity has also been ‘hijacked’ several times by millennialist movements that sparked, e.g., several crusades. Nationalism and tribal solidarity have started and maintained many wars where consequentialists would have made peace because they kept losing.
> Very few people find themselves in a situation where they have to make this choice, to start with.
That’s true. But do many people endorse such actions in a hypothetical scenario? I think not, but I’m not very sure about this.
> We’re back in Pascal’s Mugging territory, aren’t we? So what is it: is utilitarianism OK as long as it avoids Pascal’s Mugging, or is the “all evil is evil” position untenable because it falls prey to Pascal’s Mugging?
Good point :-)
It’s clear that one’s decision theory (and by extension, one’s morals) would benefit from being able to solve PM. But I don’t know how to do it. You have a good point elsewhere that consequentialism has a single point of failure, so it would be more vulnerable to PM and fail more catastrophically, although deontology isn’t totally invulnerable to PM either. It may just be harder to construct a PM attack on deontology without knowing the particular set of deontological rules being used, whereas we can reason about the consequentialist utility function without actually knowing what it is.
I’m not sure if this should count as a reason not to be a consequentialist (as far as one can), because one can’t derive an ought from an is, so we can’t just choose our moral system on the basis of unlikely thought experiments. But it is a reason for consequentialists to be more careful and more uncertain.
> Why do you think other moral systems are more resilient?
I think a mix of moral systems is more resilient. Some consequentialism, some deontology, some gut feeling.
> Islamism, unlike the first two, seems to be deontological, like Christianity. Isn’t it?
No, I don’t think so. Mainstream Islam is deontological, but fundamentalist movements, just like in Christianity, shift to less deontology and more utilitarianism (of course, with a very particular notion of “utility”).
> although deontology isn’t totally invulnerable to PM either
Yes, deontology is corruptible as well, but one of the reasons it’s more robust is that it’s simpler. To be a consequentialist you first need the ability to figure out the consequences and that’s a complicated and error-prone process, vulnerable to attack. To be a deontologist you don’t need to figure out anything except which rule to apply.
To corrupt a consequentialist it might be sufficient to mess with his estimation of probabilities. To corrupt a deontologist you need to replace at least some of his rules. Maybe if you find a pair of contradictory rules you could get somewhere by changing which to apply when, but in practice this doesn’t seem to be a promising attack vector.
And yes, I’m not arguing that this is a sufficient reason to avoid being a consequentialist. But, as you say, it’s a good reason to be more wary.
> I think a mix of moral systems is more resilient. Some consequentialism, some deontology, some gut feeling.
I completely agree. Also because this describes how humans (including myself) actually act: according to different moral systems depending on which is more convenient, plus some heuristics and gut feeling.