Well, in a purely deontologic moral system, the beliefs are like “Don’t torture people, torturing people is bad”. That is, there is a list of “bad” things, and the system is very simple: You may do thing X if and only if X is not on the list. The list is outside the system. In the same way as consequentialism does not provide you with what you should place utility in, deontology does not tell you what the list is.
So when you look at it like that, what the charismatic priest is doing is not inside the moral system, but rather outside it. That is, he is trying to get his followers to change what is on their lists. This is no different from an ice cream advertisement trying to convince a consequentialist that they should place a higher utility on eating ice cream.
To summarize, the issue you are talking about is not one meant to be handled by the belief system itself. The priest in your example is trying to hack people by changing their belief system, which is not something deontologists in particular are susceptible to beyond anyone with a different system.
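The "list" view of deontology described above can be sketched as a simple membership check. This is only an illustrative toy, and the list entries are hypothetical; the key point is that the list itself is supplied from outside the system:

```python
# The forbidden list comes from outside the system; these entries
# are hypothetical examples, not a claim about any real moral code.
FORBIDDEN = {"torture", "theft", "lying"}

def permitted(action: str) -> bool:
    """You may do X if and only if X is not on the list."""
    return action not in FORBIDDEN
```

On this picture, the priest's persuasion corresponds to mutating `FORBIDDEN`, not to anything the `permitted` function itself computes.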
in a purely deontologic moral system [..] there is a list of “bad” things, and the system is very simple: You may do thing X if and only if X is not on the list.
Are you asserting that purely deontologic systems don’t include good things which it is preferable, or even mandatory, to do rather than leave undone, but only bad things which it is mandatory to refrain from doing?
And are you asserting that purely deontologic systems don’t allow for (or include) any mechanism for trading off among things on the list? For example, if a moral system M has on its list of “bad” things both speaking when Ganto enters my tent and not-speaking when Ganto enters my tent, and Ganto enters my tent, then either M has nothing to say about whether speaking is better than not-speaking, or M is not a purely deontologic system?
If you’re making either or both of those assertions, I’d be interested in your grounds for them.
Well, in a purely deontologic moral system, the beliefs are like “Don’t torture people, torturing people is bad”.
Is there any such “pure” system? Deontological metaethics has to put forward justifications because it is philosophy. I don’t see how you can arrive at your conclusion without performing the double whammy of both ignoring what people who call themselves deontologists say, AND dubbing the attitudes of some unreflective people who don’t call themselves deontologists “deontology”.
Well, in a purely deontologic moral system, the beliefs are like “Don’t torture people, torturing people is bad”. That is, there is a list of “bad” things, and the system is very simple: You may do thing X if and only if X is not on the list.
What if not torturing people requires you to torture a person, like in the trolley problem? What then? Do deontologists not care about torturing those people because they did not personally torture them, or do they secretly do consequentialism and dress it up in “rules” after the fact?
I see everything in terms of AI algorithms; with consequentialism, I imagine a utility-maximizing search over counterfactuals in an internal model (utility-based agent), and with deontology, I imagine a big bunch of if-statements and special cases (a reflex-based agent).
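That contrast can be made concrete with a toy sketch. Everything here is hypothetical (the `model`, `utility`, and `rules` are placeholders the caller supplies), but it shows the structural difference: the utility-based agent searches over predicted outcomes, while the reflex-based agent never models outcomes at all:

```python
def utility_agent(state, actions, model, utility):
    """Consequentialist style: search over counterfactual outcomes
    predicted by an internal model, pick the utility-maximizing action."""
    return max(actions, key=lambda a: utility(model(state, a)))

def reflex_agent(state, rules):
    """Deontologist style: a big bunch of if-statements. Fire the first
    rule whose condition matches the state; no outcome modeling."""
    for condition, action in rules:
        if condition(state):
            return action
    return "do_nothing"
```

The difference shows up in what you have to hand each agent: the first needs a world model and a utility function, the second only a list of condition/action pairs.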
What if not torturing people requires you to torture a person, like in the trolley problem? What then? Do deontologists not care about torturing those people because they did not personally torture them, or do they secretly do consequentialism and dress it up in “rules” after the fact?
No, archetypal deontologists don’t torture people even to prevent others from being tortured. Not pushing the guy off the bridge is practically the definition of a deontologist.
Exactly. I am not advocating deontology, just clarifying what it means. A true deontologist who thought that torture is bad would not torture anyone, no matter what the circumstances. Obviously, this is silly, and again, I do not advocate this as a good moral system.
A true deontologist who thought that torture is bad would not torture anyone, no matter what the circumstances. Obviously, this is silly
No. That is not obvious. You are probably being misled by thought experiments where it is stipulated that so-and-so really does have life-saving information and really would yield it under torture. In real life, things are not so simple. You might have the wrong person. They might be capable of resisting torture. They might die under torture. They might go insane under torture. They might lie and give you disinformation that harms your cause. Your cause might be wrong... you might be the bad guy...
Real life is always much more complex and messy than maths.
No it isn’t. It’s just more complex and messy than trivial maths.
If it’s too non-trivial for your brain to handle as maths, it might as well not be maths.