Well, in a purely deontological moral system, the beliefs are like “Don’t torture people; torturing people is bad”. That is, there is a list of “bad” things, and the system is very simple: you may do thing X if and only if X is not on the list.
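As a rough sketch of that rule (the entries in the forbidden list here are just made-up examples):

```python
# Hypothetical list of "bad" things; the entries are illustrative only.
FORBIDDEN = {"torture", "theft", "lying"}

def may_do(x: str) -> bool:
    """You may do thing X if and only if X is not on the list."""
    return x not in FORBIDDEN
```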
What if not torturing people requires you to torture a person, like in the trolley problem? What then? Do deontologists not care about those people being tortured because they did not personally torture them, or do they secretly do consequentialism and dress it up in “rules” after the fact?
I see everything in terms of AI algorithms; with consequentialism, I imagine a utility-maximizing search over counterfactuals in an internal model (utility-based agent), and with deontology, I imagine a big bunch of if-statements and special cases (a reflex-based agent).
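Purely as a toy sketch of those two framings (the actions, outcome numbers, and rules below are invented for illustration, not anyone’s actual proposal):

```python
# Two toy agents in the style described above; all names and numbers
# are invented for illustration.

def utility_based_choice(actions, predict, utility):
    """Consequentialist sketch: search over counterfactual outcomes in an
    internal model and take the action whose predicted outcome maximizes utility."""
    return max(actions, key=lambda a: utility(predict(a)))

def reflex_permits(action):
    """Deontological sketch: a bundle of if-statements and special cases
    that veto actions regardless of predicted outcomes."""
    if action == "push_the_man":
        return False  # personally killing someone is on the list
    if action == "torture":
        return False
    return True

# Toy trolley-style numbers (made up):
predict = {"push_the_man": {"deaths": 1}, "do_nothing": {"deaths": 5}}.get
utility = lambda outcome: -outcome["deaths"]

actions = ["push_the_man", "do_nothing"]
print(utility_based_choice(actions, predict, utility))   # -> push_the_man
print([a for a in actions if reflex_permits(a)])         # -> ['do_nothing']
```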
No, archetypal deontologists don’t torture people even to prevent others from being tortured. Not pushing the guy off the bridge is practically the definition of a deontologist.
Exactly. I am not advocating deontology, just clarifying what it means. A true deontologist who thought that torture is bad would not torture anyone, no matter what the circumstances. Obviously, this is silly, and again, I do not advocate this as a good moral system.
No. That is not obvious. You are probably being misled by thought experiments where it is stipulated that so-and-so really does have life-saving information and really would yield it under torture. In real life, things are not so simple. You might have the wrong person. They might be capable of resisting torture. They might die under torture. They might go insane under torture. They might lie and give you disinformation that harms your cause. Your cause might be wrong... you might be the bad guy...
Real life is always much more complex and messy than maths.
No it isn’t. It’s just more complex and messy than trivial maths.
If it’s too non-trivial for your brain to handle as maths, it might as well not be maths.