All three theories seem obviously flawed: outdated and vulnerable to counterexamples.
As a consequentialist, you can think that rape and murder are acceptable if the torturer receives more joy than the victim receives pain. This seems even more obvious in the case of a group of assaulters and a single victim.
As a deontologist, you can justify almost any deed with some trivial "for the greater good" rule.
And virtue ethics can be misleading in many ways: consider the halo effect and its possible consequences. Dr. Evil could take over the world in seconds.
Please show me where I'm wrong; I have just pointed out the weak points of each school that seemed obvious to me.
As a consequentialist, you can think that rape and murder are acceptable if the torturer receives more joy than the victim receives pain. This seems even more obvious in the case of a group of assaulters and a single victim.
Well, a consequentialist’s utility function doesn’t have to allow that. There are some utility functions that don’t justify that kind of behavior. But I agree that the question of how to aggregate individual utilities is a weak spot for consequentialism. The repugnant conclusion is pretty much about that.
As a deontologist, you can justify almost any deed with some trivial "for the greater good" rule.
As far as I understand, a deontologist’s rule set doesn’t have to allow that. There are some rule sets that don’t justify all deeds.
And virtue ethics can be misleading in many ways: consider the halo effect and its possible consequences. Dr. Evil could take over the world in seconds.
Yeah, I guess that’s why virtue ethicists say that recognizing virtue is an important skill. Not sure if they have a more detailed answer, though.
As a consequentialist, you can think that rape and murder are acceptable if the torturer receives more joy than the victim receives pain. This seems even more obvious in the case of a group of assaulters and a single victim.
… if you never admit the broad and nigh-inevitable aftereffects of such an event, severely underestimate the harm of being raped, and don't consider the effects of such a policy in general, then yes, you could.
In other words, the consequentialist faces the problem that they can do bad things if they don't think things through.
The deontologist is generally fairly robust against doing horrible things for the greater good, but more often faces the opposite problem: being barred from doing something that really IS for a much greater good. Deontology is ill-suited to handling ethical tradeoffs.