It seems to me that most consequentialist views fail to take sufficient account of whether their moral schemes are implementable and stable in actual human (or other) societies.
If a scheme isn’t implementable or stable, then it doesn’t maximize welfare, so utilitarianism does not recommend it. Utilitarianism describes a goal, not a method.
I don’t consider myself a utilitarian because I don’t agree with the goals of any of the variants I’ve seen described.
I’m not sure whether I consider myself a consequentialist because while I think that ultimately outcomes are important, I don’t see enough attention paid to issues of implementability and stability in many descriptions of consequentialist views I’ve read.
For example, it seems that some (not all) consequentialist ethics consider the ‘rightness’ of an action to be purely a function of its actual consequences, thus making it possible for an attempted murder to be a morally good act because it has an unintended good consequence and an attempt at assistance to be a morally bad act because it has an unintended bad consequence. Other variants of consequentialist ethics (rule consequentialism, which seems closer to something I would feel comfortable identifying with) recognize the impossibility of perfect prediction of outcomes and so associate the ‘good’ with rules that tend to produce good outcomes if followed. Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
Consequentialism doesn’t seem clearly enough defined for me to figure out exactly what variant people are talking about when they use the term.
That’s okay, nobody else knows either. (People have guesses, but most of them exclude things that seem like they should be included, or vice versa.) The only way to get a handle on the word seems to be to listen to people use it a lot and sort of triangulate.
You may find this paper on consequentialism and decision procedures interesting.