I think we’re fairly close, but have one major difference.
I’d say there are moral facts. These moral facts are objective features of the universe. These facts are about the evaluations that could be made by the moral algorithms in our heads. Where I differ with you is in the number of black boxes. “We” don’t have “a” black box. “Each” of us has our own black box.
What is moral, as evaluated by you, is the result of your algorithm given the relevant information and sufficient processing time. I think this is somewhat in line with EY, though I can never tell if he is a universalist or not. What is moral is the result of an idealized calculation by a moral algorithm, where the result of the idealization often differs from the actual result because of lack of information and processing time.
A case could be made for this view to fall into many of the usual categories: Moral Relativism, Ethical Subjectivism, Moral Realism, Moral Anti-Realism. About the only thing ruled out is Universalism.
For Deontology vs. Consequentialism, it gets similarly murky.
Do consequentialists really do a de novo analysis of the entire state of the universe again and again all day? If I shoot a gun at you, but miss, is it “no harm, no foul”? When a consequentialist actually thinks about it, I expect a lot of rules of behavior to come up all of a sudden. There will be some rule consequentialism. Then “acts” will be seen as part of the consequences too. Very quickly, we’re seeing all sorts of aspects of deontology once a consequentialist works out the details.
The same goes for deontologists. Does the rule absolutely always apply? No? Maybe it depends on context? Why? Does it have something to do with the consequences in the different contexts? I bet it often does. Similarly, the “though the heavens fall, I shall do right” attitude is rarely taken in hypotheticals, and would be taken even more rarely in actual fact. You won’t tell a lie to keep everyone in the world from a fiery death? Really? I doubt it.
I’d expect a social animal to have both consequentialist and deontological moral algorithms, but with significant feedback between the two. I’d expect the relative weighting of those algorithms to vary from animal to animal, much in the same way Haidt finds that the relative strengths of the moral modalities he has identified vary between people.
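To make that picture concrete, here is a toy sketch, purely illustrative: the evaluators, weights, and scoring scale are all made up, not a claim about how the algorithms actually work.

```python
# Toy illustration: one agent's moral judgment as a weighted blend of a
# consequentialist evaluator and a deontological evaluator.

def consequentialist_score(act, outcome):
    """Score the act purely by its outcome (here, how much harm was actually done)."""
    return -outcome["harm_done"]

def deontological_score(act, outcome):
    """Score the act purely by whether it violates a rule, regardless of outcome."""
    return -10.0 if act["violates_rule"] else 0.0

def moral_judgment(act, outcome, w_consequences, w_rules):
    """One agent's overall evaluation, given that agent's own weighting of the two algorithms."""
    return (w_consequences * consequentialist_score(act, outcome)
            + w_rules * deontological_score(act, outcome))

# "Shoot and miss": no harm actually done, but a rule was violated.
act = {"violates_rule": True}
outcome = {"harm_done": 0.0}

# Different agents weight the two algorithms differently (cf. Haidt's varying
# strengths of moral modalities), so they reach different verdicts on the same act.
for name, w_c, w_r in [("mostly consequentialist", 0.9, 0.1),
                       ("mostly deontological", 0.1, 0.9)]:
    print(name, moral_judgment(act, outcome, w_c, w_r))
```

The feedback between the two algorithms would complicate this further; the only point of the sketch is that the verdict depends on the agent’s weights, not just on the act.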
Most of the argument over consequentialism and deontology probably comes more from how they are used as rationalizations for your preferences among moral modalities than from the relative weighting of your consequentialist and deontological algorithms anyway. The meta-argument over consequentialism vs. deontology is a way to avoid the hard thinking that drives both algorithms to a settled conclusion.
Where I differ with you is in the number of black boxes. “We” don’t have “a” black box. “Each” of us has our own black box.
This doesn’t seem to be a point on which we differ at all. In this later comment I’m saying pretty much the same thing.
Indeed, I wouldn’t be surprised if each of us has hundreds of processes that feel like they’re calculating “morality”, and they aren’t all evaluating according to the same inputs. Some might have outputs that are hard to compare directly, or impossible to compare at all.
OK. I see your other comment. I think I was mainly responding to this:
However, if one is to ask a moral question without including a specific group-referent (though usually, “all humans” or “most humans” is implicit) from which one can extract that objective algorithm that makes things moral or not
You can’t extract “an” objective algorithm even if you do specify a group of people, unless your algorithm returns the population distribution of their moral evaluations rather than a singular moral evaluation. Any singular statistic would be just one of an infinite set of statistics on that distribution.
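A quick sketch of that last point, with invented numbers and an arbitrary scale: the same population distribution of moral evaluations supports many different “singular” answers, depending on which statistic you choose to collapse it with.

```python
import statistics

# Invented per-person evaluations of one act, on an arbitrary -1 (wrong) to +1 (right) scale.
evaluations = [-1.0, -0.9, -0.8, -0.7, 0.1, 0.2, 0.3, 0.4, 1.0, 1.0]

# The objective fact about the group is the whole distribution.
# Each singular statistic collapses it differently, and they can disagree:
print("mean:     ", statistics.mean(evaluations))                        # about -0.04: slightly "wrong"
print("median:   ", statistics.median(evaluations))                      # 0.15: slightly "right"
print("minimum:  ", min(evaluations))                                    # -1.0: "wrong if anyone objects"
print("share > 0:", sum(e > 0 for e in evaluations) / len(evaluations))  # 0.6: majority says "right"
```

Here the mean and the median land on opposite sides of zero, so even “what the group thinks” underdetermines a singular verdict until you pick a statistic; the distribution itself is the only fact that doesn’t depend on that choice.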