Where I differ with you is in the number of black boxes. “We” don’t have “a” black box. “Each” of us has our own black box.
This doesn’t seem to be a point on which we differ at all. In this later comment I’m saying pretty much the same thing.
Indeed, I wouldn’t be surprised if each of us has hundreds of processes that feel like they’re calculating “morality”, and they aren’t all evaluating the same inputs. Some might produce outputs that are difficult, or even impossible, to compare directly.
OK. I see your other comment. I think I was mainly responding to this:
However, if one is to ask a moral question without including a specific group-referent (though usually, “all humans” or “most humans” is implicit) from which one can extract that objective algorithm that makes things moral or not
You can’t extract “an” objective algorithm even if you do specify a group of people, unless your algorithm returns the population distribution of their moral evaluations, and not a singular moral evaluation. Any singular statistic would be one of an infinite set of statistics on that distribution.
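To make that concrete, here is a toy sketch of the point. It assumes (a big assumption, purely for illustration) that each person’s moral evaluation of some act can be scored on a single numeric scale; the numbers are invented. Even granting that, the population distribution admits many equally “objective” singular statistics, and they disagree:

```python
import statistics

# Hypothetical population of moral evaluations of one act,
# scored on an arbitrary -10 (abhorrent) to +10 (laudable) scale.
# These numbers are illustrative only.
evaluations = [-8, -3, -3, 0, 2, 2, 2, 9]

# Three equally "objective" singular statistics on the same distribution,
# each a candidate for "the" group's moral evaluation:
print(statistics.mean(evaluations))    # 0.125
print(statistics.median(evaluations))  # 1.0
print(statistics.mode(evaluations))    # 2
```

None of the three is privileged by the distribution itself; picking one (mean vs. median vs. mode vs. anything else) is an extra normative choice, which is the point about the “infinite set of statistics”.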