IMO, what each of us values for themselves may be relevant to morality. What we intuitively value for others is not.
I have to admit I have not read the metaethics sequences. From your tone, I feel I am making an elementary error. I am interested in hearing your response. Thanks.
I’m not sure if it’s elementary, but I do have a couple of questions first. You say:
what each of us values for themselves may be relevant to morality
This seems to suggest that you’re a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind independent, when in fact it is not. The morality or immorality of an action has less to do with the action itself than with how our brains react to hearing about or seeing the action.
Taking this route, I would say that not only are our values relevant to morality, but the dynamic system comprising all of our individual value systems is an upper bound on what can be included in the extensional definition of “morality”, if “morality” is to make any sense as a term. That is, if something is outside of what any of us can ascribe value to, then it is not moral subject matter; furthermore, what we can and do ascribe value to is dictated by our neurology.
Not only that, but there is a well-known phenomenon that complicates naive moral decision making (that is, decision making without input from neuroscience): the distinction between liking and wanting. This distinction crops up in part because the way we evaluate possible alternatives is lossy: we can devote only a finite amount of computational power to predicting the effects of a decision or of obtaining a goal, and we have to use heuristics to do so. In addition, there is the fact that human valuation is multi-layered: we have at least three valuation mechanisms, and their interaction isn’t yet fully understood. See also Glimcher et al., “Neuroeconomics and the Study of Valuation”. From that article:
10 years of work (that) established the existence of at least three interrelated subsystems in these brain areas that employ distinct mechanisms for learning and representing value and that interact to produce the valuations that guide choice (Dayan & Balleine, 2002; Balleine, Daw, & O’Doherty, 2008; Niv & Montague, 2008).
The mechanisms for choice valuation are complicated, and so are the constraints on human decision-making ability. In evaluating whether an action was moral, it’s imperative to avoid making the criterion “too high for humanity”.
One last thing I’d point out has to do with the argument you link to, because you do seem to be inconsistent when you say:
What we intuitively value for others is not.
Relevant to morality, that is. The reason is that the cited argument rests entirely on intuitions about what others value. The hypothetical species in the example is not a human species, but a slightly different one.
I can easily imagine an individual from a species described along the lines of the author’s hypothetical reading the following:
If it is good because it is loved by our genes, then anything that comes to be loved by the genes can become good. If humans, like lions, had a disposition to eat their babies, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom), then these things would not be good. We could not brag that humans evolved a disposition to be moral because morality would be whatever humans evolved a disposition to do.
And being horrified at the thought of such a bizarre and morally bankrupt group. I strongly recommend you read the sequence I linked to in the quote if you haven’t; it’s quite an interesting (and relevant) short story.
So, I have a bit more to write but I’m short on time at the moment. I’d be interested to hear if there is anything you find particularly objectionable here though.
What would probably help is if you said what you thought was relevant to morality, rather than only telling us about things you think are irrelevant. It would make it easier to interpret your irrelevancies.