Ah. So this is what I am saying.
If you say "I define 'should' as [Eliezer's long list of human values]"
then I say: “That’s a long definition. How did you pick that definition?”
and you say: "Well, I took whatever I thought was morally important and put it into the definition."
In the part you quote, I am arguing (or at least claiming) that other responses to my query are wrong.
I would then continue:
"Using the long definition obscures what you really mean when you say 'should'. You really mean 'what's important', not [the long list of things I think are important]. So why not just define it as that?"
One more way to describe this idea. I ask, "What is morality?", and you say, "I don't know, but I use this brain thing here to figure out facts about it; it errs sometimes, but it can provide limited guidance. Why do I believe this 'brain' is talking about morality? It says it does, and it doesn't know of a better tool presently available for that purpose. By the way, it's reporting that […] are morally relevant, and is probably right."
Where do you get “is probably right” from? I don’t think you can get that if you take an outside view and consider how often a human brain is right when it reports on philosophical matters in a similar state of confusion...
Salt to taste; the specific estimate is irrelevant to my point, so long as the brain is seen as collecting at least some moral information, not as defining the whole of morality. The level of certainty in the brain's moral judgments won't be stellar, but it will be higher for simpler judgments. Here I referred to 'morally relevant', which is a rather weak, matter-of-priority kind of judgment, as opposed to deciding which of the given options is better.
Beautiful. I would draw more attention to the "Why…? It says it does" bit, but that seems right.