Sure. So what is it that makes (a) [the fact that you can save a life by donating $500 to VillageReach] normatively justifying, whereas (b) [the fact that you can save a mosquito by donating $2000 to SaveTheMosquitos] is not normatively justifying?
On my naturalist view, the fact that makes (a) but not (b) normatively justifying would be some fact about how the goal we’re discussing at the moment is saving human lives, not saving mosquito lives. That’s a natural fact. So are the facts about how the English language works and how two English speakers are using their terms.
It’s not entirely clear what you’re asking. Two possibilities, corresponding to my above distinction, are:
(1) What (perhaps more general) normatively significant feature is possessed by [saving lives for $500 each] that isn’t possessed by [saving mosquitoes for $2000 each]? This would just be to ask for one’s fully general normative theory: a utilitarian might point to the greater happiness that would result from the former option. Eventually we’ll reach bedrock (“It’s just a brute fact that happiness is good!”), at which point the only remaining question is....
(2) In what does the normative significance of [happiness] consist? That is, what is the nature of this justificatory status? What are we attributing to happiness when we claim that it is normatively justifying? This is where the non-naturalist insists that attributing normativity to a feature is not merely to attribute some natural quality to it (e.g. of “being the salient goal under discussion”—that’s not such a philosophically interesting property for something to have. E.g., I could know that a feature has this property without this having any rational significance to me at all).
(Note that it’s a yet further question whether our attributions of normativity are actually correct, i.e. whether worldly things have the normative properties that we attribute to them.)
I gather it’s this second question you had in mind, but again it’s crucial to carefully distinguish them since non-naturalist answers to the first question are obviously crazy.
On my naturalist view, the fact that makes (a) but not (b) normatively justifying would be some fact about how the goal we’re discussing at the moment is saving human lives, not saving mosquito lives.
What if you actually should be discussing the saving of mosquito lives, but don’t, because humans are dumb?
I take you to mean “what would maximize Luke’s utility function” (knowing that ‘utility function’ is probably just a metaphor when talking about humans) when you say “you actually should...” Of course, my ‘utility function’ is unknown to both of us.
In that case, it would remain true in our hypothetical scenario that I should-HumanLivesAreGood donate to VillageReach (assuming they’re a good charity for saving human lives), while I should-UtilityFunctionLuke donate to SaveTheMosquitos.
(Sorry about formatting; LW comments don’t know how to use underscores, apparently.)
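A minimal sketch of the goal-indexing idea (the goal names follow the “should-HumanLivesAreGood” / “should-UtilityFunctionLuke” labels above; the scoring functions are hypothetical stand-ins, since neither party knows Luke’s actual ‘utility function’): the same two options get different ‘should’ verdicts depending on which goal the ‘should’ is indexed to.

```python
# Toy illustration of a goal-indexed 'should': the verdict depends entirely
# on which goal (scoring function) gets plugged in. The scoring functions
# below are hypothetical stand-ins, not anyone's actual values.

def should_given(goal, options):
    """Return the option that scores highest under the supplied goal."""
    return max(options, key=goal)

options = [
    "donate $500 to VillageReach",
    "donate $2000 to SaveTheMosquitos",
]

# Stand-in for should-HumanLivesAreGood: favors the option that saves human lives.
def human_lives_are_good(option):
    return 1 if "VillageReach" in option else 0

# Stand-in for should-UtilityFunctionLuke: stipulated, purely for illustration,
# to favor mosquitoes (Luke's real 'utility function' is unknown).
def utility_function_luke(option):
    return 1 if "SaveTheMosquitos" in option else 0

print(should_given(human_lives_are_good, options))   # -> donate $500 to VillageReach
print(should_given(utility_function_luke, options))  # -> donate $2000 to SaveTheMosquitos
```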
On my naturalist view, the fact that makes (a) but not (b) normatively justifying would be some fact about how the goal we’re discussing at the moment is saving human lives, not saving mosquito lives.
But the question then is what goal you should have. It is easy to naturalise norms inasmuch as they are hypothetical and indexed to whatever you happen to be doing (if you want to play chess, you should move the bishop diagonally). The issue is how to naturalise categorical ends, the goals you should have and the rules you should be following irrespective of what you are doing.
That’s a natural fact. So are the facts about how the English language works and how two English speakers are using their terms.
Such facts aren’t supernatural. OTOH, they fall on the analytical/a priori side of the fence, rather than the empirical side, and that is an important distinction.
I gather it’s this second question you had in mind, but again it’s crucial to carefully distinguish them since non-naturalist answers to the first question are obviously crazy.
Yup. I’m asking question (2). Thanks again for your clarifying remarks.
What if you actually should be discussing the saving of mosquito lives, but don’t, because humans are dumb?
I think this is a change of subject, but… what do you mean by ‘actually should’?
No idea.