Agreeing with the logic is OK, but the problem with reductionism is that if you draw no lines, you’ll eventually find that there’s no difference between anything.
Thus the basic reductionist/humanist conflict: how does one escape the ‘logic’ and draw a line?
Draw a gradient rather than a line. You don’t need sharp boundaries between categories if the output of your judgment is quantitative rather than boolean. You can assign similar values to similar cases, and dissimilar values to dissimilar cases.
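To make the gradient idea concrete, here is a minimal Python sketch (every name and number in it is invented for illustration, not a proposed moral theory): a boolean rule flips completely at an arbitrary cutoff, while a quantitative judgment assigns similar values to similar cases.

```python
import math

def boolean_judgment(age_days: float) -> bool:
    """Line-drawing: a sharp, arbitrary cutoff at birth (day 0)."""
    return age_days >= 0

def gradient_judgment(age_days: float) -> float:
    """Gradient: a smooth weight in [0, 1] that rises through development.
    The midpoint and steepness are arbitrary illustration parameters."""
    return 1.0 / (1.0 + math.exp(-age_days / 30.0))

# Similar cases get similar values; no single day changes everything.
for age_days in (-15, -1, 0, 1, 15):
    print(age_days, boolean_judgment(age_days), round(gradient_judgment(age_days), 3))
```

The boolean rule treats day -1 and day 1 as maximally different; the gradient assigns them nearly identical values, so there is no single step for a slippery-slope argument to exploit.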
See also The Fallacy of Gray. Now you’re obviously not falling for the one-color view, but that post also talks about what to do instead of staying with black-and-white.
Sure. But I was referring to my worry that if you don’t allow your values to be arbitrary (e.g., I don’t care about protecting fetuses, but I do care about protecting babies), you may find you have none at all. I guess I’m imagining a story in which a logician tries to argue me down a slippery slope of moral nihilism; there’ll be no single step I can point to that I shouldn’t have taken, but I’ll find I’ve stepped too far. When I retreat uphill to where I feel more comfortable, can I expect to have a logical justification?
I’m not sure what “arbitrary” means here. You don’t seem to be using it in the sense that all preferences are arbitrary.
a story in which a logician tries to argue me down a slippery slope of moral nihilism
If the nihilist makes a sufficiently circuitous argument, they can ensure that there’s no step you can point to that’s very wrong. But by doing so, they will make slight approximations in many places. Each such step loses an incremental amount of logical justification, and if you add up all the approximations, you’ll find that they’ve approximated away any correlation with the premises. You don’t need to avoid following the argument too far, if you appropriately increase your error bars at each step.
In short: “similar” is not a transitive relation.

This was rather elegantly put.
Each such step loses an incremental amount of logical justification, and if you add up all the approximations, you’ll find that they’ve approximated away any correlation with the premises. You don’t need to avoid following the argument too far, if you appropriately increase your error bars at each step.
From your answer, I guess that you do think we have ‘justifications’ for our moral preferences. I’m not sure. It seems to me that on the one hand, we accept that our preferences are arational, but then we don’t really assimilate this. (If our preferences are arational, they won’t have logical justifications.)
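The error-bars point above can be made numeric. A hedged sketch in Python, assuming (purely for illustration) that each step of the argument preserves a fixed fraction of its fidelity to the previous step, and that these fidelities multiply along the chain:

```python
# Invented numbers: 99% per-step fidelity, multiplicative accumulation.
per_step_fidelity = 0.99  # no single step is "very wrong"
for n_steps in (1, 10, 100, 500):
    remaining = per_step_fidelity ** n_steps
    print(f"{n_steps:4d} steps -> correlation with the premises ~ {remaining:.3f}")
```

After 500 individually unobjectionable steps, roughly 0.007 of the original correlation remains: the approximations have eaten the conclusion, which is just the arithmetic behind “similar” not being a transitive relation.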
I’m not sure what “arbitrary” means here. You don’t seem to be using it in the sense that all preferences are arbitrary.
That seems to be exactly how he’s using it. It would be how I’d respond, had I not worked it through already. But there is a difference between arbitrary in “the difference between an 8.5-month fetus and a 15-day-old infant is arbitrary” and arbitrary in “the decision that killing people is wrong is arbitrary”.
Yes, at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.
There’s a lot more about this in the whole sequence on metaethics.
I am generally confused by the metaethics sequence, which is why I didn’t correct Pengvado.
at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.
Agreed, as long as you have found a consistent set of arbitrary principles to cover the whole moral landscape. But since our preferences are given to us, broadly, by evolution, shouldn’t we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?
So when we adjust to a new location in the moral landscape and the logician asks us to justify our movement, it seems that, generally, the correct answer would be to shrug and say, ‘My preferences aren’t logical. They evolved.’
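As a toy model of that worry (every scenario, feature, and score below is invented), each evolved heuristic can be locally consistent, comparing options on whichever feature is salient in its context, while the judgments taken together form a cycle:

```python
scenarios = {
    "A": {"kinship": 3, "victims_saved": 1, "vividness": 2},
    "B": {"kinship": 1, "victims_saved": 3, "vividness": 1},
    "C": {"kinship": 2, "victims_saved": 2, "vividness": 3},
}

def prefer(x: str, y: str, salient_feature: str) -> str:
    """A local heuristic: whichever option scores higher on the feature
    that happens to be salient in this context wins."""
    return x if scenarios[x][salient_feature] >= scenarios[y][salient_feature] else y

print(prefer("A", "B", "kinship"))        # A: family context
print(prefer("B", "C", "victims_saved"))  # B: policy context
print(prefer("C", "A", "vividness"))      # C: identifiable-victim context
```

No single pairwise judgment looks wrong, yet A beats B, B beats C, and C beats A: there is no ranking consistent with all three heuristics.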
If there’s a difference between two positions in the moral landscape, we needn’t justify our preference for one position. We just pick the one we prefer. Unless we have a preference for consistency of our principles, in which case we build that into the landscape as well. So the logician could pull you to an (otherwise) immoral place in the landscape unless you decide you don’t consider logical consistency to be the most important moral principle.
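One hedged way to formalize “we build that into the landscape as well” (the scores and weight below are arbitrary assumptions): treat consistency as one more weighted term in how a position is scored, so the logician’s consistent-but-repugnant corner wins only if the consistency weight is made dominant.

```python
def landscape_score(raw_preference: float, inconsistency: float,
                    consistency_weight: float) -> float:
    """Score a position: how much we like it, minus a weighted penalty for
    how inconsistent the principles that lead to it are."""
    return raw_preference - consistency_weight * inconsistency

# The logician's destination: perfectly consistent, strongly dispreferred.
nihilist_corner = landscape_score(raw_preference=0.1, inconsistency=0.0,
                                  consistency_weight=0.5)
# The evolved starting point: well liked, somewhat inconsistent.
evolved_hilltop = landscape_score(raw_preference=0.9, inconsistency=0.4,
                                  consistency_weight=0.5)
print(round(nihilist_corner, 2), round(evolved_hilltop, 2))  # 0.1 0.7
```

With any moderate weight on consistency, the evolved hilltop still scores higher; only someone who makes consistency the dominant term gets pulled to the corner.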
But since our preferences are given to us, broadly, by evolution, shouldn’t we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?
Yes.
I have a strong preference for a simple set of moral preferences, with minimal inconsistency.
I admit that the idea of holding “killing babies is wrong” as a separate principle from “killing humans is wrong”, or holding that “babies are human” as a moral (rather than empirical) principle simply did not occur to me. The dangers of generalizing from one example, I guess.