I’m not sure what “arbitrary” means here. You don’t seem to be using it in the sense that all preferences are arbitrary.
That seems to be exactly how he’s using it. It’s how I’d respond, had I not worked it through already. But there is a difference between arbitrary in “the difference between an 8.5 month fetus and a 15 day infant is arbitrary” and “the decision that killing people is wrong is arbitrary”.
Yes, at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.
There’s a lot more about this in the whole sequence on metaethics.
I am generally confused by the metaethics sequence, which is why I didn’t correct Pengvado.
at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.
Agreed, as long as you have found a consistent set of arbitrary principles to cover the whole moral landscape. But since our preferences are given to us, broadly, by evolution, shouldn’t we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?
So when we adjust to a new location in the moral landscape and the logician asks us to justify our movement, it seems that, generally, the correct answer would be to shrug and say, ‘My preferences aren’t logical. They evolved.’
If there’s a difference between two positions in the moral landscape, we needn’t justify our preference for one position. We just pick the one we prefer. Unless we have a preference for consistency of our principles, in which case we build that into the landscape as well. So the logician could pull you to an (otherwise) immoral place in the landscape unless you decide you don’t consider logical consistency to be the most important moral principle.
But since our preferences are given to us, broadly, by evolution, shouldn’t we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?
Yes.
I have a strong preference for a simple set of moral preferences, with minimal inconsistency.
I admit that the idea of holding “killing babies is wrong” as a separate principle from “killing humans is wrong”, or holding that “babies are human” as a moral (rather than empirical) principle simply did not occur to me. The dangers of generalizing from one example, I guess.