Suppose you have immediate instinctive reactions of approval and disapproval—let’s call these pre-moral judgements—but that your actual moral judgements are formed by some (possibly somewhat unarticulated) process of reflection on these judgements. E.g., maybe your pre-moral judgements about killing various kinds of animal are strongly affected by how cute and/or human-looking the animals are, but after giving the matter much thought you decide that you should treat those as irrelevant.
In that case, you might have a strong reaction to either of those variants of utilitarianism, or for that matter to both of them.
But this is all consistent with there being no Real Moral System That You Actually Use, because those strong reactions are not the same thing as your moral system; you explicitly reject the idea that those pre-moral judgements are the same as actual moral judgements. And there could be situations where you have very strong pre-moral judgements but where, on careful reflection, … you have no idea what, if anything, your Actual Moral Judgement is.
I don’t see the argument. I have an actual moral judgement that painless extermination of all sentient beings is evil, and so is tiling the universe with meaningless sentient beings.
I have no trouble believing that you do, but I don’t understand how that relates to the point at issue here. (I wasn’t meaning to imply that no one has actual moral judgements, at all; nor that no one has actual moral judgements that match their immediate instinctive reactions; if the problem is that it seemed like I meant either of those, then I apologize for being insufficiently clear.)
The argument I was making goes like this:
−1. Scott suggests that there may not be any such thing as his Real Moral System, because different ways of systematizing his moral judgements may be indistinguishable when asked about the sort of question he has definite moral judgements about, but all lead to different and horrifying conclusions when pushed far beyond that.
0. Paul says that if Scott didn’t have a Real Moral System then he wouldn’t be horrified by those conclusions, but would necessarily feel indifferent to them.
1. No: he might well still feel horror at those conclusions, because not having a Real Moral System doesn’t mean not having anything that generates moral reactions; one can have immediate reactions of approval or disapproval to things, but not reflectively endorse them. Scott surely has some kind of brain apparatus that can react to whatever it’s presented with, but that’s not necessarily a Real Moral System because he might disavow some of its reactions; if so, he presumably has some kind of moral system (which does that disavowing), but there may be some questions to which it doesn’t deliver answers.
All of this is perfectly consistent with there being other people whose Real Moral System does deliver definite unambiguous answers in all these weird extreme cases.
I’m not sure what it would even mean to not have a Real Moral System. The actual moral judgments must come from somewhere.
Anyone who makes moral judgements has a Real Moral Something.
But suppose there’s no human-manageable way of predicting your judgements; nothing any simpler or more efficient than presenting them to your brain and seeing what it does. You might not want to call that a system.
And suppose that for some questions, you don’t have an immediate answer, and what answer you end up with depends on irrelevant-seeming details: if we were somehow able to rerun your experience from now to when we ask you the question and you decide on an answer, we would get different answers on different reruns. (This might be difficult to discover, of course.) In that case, you might not want to say that you have a real opinion on those questions, even though it’s possible to induce you to state one.
A high-Kolmogorov-complexity system is still a system.
Good point. Let’s try something else then, vaguely related to my first idea.
Suppose you were given lots of time and information and arguments to ponder: either you would eventually come up with some resolution, or you still wouldn’t. In the former case, I think we’ve found your Actual Moral Judgement (AMJ). In the latter, I’m inclined to say that your AMJ is that the options are morally incomparable: neither is better nor worse than the other.
Of course, this analysis doesn’t help *you* make the decision. It just gives an impartial observer a possible way to understand what you’re doing.
I’m not sure there’s really a difference between “there is no fact of the matter as to whether I prefer A to B morally” and “my moral preference between A and B is that I find them incomparable”.
Note that “incomparable” is not the same thing as “equivalent”. That is, being persistently unable to choose between A and B is not the same as thinking that A and B are exactly equally good. E.g., it could happen that I find A and B incomparable, B and C incomparable, but A definitely better than C.
(E.g., let B be a world in which no one exists and A, C be two different worlds in which there are a lot of happy people and a lot of miserable people, with different ratios of happy to miserable and about the same total number of people. I might be entirely unable to figure out whether I think the existence of happy people makes it OK on balance that there are a lot of miserable people, but I will have no trouble deciding that all else equal I prefer a better happy-to-miserable ratio.)
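To make the shape of that point concrete, here is a minimal sketch (a toy model of my own, not anything anyone in this thread actually proposed): worlds are just (happy, miserable) counts, an empty world is declared incomparable with any populated one, and two populated worlds of about the same size are ranked by their happy-to-miserable ratio. It exhibits exactly the pattern above: A and B incomparable, B and C incomparable, yet A strictly better than C, so incomparability can’t be read as equivalence.

```python
# Toy, hypothetical model of a *partial* moral preference over worlds.
# A world is just (number of happy people, number of miserable people).

from fractions import Fraction
from typing import Optional, Tuple

World = Tuple[int, int]

def prefer(x: World, y: World) -> Optional[bool]:
    """Return True if x is better, False if y is better, None if incomparable."""
    (hx, mx), (hy, my) = x, y
    # An empty world is incomparable with any populated one: this judge has no
    # settled view on whether happy lives can outweigh miserable ones at all.
    if (hx + mx == 0) != (hy + my == 0):
        return None
    if hx + mx == 0 and hy + my == 0:
        return None  # two empty worlds: nothing to choose between
    # Two populated worlds: all else equal, the better happy-to-miserable ratio wins.
    rx, ry = Fraction(hx, hx + mx), Fraction(hy, hy + my)
    if rx == ry:
        return None
    return rx > ry

A = (60, 40)   # populated world with the better ratio
B = (0, 0)     # empty world
C = (40, 60)   # populated world, same total size, worse ratio

print(prefer(A, B))  # None  -> A and B incomparable
print(prefer(B, C))  # None  -> B and C incomparable
print(prefer(A, C))  # True  -> but A is definitely better than C
```

If incomparability were just equivalence it would be transitive, and A incomparable with B plus B incomparable with C would force A incomparable with C; the sketch shows that nothing of the sort need hold.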
Further: one way in which there being no fact of the matter as to which I prefer might manifest itself is that if you took multiple copies of me and gave them all more or less the same time and information and arguments, they might end up coming to substantially different resolutions even though there wasn’t much difference in the information they were presented with. (Perhaps none at all; it might depend on irrelevancies like my mood.) In that case, defining my “actual moral judgement” in terms of what I “would” decide in those situations would be problematic.
Definitely “incomparable” fails to imply “equivalent”. But still, where two options are incomparable according to your morality you can’t use your morality to make the decision. You’ll have to decide on some other basis, or (possibly?) no basis at all. To my mind this seems like an important fact about your morality, which the sentence “my moral preference between A and B is that they’re incomparable” captures nicely.
I think we’re disagreeing only on terminology here. It’s certainly an important fact about your morals whether or not they deliver an answer to the question “A or B?”—or at least, it’s important in so far as choosing between A and B might be important. I think that if it turns out that they don’t deliver an answer, it’s OK to describe that situation by saying that there isn’t really such a thing as your Real Actual Moral Judgement between A and B, rather than saying that there is and it’s “A and B are incomparable”. Especially if there are lots of (A,B) for which this happens (supporting the picture in which there are great seas of weird situations for which your moral intuitions and principles fail, within which there’s an island of “normality” where they are useful), and especially if the way it feels is that you have no idea what to think about A and B, rather than that you understand them clearly and can see that there’s no principled way to decide between them (which is often how it feels).