I don’t see the argument. I have an actual moral judgement that painless extermination of all sentient beings is evil, and so is tiling the universe with meaningless sentient beings.
I have no trouble believing that you do, but I don’t understand how that relates to the point at issue here. (I wasn’t meaning to imply that no one has actual moral judgements, at all; nor that no one has actual moral judgements that match their immediate instinctive reactions; if the problem is that it seemed like I meant either of those, then I apologize for being insufficiently clear.)
The argument I was making goes like this:

−1. Scott suggests that there may not be any such thing as his Real Moral System, because different ways of systematizing his moral judgements may be indistinguishable when asked about the sort of question he has definite moral judgements about, but all lead to different and horrifying conclusions when pushed far beyond that.

0. Paul says that if Scott didn’t have a Real Moral System then he wouldn’t be horrified by those conclusions, but would necessarily feel indifferent to them.

1. No: he might well still feel horror at those conclusions, because not having a Real Moral System doesn’t mean not having anything that generates moral reactions; one can have immediate reactions of approval or disapproval to things, but not reflectively endorse them. Scott surely has some kind of brain apparatus that can react to whatever it’s presented with, but that’s not necessarily a Real Moral System because he might disavow some of its reactions; if so, he presumably has some kind of moral system (which does that disavowing), but there may be some questions to which it doesn’t deliver answers.
All of this is perfectly consistent with there being other people whose Real Moral System does deliver definite unambiguous answers in all these weird extreme cases.
I’m not sure what it would even mean to not have a Real Moral System. The actual moral judgments must come from somewhere.
Anyone who makes moral judgements has a Real Moral Something.
But suppose there’s no human-manageable way of predicting your judgements: nothing simpler or more efficient than presenting each question to your brain and seeing what it does. You might not want to call that a system.
And suppose that for some questions, you don’t have an immediate answer, and what answer you end up with depends on irrelevant-seeming details: if we were somehow able to rerun your experience from now to when we ask you the question and you decide on an answer, we would get different answers on different reruns. (This might be difficult to discover, of course.) In that case, you might not want to say that you have a real opinion on those questions, even though it’s possible to induce you to state one.
A high-Kolmogorov-complexity system is still a system.