I think that if one has moral uncertainty and a moral anti-realist position, one is also deeply confused.
Both certainty and uncertainty imply that there’s something real to be certain or uncertain about.
Er? I can believe that there’s no intersubjective or objective fact of the matter as to whether an act is right or wrong, merely an algorithm in my mind that makes moral judgments, and also not know whether I think rescuing kittens from a flood is right or wrong. I suppose I’m confused in that case about kitten-rescuing, but I’m not sure that counts as “deeply confused.”
If morality is just whatever is returned by the algorithm in your mind that makes moral judgments, then when that algorithm returns "no result", that is itself a result: what is there you do not know about the subject?
This can be contrasted to an algorithm in your mind designed to calculate objectively real things like prime numbers -- in that case you can still express uncertainty about whether 5915587279 is a prime number, because primes are a real thing with an objective definition and not just “whatever my mind considers to be prime”.
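To make the contrast concrete, here's a minimal sketch (Python, purely illustrative) of a deterministic trial-division primality test. The point is not which verdict it prints, but that the verdict is fixed by the definition of "prime", independent of what anyone's mind happens to judge:

```python
def is_prime(n: int) -> bool:
    # Deterministic trial division: adequate for numbers of this size.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# The answer is fixed by the definition, whatever my intuition says beforehand.
print(is_prime(5915587279))
```

Whatever it prints, my uncertainty beforehand was uncertainty about a fact that exists independently of my judgment, which is exactly the disanalogy with "whatever my mind considers to be prime".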
If the algorithm in my mind returns “gee, I’m not sure… there are wrong things about X, and there are right things about X, and mostly it seems like I have to think about it more in order to be sure,” one possible interpretation of that result (and, in fact, the one I’m likely to provisionally adopt, as in my experience it often turns out to be true) is that if I think about it more, and more carefully, I will know more about my moral judgments about X than I do at that moment.
Sure, it’s possible that this is confabulation, and that what I experience as “not knowing what my judgment is” is really “not having yet made a judgment”. I’m not sure that distinction actually matters, though.
Note, also, that there is a difference between what I said (that morality is “an algorithm in my mind”) and what you said (morality is “whatever is returned by the algorithm in your mind”). I don’t know if that distinction matters, either, but it seems related… you are focused on an answer to a specific question in isolation, I am focused on the process that generates answers to a class of questions, often over time.
Why? There isn’t anything incoherent about assigning a probability other than zero or one to a proposition F that states that a sentence G does, or does not, express a proposition.
I suppose we should divide moral uncertainty into two categories:
a) Non-certainty about whether there’s some (positive, negative or zero) “real moral value” attached to a given action X.
b) Given that such a value exists, non-certainty about its value.
So far I had considered moral uncertainty to mean just (b), but it can of course mean (a) as well; you’re correct about that.