Aren’t you confusing moral realism with moral certainty?
Being uncertain about the truth-value of a moral proposition is quite compatible with moral realism.
And only compatible with non-relativistic moral cognitivism. If moral propositions can’t be false, there’s nothing to be uncertain about. If a moral truth just amounts to my belief or my society’s belief, and I know what that belief is (which I do), then again uncertainty is out of place.
I may be misunderstanding you, but it seems I have the same objection to this assertion that I raised here. That is, I don’t necessarily know my own moral beliefs, let alone my society’s. (Of course, that’s not to say that you don’t; if all you meant by “which I do” was a claim about torekp’s knowledge, I withdraw my objection.)
If one has moral certainty and a moral anti-realist position, one is deeply confused. As far as I see, moral certainty requires a commitment to moral realism.
More generally, the parent to my comment asserted moral anti-realism was an intellectual precursor to totalitarianism. Because I’m not aware of any totalitarian regime that wasn’t a moral certainty regime (and therefore a moral realist regime), I am confused about how a contrary philosophical position can be seen as an ideological precursor to totalitarianism.
It only seems that way to you because you’ve retained enough meta-moral realism to believe that there’s something wrong with being inconsistent about your position on morality.
The ability to recognize logical consistency is from moral realist thought?
The ability to recognize logical inconsistency about morality is from meta-moral realist thought.
I think that if one has moral uncertainty and a moral anti-realist position, one is also deeply confused.
Both certainty and uncertainty imply that there’s something real to be certain or uncertain about.
Er? I can believe that there’s no intersubjective or objective fact of the matter as to whether an act is right or wrong, merely an algorithm in my mind that makes moral judgments, and also not know whether I think rescuing kittens from a flood is right or wrong. I suppose I’m confused in that case about kitten-rescuing, but I’m not sure that counts as “deeply confused.”
If morality is just whatever is returned by the algorithm in your mind that makes moral judgments, then when that algorithm returns “no result”, that is itself a result; what is there that you do not know about the subject?
This can be contrasted with an algorithm in your mind designed to calculate objectively real things like prime numbers: in that case you can still express uncertainty about whether 5915587279 is a prime number, because primes are a real thing with an objective definition and not just “whatever my mind considers to be prime”.
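A minimal sketch of that contrast, in Python (the function and the use of trial division are my own illustration, not anything from the discussion): because “prime” has an objective definition, a mechanical procedure settles the question for any particular number, and my uncertainty beforehand is uncertainty about a fact that holds independently of my judgment.

```python
# Illustrative sketch only (my own example, not from the discussion above):
# "prime" has an objective definition, so a mechanical check settles the
# question for any particular number, whatever my prior intuitions were.

def is_prime(n: int) -> bool:
    """Deterministic trial division: True iff n is prime by the standard definition."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Before running this I can be genuinely uncertain about the answer;
# after running it, the definition (not my judgment) has settled it.
print(is_prime(5915587279))
```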
If the algorithm in my mind returns “gee, I’m not sure… there are wrong things about X, and there are right things about X, and mostly it seems like I have to think about it more in order to be sure,” one possible interpretation of that result (and, in fact, the one I’m likely to provisionally adopt, as in my experience it often turns out to be true) is that if I think about it more, and more carefully, I will know more about my moral judgments about X than I do at that moment.
Sure, it’s possible that this is confabulation, and that what I experience as “not knowing what my judgment is” is really “not having yet made a judgment”. I’m not sure that distinction actually matters, though.
Note, also, that there is a difference between what I said (that morality is “an algorithm in my mind”) and what you said (morality is “whatever is returned by the algorithm in your mind”). I don’t know if that distinction matters, either, but it seems related… you are focused on an answer to a specific question in isolation, I am focused on the process that generates answers to a class of questions, often over time.
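A loose analogy in code for the distinction being drawn here, with entirely hypothetical names and made-up weights: the “algorithm in my mind” is the whole procedure, not any single return value, and an early “undecided” is a result that further deliberation may later revise.

```python
# A loose analogy, with hypothetical names and made-up weights: the "algorithm
# in my mind" is this whole procedure, not any single return value. One call
# may come back "undecided"; calling it again with a larger deliberation
# budget is what "thinking about it more" looks like in the analogy.

from typing import Literal

Verdict = Literal["right", "wrong", "undecided"]


def moral_judgment(considerations: list[float], effort: int) -> Verdict:
    """Weigh the first `effort` considerations; stay undecided if they roughly balance."""
    score = sum(considerations[:effort])
    if abs(score) < 1.0:  # too close to call with only this much deliberation
        return "undecided"
    return "right" if score > 0 else "wrong"


# Same question, increasing deliberation: the early "undecided" answers are
# results, but they are not the process's last word on the matter.
kitten_rescue = [0.4, -0.2, 0.3, 0.9, 0.5]
for effort in range(1, len(kitten_rescue) + 1):
    print(effort, moral_judgment(kitten_rescue, effort))
```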
Why? There isn’t anything incoherent about assigning a non-zero or non-one probability to a proposition F that states that a sentence G is or is not propositional.
I suppose we should divide moral uncertainty into two categories:
a) Non-certainty about whether there’s some (positive, negative or zero) “real moral value” attached to a given action X.
b) Given that such a value exists, non-certainty about its value.
So far I had considered moral uncertainty to just mean (b), but it can of course mean (a) as well; you’re correct about that.
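For what it’s worth, here is one minimal way the two kinds of non-certainty could compose numerically, with all probabilities made up purely for illustration: a credence for (a) that a real moral value attaches to action X at all, and a credence distribution for (b) over what that value is, given that it exists.

```python
# All numbers made up purely for illustration, for some action X:
#   (a) credence that a real moral value attaches to X at all, and
#   (b) a credence distribution over that value, given that it exists.

p_value_exists = 0.75  # kind-(a) uncertainty

# Kind-(b) uncertainty: conditional credences over possible values.
value_distribution = {
    -1.0: 0.25,  # clearly negative
     0.0: 0.25,  # zero
    +1.0: 0.50,  # clearly positive
}

conditional_expectation = sum(v * p for v, p in value_distribution.items())

# Unconditional expectation, counting "no real value" as contributing nothing.
expected_moral_value = p_value_exists * conditional_expectation

print(conditional_expectation)  # 0.25
print(expected_moral_value)     # 0.1875
```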