It is unclear how to update moral beliefs if we don’t allow those updates to take place in the context of a background moral theory. But if the agent does have a background theory, it is often quite clear how it should update specific moral beliefs on receiving new information. A simple example: If I learn that there is a child hiding in a barrel, I should update strongly in favor of “I shouldn’t use that barrel for target practice”. The usual response to this kind of example from moral skeptics is that the update just takes for granted various moral claims (like “It’s wrong to harm innocent children, ceteris paribus”). Well, yes, but that’s exactly what “No universally compelling arguments” means. Updating one’s factual beliefs also takes for granted substantive prior factual beliefs—an agent with maximum entropy priors will never learn anything.
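To make that last point concrete, here is a minimal sketch (a toy coin-flip model, with priors chosen purely for illustration and not taken from anything above): an agent whose prior treats every flip as independently 50/50, which is the maximum-entropy prior over flip sequences, predicts exactly the same thing no matter what it has observed, while an agent that also puts some prior weight on simple “biased coin” hypotheses does change its predictions in response to the evidence.

```python
from fractions import Fraction

def prob_next_heads(prior, flips):
    """Posterior predictive P(next flip = H) under a prior over coin biases.

    prior: dict mapping a hypothesised P(heads) to its prior weight.
    flips: observed sequence, e.g. "HHHH".
    """
    weights = {}
    for bias, w in prior.items():
        likelihood = Fraction(1)
        for f in flips:
            likelihood *= bias if f == "H" else 1 - bias
        weights[bias] = w * likelihood
    total = sum(weights.values())
    return sum(bias * w for bias, w in weights.items()) / total

flips = "H" * 10  # ten heads in a row

# Maximum-entropy prior over sequences: every flip is 50/50 regardless of history.
maxent = {Fraction(1, 2): Fraction(1)}

# An Occam-ish alternative (weights are arbitrary for the sketch): most mass on a
# fair coin, some on two simple biased-coin hypotheses.
occam = {Fraction(1, 2): Fraction(1, 2),
         Fraction(9, 10): Fraction(1, 4),
         Fraction(1, 10): Fraction(1, 4)}

print(float(prob_next_heads(maxent, flips)))  # 0.5 -- no amount of data moves it
print(float(prob_next_heads(occam, flips)))   # ~0.9 -- the observations have shifted it
```

Under the maximum-entropy prior the ten heads are dismissed as a fluke every time; only an agent whose prior already favors some substantive hypotheses lets the observations pay rent.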
So basically the argument is: we’ve failed to come up with any foundational or evidential justifications for induction, Occam’s razor, or modus ponens; those things seem objective and true; my moral beliefs don’t have a justification either; therefore my moral beliefs are objective and true?
No, what I gave is not an argument in favor of moral realism intended to convince the skeptic, it’s merely a response to a common skeptical argument against moral realism. So the conclusion is not supposed to be “Therefore, my moral beliefs are objective and true.” The conclusion is merely that the alleged distinction between moral beliefs and factual beliefs (or epistemic normative beliefs) that you were drawing (viz. that it’s unclear how moral beliefs pay rent) doesn’t actually hold up.
My position on moral realism is simply that belief in universally applicable (though not universally compelling) moral truths is a very central feature of my practical theory of the world, and certain moral inferences (i.e. inferences from descriptive facts to moral claims) are extremely intuitive to me, almost as intuitive as many inductive inferences. So I’m going to need to hear a powerful argument against moral realism to convince me of its falsehood, and I haven’t yet heard one (and I have read quite a bit of the skeptical literature).
But that’s a universal defense of any free-floating belief.
For that matter: do you really think the degrees of justification for the rules of induction are similar to those of your moral beliefs?
It’s not a defense of X, it’s a refutation of an argument against X. It claims that the purported argument doesn’t change the status of X, without asserting what that status is.
Well, no, because most beliefs don’t have the properties I attributed to moral beliefs (“...central feature of my practical theory of the world… moral inferences are extremely intuitive to me...”), so I couldn’t offer the same defense, at least not honestly. And again, I’m not trying to convince you to be a moral realist here; I’m explaining why I’m a moral realist, and why I think it’s reasonable for me to be one.
Also, I’m not sure what you mean when you refer to my moral beliefs as “free-floating”. If you mean they have no connection to my non-moral beliefs then the characterization is inapt. My moral beliefs are definitely shaped by my beliefs about what the world is like. I also believe moral truths supervene on non-moral truths. You couldn’t have a universe where all the non-moral facts were the same as this one but the moral facts were different. So not free-floating, I think.
Not sure what you mean by “degree of justification” here.
If you can pin down the fundamentals of rationality, I’d be glad to hear how.
Side conditions can be added, e.g. that intuitions need to be used for something else.
Well, with the addition that moral beliefs, like the others, seem to perform a useful function (though, like the others, this doesn’t seem able to be turned into a justification without circularity).
…or at least no worse off. But if you can solve the foundational problems of rationalism, I’m all ears.
I don’t see a good alternative to believing in modus ponens. Not believing that my moral values are also objective truths works just fine, and does so without the absurd free-floating beliefs and other metaphysical baggage.
But as it happens, I think the arguments we do have, for Bayesian epistemology, Occam-like priors, and induction, are already much stronger than the arguments we have that anyone’s moral beliefs are objective truths.
Really? I’d love to see them. I suspect you’re so used to using these things that you’ve forgotten how weak the arguments for them actually are.
Works at what?
That depends on how hard you test it: Albert thinks Charlie has committed a heinous sin and should be severely punished; Brenda thinks Charlie has engaged in a harmless peccadillo and should be let go. What should happen to Charlie?
The same way morality works for everyone else. I’m not biting any bullets.
Objectively: there is no fact of the matter. Subjectively: you haven’t given me any details about what Charlie did.
One of the things it works for is assigning concrete, objective punishments and rewards. If there is no objective fact of the matter about moral claims, there is none about who gets punished or rewarded, yet these things still happen, and happen unjustifiably on your view. Your view doesn’t work to rationally justify and explain actual practices.
Why would that help? You would have one opinion, someone else has another. But Charlie can’t be in a quantum superposition of jailed and free.
If whether Charlie is punished or not is entirely up to me then if I think he deserves to be punished I will do so; if I don’t I will not do so. If I have to persuade someone else to punish him, then I will try. If the legal system is doing the punishing then I will advocate for laws that agree with my morals. And so on.
No. There is no objective fact about who ought to get punished or rewarded. Obviously people do get punished and rewarded, and this happens according to the moral values of the people around them and the society they live in. In lots of societies there is near-universal acceptance of many moral judgments, and these get codified into norms and laws and so on.
And do you alone get a say (after all, you believe that what you think is right, is right), or does anybody else?
Exactly. My view “works” in that it can rationally justify punishment and reward.