It sounds as though you’re expecting anti-realists about normativity to tell you some arguments that will genuinely make you feel (close to) indifferent about whether to use Bayesianism, or whether to use induction. But that’s not how I understand anti-realism. The way I would describe it, the primary claim that anti-realism about normativity entails is of a more trivial kind. More something like this:
If anti-realism about normativity is true, in a hypothetical world where your mind worked in some strange way such that you found induction or Bayesianism dumb, then it’s impossible to point out and justify the exact sense in which you would be mistaken by some “universally approved standard.” So the question shouldn’t be “Have I ever seen someone give an argument to start doubting induction?” Rather, I would ask “Have I ever seen someone give a convincing and non-question-begging account of what aliens who don’t believe in induction are doing wrong?”
In practice, the difference between realism and anti-realism only matters in cases where the answer doesn’t feel like the straightforward thing to do anyway. If Bayesianism and induction feel like the straightforward thing for you to do, you’ll use them whether you endorse realism or not. I’d argue that realists therefore shouldn’t use example propositions that provoke universal agreement (at least not as standalone examples) when they want to explain what constitutes an objective reason: by using examples that evoke universal agreement, they’re only pointing at reasons that we can already tell will feel convincing to people. The interesting question for me, as an anti-realist, is what it means for there to be irreducibly normative reasons that go beyond what I personally find convincing. The realists seem to think that, just as in cases where we’re inclined to call a proposition “right” because it feels self-evident to everyone, there’s just as much of a fact of the matter for other propositions about which people will be in seemingly irresolvable disagreement. But I have yet to see how that’s a useful concept to introduce. I just don’t get it.
Edit:
then it’s impossible to point out and justify the exact sense in which you would be mistaken by some “universally approved standard.”
I was strawmanning realism a bit here. Realists readily point out that the sense in which this is a mistake cannot be “explained” (at least not in non-question-begging terminology, i.e., not without the use of normative terminology). So in one sense, realism is simply a declaration that the intuition that some standards apply beyond the personal/subjective level is too important to give up on. But by itself, that declaration doesn’t yet make for a specific position, and it depends on further assumptions whether the disagreement will be only semantic, or also substantive.
It sounds as though you’re expecting anti-realists about normativity to tell you some arguments that will genuinely make you feel (close to) indifferent about whether to use Bayesianism, or whether to use induction.
Hm, this actually isn’t an expectation I have. When I talk about “realists” and “anti-realists,” in this post, I’m thinking of groups of people with different beliefs (rather than groups of people with different feelings). I don’t think of anti-realism as having any strong link to feelings of indifference about behavior. For example: I certainly expect most anti-realist philosophers to have strong preferences against putting their hands on hot stoves (and don’t see anything inconsistent in this).
But I have yet to see how that’s a useful concept to introduce. I just don’t get it.
I guess I don’t see it as a matter of usefulness. I have this concept that a lot of other people seem to have too: the concept of the choice I “should” make or that it would be “right” for me to make. Although pretty much everyone uses these words, not everyone reports having the same concept. Nonetheless, at least I do have the concept. And, insofar as there is any such thing as the “right thing,” I care a lot about doing it.
We can ask the question: “Why should people care about doing what they ‘should’ do?” I think the natural response to this question, though, is just sort of to invoke a tautology. People should care about doing what they should do, because they should do what they should do.
To put my “realist hat” firmly on for a second: I don’t think, for example, that someone happily abusing their partner would in any way find it “useful” to believe that abuse is wrong. But I do think they should believe that abuse is wrong, and take this fact into account when deciding how to act, because abuse is wrong.
I’m unfortunately not sure, though, if I have anything much deeper or more compelling than that to say in response to the question.
Another (significantly more rambling and possibly redundant) thought on “usefulness”:
One of the main things I’m trying to say in the post—although, in hindsight, I’m unsure if I communicated it well—is that there are a lot of debates that I personally have trouble interpreting as both non-trivial and truth-oriented if I assume that the debaters aren’t employing irreducibly normative concepts. A lot of debates about decision theory have this property for me.
I understand how it’s possible for realists to have a substantive factual disagreement about the Newcomb scenario, for example, because they’re talking about something above-and-beyond the traditional physical facts of the case (which are basically just laid out in the problem specification). But if we assume that there’s nothing above-and-beyond the traditional physical facts, then I don’t see what’s left for anyone to have a substantive factual disagreement about.
If we want to ask “Which amount of money is the agent most likely to receive, if we condition on the information that it will one-box?”, then it seems to me that pretty much everyone agrees that “one million dollars” is the answer. If we want to ask “Would the agent get more money in a counterfactual world where it instead two-boxes, but all other features of the world at that time (including the contents of the boxes) are held fixed?”, then it seems to me that pretty much everyone agrees the answer is “yes.” If we want to ask “Would the agent get more money in a counterfactual world where it was born as a two-boxer, but all other features of the world at the time of its birth were held fixed?”, then it seems to me that pretty much everyone agrees the answer is “no.” So I don’t understand what the open question could be. People may of course have different feelings about one-boxing and about two-boxing, in the same way that people have different feelings about (e.g.) playing tennis and playing soccer, but that’s not a matter of factual/substantive disagreement.
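To make those three questions concrete, here’s a minimal numeric sketch of my own (not part of the original discussion), assuming the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff one-boxing was predicted) and a stipulated 99%-accurate predictor:

```python
import random

SMALL, BIG = 1_000, 1_000_000
ACCURACY = 0.99  # assumed predictor accuracy, purely for illustration

def predicted_one_box(disposition: str) -> bool:
    """The predictor matches the agent's disposition with probability ACCURACY."""
    correct = random.random() < ACCURACY
    return correct if disposition == "one-box" else not correct

def payoff(action: str, opaque_box_full: bool) -> int:
    opaque = BIG if opaque_box_full else 0
    return opaque if action == "one-box" else opaque + SMALL

N = 100_000

# Q1: conditioning on the information that the agent one-boxes,
# which payoff is it most likely to receive?
q1 = [payoff("one-box", predicted_one_box("one-box")) for _ in range(N)]
print("Q1 most common payoff:", max(set(q1), key=q1.count))  # ~1,000,000

# Q2: hold the box contents fixed, counterfactually two-box instead.
contents = [predicted_one_box("one-box") for _ in range(N)]
q2_gain = sum(payoff("two-box", c) - payoff("one-box", c) for c in contents) / N
print("Q2 average gain from two-boxing, contents held fixed:", q2_gain)  # = 1,000

# Q3: counterfactually born a two-boxer, so the predictor reacts accordingly.
born_one = sum(payoff("one-box", predicted_one_box("one-box")) for _ in range(N)) / N
born_two = sum(payoff("two-box", predicted_one_box("two-box")) for _ in range(N)) / N
print("Q3 expected payoff, born one-boxer vs born two-boxer:", born_one, born_two)  # ~990,000 vs ~11,000
```

Under these stipulated numbers the three answers come out exactly as described above (“one million dollars,” “yes,” “no”), which is the sense in which the physical facts seem to leave no further factual question open.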
So this is sort of one way in which irreducibly normative concepts can be “useful”: they can, I think, allow us to make sense of and justify certain debates that many people are strongly inclined to have and certain questions that many people are strongly inclined to ask.
But the above line of thought of course isn’t, at least in any direct way, an argument for realism actually being true. Even if the line of thought is sound, it’s still entirely possible that these debates and questions just actually aren’t non-trivial and truth-oriented. Furthermore, the line of thought could also just not be sound. It’s totally possible that the debates/questions are non-trivial and truth-oriented, without invoking irreducibly normative concepts, and I’m just a confused outside observer not getting what’s going on. Tonally, one thing I regret about the way I wrote this post is that I think it comes across as overly skeptical of this possibility.
Hm, this actually isn’t an expectation I have. When I talk about “realists” and “anti-realists,” in this post, I’m thinking of groups of people with different beliefs (rather than groups of people with different feelings). I don’t think of anti-realism as having any strong link to feelings of indifference about behavior.
Yeah, that makes sense. I was mostly replying to T3t’s comment, especially this part:
The only place I can imagine anything similar to an argument against normative realism cropping up would be in a discussion of the problem of induction, which hasn’t seen serious debate around here for many years.
Upon re-reading T3t’s comment, I now think I interpreted them uncharitably. Probably they meant that because induction seems impossible to justify, one way to “explain” this or come to terms with this is by endorsing anti-realism. (That interpretation would make sense to me!)
I guess I don’t see it as a matter of usefulness. I have this concept that a lot of other people seem to have too: the concept of the choice I “should” make or that it would be “right” for me to make. Although pretty much everyone uses these words, not everyone reports having the same concept. Nonetheless, at least I do have the concept. And, insofar as there is any such thing as the “right thing,” I care a lot about doing it.
I see. I think I understand the motivation to introduce irreducibly normative concepts into one’s philosophical repertoire. Therefore, saying “I don’t see the use” was a bit misleading. I think I meant that even though I understand the motivation, I don’t actually think we can make it work. I also kind of see the motivation behind wanting libertarian free will, but I don’t think that works either (and probably you’d agree on that one). So, I guess my main critique is that irreducibly normative concepts won’t add anything we can actually make use of in practice, because I don’t believe that your irreducibly normative concepts can ever be made coherent. I claim that if we think carefully about how words get their meaning, and then compare the situation with irreducibly normative concepts to other words, it’ll become apparent that the irreducibly normative concepts have connotations that cannot go together with each other (at least not under what is, IMO, the proper account of how words get their meaning).
So far, the arguments for my claim are mostly just implicitly in my head. I’m currently trying to write them up and I’ll post them on the EA forum once it’s all done. (But I feel like there’s a sense in which the burden of proof isn’t on the anti-realists here. If I were a moral realist, I’d want to have a good sense of how I could, in theory under ideal conditions, figure out normative truths. Or, if I accept the interpretation that it’s conceivable that humans are forever incapable of figuring out normative truths, I’d at least need to have *some sense* of what it would mean for someone to not be forever incapable of figuring things out. Otherwise, how could I possibly believe that I understand my own concept well enough for it to have any meaning?)
But if we assume that there’s nothing above-and-beyond the traditional physical facts, then I don’t see what’s left for anyone to have a substantive factual disagreement about.
I think it’s true that there’d be far fewer substantive disagreements if more people explicitly accepted anti-realism. I’d consider that a good thing, because then things would feel like progress (but that’s mostly my need for closure talking). That said, I think there are some interesting discussions to be had in an anti-realist framework, but they’d go a bit differently.
So this is sort of one way in which irreducibly normative concepts can be “useful”: they can, I think, allow us to make sense of and justify certain debates that many people are strongly inclined to have and certain questions that many people are strongly inclined to ask.
Sure. In this sense, I’m an error theorist (as you point out as a possibility in your last paragraph). But I think there’s a sense in which that’s a misleading label. When I shifted from realism to anti-realism, I didn’t just shrug my shoulders, think “oh no, I made an error,” and stop being interested in normative ethics (or normative decision theory). Instead, I continued to be very interested in these things, but started thinking about them in different ways. So even though “error theory” is the appropriate label in one way, there’s another sense in which the shift is about how to handle ontological crises.