I missed this post when it was recent, but I’m glad someone referred me to it! I really liked it and it made me more motivated to finalize some posts related to this topic that I’ve long been postponing. After reading this post, I upshifted the importance of discussing other types of normative realism besides moral realism.
As an anti-realist, I feel like you haven’t quite captured what anti-realism combined with an interest in EA and rationality can be like. I have a few comments about that (here and also below other people’s comments).
But I am also very sympathetic to realism and, in practice, tend to reason about normative questions as though I was a full-throated realist. My sympathy for realism and tendency to think as a realist largely stems from my perception that if we reject realism and internalize this rejection then there’s really not much to be said or thought about anything.
That’s interesting! The most intuitively compelling “argument” I have for anti-realism is that it very much feels to me as though there’s nothing worth wanting that anti-realists are missing. I’m pretty sure that you can get to a point where your intuitions also come to reflect that – though I guess one could worry about this being some kind of epistemic drift. That’ll be my ambitious aim with my anti-realism sequence: providing people with enough immersion into the anti-realist framework that it’ll start to feel as though nothing worth wanting is missing. :)
Furthermore, if anti-realism is true, then it can’t also be true that we should believe that anti-realism is true. Belief in anti-realism seems to undermine itself.
This rings hollow to me because you apply the realist sense of “something being true.” Of course anti-realism isn’t true in that way. But everything that you believe for reasons other than “I think this is true in the realist sense” will still remain with you under an anti-realist framework. In other words: as an anti-realist, I’d recommend that you stop caring about “objective reasons.” Most likely you’ll find that you can’t help but still care about what intuitively continues to feel like “reasons.” Then, think of those things as subjective reasons. This will feel like giving up on something extremely important, but it’s worth questioning whether that’s just an intuition rather than an actual loss.
I think that realism warrants more respect than it has historically received in the rationality community, at least relative to the level of respect it gets from philosophers.[17] I suspect that some of this lack of respect might come from a relatively weaker awareness of the cost of rejecting realism or of the way in which belief in anti-realism appears to undermine itself.
I agree that anti-realists (my past and probably still current self included) often don’t pass the Ideological Turing Test. That said, my impression is that the anti-realist perspective is at least as strongly missing in some (usually Oxford-originating) EA circles as the realist perspective is among rationalists.
If we assume that anti-realism is true, though, then we are assuming that there are no such facts. It seems to me like a committed anti-realist could not be in a state of normative uncertainty.
I agree. The closest anti-realist equivalent to moral uncertainty is what Brian Tomasik has called “valuing moral reflection.” Instead of having in mind a goal that’s fleshed out in direct terms, people might work toward an indirect goal of improved reflection, with the aim to eventually translate that into a direct goal. The important difference compared to the picture with moral realism is that not all the implications of valuing moral reflection are intuitive, and therefore, it’s not a “forced move.” Peer disagreement also works differently. (I don’t update toward the career choices of MMA fighters because I don’t think my personality is suitable for that type of leisure activity or profession, but I do update toward the life choices of people who are similar to me in certain relevant senses.) I think this (improved clarity about ways of being morally uncertain) is probably the major way in which getting metaethics right has action-guiding consequences. If I’m right about anti-realism, then people who consider themselves morally uncertain might not realize that they would have to cash this state of uncertainty out in some specific sense of “valuing moral reflection,” or that they might have underdetermined values. Perhaps underdetermined values are fine/acceptable – but that seems like the type of question that I, at least, would want to explicitly think about before implicitly deciding. (And for what it’s worth, I think there are quite strong reasons to value moral reflection to some degree as an anti-realist. I just think it’s complicated and not obvious, and people will likely come down on different sides on this if they realize that there’s a very real sense in which they are forced to take a stance on the object level, rather than taking what seems like the safe default of “being uncertain.”)
It rejects the idea of “shoulds” and points out that there aren’t “any oughtthorities to ordain what is right and what is wrong.” But then it seems to draw normative implications out of these attacks: among other implications, you should “just do what you want.” At least taken at face value, this line of reasoning wouldn’t be valid. It makes no more sense than reasoning that, if there are no facts about what we should do, then we should “just maximize total hedonistic well-being” or “just do the opposite of what we want” or “just open up souvenir shops.” Of course, though, there’s a good chance that I’m misunderstanding something here.
Argh! :D I think you might indeed be misunderstanding the point. I don’t think Nate gives “do what you want” as some kind of normative advice. Instead, I’m pretty sure this is meant in the “trivial” sense that people will by definition always do what they want, so they can continue to listen to their intuitions and subjective reasons without having to worry that they need to reach the exact same conclusions as everyone else. Nate is using the word “should” in the anti-realist sense. You’re still trying to interpret his statement with the realist “should” in mind – but anti-realists never use that type of “should.” (But maybe you were perfectly aware of that and you still insist on the realist sense of “should” because to you it seems like everything else doesn’t really matter? I often feel like the differences between realists and anti-realists come down to intuitions like that.)
If this apparently anti-realist stance is widely held, then I don’t understand why the community engages so heavily with normative decision theory research or why it takes part in discussions about which decision theory is “correct.” It strikes me a bit like an atheist enthusiastically following theological debates about which god is the true god. But I’m mostly just confused here.[12][13]
I agree that this looks interesting, and that it’s not trivial to explain why exactly an anti-realist would care. But ultimately, I think the explanation is perfectly intuitive. People in the rationalist community like to systematize, and decision theory is about systematizing. People have intuitions about the best way to carve out useful concepts. I get a rewarding sense of insight when I can disentangle the different ways in which things like causality are or aren’t relevant to my intuitions about caring about real-world outcomes. There’s a lot of progress to be made in philosophy at the level of carving out useful distinctions, without necessarily taking normative stances. People often do take normative stances, but many times that’s not even the most interesting bit. Anyway, decision theory is like cocaine for a certain type of intellectually curious person, and there’s a chance it’ll be relevant to real-world outcomes involving happiness and suffering. So thinking about it makes for a better, more existentially satisfying life project than many other things (for the right type of person).
From one of the footnotes:
I think this attitude is in line with the viewpoint that Luke Muehlhauser expresses in his classic LessWrong blog post on what he calls “pluralistic moral reductionism.” PMR seems to me to be the view that: (a) non-naturalist realism is false, (b) all remaining meta-normative disputes are purely semantic, and (c) purely semantic disputes aren’t terribly substantive and often reflect a failure to accept that the same phrase can be used in different ways. If we define the view this way, then, conditional on non-naturalist realism being false, I believe that PMR is the correct view. I believe that many non-naturalist realists would agree on this point as well.
I agree. This is a very minor point, but I feel like it’s worth pointing out that premise (b) (“all remaining meta-normative disputes are purely semantic”) might be something that people could somewhat legitimately disagree with. I personally think premise (b) is obviously correct, but I’m always more “black and white” on questions like these than a lot of people whose reasoning I hold in high regard. The point I’m trying to make is that if you deny (b), you get a kind of interesting naturalist metaethical position that’s different from, and seemingly more “realist” than, PMR. It seems to me that we can imagine a world where people (for some reason or another) just end up agreeing with each other on basically all normative questions. In that world, it would empirically be the case that whenever there are normative disagreements, they tend to eventually get resolved one way or another once certain misunderstandings are pointed out. Of course, if the hypothesis is spelled out this way, it seems relatively clear that it’s a very ambitious claim. That’s why I think the hypothesis is wrong and (b) holds. But quite a few people seem to think that if only we thought properly about the intrinsically motivating aspects of positive experiences, we’d all come to see that they are what matters, and from that, we could draw further conclusions toward a morality that will seem universally compelling to people who aren’t somehow conceptually confused. I think it’s worth having a name for that hypothesis. (In my introduction to moral realism, I called it “One Compelling Axiology,” but I’m not sure I like the name, and I’m also a bit unhappy with how I explained the position in that post.)
Edited to add: I think Wei Dai has also described this position in his post about six metaethical possibilities, but I don’t think he gave it a name there.
Just wanted to say I really appreciate you taking the time to write up such a long, clear, and thoughtful response!
(If I have a bit of time and/or need to procrastinate on something in the near future, I may write up a few further thoughts under this comment.)