… it seems hard for me to even imagine what it would mean for non-naturalistic moral realism to be true, and thus very unlikely that it is true …
This is my view also (except that I would probably drop even the “non-naturalistic” qualifier; I’m unsure of this, because I haven’t seen this term used consistently in the literature… what is your preferred reference for what is meant by “naturalistic” vs. “non-naturalistic” moral realism?).
… but it seems worth acting as if it’s true anyway. (I’m not sure if this reasoning actually makes sense—I plan to write a post about it later.)
I would like to read such a post, certainly. I find your comment here interesting, because there’s a version of this sort of view (“worth acting as if it’s true anyway”) that I find to be possibly reasonable—but it’s not one I’d ever describe as a “Pascal’s wager”! So perhaps you mean something else by it, which difference / conceptual collision seems worth exploring.
In any case, I agree that clarifying the terms as they are used is worthwhile. (Although one caveat is that if a term / concept is incoherent, there is an upper limit to how much clarity can be achieved in discerning how the term is used! But even in this case, the attempt is worthy.)
(If [moral realism doesn’t make sense], we can still have a coherent concept that we call “moral uncertainty”, along the lines of what coherent extrapolated volition is about, but it seems to me—though I could be wrong—to be something substantively different.)
“(If [moral realism doesn’t make sense], we can still have a coherent concept that we call “moral uncertainty”, along the lines of what coherent extrapolated volition is about, but it seems to me—though I could be wrong—to be something substantively different.)”
This, too, seems worth writing about!
Glad to hear you think so! That’s roughly what the post mentioned in my other comment, which I hope to finish by early next week, will be about.
In any case, I agree that clarifying the terms as they are used is worthwhile. (Although one caveat is that if a term / concept is incoherent, there is an upper limit to how much clarity can be achieved in discerning how the term is used! But even in this case, the attempt is worthy.)
I think that’s true, but also that additional valiant attempts to clarify incoherent terms, if those attempts still leave the terms seeming very unclear and incoherent, might give us further evidence that the terms are worth abandoning entirely. Sort of like simply trying a cure for some disease and finding that it fails, so that we can rule it out, rather than theorising about why that cure might not work (though that could also be valuable).
(That said, that wasn’t my explicit intention when I wrote this post—it just came to mind as an interesting possible bonus and/or rationalisation when I read your comment.)
I would like to read such a post, certainly. I find your comment here interesting, because there’s a version of this sort of view (“worth acting as if it’s true anyway”) that I find to be possibly reasonable—but it’s not one I’d ever describe as a “Pascal’s wager”! So perhaps you mean something else by it, which difference / conceptual collision seems worth exploring.
Is your version of this sort of view something more like the idea that it should all “add up to normality” in the end, and that moral antirealism should be able to “rescue” our prior intuitions about morality anyway, so we should still end up valuing basically the same things whether or not realism is true?
If so, that’s also something I find fairly compelling. And I think it’ll often lead to similar actions in effect. But I do expect some differences could occur. E.g., I’m very concerned about the idea of designing an AGI that implements coherent extrapolated volition, even if it all goes perfectly as planned, because I see it as quite possible, and potentially extremely high-stakes, that there really is some sort of “moral truth” that’s not at all grounded in what humans value. (That is, something that may or may not overlap or be correlated with what we value, but that doesn’t come from the fact that we value certain things.)
I’m not saying I have a better alternative, because I do find compelling the arguments along the lines of “We can’t just tell an AGI to find the moral truth and act on it, because ‘moral truth’ isn’t a clear enough concept and there may be no fundamental thing that matches that idea out there in the world.” But I’d ideally like us to hold back on implementing a strategy based either on moral antirealism or on assuming both moral realism and that the ‘moral truth’ will be naturally findable by an AGI, because I see “moral truth” as at least possibly a coherent and reality-matching concept. (In practice, we may need to just lock something in to avoid some worse lock-in, and CEV may be the best we’ve got. But I don’t think it’s obvious that that’s definitely all there is to morality, and that we should happily move towards CEV as fast as we can.)
I’m more confident in the above ideas than I am in my Pascal’s wager type thing. The Pascal’s wager type thing is something a bit stronger—not just acting as if uncertain, but acting pretty much as if non-naturalistic moral realism actually is true, because if it is, “the stakes are so much higher” than if it isn’t. This seems to come from me sort of conflating nihilism and moral antirealism, a conflation that seems to be rejected in various LessWrong posts and that might also differ from standard academic metaethics, but it still seems to me that there might be something to it. But again, these are half-formed, low-confidence thoughts at the moment.
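(To sketch the structure of that wager in expected-value terms, purely as an illustration with variables of my own choosing rather than anything I think is actually quantifiable: let \(p\) be one’s credence that non-naturalistic moral realism is true, let \(V\) denote the value at stake if it is, and let \(v\) denote the value at stake if it isn’t. Then the expected gain from acting as if realism is true is roughly

\[
\Delta \mathbb{E} \;=\; p\,\bigl(V_{\text{act}} - V_{\text{ignore}}\bigr) \;+\; (1 - p)\,\bigl(v_{\text{act}} - v_{\text{ignore}}\bigr),
\]

and if the realism-is-true stakes \(V_{\text{act}} - V_{\text{ignore}}\) dwarf the realism-is-false stakes \(\lvert v_{\text{act}} - v_{\text{ignore}} \rvert\), as on the nihilism-conflating view where the latter are near zero, then acting as if realism is true can dominate even when \(p\) is small.)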
This is my view also (except that I would probably drop even the “non-naturalistic” qualifier; I’m unsure of this, because I haven’t seen this term used consistently in the literature… what is your preferred reference for what is meant by “naturalistic” vs. “non-naturalistic” moral realism?).
As a general point, I have a half-formed thought along the lines of “Metaethics—and to some extent morality—is like a horrible stupid quagmire of wrong questions, at least if we take non-naturalistic moral realism seriously, but unfortunately it seems like the one case in which we may have to just wade through that as best we can rather than dissolving it.” (I believe Eliezer has written against the second half of that view, but I currently don’t find his points there convincing. But I’m quite unsure about all this.)
The relevance here being that I’d agree that the terms are used far from consistently, and perhaps that’s because we’re just totally confused about what we’re even trying to say.
But that being said, I think a good discussion of naturalistic vs non-naturalistic realism, and an indication of why I added the qualifier in the above sentences, can be found in footnote 15 of this post. E.g. (though the whole footnote is worth reading):
In general, I agree with the view that the key division in metaethics is between self-identified non-naturalist realists on the one hand and self-identified anti-realists and naturalist realists on the other hand, since “naturalist realists” are in fact anti-realists with regard to the distinctively normative properties of decisions that non-naturalist realists are talking about. If we rule out non-naturalist realism as a position then it seems the main remaining question is a somewhat boring one about semantics: When someone makes a statement of form “A should do X,” are they most commonly expressing some sort of attitude (non-cognitivism), making a claim about the natural world (naturalist realism), or making a claim about some made-up property that no actions actually possess (error theory)?