This is a valuable post, certainly, and I appreciate you writing it—it lays out some (clearly very relevant) ideas in a straightforward way.
That said, most of this seems to be predicated on “moral uncertainty” being a coherent concept—and, of course, it is (so far, in the sequence) an unexplained one.[1] So, I am not yet quite sure what I think of the substantive points you describe/mention.
Some of the other concepts here seem to me to be questionable (not necessarily incoherent, just… not obviously coherent) for much the same reasons that “moral uncertainty” is. I will refrain from commenting on that in detail, for now, but may come back to it later (contingent on what I think about “moral uncertainty” itself, once I see it explained).
In any case, thank you for taking the time to write these posts!

[1] I know that comes in the next post; I haven’t read it yet, but will shortly. I’m merely commenting as I go along.
Also, just regarding “of course, [moral uncertainty] is (so far, in the sequence) an unexplained [concept]”:
The later posts will further flesh out the concept, provide more examples, etc. But as far as I can tell, there unfortunately isn’t just one neat, simple explanation of the concept that everyone will agree to, that will make self-evident all the important points, and that won’t just rely on other terms that need explaining too. This is partly because the term is used in different ways by different people, and partly because the concept obviously involves morality and thus can’t be fully disentangled from various meta-ethical quagmires.
This is part of why I try to explain the term from multiple angles, using multiple examples, contrasting it with other terms, etc., rather than just being able to say “Moral uncertainty is...”, list four criteria, explain those, and be done with it (or something like that).
But if part of your feelings are premised on being suspicious of non-naturalistic moral realism, then perhaps the post you’ll find most useful will be the one on what moral uncertainty can mean for antirealists and subjectivists, which should hopefully be out early next week.
(I guess one way of putting this is that the explanation will unfold gradually, and really we’re talking about something a bit more like a cluster of related ideas rather than one neat simple crisp thing—it’s not that I’ve been holding the explanation of that one neat simple crisp thing back from readers so far!)
But if part of your feelings are premised on being suspicious of non-naturalistic moral realism, then perhaps the post you’ll find most useful will be the one on what moral uncertainty can mean for antirealists and subjectivists, which should hopefully be out early next week.
That’s certainly a big part of it (see my reply to sibling comment for more). It’s not all of it, though. I listed some questions in my initial comment asking for an explanation of what moral realism is; I’ll want to revisit them (as well as a couple of others that’ve occurred to me), once the entire sequence (or, at least, this upcoming post you mention) is posted.
(I guess one way of putting this is that the explanation will unfold gradually, and really we’re talking about something a bit more like a cluster of related ideas rather than one neat simple crisp thing—it’s not that I’ve been holding the explanation of that one neat simple crisp thing back from readers so far!)
Certainly understandable.
Although—if, indeed, the term is used in different ways by different people (as seems likely enough), then perhaps it would make sense not to try to explain “moral uncertainty” as a single concept, but instead to clearly separate the concepts labeled by this term into distinct buckets, and explain them separately.
Then again, it’s hard for me to judge any of these explanations too confidently, given, as you say, the “unfolding” dynamic… we will see, I suppose, what I think of the whole thing, when it’s all posted!
(I don’t think there’s a conflict between what I say in this comment and what you said in yours—I think they’re different points, but not inconsistent.)
I think I understand what you mean, and sympathise. To be clear, I’m trying to explain what existing concepts are meant to mean, and how they seem to relate to each other, rather than putting forward new concepts or arguing that these are necessarily good, coherent, useful concepts that carve reality at the joints.
This is why I say “I hope this will benefit readers by facilitating clearer thinking and discussion.” Even if we aren’t sure these are useful concepts, they definitely are used, both on LessWrong and elsewhere, and so it seems worth us getting more on the same page with what we’re even trying to say. That may in turn also help us work out whether what we’re trying to say is actually empty, and pointing to nothing in the real world at all.
I think, though I’m not sure, that a lot of the question of how coherent some of these concepts really are—or how much they point at real things—also comes down to the question of whether non-naturalistic moral realism makes sense or is true. (If it doesn’t, we can still have a coherent concept that we call “moral uncertainty”, along the lines of what coherent extrapolated volition is about, but it seems to me—though I could be wrong—to be something substantively different.)
Personally, I’ve got something like a Pascal’s wager going on that front—it seems hard for me to even imagine what it would mean for non-naturalistic moral realism to be true, and thus very unlikely that it is true, but it seems worth acting as if it’s true anyway. (I’m not sure if this reasoning actually makes sense—I plan to write a post about it later.)
But in any case, whether due to that sort of Pascal’s wager, or a more general sense of epistemic humility (as it seems a majority of philosophers are moral realists), or even to clarify and thus facilitate conversations that may ultimately kill our current concept of moral uncertainty, it seems worth clarifying what we’re even trying to say with the terms and concepts we use.
… it seems hard for me to even imagine what it would mean for non-naturalistic moral realism to be true, and thus very unlikely that it is true …
This is my view also (except that I would probably drop even the “non-naturalistic” qualifier; I’m unsure of this, because I haven’t seen this term used consistently in the literature… what is your preferred reference for what is meant by “naturalistic” vs. “non-naturalistic” moral realism?).
… but it seems worth acting as if it’s true anyway. (I’m not sure if this reasoning actually makes sense—I plan to write a post about it later.)
I would like to read such a post, certainly. I find your comment here interesting, because there’s a version of this sort of view (“worth acting as if it’s true anyway”) that I find to be possibly reasonable—but it’s not one I’d ever describe as a “Pascal’s wager”! So perhaps you mean something else by it, which difference / conceptual collision seems worth exploring.
In any case, I agree that clarifying the terms as they are used is worthwhile. (Although one caveat is that if a term / concept is incoherent, there is an upper limit to how much clarity can be achieved in discerning how the term is used! But even in this case, the attempt is worthy.)
(If [moral realism doesn’t make sense], we can still have a coherent concept that we call “moral uncertainty”, along the lines of what coherent extrapolated volition is about, but it seems to me—though I could be wrong—to be something substantively different.)
This, too, seems worth writing about!
Glad to hear you think so! That’s roughly what the post (mentioned in my other comment) which I hope to finish by early next week will be about.
In any case, I agree that clarifying the terms as they are used is worthwhile. (Although one caveat is that if a term / concept is incoherent, there is an upper limit to how much clarity can be achieved in discerning how the term is used! But even in this case, the attempt is worthy.)
I think that’s true, but also that additional valiant attempts to clarify incoherent terms that still leave them seeming very unclear and incoherent might help us gain further evidence that the terms are worth abandoning entirely. Sort of like just trying a cure for some disease and finding it fails, so we can rule that out, rather than theorising about why that cure might not work (which could also be valuable).
(That said, that wasn’t my explicit intention when I wrote this post—it just came to mind as an interesting possible bonus and/or rationalisation when I read your comment.)
I would like to read such a post, certainly. I find your comment here interesting, because there’s a version of this sort of view (“worth acting as if it’s true anyway”) that I find to be possibly reasonable—but it’s not one I’d ever describe as a “Pascal’s wager”! So perhaps you mean something else by it, which difference / conceptual collision seems worth exploring.
Is your version of this sort of view something more like the idea that it should all “add up to normality” in the end, and that moral antirealism should be able to “rescue” our prior intuitions about morality anyway, so we should still end up valuing basically the same things whether or not realism is true?
If so, that’s also something I find fairly compelling. And I think it’ll often lead to similar actions in effect. But I do expect some differences could occur. E.g., I’m very concerned about the idea of designing an AGI that implements coherent extrapolated volition, even if it all goes perfectly as planned, because I see it as quite possible, and possibly extremely high stakes, that there really is some sort of “moral truth” that’s not at all grounded in what humans value. (That is, something that may or may not overlap or be correlated with what we value, but doesn’t come from the fact that we value certain things.)
I’m not saying I have a better alternative, because I do find compelling the arguments along the lines of “We can’t just tell an AGI to find the moral truth and act on it, because ‘moral truth’ isn’t a clear enough concept and there may be no fundamental thing that matches that idea out there in the world.” But I’d ideally like us to hold back on implementing a strategy based either on moral antirealism or on assuming both moral realism and that the ‘moral truth’ will be naturally findable by an AGI, because I see “moral truth” as at least possibly a coherent and reality-matching concept. (In practice, we may need to just lock something in to avoid some worse lock-in, and CEV may be the best we’ve got. But I don’t think it’s just obvious that that’s definitely all there is to morality, and that we should happily move towards CEV as fast as we can.)
I’m more confident in the above ideas than I am in my Pascal’s wager type thing. The Pascal’s wager type thing is something a bit stronger—not just acting as if uncertain, but acting pretty much as if non-naturalistic moral realism actually is true, because if it is, “the stakes are so much higher” than if it isn’t. This seems to come from me sort of conflating nihilism and moral antirealism (a conflation that’s rejected in various LessWrong posts, and that might differ from standard academic metaethics), but it still seems to me that there might be something to it. But again, these are half-formed, low-confidence thoughts at the moment.
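(To make that “stakes” intuition a bit more concrete, here’s a toy expected-value sketch of the wager; the symbols and framing are purely illustrative, my own rather than anything from the sequence. Let $p$ be the probability that non-naturalistic moral realism is true, $S_R$ how much is at stake if it is, and $S_A$ how much is at stake if it isn’t. Then, roughly:

$$\mathbb{E}[\text{act as if realism}] - \mathbb{E}[\text{act as if antirealism}] \;\approx\; p\,S_R - (1-p)\,S_A \;>\; 0 \;\iff\; p > \frac{S_A}{S_R + S_A}$$

So if $S_R$ vastly exceeds $S_A$, even a very small $p$ favours acting as if realism is true. Of course, this inherits the standard problems of Pascal’s wager, e.g. whether the two kinds of stakes are even commensurable, which is part of why I’m unsure the reasoning makes sense.)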
This is my view also (except that I would probably drop even the “non-naturalistic” qualifier; I’m unsure of this, because I haven’t seen this term used consistently in the literature… what is your preferred reference for what is meant by “naturalistic” vs. “non-naturalistic” moral realism?).
As a general point, I have a half-formed thought along the lines of “Metaethics—and to some extent morality—is like a horrible stupid quagmire of wrong questions, at least if we take non-naturalistic moral realism seriously, but unfortunately it seems like the one case in which we may have to just wade through that as best we can rather than dissolving it.” (I believe Eliezer has written against the second half of that view, but I currently don’t find his points there convincing. But I’m quite unsure about all this.)
The relevance here being that I’d agree that the terms are used far from consistently, and perhaps that’s because we’re just totally confused about what we’re even trying to say.
But that being said, I think a good discussion of naturalistic vs. non-naturalistic realism, and an indication of why I added the qualifier in the above sentences, can be found in footnote 15 of this post. E.g. (but the whole footnote is worth reading):
In general, I agree with the view that the key division in metaethics is between self-identified non-naturalist realists on the one hand and self-identified anti-realists and naturalist realists on the other hand, since “naturalist realists” are in fact anti-realists with regard to the distinctively normative properties of decisions that non-naturalist realists are talking about. If we rule out non-naturalist realism as a position then it seems the main remaining question is a somewhat boring one about semantics: When someone makes a statement of form “A should do X,” are they most commonly expressing some sort of attitude (non-cognitivism), making a claim about the natural world (naturalist realism), or making a claim about some made-up property that no actions actually possess (error theory)?