I would like to read such a post, certainly. I find your comment here interesting, because there’s a version of this sort of view (“worth acting as if it’s true anyway”) that I find to be possibly reasonable—but it’s not one I’d ever describe as a “Pascal’s wager”! So perhaps you mean something else by it, and that difference / conceptual collision seems worth exploring.
Is your version of this sort of view something more like the idea that it should all “add up to normality” in the end, and that moral antirealism should be able to “rescue” our prior intuitions about morality anyway, so we should still end up valuing basically the same things whether or not realism is true?
If so, that’s also something I find fairly compelling. And I think it’ll often lead to similar actions in effect. But I do expect some differences could occur. E.g., I’m very concerned about the idea of designing an AGI that implements coherent extrapolated volition, even if it all goes perfectly as planned, because I see it as quite possible, and possibly extremely high stakes, that there really is some sort of “moral truth” that’s not at all grounded in what humans value. (That is, something that may or may not overlap or be correlated with what we value, but doesn’t come from the fact that we value certain things.)
I’m not saying I have a better alternative, because I do find compelling the arguments along the lines of “We can’t just tell an AGI to find the moral truth and act on it, because ‘moral truth’ isn’t a clear enough concept and there may be no fundamental thing that matches that idea out there in the world.” But I’d ideally like us to hold back on trying to implement a strategy based on moral antirealism, or on assuming moral realism plus the assumption that the ‘moral truth’ will be naturally findable by an AGI, because I see “moral truth” as at least possibly a coherent and reality-matching concept. (In practice, we may need to just lock something in to avoid some worse lock-in, and CEV may be the best we’ve got. But I don’t think it’s obvious that that’s definitely all there is to morality, and that we should happily move towards CEV as fast as we can.)
I’m more confident in the above ideas than I am in my Pascal’s wager type thing. The Pascal’s wager type thing is something a bit stronger—not just acting as if uncertain, but acting pretty much as if non-naturalistic moral realism actually is true, because if it is, “the stakes are so much higher” than if it isn’t. This seems to come from me sort of conflating nihilism and moral antirealism, a conflation that seems to be rejected in various LessWrong posts and that might also differ from standard academic metaethics, but it still seems to me that there might be something to it. But again, these are half-formed, low-confidence thoughts at the moment.