Interesting point! I think I see what you mean. “A metaethics [...] where one can be wrong about one’s values” makes sense to me, but in a fuzzy sort of way: I think of metaphilosophy and moral reflection as more an art than a science, and a lot of things are left under-defined.
Is there actually a broad basin of attraction around human values? How do we know or how can we find out?
I recently finished a sequence on metaethics, culminating in this post on moral uncertainty, which contains a bunch of thoughts on this very topic. I don’t necessarily expect them to be new to you, and I suspect that you disagree with some of my intuitions, but you might nonetheless find the post interesting. I cite some of your LessWrong posts and comments in it.
Here’s an excerpt from the post:
Under moral anti-realism, there are two empirical possibilities[10] for the question “When is someone ready to form convictions?” [Endnote 10: The possibilities roughly correspond to Wei Dai’s option 4 on the one hand, and his options 5 and 6 on the other hand, in the post Six Plausible Metaethical Alternatives.] In the first possibility, things work similarly to naturalist moral realism but on a personal/subjectivist basis. We can describe this option as “My idealized values are here for me to discover.” By this, I mean that, at any given moment, there’s a fact of the matter about “What I’d conclude with open-minded moral reflection.” (Specifically, a unique fact – it cannot be that I would conclude vastly different things in different runs of the reflection procedure or that I would find myself indifferent about a whole range of options.)
The second option is that my idealized values aren’t “here for me to discover.” In this view, open-minded reflection is too passive – therefore, we have to create our values actively. Arguments for this view include that (too) open-minded reflection doesn’t reliably terminate; instead, one must bring normative convictions to the table. “Forming convictions,” according to this second option, is about making a particular moral view/outlook a part of one’s identity as a morality-inspired actor. Finding one’s values, then, is not just about intellectual insights.
I will argue that the truth is somewhere in between. Still, the second view, that we have to actively create our (idealized) values, definitely holds to a degree that I often find underappreciated. Admittedly, many things we can learn about the philosophical option space indeed function like “discoveries.” However, because there are several defensible ways to systematize under-defined concepts like “altruism/doing good impartially,” personal factors will determine whether a given approach appeals to someone. Moreover, these factors may change depending on different judgment calls made in setting up the moral reflection procedure or in different runs of it. (If different runs of the reflection procedure produce different outcomes, it suggests that there’s something unreliable about the way we do reflection.)
Apologies for triple-posting, but something quite relevant also occurred to me:
I know of no way to locate “true values” other than “the values that sit within the broad basin of attraction when we attempt moral reflection in the way we’d most endorse.” So, unless there is such a basin, our “true values” remain under-defined.
In other words, I’m skeptical that the concept “true values” would remain meaningful if we couldn’t pick it out via “what reflection (somewhat) robustly converges to.”
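To make “what reflection (somewhat) robustly converges to” a bit more concrete, here’s a deliberately toy sketch (my own stylized model, not a claim about how reflection actually works): treat one “run of reflection” as noisy hill-climbing over candidate value-systems, and compare how much the endpoints differ across many runs. A broad basin of attraction would show up as endpoints clustering tightly despite different starting points; competing basins would show up as scattered endpoints.

```python
import numpy as np

rng = np.random.default_rng(0)

def endorsement(x, landscape):
    """Toy 'degree of reflective endorsement' of a candidate value-system x."""
    if landscape == "broad_basin":
        return -(x - 1.0) ** 2                 # one wide peak: a single attractor
    return np.sin(3 * x) - 0.05 * x ** 2       # several local peaks: competing attractors

def run_reflection(landscape, steps=2000, noise=0.3):
    """One 'run of reflection': noisy hill-climbing from a random starting outlook."""
    x = rng.uniform(-3, 3)
    for _ in range(steps):
        proposal = x + rng.normal(0, noise)    # consider a nearby revision of one's views
        if endorsement(proposal, landscape) >= endorsement(x, landscape):
            x = proposal                       # adopt it if it's endorsed at least as much
    return x

for landscape in ("broad_basin", "competing_basins"):
    endpoints = np.array([run_reflection(landscape) for _ in range(50)])
    print(landscape, "spread of endpoints (std):", round(float(endpoints.std()), 2))
```

Whether actual human reflection looks more like the first landscape or the second is exactly the open empirical question.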
Absent the technology to create copies of oneself/one’s reasoning, it seems tricky to study the degree of convergence across different runs of reflection of a single person. But it’s not impossible to get a better sense of things. One could study how people form convictions and design their reflection strategies, stating hypotheses in advance. (E.g., conduct “moral reflection retreats” within EA [or outside of it!], do in-advance surveys to get a lot of baseline data, then run another retreat and see if there are correlations between clusters in the baseline data and the post-retreat reflection outcomes.)
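For the analysis step of such a study, here’s a minimal sketch of what I have in mind (the file names, survey format, and outcome coding are all hypothetical placeholders, and the specific methods are just one reasonable choice): cluster participants on their baseline survey answers, then test whether cluster membership is associated with post-retreat reflection outcomes.

```python
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

# Hypothetical data: numeric baseline survey answers, plus a categorical
# post-retreat outcome (e.g., which broad moral outlook each participant endorsed).
baseline = pd.read_csv("baseline_survey.csv", index_col="participant_id")
outcomes = pd.read_csv("post_retreat_outcomes.csv", index_col="participant_id")["outcome"]

# Cluster participants on the pre-registered baseline measures.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(baseline.values)

# Test whether baseline cluster membership is associated with reflection outcomes.
aligned_outcomes = outcomes.loc[baseline.index].to_numpy()
table = pd.crosstab(pd.Series(clusters, name="baseline_cluster"),
                    pd.Series(aligned_outcomes, name="outcome"))
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.1f}, p = {p_value:.3f}")
```

With hypotheses about the clusters stated in advance, even a modest sample could give some signal on how strongly reflection outcomes depend on where people start.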