didn’t realize that was not a mainstream position in the EA community.
My impression is that moral realism based on irreducible normativity is more common in the broader EA community than on Lesswrong. But it comes in different versions. I also tend to call it (a version of) “moral realism” if someone believes that humans would reach a strong consensus about human values / normative ethical theories (if only they had ample time to reflect on the questions). Such convergence doesn’t necessarily require irreducibly normative facts about what’s good or bad, but it still sounds like moral realism. The “we strongly expect convergence” position seemed somewhat prevalent on Lesswrong initially, though my impression is that it was more of a probable default assumption than something anyone confidently endorsed, and that people have since tentatively moved away from it.
I’m usually bad at explaining my thoughts too, but I’m persistent enough to keep trying. :P