I believe the prevalence of moral realism within EA is risky and bad for EA goals, for several reasons. One is that moral realists tend to believe a positive far future is inevitable (since smart minds will converge on the “right” morality), which leads them to focus on ensuring that the far future exists at the cost of other things.
That focus makes sense if smart minds really will converge on the “right” morality, but I seriously doubt they will. Convergence could happen, but that possibility alone isn’t worth sacrificing other goals, like improving the quality of the future rather than just securing its existence.
For similar reasons, I think trying to figure out the “right” morality is a waste of resources. CEA has expressed the views I argue against here, which concerns me and other EAs.