I suspect that, at an evolutionary equilibrium, we wouldn’t have the concept of “morality”. There would be things we would naturally want to do, and things we would naturally not want to do; but not things that we thought we ought to want to do but didn’t.
I don’t know if that would apply to reflective equilibrium.
I think agents in reflective equilibrium would, almost but not quite by definition, lack "morality" in that sense (unsatisfied higher-order desires, though that's definitely not the local common usage of "morality"), except in some very rare equilibria where agents have higher-order desires to remain inconsistent. However, they might value humans having to work to satisfy their own higher-order desires.
This is an excellent question. I think it’s curiosity about where reflective equilibrium would take you.