I agree with you that, when it comes to humans, an approximation is totally fine for [almost] all purposes. I'm not sure that this holds when it comes to thinking about potential superintelligent AI, however. If it turns out that even a super high-fidelity multidimensional ethical model still contains inherent self-contradictions, how, if at all, would that impact the Alignment problem, for instance?
Given the current state of AI, I think AI systems are more likely to infer our ethical intuitions by default.