formulating our goals well enough for use in a formal decision theory is perhaps the last milestone, far outside of what can be reached with a lot of work!
Did you mean to say “without a lot of work”?
(Or did you really mean to say that we can’t reach it, even with a lot of work?)
The latter, where “a lot of work” is the kind of thing humanity can manage in subjective centuries. In an indirect normativity design, doing much more work than that should still be feasible, since the work is only specified abstractly, to be predicted by an AI and then distilled. So we can still reach it, if there is an AI to compute the result. But if there is already such an AI, perhaps the work is pointless, because the AI can carry out the work’s purpose in a different way.