I’m unclear whether you’re disagreeing with something, but to me your comment reads largely as saying that a lot of probability mass can be assigned before we reach the frontier, and that this is what you think matters most for reasoning about the risks of attempts to build human-aligned AI.
I agree that we can learn a lot before we reach the frontier, but I also think that most of the time we should reason as if we are already on the frontier, and not expect the sudden resolution of questions that would let us get more of everything. To return to one of my examples: given how long the question of moral facts has been studied, we shouldn’t expect to suddenly learn information that lets us make Pareto improvements to our assumptions about them, so we should instead mostly be concerned with marginal trade-offs among the assumptions we make under uncertainty.