We are very unlikely to miss alignment by the margin of a 2x productivity boost; that's not how things end up in the real world. We'll either solve alignment or miss by a factor of >10x.
Why is this true?
Most problems that people work on in research are roughly the right difficulty, because the ambition level is adjusted to be somewhat challenging but not unachievable. If a project is too hard, the researcher just moves on to another one. This is the problem selection process we're used to, and it might bias our intuitions here.
On the other hand, we want to align AGI because it's a really important problem, and we have no control over how difficult it is. And if you think about the distribution of difficulties of all possible problems, it would be a huge coincidence if the problem of aligning AGI, chosen for its importance and not its difficulty, happened to fall within a factor of 2 of the effort we end up being able to put in.
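To make that concrete with made-up numbers (an illustrative assumption of mine, not a claim from the thread): suppose the problem's true difficulty D, relative to the effort E we can actually muster, is log-uniformly distributed over N orders of magnitude. Then the chance that it lands within a factor of 2 of E is

\[
\Pr\!\left(\tfrac{1}{2} \le D/E \le 2\right) \;=\; \frac{\log_{10} 4}{N} \;\approx\; \frac{0.6}{N},
\]

which is only about 10% even for a fairly narrow N = 6, and smaller for wider distributions. So under any assumption where difficulty spans many orders of magnitude, landing in that 2x band really would be a coincidence.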
That argument makes sense, thanks