I think the relevance to AI is that AI might accelerate other kinds of progress more than it accelerates deliberation. [...] And at any rate, that seems like a separate problem from alignment, which really needs to be solved by different mechanisms.
What if the mechanism for solving alignment itself causes differential intellectual progress (in the wrong direction)? For example, suppose IDA makes certain kinds of progress easier than others, compared to a world with no AI, or compared to an AI designed around a different approach to alignment. If that's the case, it seems that we have to solve alignment (in your narrow sense) and differential intellectual progress at the same time, rather than through independent mechanisms. An exception would be if we had some independent solution to differential intellectual progress that could totally overpower whatever influence AI design has on it. Is that what you are expecting?