I want to push back a little against the claim that the bootstrapping strategy (“build a relatively weak aligned AI that will make superhumanly fast progress on AI alignment”) is definitely irrelevant/doomed/inferior. Specifically, I don’t know whether this strategy is good or not in practice, but it serves as a useful threshold for what level/kind of capabilities we need to align in order to solve AI risk.
Yeah, very much agree with all of this. I even think there’s an argument to be made that relatively narrow-yet-superhuman theorem provers (or other research aids) could be worth the risk to develop and use, because they may make the human alignment researchers who use them more effective in unpredictable ways. For example, researchers tend to instinctively avoid considering solution paths that are bottlenecked by statements they see as being hard to prove — which is totally reasonable. But if your mentality is that you can just toss a super-powerful theorem-prover at the problem, then you’re free to explore concept-space more broadly since you may be able to check your ideas at much lower cost.
(Also find myself agreeing with your point about tradeoffs. In fact, you could think of a primitive alignment strategy as having a kind of Sharpe ratio: how much marginal x-risk does it incur per marginal bit of optimization it gives? Since a closed-form solution to the alignment problem doesn’t necessarily seem forthcoming, mapping the efficient frontier of such strategies might be the next best thing.)
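To make the Sharpe-ratio analogy concrete, here’s a toy sketch. Everything in it — the function name, the two hypothetical strategies, and all the numbers — is invented purely for illustration, not a real estimate of anything:

```python
# Toy illustration of the "alignment Sharpe ratio" analogy.
# All strategy names and numbers below are made up for illustration.

def alignment_sharpe(marginal_optimization_bits: float,
                     marginal_x_risk: float) -> float:
    """Bits of useful optimization gained per unit of marginal x-risk.

    Higher is better: lots of capability gained for little added risk.
    """
    if marginal_x_risk <= 0:
        raise ValueError("marginal x-risk must be positive")
    return marginal_optimization_bits / marginal_x_risk

# Two hypothetical strategies with invented numbers:
strategies = {
    "narrow theorem prover": alignment_sharpe(10.0, 0.01),   # 1000.0
    "general research agent": alignment_sharpe(50.0, 0.25),  # 200.0
}

# The strategy with the best ratio sits on the "efficient frontier"
# of this (tiny, fictional) set of options.
best = max(strategies, key=strategies.get)
print(best)
```

On these made-up numbers, the narrow tool wins despite giving less raw optimization, which is the intuition the analogy is pointing at: what matters is the ratio, not the absolute capability.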