(I’m replying to your comment here since I don’t trust personal blogs to stay alive and I don’t want my comments to disappear with them.)
Your point about not giving up too easily seems like a good one. There could well be ideas that are counterintuitive (to most people) but ultimately workable after a lot of effort, as public-key cryptography was in another field I’m familiar with. I also think you’re overly optimistic, but that’s not necessarily a bad thing if it helps you explore areas that others wouldn’t.

But I’m worried that, unlike in typical CS fields, where it’s relatively easy to define technical concepts (and then prove theorems about them) and to run algorithms to test/debug them, the analogous tasks in AI alignment will be many times harder, so we won’t be able to achieve high confidence that something works even if it actually does, or to narrow down the precisely right idea from the neighborhood it sits in. Even in crypto, it took decades to refine the idea of “security” into notions like “indistinguishability under adaptive chosen ciphertext attack” (roughly sketched below) and then to find actually secure algorithms. All of the earliest deployed public-key crypto algorithms were in fact broken, even though they formed the basis for later ones.

If ideas about AI alignment evolve in a similar way (but on an even longer timescale, since the concepts are even harder to define and the experiments even harder to run), it’s hard to see how things will turn out well. And if the best we can achieve in the relevant time frame are plausible alignment ideas or algorithms that are merely “in the right neighborhood”, that could even make things worse than having nothing at all, by making people feel safer about pursuing or deploying AI capabilities, or by reducing investment in other ways of preventing AI risk.
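(For concreteness, here is a rough sketch of the IND-CCA2 game, which gives a sense of how much machinery the eventual definition of “security” needed. The KeyGen/Enc/Dec/Adversary names are hypothetical placeholders standing in for a scheme and an attacker, not any real library’s API.)

```python
# Rough sketch of the IND-CCA2 ("indistinguishability under adaptive
# chosen ciphertext attack") security game. KeyGen/Enc/Dec/Adversary
# are hypothetical placeholders, not a real library's interface.
import secrets

def ind_cca2_game(KeyGen, Enc, Dec, Adversary):
    pk, sk = KeyGen()

    def dec_oracle(c, forbidden=None):
        # The adversary may decrypt anything except the challenge itself.
        if c == forbidden:
            raise ValueError("cannot query the challenge ciphertext")
        return Dec(sk, c)

    # Phase 1: adversary picks two messages, with adaptive access
    # to the decryption oracle.
    m0, m1, state = Adversary.choose(pk, lambda c: dec_oracle(c))

    # Challenge: encrypt one of the two messages at random.
    b = secrets.randbits(1)
    challenge = Enc(pk, [m0, m1][b])

    # Phase 2: more adaptive decryption queries, minus the challenge.
    guess = Adversary.guess(challenge, state,
                            lambda c: dec_oracle(c, forbidden=challenge))

    # The scheme is IND-CCA2 secure only if no efficient adversary can
    # make this return True with probability non-negligibly above 1/2.
    return guess == b
```

Note that the adaptivity (decryption access both before and after the challenge) is exactly the kind of detail the final definition had to pin down; earlier, weaker notions of security left it out.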
(I replied last weekend, but the comment is awaiting moderation.)
Apologies, I stopped getting moderation emails at some point and haven’t yet fixed that properly.
I also commented there last week, and my comment is still awaiting moderation. Maybe we should post our replies here soon?