Great post!
I like that you point out that we'd normally rely on trial and error, but that this might not work with AI. I think you could make clearer where exactly this fails in your story. You do point out how HLMI might become extremely widespread and might replace most human work, but right now it seems like you argue essentially that the problem is a large-scale accident arising from a distribution shift. That doesn't yet explain why we couldn't, e.g., just continue the trial-and-error process and correct the AI once we notice that something is going wrong.
I think one would need to invoke something like instrumental convergence, goal preservation, and the AI being power-seeking to argue that this isn't just an accident that could be prevented if we gave some more feedback in time. It is important for the argument that the AI is pursuing the wrong goals and thus wouldn't want to be stopped, etc.
Of course, one has to simplify the argument somehow in an introduction like this (and you do elaborate in the appendix), but maybe some argument about instrumental convergence should still be included in the main text.
Yes, after reflection I think this is correct. I had in mind a situation where the training of the AI system simply stops at deployment, but of course this need not be the case. So if training continues, one either needs to argue for stronger reasons why the distribution shift leads to a catastrophe (e.g., along the lines you argue) or make the case that the training signal couldn't keep up with the fast pace of development. The latter would be an outer alignment failure, which I tried to avoid talking about in the text.