You list several ways in which working directly on the problem might not be the best approach. But somebody competent who is trying to solve the problem would consider these possibilities and make use of them.
I agree that sometimes a promising path is discovered by accident, one you could not have planned to discover. Or could you? Even if you can't predict which path will reveal itself, you can be aware that there are paths that will reveal themselves in circumstances you cannot predict. You can still plan to do some specific kind of “random stuff” that you expect might lead somewhere you did not think of before.
There are circumstances where you discover something by accident, without ever having thought of the possibility (or considered that it might be worth investigating). Even then, I expect that somebody who is trying to solve AI alignment will make better use of the opportunity to make progress on it.
To me, it seems that the more competent you are, the better trying to achieve X works as a way of achieving X. “Trying” here includes strategies like not trying hard to solve X directly, insofar as that seems useful. The worse you are at optimizing, the more valuable it is to do random stuff, since you might stumble upon solutions your deliberate optimization process could not find.