I don’t know. It seems to me that we have to make the graphs of progress in alignment vs. capabilities meet somewhere, and part of that would probably involve really thinking about which parts of which bottlenecks are genuine blockers vs. epiphenomena that just tag along and can be optimised away. For instance, take your statement:
“If research would be bad for other people to know about, you should mainly just not do it.”
Then maybe doing the research but keeping the wrong people from knowing about it is the right intervention, rather than just not doing it at all?