Generally the way that people solve hard problems is to solve related easy problems first, and this is true even if the technology in question gets much more powerful.
Sure, but if you want to do this kind of research, you should do it in a way that does not end up making the situation worse by helping “the AI project” (the irresponsible AI labs burning the time we have left before AI kills us all). That basically means keeping your research results secret from the AI project. Merely refraining from publishing your results is insufficient IMHO, because employees in the private sector are free to leave your safety lab and go work for an irresponsible lab. It would be quite helpful here if an organization doing the kind of safety research you want had the same level of control over its employees that secret military projects currently have: namely, the ability to credibly threaten employees with decades in jail if they bring the secrets they learned in your employ to organizations that should not have them.
The way things are now, the main effect of the AI safety project is to give unintentional help to the AI project IMHO.