Infohazards. Researching capability risks within an AI lab can inspire researchers who hear about your findings to build new capabilities.
Does research really work like this? That is, is only one person capable of coming across a given idea? Usually any discovery has plenty of competitors who are fairly close behind. I doubt that the small number of EA people choosing whether or not to work in safety will have any significant impact on capabilities.
If you're smart and specialised in researching capability risks, it would not be that surprising if you came up with feasible new mechanisms that others were not aware of.
That’s my opinion on this.