Interesting argument, though I don’t quite agree with the conclusion to stay away from brain-like AGI safety.
I think you could argue that if the assumption holds that AGI will likely be brain-like, it would be very important for safety researchers to look at this perspective before mainstream AI research realizes it.
There is also a point to be made that you could probably tell the safety community about your discovery without speeding up mainstream AI research, but this depends on what exactly the discovery is (i.e. it might work for theoretical work, less so for practical work).
Even if you were very convinced that brain-like AGI is the only way we can get there, it should still be possible to do research that differentially speeds up safety. E.g. if you discovered some kind of architecture that would be very useful for capabilities, you could simply refrain from laying out how it would be useful, and instead work on the assumption that future AI will look that way and base your safety work on it.