Current practice in AI research seems to be to publish everything and take no safety precautions whatsoever, and that is definitely not good.
Most of the companies involved (e.g. Google, James Harris Simons) publish little or nothing relating to their code in this area publicly—and few know what safeguards they employ. The government security agencies potentially involved (e.g. the NSA) are even more secretive.
Simons is an AI researcher? News to me. Clearly his fund uses machine learning, but there is an ocean between that and AGI (besides, plenty of funds use ML too—DE Shaw and many others).