I think in the standard X-risk models that would be a biosafety X-risk. It's a problem, but it has little to do with the alignment problems on which AI Safety researchers focus.