Is there a place in the existential risk community for a respected body or group to evaluate people's ideas and place them on a danger scale, or rate them as dangerous given certain assumptions?
If this body could give normal machine learning a stamp of safety, then people might not have to worry about death threats and the like.