So is this “the LLM might say the word ‘nigger’”-AI-safety or “a superintelligence is going to turn us all into paperclips”-AI-safety?
Reducing the harm of irreversible proliferation potentially addresses almost all AI harms, but my motivating concern is x-risk.