Any chance it could be called AGI Safety instead of AI safety? I think consistently using that terminology would help people know that we are worried about something greater than current deep learning systems and other narrow AI (although investigating safety in these systems is a good stepping stone to the AGI work).
I’ll help out however I can. I think these sorts of meta approaches are a great idea!
No-doom-AGI