Firstly, I don’t think the term matters that much. Whether you use AGI safety, AI safety, ML safety, etc. seems to have far less of an effect than the actual arguments you make during the conversation (at least that was my impression).
Secondly, I don’t say you should never talk about x-risk. I mostly say you shouldn’t start with it. Many of my conversations ended up in discussions of x-risk, but only after 30 minutes of back and forth.