My personal course of action has been to avoid this topic. I have specific thoughts about what current AI is not doing that bars it from becoming AGI, specific thoughts about which lines of research are most likely to lead to AGI, and arguments for those thoughts, and I’ve decided to keep them to myself. My reasoning: if I’m right, sharing them marginally accelerates AGI development; if I’m wrong, whatever I say is likely neutral; so it’s all downside. However, I keep open the option to change my mind if I encounter safety-related work that hinges on these thoughts (and I’ve hinted at earlier versions of them in previous safety writing here, to make the case for why I think certain things matter to building aligned AGI).