Great post. I also fear that it may not be socially acceptable for AI researchers to talk about the long-term effects of AI, even though, given exponential progress, most of AI's impact will probably occur in the long term.
I think it’s important that AI safety and AGI-related considerations become mainstream in the field of AI, because it could be dangerous if the people building AGI are not safety-conscious.
I want a world where the people building AGI are also safety researchers, rather than one where AI researchers aren’t thinking about safety and the safety people are shouting over the wall, asking them to build safe AI.
This idea reminds me of how software development and operations were merged into a single DevOps role at software companies.