I haven’t read the entire piece, but from your excerpt and a quick skim it looks like they are focusing on AI going wrong in prosaic ways, like someone hacking the AI to change what it does, rather than a strong AI causing problems all on its own. I don’t know how much of that is because this is an overview that discusses a variety of technological risks.
Yes, that does seem like the primary focus. However, they cite this article about Stephen Wolfram when saying “It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings.” which suggests that the researchers are at least aware of the wider risks around creating AGI, even if they don’t choose to focus on them.