This is a really nice and useful article. I particularly like the list of problems AI experts assumed would be AI-complete, but turned out not to be.
I’d add that if we are trying to reach the conclusion that “we should be more worried about non-general intelligences than we currently are”, then you don’t need it to be true that general intelligences are really difficult. It would be enough that “there is a reasonable chance we will encounter a dangerous non-general one before a dangerous general one”. I’d be inclined to believe that even without any of the theorising about possibility.
I think one reason for the focus on ‘general’ in the AI Safety community is that it is a stand-in for the observation that we are not worried about path planners, chess programs, self-driving cars, etc. One way to say this is that these are specialised systems, not general ones. But you rightly point out that it doesn’t follow that we should only be worried about completely general systems.