Hi,
I’ve read some of “Rationality: From AI to Zombies”, and find myself worrying about unfriendly strong AI.
Reddit recently had an AMA with the OpenAI team, where “thegdb” seems to misunderstand the concerns. Another user, “AnvaMiba” provides 2 links (http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better and http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/) as examples of researchers not worried about unfriendly strong AI.
The arguments presented in the links above are really poor. However, I feel like I am attacking a straw man—quite possibly, www.popsci.com is misrepresenting a more reasonable argument.
Where can I find some precise, well thought out reasons why the risk of human extinction from strong AI is not just small, but for practical purposes equal to 0? I am interested in both arguments from people who believe the risk is zero, and people who do not believe this, but still attempt to “steel man” the argument.
Stuart Armstrong asked a similar question a while back. You may find the comments to his post useful.
Thank you. That was exactly what I was after.
The primary disagreement, in the steel man universe, is over urgency. If one knew that we would make AGI in 2200, then one would be less worried about solving the friendliness problem now. If one knew that we would make AGI in 2020, then one would be very worried about solving the friendliness problem now.
For many people who work on AI, it is hard to believe that it will "just start working" at that high level of ability soon, given how over-optimistic AI proponents have been over the years and how hard it is to wring even a bit more predictive accuracy out of the algorithms on their own problems.
But if one takes the position not that it is certain to happen soon, but that it is uncertain when it will happen, then that uncertainty implies it could happen sooner or it could happen later, and that means we need to do some planning for the sooner case. (That is, uncertainty does not imply it can only happen a long time from now.) This, it seems, is the most effective way to communicate with people who aren't worried about Strong AI.
There is also the question of what this type of research should actually look like.
I think that’s an answer to “why aren’t people supporting MIRI’s specific research agenda?” but I see SoerenE’s question as about “is there a good reason to not be worried about AI danger?”
(In the steelman universe, I think people understand that different research priorities will stem from different intuitions and skills, and think that there’s space for everyone to work in the direction that suits them best.)
You might want to start with Bostrom’s Superintelligence: Paths, Dangers, Strategies.