I think that the central argument for AI risk has the following structure and could be formulated without LW-slang:
1. AI can have unexpected capability gains that put it significantly above the human level.
2. Such an AI can create dangerous weapons (most likely nanobots).
3. The AI will control these weapons.
4. This AI will likely have a dangerous goal system and will not care about human wellbeing.
This formulation doesn’t contain LW slang, but it still uses words that are difficult for average people to understand.