The scary idea states that it is likely that if we create self-improving AI it will consume humanity.
No, it states that we run the risk of accidentally making something that will consume (or exterminate, subvert, betray, make miserable, or otherwise Do Bad Things to) humanity, that looks perfectly safe and correct, right up until it’s too late to do anything about it… and that this is the default case: the case if we don’t do something extraordinary to prevent it.
This doesn’t require self-improvement, and it doesn’t require wiping out humanity. It just requires normal, everyday human error.
Here is Ben’s phrasing:

SIAI’s “Scary Idea”, which is the idea that: progressing toward advanced AGI without a design for “provably non-dangerous AGI” (or something closely analogous, often called “Friendly AI” in SIAI lingo) is highly likely to lead to an involuntary end for the human race.