Yeah, the letter on Time Magazine’s website doesn’t argue very hard that superintelligent AI would want to kill everyone, only that it could kill everyone, and what it would actually take to implement “then don’t make one”.
To be clear, the article’s central assertion is that it more likely than not would want to kill everyone: “[Most likely] literally everyone on Earth will die” is the key claim. Yes, the author doesn’t present a convincing argument for it, and that is my point.