Your argument boils down to “destroying the world isn’t easy”. Do you seriously believe this? All it takes is hacking the launch codes of a single major nuclear power, thereby triggering mutually assured destruction, then nuclear winter, effectively killing us all with radiation over time.
In fact, you don’t need AGI to destroy the world. You only need a really good hacker, or a really bad president. We’ve been close about a dozen times already, so I hear. If Stanislav Petrov had trusted the computer that reported an incoming nuclear strike with the highest confidence level in 1983, the world would have been destroyed. If all three officers aboard the Soviet submarine during the Cuban Missile Crisis had agreed to launch what they mistakenly thought would be a nuclear counter-strike, the world would have been destroyed. And so on.
Of course there are also other easy ways to destroy the world, but this one is enough to invalidate your argument.
I think it’s a bad title for the post. It shouldn’t be “I don’t believe in doom”, but “I don’t believe in the foom path to doom”. Most of the argument is that it’ll take longer than is often talked about, not that it won’t happen (although the poster does make some claims that slower is less likely to succeed).
The post doesn’t mention all the other ways we could be doomed, with or without AGI.
The post is clearly saying “it will take longer than days/weeks/months SO THAT we will likely have time to react”. Both claims are highly unlikely. It wouldn’t take a proper AGI weeks or months to hack into the nuclear codes of a major power; it would take days or even hours. That gives us no time to react. But the question here isn’t even about time. It’s about something MORE intelligent than us, which WILL overpower us if it wants to, be it on the 1st or the 100th try (nothing guarantees we can turn it off after a first failed strike).
Am I extremely sure that an unaligned AGI would cause doom? No. But being extremely sure of the opposite is just as irrational. It’s called a risk for a reason: it’s something with a certain probability, and given that we should all agree that probability is high enough, we should all take the matter extremely seriously regardless of our differences.
“Am I extremely sure that an unaligned AGI would cause doom?”
If that’s the case, we already agree and I have nothing to add. We might disagree on the relative likelihood, but that’s fine. I do agree that it is a risk and that we should take the matter extremely seriously.
Right then, but my original claim still stands: your main point is, in fact, that it is hard to destroy the world. As I’ve explained (hacking into nuclear codes), that doesn’t make sense. If we create an AI better than us at coding, I have no doubt that it CAN easily do it if it WANTS to. My only doubt is whether it will want to, not whether it will be capable, because, as I said, even a very good human hacker could be capable of it in the future.
In any case, the type of AGI I fear is one capable of Recursive Self-Improvement, which will unavoidably attain enormous capabilities, not some prosaic, non-improving AGI that is merely human-level. Doubting whether the latter could destroy the world is somewhat reasonable; doubting it about the former is not.