Yes, we may not live to see superintelligence, because a Tool AI in the hands of a terrorist is enough to kill everybody. In that case a moratorium on AI is bad.
This assumes we get stuck at the level of dangerous Tool AIs, and that superintelligence would not kill us because of some long-term complex reasoning, e.g. a small chance that it is in a testing simulation.
> Yes, we may not live to see superintelligence, because a Tool AI in the hands of a terrorist is enough to kill everybody. In that case a moratorium on AI is bad.
Why is a moratorium bad in that case?
For reference, I slightly disagree with the moratorium, but for different reasons.
> This assumes we get stuck at the level of dangerous Tool AIs, and that superintelligence would not kill us because of some long-term complex reasoning, e.g. a small chance that it is in a testing simulation.