If we remain stuck at the level of dangerous Tools, we are assuming that a superintelligence will not kill us because of some long-term, complex reasoning on its part, e.g. a small chance that it is inside a testing simulation.