But I doubt that one is likely to be able to formally prove that.
E.g., it is possible that we are in a reality where even very cautious and reasonable, but sufficiently advanced, experiments in quantum gravity lead to disaster.
Advanced systems are likely to reach those capabilities, and they might make very reasonable estimates that it is OK to proceed, but due to the bad luck of being in a particularly unfortunate reality, the "local neighborhood" might get destroyed as a result… One can't prove that this is not the case…
Whereas, if the level of overall intelligence remains sufficiently low, we might never achieve the technical capabilities to get into the danger zone…
It is logically possible that the reality is like that.
Yes, it is. But even if that is the case, by the argument given in this post, there must exist an AI system that avoids the danger zone.
Yes, possibly.
Not by the argument given in the post (considering quantum gravity, one immediately sees how inadequate and unrealistic the model in the post is).
But yes, it is possible that they will be wise enough to remain cautious even in a very unfortunate situation.
Yes, I was trying to explicitly refute your claim, but my refutation has holes.
(I don’t think you have a valid proof, but this is not yet a counterexample.)