So to transfer back from the analogy, you are saying the uncertainty is in “maybe it’s not possible to create a God-like AI” and “maybe people won’t create a God-like AI” and “maybe a God-like AI won’t do anything”?
Another one, corresponding in the analogy to the chemical not being reactive at all, is the possibility that even very strong AIs are fundamentally easy to align by default, for any number of reasons.