A friendly AI would not make us omnipotent. It would know better than we do, and letting one of us be god in its place would be a mistake.
That said, the effects of a friendly AI would certainly be noticeable, so that point still stands.
However, if a civilization actually did reach a posthuman stage where it could run ancestor simulations, it would also be advanced enough to create a friendly artificial superintelligence (FASI). But that inference is shakier than it looks, for at least three reasons.
First, it is not clear how easy it is to build an artificial superintelligence (ASI). Such a civilization would certainly have enough computing power to run one, but not so much that some trivial brute-force method would produce one. The design problem would remain hard, perhaps beyond their capacity to solve.
Second, they may opt not to create an ASI at all. Perhaps they are too afraid of an unfriendly ASI (UASI). Perhaps they consider it unethical. (But ancestor simulations are fair game. Hypocrites.) Perhaps they have some other reason entirely.
Third, they may create a UASI. They might simply fail in the attempt to make it friendly. Or they might have goals very different from ours and succeed only on their own terms, producing an ASI that is friendly to their goal system but not to ours. Or they might have different goals and fail even at that.
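Taken together, these three branches bound how likely a posthuman civilization is to end up with an FASI: it must be within their capacity, they must choose to try, and the attempt must come out friendly to our values. Here is a minimal sketch of that decomposition; every probability below is hypothetical, chosen purely for illustration.

```python
# Hypothetical decomposition of the argument above. An FASI (from our
# perspective) appears only if all three branches go the right way.
# All numbers are illustrative assumptions, not estimates.
p_asi_feasible = 0.8  # assumed: building ASI is within their capacity
p_attempts_asi = 0.6  # assumed: they choose to attempt it at all
p_friendly     = 0.5  # assumed: the result is friendly to our values

p_fasi = p_asi_feasible * p_attempts_asi * p_friendly
p_no_fasi = 1 - p_fasi

print(f"P(FASI)    = {p_fasi:.2f}")     # 0.24 under these assumptions
print(f"P(no FASI) = {p_no_fasi:.2f}")  # 0.76: the three objections dominate
```

Even fairly generous numbers on each branch leave a sizable chance that no FASI appears, which is all these objections need.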