I see, so “if there is convergence” is not a point of theoretical uncertainty, but something that depends on the way the AIs are built. Makes sense (as a position, not something I agree with).
But the whole point of my posting was that, if there is convergence (in the second sense), then those initial values may make very little difference in the outcome of the universe.
I see, so “if there is convergence” is not a point of theoretical uncertainty, but something that depends on the way the AIs are built.
Well, it is both. Convergence in the sense of “outcome is independent of the starting point” has not been proved for any AI/updating architecture. Also, I strongly suspect that the detailed outcome will depend quite a bit on the way AIs interact and produce successors/self-updates, even if the fact of convergence does not.
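To make the distinction concrete, here is a toy sketch of my own (not from the thread) contrasting two value-updating dynamics: both settle down, but only the second converges to an outcome that is independent of the starting values, which is the "second sense" of convergence at issue. The update rules and parameters are illustrative assumptions, not a model of any real AI architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(update, n_agents=5, steps=200, trials=3):
    """Run an update rule from several random initial value vectors
    and return the limiting values for each starting point."""
    outcomes = []
    for _ in range(trials):
        values = rng.uniform(-1, 1, size=n_agents)
        for _ in range(steps):
            values = update(values)
        outcomes.append(np.round(values, 3))
    return outcomes

# Dynamic A: agents repeatedly adopt the group average.
# This converges, but the limit is the mean of the *initial* values,
# so the outcome still depends on where you started.
def averaging(values):
    return np.full_like(values, values.mean())

# Dynamic B: a contraction toward a fixed attractor (arbitrarily 0.5 here).
# This converges to the same point no matter what the initial values were --
# outcome independent of the starting point.
def contraction(values, attractor=0.5, rate=0.1):
    return values + rate * (attractor - values)

print("averaging:  ", run(averaging))    # different limits for different starts
print("contraction:", run(contraction))  # same limit regardless of start
```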