I agree with wedrifid here. We don’t seem to have a valid argument showing that “an upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains”. I would also point out that if the K-complexity of a friendly superintelligence exceeds that bound, then there is no way for us to build one except by luck or by somehow exploiting uncomputable physics: any deterministic, computable procedure we carry out can only produce outputs whose K-complexity is bounded (up to a constant) by the information we started with, so exceeding that bound would require random bits that happen to come out right (i.e., most Everett branches are doomed to fail to build a friendly superintelligence).
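To sketch why luck and uncomputability are the only outs, here is the standard bound (the symbols $F$ and $x$ are my own labels, not anything from the thread): for a computable procedure $F$ applied to input data $x$, the Kolmogorov complexity of the output satisfies

$$
K(F(x)) \le K(x) + K(F) + O(1).
$$

If $x$ is everything encoded in human brains and $F$ is whatever AI-building process we run, then $K(F(x))$ cannot exceed that total by more than a constant. So if the target has higher K-complexity, the gap must be supplied either by random bits that happen to land correctly (luck, succeeding only in a small measure of Everett branches) or by physical processes that are not computable at all.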