I agree. I don’t see how even an FAI could reproduce a model of your brain that is significantly more accurate than a slightly modified standard median mind. Heck, even if an FAI had some parts of your brain preserved and some of your writings (e.g. email) I’m not sure it could reproduce the rest of you with accuracy.
I think this is one of those domains where structural uncertainty plays a large part. If you're talking about a Bayesian superintelligence operating at the physical limits of computation, I'd feel rather uneasy speculating about what limits it could possibly have. In a Tegmark ensemble universe, you get possibilities like 'hacking out of the matrix', acausal trade, or similar AGI meta-golden-rule cooperative optimization, and that's some seriously powerful stuff.