I agree—the whole idea of writing oneself into the future seems extremely implausible, especially using something like email.
Much more plausible is the notion that you could modify a “standard median mind” to fit someone’s writing. But I suspect that most of the workings of such a creation would come from the standard model rather than from the writings, and also that this is not what people have in mind when they talk about “writing oneself into the future.”
I agree. I don’t see how even an FAI could reproduce a model of your brain that is significantly more accurate than a slightly modified standard median mind. Heck, even if an FAI had some parts of your brain preserved and some of your writings (e.g. email), I’m not sure it could reproduce the rest of you accurately.
I think this is one of those domains where structural uncertainty plays a large part. If you’re talking about a Bayesian superintelligence operating at the physical limits of computation… I’d feel rather uneasy speculating about what limits it could possibly have. In a Tegmark ensemble universe, you get possibilities like ‘hacking out of the matrix’, acausal trade, or similar AGI meta-golden-rule cooperative optimization, and that’s some seriously powerful stuff.