You can integrate it with initial conditions, though. Just as we can use our prefrontal cortex to predict the probable initial conditions of events (albeit inaccurately on occasion), a powerful computer may be able to predict our complex mental pathways based on known past events with high fidelity. I'm not saying that you won't need the initial conditions to integrate the function; I just think AGI would have less trouble with it than you assume. I think you have a good point about the principle, though, and I will factor information decay into my perceived utility of cryonics in the future.
a powerful computer may be able to predict our complex mental pathways based on known past events with high fidelity
“known past events”—unless those past events include full-brain scans from the past, all you’re going to get is a reduction in the scope of the configuration space, not the exact function.
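To make the "reduction of scope" point concrete, here is a minimal toy sketch (the two-component hidden state and the sum-only record are hypothetical stand-ins, not a model of anything neural): a partial record shrinks the set of consistent states, but it cannot single out the true one.

```python
# Toy illustration: a partial observation constrains a configuration
# space without determining the underlying state.
from itertools import product

# Hypothetical hidden state: a pair (a, b), each component in 0..9.
configuration_space = list(product(range(10), repeat=2))

# The "historical record" preserves only a lossy summary: the sum a + b.
def observe(state):
    a, b = state
    return a + b

true_state = (3, 7)
record = observe(true_state)

# Every state consistent with the record remains possible.
consistent = [s for s in configuration_space if observe(s) == record]

print(f"states in the full configuration space: {len(configuration_space)}")  # 100
print(f"states consistent with the record:      {len(consistent)}")           # 9
# The record narrows 100 candidates down to 9, but nothing derived from the
# record alone can tell (3, 7) apart from, say, (4, 6).
```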
“A powerful computer” != “magic”. No matter how smart you are, fifteen tons of mass moving at five thousand miles per second will still contain the same amount of kinetic energy. No amount of cleverness can extract information that has already decayed away.
This is information-theoretically proven.
I just think AGI would have less trouble with it than you assume.
The question at hand is, can a personality be reconstructed from partial data by a sufficiently clever process? We have analogues to this question. Compress and decompress the same mp3 file a hundred times or so. Then see if you can find an algorithm that can restore the lost fidelity.
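A sketch of that experiment, assuming pydub with an ffmpeg install it can find (the file names, bitrate, and generation count are placeholders): each pass decodes and re-encodes the file, and the accumulated loss can then be measured against the starting copy.

```python
# Generation-loss experiment: repeatedly decode and re-encode an MP3,
# then compare the result against the starting waveform.
# Assumes pydub and ffmpeg are available; "input.mp3" is a placeholder path.
import numpy as np
from pydub import AudioSegment

original = AudioSegment.from_mp3("input.mp3")
current = original

for generation in range(100):
    current.export("generation.mp3", format="mp3", bitrate="128k")  # lossy encode
    current = AudioSegment.from_mp3("generation.mp3")               # decode again

# Compare raw samples (truncate to the shorter length; encoders may pad).
a = np.array(original.get_array_of_samples(), dtype=np.float64)
b = np.array(current.get_array_of_samples(), dtype=np.float64)
n = min(len(a), len(b))
rms_error = np.sqrt(np.mean((a[:n] - b[:n]) ** 2))
print(f"RMS difference after 100 generations: {rms_error:.1f}")
# The detail discarded by each encode is not stored anywhere; no decoder,
# however clever, can rebuild the original from the final file alone.
```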
According to information-theoretic physics, information once lost cannot be retrieved. It’s simply gone. New information can be derived (at the cost of destroying more information than it creates; this is entropy), but what is derived will always be an approximation. That leads to a second, corollary question, which you seem to be asserting is “where the magic happens”: can a sufficiently clever process extrapolate from historical records a personality of sufficient fidelity to qualify as that same person?
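The "once lost, it's gone" part is just counting. Any lossy map sends some distinct originals to the same record, so whatever a reconstruction procedure returns for that record, it is wrong for at least one of them. A minimal sketch of that pigeonhole argument, with a toy "codec" invented purely for illustration:

```python
# Pigeonhole illustration: a lossy map sends distinct inputs to the same
# output, so no reconstruction function can recover every original.
def lossy_compress(data: bytes) -> bytes:
    # Toy "codec": keep only the high 4 bits of each byte, discarding half
    # of the information outright.
    return bytes(b & 0xF0 for b in data)

original_a = b"personality A"
original_b = b"pdrsonamity @"  # differs from A only in the discarded low bits

compressed_a = lossy_compress(original_a)
compressed_b = lossy_compress(original_b)

assert original_a != original_b
assert compressed_a == compressed_b  # both collapse to the same record

# Whatever a "sufficiently clever" decompressor outputs for this record, it
# is wrong for at least one of the two originals; the bits that told them
# apart no longer exist anywhere in the data it is given.
```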
I’ve had this conversation before, and I ended it then as I will end my contribution now: how could that result be shown to be the proper one? I do not want someone who is “like me” to be uploaded. I want me to be uploaded. That means an information-theoretically-complete scan of me. Not approximations. Yes, this is not a black-and-white picture. Measurements are always approximations. The point is, without those measurements in a complete state, there isn’t a way to determine what those measurements “ought to be”.
Just thought I’d say: excellently well put.