Ok, so I’m presuming that an extremely fine-grained scan stored with some naive compression is massively more than 10^14 synapse-bits. In order to store all that at the information-theoretic minimum, don’t we need some kind of incredibly awesome compression algorithm NOW that we simply don’t have?
No, I think the idea is to do coarse-grained scans, which the superintelligence will have to heavily process in order to infer the original brain structure. (Yeah, it’s not clear this is possible even with a whole universe’s worth of computing power and whatever algorithmic breakthroughs a superintelligence might come up with.)
you periodically take neuroimaging scans of your brain and save them to multiple backup locations (10^10 bits is only about 1 gigabyte)
I think I understand, but I’m lost as to why that 10^10 is showing up here. Wouldn’t the stored size be whatever the scan happens to produce, rather than the compressed size of a human’s unique experiences? We might plausibly have a 10^18-bit scan that is detailed in all the wrong ways (like carrying 1024 bits per voxel of color-channel info :p).
eta: In case it’s not clear, I can’t actually help you answer the question of just how useful a scan is.
Yes, that makes sense. Thanks.
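For concreteness, here’s the arithmetic behind the figures being thrown around, as a minimal Python sketch. The 10^10, 10^14, and 10^18 exponents are just the ones quoted in this thread, not measurements of anything:

```python
# Back-of-the-envelope conversion of the bit counts quoted in this thread.
# None of these exponents are measurements; they are the thread's own figures.

def bits_to_gb(bits: float) -> float:
    """Convert a bit count to gigabytes (10^9 bytes)."""
    return bits / 8 / 1e9

figures = [
    ("10^10 bits (claimed compressed size of a person's unique info)", 1e10),
    ("10^14 bits (roughly one bit per synapse)", 1e14),
    ("10^18 bits (a hypothetical fine-grained raw scan)", 1e18),
]
for label, bits in figures:
    print(f"{label}: {bits_to_gb(bits):,.2f} GB")

# Prints:
# 10^10 bits (...): 1.25 GB            <- "about 1 gigabyte", as quoted
# 10^14 bits (...): 12,500.00 GB       <- ~12.5 terabytes
# 10^18 bits (...): 125,000,000.00 GB  <- ~125 petabytes
```

The point of the contrast: a raw scan can be many orders of magnitude larger than the information it actually needs to preserve, which is what the bits-per-voxel-of-color example is getting at.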
It’s also not clear that one could tell whether it failed. That is, OK, it processes the scans, interpolates over the gaps, and a person pops out the other end who believes himself to remember being me. Yay? Maybe.
Then again, it’s not clear to me that I ought to care about the difference.
It’s also not clear that one could tell whether it failed.
If the superintelligence does the same kind of coarse-grained scan of living humans and successfully copies/recreates them from that information alone, there would be every reason to think the process would work just as well with dead humans like you, right?
Then again, it’s not clear to me that I ought to care about the difference.
Well, if you care about living, rather than about somebody similar to you who wrongly believes himself to be you, you definitely should care about the difference.
I care about living (usually), but it’s not clear to me that what I care about when I care about living is absent in the “failed” scenario.
As far as I can tell, “being me” just isn’t all that precisely defined in the first place; it describes a wide range of possible conditions. Which seems to allow for the possibility of two entities A and B existing at some future time such that A and B are different, but both A and B satisfy the condition of being me.
I agree, though, that if A is the result of my body traveling through time in the conventional manner, and B is the result of some other process, and A and B are different, it is conventional to say that A is really me and B is not. It’s just that this strikes me as a socially constructed truth more than an empirically observed one.
I also agree that the test you describe is compelling evidence that the copy/recreation process is as reliable a self-preserver as anything could be.
It should be possible to check for corruption in the process by having the AGI withhold some known information during the reconstruction, then asking the reconstruct questions with known answers.
(For example, the AGI could withhold the person’s birthdate, known from records, during reconstruction; afterwards, if the reconstruct doesn’t remember their correct birthdate, that would be strong evidence that the process had failed. Given a sufficiently large number of these tests, the superintelligence could verify the fidelity of the reconstruction with reasonable accuracy.)
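A minimal sketch of that holdout protocol, assuming stand-in callables reconstruct() and ask() for whatever the AGI actually does (these names are hypothetical, not a real API):

```python
import random

def reconstruction_fidelity(known_facts, reconstruct, ask, holdout_fraction=0.1):
    """Holdout test: withhold a random subset of independently known facts
    during reconstruction, then quiz the reconstruct on exactly those facts.
    Returns the fraction of withheld facts answered correctly."""
    questions = list(known_facts)
    random.shuffle(questions)
    n_holdout = max(1, int(len(questions) * holdout_fraction))
    withheld, available = questions[:n_holdout], questions[n_holdout:]

    # Reconstruct the person using only the non-withheld facts.
    person = reconstruct({q: known_facts[q] for q in available})

    # Quiz on the withheld facts, e.g. "What is your birthdate?"
    correct = sum(ask(person, q) == known_facts[q] for q in withheld)
    return correct / n_holdout
```

With n withheld facts, the accuracy estimate tightens like any sampling estimate (standard error on the order of sqrt(p(1-p)/n)), which is what "a sufficiently large number of these tests" buys you.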