The ability to identify the important people in our lives is vital to our identity. What level of fine detail would be required to preserve this? It would be disconcerting to be reconstructed but lack a memory of what your mother looked like.
I think we are in agreement that recognition of important faces is pretty important… but I can imagine losing just that in a stroke, having to re-learn it, and still being me. It would be annoying, and I can imagine that too much change of that sort might leave what’s left of me incapable of recognizing and cherishing the things that are important to me, or uninterested in doing so. In that case, I think a copy of my current self and the future inhabitant of a body historically connected to this one might agree that we weren’t really instances of the same person, because they’d diverged too much.
However, it seems like you might be able to do this sort of brain scan using input data that includes the things you cherish, so that many raw facts about the valence and texture of your care for them would be recorded. Your tenderness or inattention or lust or disgust toward many specific parts of the things you currently care about would be captured. How your eyes saccade around each image would probably be reconstructable, so the scan would capture not just that you recognize your mother, but how you recognize your mother.
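To make that concrete, here is a minimal sketch (in Python, with entirely invented field names; no real scanning format is implied) of the kind of record such a scan session might produce, where recognition is stored as gaze trajectories and affect measurements rather than a single yes/no bit:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical record of one stimulus presentation during a scan session.
# Every field name here is an invented illustration, not a real format.
@dataclass
class StimulusResponse:
    stimulus_id: str                         # e.g. "photo_of_mother_1962"
    saccade_path: List[Tuple[float, float]]  # fixation points in image coordinates
    dwell_times_ms: List[float]              # how long each fixation lasted
    valence: float                           # -1.0 (disgust) .. +1.0 (tenderness)
    arousal: float                           # 0.0 (inattention) .. 1.0 (riveted)

# A session is just a list of such records. The point: "how you recognize
# your mother" is stored as trajectories and affect, not a single bit.
session: List[StimulusResponse] = [
    StimulusResponse(
        stimulus_id="photo_of_mother_1962",
        saccade_path=[(0.42, 0.31), (0.45, 0.33), (0.51, 0.60)],
        dwell_times_ms=[310.0, 280.0, 95.0],
        valence=0.9,
        arousal=0.7,
    ),
]
```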
Starting with a well-preserved brain, I can imagine nanoscale structures being “bottom-up” reconstructed in functional form, but when it comes time to do “integration testing” on the resulting brain model, I could imagine the initial draft having seizures, because certain long-distance ratios being slightly different might have set up new resonance possibilities or something. I’m not defending this precise neurological claim (I have no high-precision model of how seizures work); I’m just offering a simple example from the space of possible bugs to show that the space is probably non-empty. A perfect representation of a frozen brain run on simulated physics would be just as non-functional in the simulation as the frozen brain is in reality. You’d need to adjust the brain model and/or the simulation process (maybe both) to get something that works at all.
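As a toy illustration of how a slightly-off parameter could produce a seizure-like bug (deliberately not a neurological model, just the simplest dynamical analogy I can write down):

```python
import math

def simulate_mean_activity(gain: float, steps: int = 200) -> float:
    """Toy recurrent loop: activity feeds back on itself through one gain.

    Deliberately NOT a model of real neurology; just the simplest way to
    show that a slightly-off long-range coupling parameter can flip a
    system from a quiet baseline into locked-up runaway activity.
    """
    r = 0.1  # small initial activity
    for _ in range(steps):
        r = math.tanh(gain * r) + 0.01  # weak constant background input
    return r

print(f"healthy draft: {simulate_mean_activity(gain=0.9):.2f}")  # ~0.10, settles at baseline
print(f"buggy draft:   {simulate_mean_activity(gain=1.5):.2f}")  # ~0.88, pinned near saturation
```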
Testing for seizure-proneness and adaptively adjusting things to fix it seems like the sort of thing that would be a very basic and obvious part of quality assurance on reconstructions. Testing for “memories of cherishing X” seems much harder to make part of basic QA, absent external data about this fact captured at the rough level of abstraction where it was obvious (i.e. by measuring actual responses to actual stimuli). If this were one of the first-ever reconstructions of a person, then this extra layer of data for “emotional recognition level QA” is something I can imagine appreciating a lot, as an engineer.
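A hedged sketch of what such an “emotional recognition level QA” check might look like, assuming per-stimulus valence measurements from the original person survived (the function name, dict layout, and tolerance are all invented for illustration):

```python
from typing import Dict

# Hypothetical QA check: compare a reconstruction's measured responses to
# stimuli against responses recorded from the original person.
def emotional_recognition_qa(
    recorded: Dict[str, float],       # stimulus_id -> valence measured pre-preservation
    reconstructed: Dict[str, float],  # stimulus_id -> valence measured in the revived model
    tolerance: float = 0.2,
) -> Dict[str, bool]:
    """Per stimulus: does the revived response match the preserved record?"""
    return {
        stim: abs(reconstructed.get(stim, 0.0) - valence) <= tolerance
        for stim, valence in recorded.items()
    }

report = emotional_recognition_qa(
    recorded={"photo_of_mother": 0.9, "photo_of_stranger": 0.1},
    reconstructed={"photo_of_mother": 0.2, "photo_of_stranger": 0.1},
)
print(report)  # {'photo_of_mother': False, 'photo_of_stranger': True}
# A failure on 'photo_of_mother' is exactly the kind of bug that a
# seizure-level QA pass could never catch without this external data.
```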
When I imagine reconstructing an acceptable King Tut, I’m thinking of this as happening deep into the development of revival processes, when so much information from detail-informed reconstruction of other people’s brains is known that the basic details of functional neurology aren’t even science anymore. They’re just engineering, or maybe just the application of existing tools by far-future script kiddies. If the tools are solid enough, and they hook into the far-future Everythingpedia, then maybe all you really need is a description of who the person was that cuts human personhood at the joints.
The Big Five, for example, is an attempt to describe “personality” in a way that was state of the art for the 1940s and is only very slowly filtering into mainstream cultural awareness even now. What I’m thinking of would be like the “big million”, and perhaps state of the art for the 2140s. In that sort of context, enough QA data might be all that is really required, and (plus or minus) I can imagine enough QA data being available from archaeological artifacts. A reconstructed Tut really might recognize his aunt because there is surviving data on her. Given genomic data, a statue of a relative, and lots of computer time, perhaps he could also recognize his mother as she once was.
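To make the contrast vivid, here is a toy sketch (dimensions shrunk to 1,000 for the demo, all numbers invented) of personhood as a point in a high-dimensional trait space, where “diverged too much” becomes a measurable distance from the original:

```python
import math
import random

random.seed(42)

# Sketch: a person as a point in trait space (a "big million" shrunk to
# 1,000 dimensions), with reconstruction fidelity measured as cosine
# similarity between the original and the revived trait vector.
DIMS = 1_000
original = [random.gauss(0, 1) for _ in range(DIMS)]

def perturb(vec: list, noise: float) -> list:
    return [x + random.gauss(0, noise) for x in vec]

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

faithful = perturb(original, noise=0.1)   # careful reconstruction
divergent = perturb(original, noise=2.0)  # "diverged too much"
print(f"faithful:  {cosine(original, faithful):.2f}")   # ~0.99
print(f"divergent: {cosine(original, divergent):.2f}")  # ~0.45
```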
In some sense there is a really extreme inside-view/outside-view issue here. Taking the extreme outside view, with background knowledge about the entire Earth at this moment in time counting as an “accessible codebook” for compression, it takes fewer than 50 yes/no questions to identify any one of us. Taking the extreme inside view, with the precise configuration of every molecule treated as a completely de novo surprise, it probably takes something like 10^50 yes/no questions to precisely describe any one of us. Maybe the issue is that I tend to imagine reconstruction happening in an environment richer in know-how, data, and resources, and so I tend to think fewer bits are required than other people do?
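The gap between the two extremes is easy to check with back-of-envelope arithmetic (the ~8 billion population and ~10^27 molecules-per-body figures are rough standard estimates):

```python
import math

# Outside view: picking one person out of ~8 billion living humans needs
# only log2(8e9) bits -- comfortably under the 50 yes/no questions above.
people = 8e9
print(f"bits to single out one living person: {math.log2(people):.1f}")  # ~32.9

# Inside view: a human body contains very roughly 1e27 molecules, so even
# one bit per molecule is ~25 orders of magnitude more than the outside
# view needs; specifying each molecule's position and state pushes the
# count far higher still.
molecules = 1e27
print(f"bits at one per molecule: {molecules:.0e}")
```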