Like TheOtherDave (I presume), I consider my identity to be adequately described by any Turing machine that can emulate my brain, or at least its prefrontal cortex + relevant memory storage.
There’s a very wide range of possible minds I consider to preserve my identity; I’m not sure the majority of those emulate my prefrontal cortex significantly more closely than they emulate yours, and the majority of my memories are not shared by the majority of those minds.
Interesting. I wonder what you would consider a mind that preserves your identity. For example, I assume that the total of your posts online, plus whatever other information is available without some hypothetical future brain scanner, all running as a process on some simulator, is probably not enough.
At one extreme, if I assume those posts are being used to create a me-simulation by a me-simulation-creator that literally knows nothing else about humans, then I’m pretty confident that the result is nothing I would identify with. (I’m also pretty sure this scenario is internally inconsistent.)
At another extreme, if I assume the me-simulation-creator has access to a standard template for my general demographic and is just looking to customize that template sufficiently to pick out some subset of the volume of mindspace my sufficiently preserved identity defines… then maybe. I’d have to think a lot harder about what information is in my online posts and what information would plausibly be in such a template to even express a confidence interval about that.
That said, I’m certainly not comfortable treating the result of that process as preserving “me.”
Then again I’m also not comfortable treating the result of living a thousand years as preserving “me.”