Thanks for the link. The problem of how to establish a cryptographic root of trust for an uploaded person, and how to maintain an ongoing state of trusted operation, is a tricky one that I’m aware people have discussed, though it’s mostly well over my cryptography pay grade. The main point I was trying to get at was not primarily about uploaded brains. I’m using them as an anchor at the extreme end of a distribution that I’m arguing we are already on. The problems an uploaded brain would have in trusting its own cognition are problems we are already beginning to experience in the aspects of our cognition that we are outsourcing.
Human brains are not just general-purpose CPUs; much of our cognition is performed on the wetware equivalent of application-specific integrated circuits (ASICs), ASICs that were tuned for applications of waning relevance in the current environment. They were tuned for our environment of evolutionary adaptedness, but the modern world presents very different challenges. By analogy, it’s as if they were tuned for SHA-256 hashing but Ethereum changed the hash function, so the returns have dropped. Not to mention that biology uses terrible, dirty, hacky heuristics that would make a grown engineer cry and statisticians yell WHY! at the sky in existential dread. These leave us wide open to all sorts of subtle exploits that can be utilised by those who have studied the systematic errors we make, and if they don’t share our interests this is a problem.
Note that I am regarding the specifics of an uploaded brain as personal data, which should be subject to privacy protections (both at the technical and policy level), and not as code. This distinction may be less clear for more sophisticated mind-upload methods which generate an abstract representation of your brain and run that. If, however, we take a conceptually simpler approach the data/code distinction is cleaner. Let’s say we have an ‘image’ of the brain which captures the ‘coordinates’ (quantum numbers) of all of the subatomic particles that make up your brain. We then run that ‘image’ in a physics simulation which can also emulate sensory inputs to place the uploadee in a virtual environment. The brain image is data; the physics and sensory emulation engine is code. I suspect a similar distinction will continue to hold reasonably well for quite a while, even once your ‘brain’ data starts being represented in a more complex data structure than an N-dimensional matrix.
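To make that data/code split concrete, here is a minimal sketch in Python assuming the naive ‘particle coordinates in an array’ representation described above; the names (brain_image, PhysicsEngine, step) are illustrative placeholders I’m making up, not any real uploading API:

```python
import numpy as np

# DATA: the brain 'image' -- per-particle state captured at scan time.
# A stand-in (N, D) array here; in the scheme above this is personal data
# and would sit behind privacy protections.
brain_image = np.random.rand(10_000, 6)

# CODE: the physics + sensory emulation engine -- generic, shared, auditable,
# and not specific to any one person's brain data.
class PhysicsEngine:
    def __init__(self, timestep: float = 1e-6):
        self.timestep = timestep

    def step(self, state: np.ndarray, sensory_input: np.ndarray) -> np.ndarray:
        # Placeholder dynamics: a real engine would integrate the underlying
        # physical laws and couple in the emulated senses. The point is only
        # that the engine (code) and the image (data) are separate artefacts.
        return state

engine = PhysicsEngine()
sensory_input = np.zeros(16)  # emulated sensory channel for the virtual environment
brain_image = engine.step(brain_image, sensory_input)
```

The same separation would carry over to trust and privacy policy: you audit and sign the engine like any other code, while the image is handled purely as protected data.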
I actually think mind uploading is a much harder problem than many people seem to regard it as; indeed, I think it is quite possibly harder than getting to AGI de novo in code. This is for reasons related to neurobiology, imaging technology, and the computational tractability of physics simulations, and I can get into it at greater length if anyone is interested.