If I upload my brain as a program, I am quite interested in ensuring that ‘users’ of that program not have the freedom to run the code however they wish, the freedom to distribute the code however they wish, or the freedom to modify the code however they wish and distribute the modified version.
I would regard the specifics of your brain as private data. The infrastructural code to take a scan of an arbitrary brain and run its consciousness is a different matter. It's the difference between application code and a config file / secrets used in deploying a specific instance. You need to be able to trust the app that is running your brain, e.g. to not feed it false inputs.
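As a loose sketch of the split I have in mind (every name and file here is hypothetical, not a real system): the runtime is shared, auditable application code, while the scan is instance-specific private data, consumed the way a deployment consumes a config file or secret.

```python
from pathlib import Path

class UploadRuntime:
    """Shared 'application code': the same runtime for every uploadee."""

    def load_scan(self, scan_path: Path) -> bytes:
        # The scan is data the runtime consumes, never code it executes.
        return scan_path.read_bytes()

    def run(self, scan: bytes) -> None:
        # Placeholder for actually simulating the mind; the point is only
        # that *this* code is what the uploadee has to trust, e.g. not to
        # feed the mind false sensory inputs.
        print(f"running a mind from a {len(scan):,}-byte scan")

runtime = UploadRuntime()                           # open, auditable infrastructure
scan = runtime.load_scan(Path("alice_brain.scan"))  # one person's private data
runtime.run(scan)
```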
I initially assumed something similar to what you just described. However, it’s plausible to me that in practice the line between “program” and “data” might be blurry here.
You probably want brain uploads either to be proprietary software controlled by you, or to be distinct agents with some degree of autonomy over their own legal status, i.e. owners of themselves as proprietary software (similar to the right to bodily autonomy in meatspace).
I think what this post is pointing to is a strong desire for the stack of technologies on which a brain is uploaded to be free software: easily modifiable by the distinct agent to suit the agent's needs and purposes, and incapable of coercing the agent into some nebulous 'bad' state (think of the contract-drafting em). A more object-level framing of this is secure homes for digital people.
Thanks for the link. The problem of how to establish a cryptographic root of trust for an uploaded person, and how to maintain an ongoing state of trusted operation, is a tricky one that I'm aware people have discussed, though it's mostly well over my cryptography pay grade. The main point I was trying to get at was not primarily about uploaded brains. I'm using them as an anchor at the extreme end of a distribution that I'm arguing we are already on. The problems an uploaded brain would have in trusting its own cognition are problems we are already beginning to experience in the aspects of our cognition that we outsource.
Human brains are not just general-purpose CPUs: much of our cognition is performed on the wetware equivalent of application-specific integrated circuits (ASICs), tuned for applications of waning relevance in the current environment. They were tuned for our environment of evolutionary adaptedness, but the modern world presents very different challenges. By analogy, it's as if they were ASICs tuned for SHA-256 mining and the network then switched to a different hash function, so the returns have collapsed. Not to mention that biology uses terrible, dirty, hacky heuristics that would make a grown engineer cry and statisticians yell 'WHY!' at the sky in existential dread. These leave us wide open to all sorts of subtle exploits available to anyone who has studied the systematic errors we make, and if they don't share our interests, that is a problem.
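To make "systematic errors" concrete, here is a toy sketch (all numbers invented for the example) of one of the best-studied failure modes, base-rate neglect: the fast heuristic of judging by test accuracy alone lands wildly far from the Bayesian posterior, and anyone who knows that can exploit it.

```python
# Toy illustration of base-rate neglect, one systematic error of the
# kind referred to above. All numbers are invented for the example.

def bayes_posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

prior = 0.001               # 1 in 1000 people actually have the condition
sensitivity = 0.99          # the test catches 99% of real cases
false_positive_rate = 0.05  # and wrongly flags 5% of healthy people

# The intuitive heuristic: "the test is 99% accurate, so a positive
# result means I'm ~99% likely to have the condition."
heuristic_estimate = sensitivity

posterior = bayes_posterior(prior, sensitivity, false_positive_rate)
print(f"heuristic: {heuristic_estimate:.0%}, Bayes: {posterior:.1%}")  # 99% vs ~1.9%
```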
Note that I am regarding the specifics of an uploaded brain as personal data, which should be subject to privacy protections (at both the technical and policy level), and not as code. This distinction may be less clear for more sophisticated mind-upload methods which generate an abstract representation of your brain and run that. If, however, we take a conceptually simpler approach, the data/code distinction is cleaner. Let's say we have an 'image' of the brain which captures the 'coordinates' (quantum numbers) of all of the subatomic particles that make up your brain. We then run that 'image' in a physics simulation which can also emulate sensory inputs to place the uploadee in a virtual environment. The brain image is data; the physics and sensory emulation engine is code. I suspect a similar distinction will continue to hold quite well for quite a while, even once your 'brain' data starts being represented in a more complex data structure than an N-dimensional matrix.
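A minimal sketch of that split, assuming the simple image-plus-engine approach above (the file name, shapes, and placeholder dynamics are all invented for illustration):

```python
import numpy as np

# Data: the brain 'image', one row per particle, columns for that
# particle's coordinates / quantum numbers. Private to the uploadee.
brain_image = np.load("brain_image.npy")  # shape: (n_particles, state_dim)

# Code: the physics and sensory emulation engine, shared by every upload.

def physics_step(state: np.ndarray, dt: float) -> np.ndarray:
    """Evolve the particle state by one timestep.

    A real engine would apply the actual dynamics here; the placeholder
    only marks which side of the data/code split the computation sits on.
    """
    return state  # placeholder dynamics

def sensory_inputs(t: float) -> np.ndarray:
    """Emulated senses placing the uploadee in a virtual environment."""
    return np.zeros(0)  # placeholder environment

dt, t = 1e-15, 0.0  # femtosecond steps, in keeping with particle-level simulation
for _ in range(3):  # a few illustrative ticks
    brain_image = physics_step(brain_image, dt)
    _ = sensory_inputs(t)
    t += dt
```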
I actually think mind uploading is a much harder problem than many people seem to regard it as; indeed, I think it is quite possibly harder than getting to AGI de novo in code. This is for reasons related to neurobiology, imaging technology, and the computational tractability of physics simulations, and I can get into them at greater length if anyone is interested.
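To gesture at the computational tractability point, a rough back-of-envelope for the naive particle-level scheme sketched above (every figure is an order-of-magnitude assumption, not a measurement):

```python
# Order-of-magnitude estimate for naively simulating a brain at the
# particle level. Every figure here is an assumption for illustration.

atoms_in_brain = 1e26       # ~1.4 kg of mostly water, ~3 atoms per molecule
timestep_s = 1e-15          # femtosecond steps, typical of molecular dynamics
flops_per_atom_step = 1e2   # generous guess for interaction bookkeeping
machine_flops = 1e18        # an exaflop-class supercomputer

steps_per_simulated_second = 1 / timestep_s                      # 1e15
flops_per_simulated_second = (
    atoms_in_brain * steps_per_simulated_second * flops_per_atom_step
)                                                                # ~1e43
wall_clock_seconds = flops_per_simulated_second / machine_flops  # ~1e25
wall_clock_years = wall_clock_seconds / 3.15e7                   # ~3e17

print(f"~{flops_per_simulated_second:.0e} FLOPs per simulated second")
print(f"~{wall_clock_years:.0e} years of exaflop compute per simulated second")
```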
I might have misunderstood the part of the OP about ‘freedom of compute’. I understood it as proposing a constitutional amendment making ‘proprietary software’ not a thing and mandating that literally all software be open source.
If that's not what it meant, what does it mean? Open source software is already a thing that exists, so I'm not sure how else to interpret the proposed amendment.