I might be misunderstanding your point. My opinion is that software brains are extremely difficult (possibly impossibly difficult) because brains are complicated. Your position, as I understand it, is that they are extremely difficult (possibly impossibly difficult) because brains are chaotic.
If it's the former (complexity), then there exists a sufficiently advanced model of the human brain that can work (where “sufficiently advanced” here means “probably always science fiction”). If brains are assumed to be chaotic, then a lot of what people think and do is effectively random, and the simulated brain will necessarily end up with a different random seed due to measurement errors. This would matter in some brain-simulating contexts; for example, it would make predicting someone’s future behaviour from a simulation of their brain impossible. (Omega from Newcomb’s paradox would struggle to predict whether people would two-box or not.) However, from the point of view of chasing immortality for yourself or a loved one, the chaos doesn’t seem to be an immediate problem. If my decision to one-box was fundamentally random (down to thermal fluctuations), and trivial changes on the day could have changed my mind, then it couldn’t have been part of my personality. My point is that, from the immortality point of view, we only really care about preserving the signal, and can accept different noise.
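To make the signal/noise distinction concrete, here is a toy sketch using the logistic map (a standard chaotic system, emphatically not a brain model): two runs whose starting points differ by a measurement-error-sized amount quickly stop tracking each other, but their long-run statistics agree.

```python
# Toy illustration of the signal/noise point, using the logistic map
# x_{n+1} = r * x_n * (1 - x_n), which is chaotic at r = 3.9.

def trajectory(x0, r=3.9, steps=1000):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.4)          # "original"
b = trajectory(0.4 + 1e-10)  # "upload" with a tiny measurement error

print(abs(a[20] - b[20]))    # still tiny: the error hasn't amplified yet
print(abs(a[100] - b[100]))  # typically order 0.1: the runs have decorrelated

# The long-run statistics survive, though: the means are close.
print(sum(a) / len(a), sum(b) / len(b))
```

The trajectory-level disagreement is the “noise” (different seed, different moment-to-moment behaviour); the shared statistics are the “signal”.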
I certainly agree that brains are complicated.
I think part of the difference is that I’m considering the uploading process; it seems to me that you’re skipping past it, which amounts to assuming it works perfectly.
Consider the upload of Bob the volunteer. The claim that software = Bob rests on the assumption that Bob’s connectome of roughly 100 trillion synapses is accurately captured by the upload process. It seems fairly obvious to me that this process will not capture every synapse without error (at least in early versions). It will miss some percentage, and probably also invent synapses that meat-Bob doesn’t have.
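To put numbers on that (mine, purely illustrative): even optimistic error rates leave an astronomical absolute number of wrong synapses.

```python
# Back-of-envelope scale of upload errors (illustrative numbers only).
synapses = 100e12  # ~100 trillion synapses, as above

for error_rate in (0.01, 0.001, 0.0001):  # 1%, 0.1%, 0.01% missed/invented
    print(f"{error_rate:.2%} error rate -> {synapses * error_rate:.0e} wrong synapses")
# Even at 99.99% accuracy, simulated Bob differs from meat-Bob in
# about ten billion connections.
```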
This raises the question of how good a copy is good enough. If brains are chaotic, as I would expect them to be, even small error rates would have large consequences for the simulation’s output. In short, I would expect that at any semi-realistic upload accuracy (whatever that means in this context), simulated Bob wouldn’t think or behave much like actual Bob.
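A cartoon version of why I expect that, with entirely made-up parameters (1,000 tanh units standing in for a brain, 0.1% of connections corrupted), not a serious model of anything:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                     # toy "neurons" (a real brain has ~86 billion)
g = 1.5                                      # gain > 1: chaotic regime for tanh networks
W = rng.normal(0.0, g / np.sqrt(n), (n, n))  # random recurrent weights

# "Upload" copy: corrupt a random 0.1% of the connections (miss some, invent some).
W_copy = W.copy()
bad = rng.random(W.shape) < 1e-3
W_copy[bad] = rng.normal(0.0, g / np.sqrt(n), bad.sum())

x = x_copy = rng.normal(0.0, 1.0, n)         # identical initial state
for _ in range(100):                         # run both "brains" forward
    x = np.tanh(W @ x)
    x_copy = np.tanh(W_copy @ x_copy)

# After 100 steps the copy's activity no longer tracks the original's:
# the correlation is typically near zero despite 99.9% identical wiring.
print(np.corrcoef(x, x_copy)[0, 1])
```

The specific numbers don’t matter; the point is that in a chaotic system, corrupting the dynamics themselves (not just the initial state) also destroys trajectory-level agreement, and here the corruption is Bob’s wiring.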