“The question also has an “empirical core” that could turn out one way or another, depending on details of the brain’s physical organization that are not yet known. In particular, does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that
(1) encode everything relevant to memory and cognition,
(2) can be accurately modeled as performing a classical digital computation, and
(3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random-number sources, generating noise according to prescribed probability distributions?”
You could do worse things with your time than read the whole thing, in my opinion.
Thank you for the quote! (I tried to read the article, but after a few pages it seemed to me that the author made too many digressions, and I didn’t want to know his opinions on everything, only on the technical problems with scanning brains.)
Do I understand it correctly that the question is, essentially, whether there exists a more efficient way of modelling the brain than modelling all particles of the brain?
Because if there is no such efficient way, we can probably forget about running uploaded brains in real time.
Then, even assuming we could successfully scan the brains, we could get some kind of immortality, but we could not get greater speed, or make life cheaper… which is necessary for the predicted economic consequences of “ems”.
Some smaller economic impacts could still be possible: for example, if a person were so miraculously productive that even running them at 100× slower speed and 1000× higher cost would be worthwhile. (Not easy to imagine, but technically not impossible.) Or, if quality of life rises globally, the cost of real humans could grow faster than the cost of emulated humans, so at some point emulation could become economically viable.
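As a toy sanity check on those numbers (my own illustrative arithmetic, not from the thread): an em running 100× slower at 1000× the running cost has to out-produce a human, per subjective hour, by roughly the product of the two factors just to break even:

```python
# Toy break-even estimate for an emulated worker; numbers are illustrative.
# If the em runs `slowdown` times slower and costs `cost_ratio` times more
# to run per wall-clock hour, it must produce roughly slowdown * cost_ratio
# times more value per subjective hour to match a human's value per dollar.

def breakeven_factor(slowdown: float, cost_ratio: float) -> float:
    """Required productivity multiple (per subjective hour) to break even."""
    return slowdown * cost_ratio

print(breakeven_factor(100, 1000))  # → 100000
```

Which is why, as the comment says, such a person is hard to imagine but not technically impossible.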
Still, my guess is that there probably is a way to emulate the brain more efficiently, because it is a biological mechanism made by evolution, so it carries a lot of backwards compatibility and chemistry (all those neurons have metabolism).
Do I understand it correctly that the question is, essentially, whether there exists a more efficient way of modelling the brain than modelling all particles of the brain?
I don’t presume to speak for Scott, but my interpretation is that it’s not a question of efficiency but fidelity (that is, it may well happen that classical sims of brains are closely related to the brain/person scanned but aren’t the same person, or may indeed not be a person of any sort at all; quantum sims are impossible due to no-cloning).
For more detailed questions I am afraid you will have to read the paper.
No, his thesis is that it is possible that even a maximal upload wouldn’t be human in the same way. His main argument goes like this:
a) There is no way to find out the universe’s initial state, thanks to no-cloning, the requirement of low entropy, and there being only one copy.
b) So we have to talk about uncertainty about wavefunctions—something he calls Knightian uncertainty (roughly, a probability distribution over probability distributions).
c) It is conceivable that particles in which the Knightian uncertainties linger (i.e. they have never interacted with anything macroscopic enough for decoherence to happen) influence us, and it is plausible that our brain, and only our brain, is sensitive enough to a single photon for that to change how it would otherwise behave (he proposes Na-ion channels).
d) We define “non-free” as something that can be predicted by a superintelligence without destroying the system (i.e. the predictor is allowed to probe everything else as much as it wants, within reasonable bounds).
e) Because of Knightian uncertainty it is impossible to predict people, if such an account is true.
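The “Knightian uncertainty” in (b) can be made concrete with a toy sketch (mine, not from the paper): instead of a single known probability distribution, you only have a set of candidate distributions, so the most honest statement about any event is a pair of lower and upper probabilities over that set:

```python
# Toy sketch of Knightian uncertainty as a *set* of probability distributions
# rather than a single one (illustrative only, not from Aaronson's paper).
# Each distribution maps outcomes to probabilities; `event` is a set of outcomes.

def probability_bounds(distributions, event):
    """Lower and upper probability of `event` over the candidate distributions."""
    probs = [sum(p for outcome, p in dist.items() if outcome in event)
             for dist in distributions]
    return min(probs), max(probs)

# Two candidate models of a "coin" that we cannot pin down any further:
models = [{"H": 0.3, "T": 0.7}, {"H": 0.7, "T": 0.3}]
print(probability_bounds(models, {"H"}))  # → (0.3, 0.7)
```

No single number summarizes P(heads) here, which is the sense in which this is “uncertainty about probability distributions” rather than ordinary risk.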
My disagreements (well, not quite—more, why I’m still compatibilist after reading this):
a) Predictability is different from determinism: his argument never contradicts determinism (modulo probability distributions, but we never cared about that anyway) unless we consider Knightian uncertainties ontological rather than epistemic (and I should warn you that physics has a history of things suddenly jumping from one to the other). And if it’s not deterministic, according to my interpretation of the word, we wouldn’t have free will any more.
b) This freedom is still basically random. It has more to do with your identification of personality than anything Penrose ever said, because these freebits hit you only rarely and only at one place in your brain; but when they do affect it, they affect it randomly among the considered possibilities.
I’d say I benefited from reading it, because it is a stellar example of steelmanning a seemingly (and really, I can say now that I’m done) incoherent position (or of being the steel man of said position). Here’s a bit of his conclusion that seems relevant here:
“To any “mystical” readers, who want human beings to be as free as possible from the mechanistic chains of cause and effect, I say: this picture represents the absolute maximum that I can see how to offer you, if I confine myself to speculations that I can imagine making contact with our current scientific understanding of the world. Perhaps it’s less than you want; on the other hand, it does seem like more than the usual compatibilist account offers! To any “rationalist” readers, who cheer when consciousness, free will, or similarly woolly notions get steamrolled by the advance of science, I say: you can feel vindicated, if you like, that despite searching (almost literally) to the ends of the universe, I wasn’t able to offer the “mystics” anything more than I was! And even what I do offer might be ruled out by future discoveries.”