Or perhaps he’s skeptical of the fidelity of that kind of model. Evolution famously abhors abstraction barriers.
Would you care to quantify your ‘almost everyone’ claim? Are there surveys, etc.?
No—it’s just an observation from my experience (CS degree in the '90s).
Just to be clear, he is making a conceptual mistake that indicates he does not understand universal computability:

… the reason for this is simulating the neural activity on a Von Neumann (or related computer) architecture does not reproduce the causal structure of neural interactions in wetware. Using a different computer architecture may avert this problem …
If there is some other weird computer architecture that can reproduce the causal structure of neural interactions in wetware, then a universal computer (such as a Von Neumann machine) can also reproduce the causal structure of neural interactions simply by simulating the weird computer. This really is theory of computation 101.
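To make the simulation step concrete, here is a minimal sketch (everything in it, the four-unit network, the connection weights, and the threshold update rule, is invented purely for illustration): an ordinary sequential program interprets a hypothetical "weird" architecture in which every unit updates simultaneously from its neighbours' previous outputs, and it reproduces exactly the same state trajectory, i.e. the same causal structure at the simulated level, just more slowly.

```python
# Toy sketch: a sequential (von Neumann style) program interpreting a
# hypothetical "weird" architecture: a network of units that all update
# simultaneously from their neighbours' previous outputs. The architecture,
# unit count, weights, and update rule are made up purely for illustration.

def step(state, weights):
    """Compute one synchronous tick of the whole network, one unit at a time.

    Every unit reads only the *previous* state, so the causal dependencies of
    the simulated machine are reproduced exactly, even though the host
    executes the updates sequentially.
    """
    return [
        1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
        for row in weights
    ]

if __name__ == "__main__":
    # 4 units, arbitrary fixed connection weights (illustrative only).
    weights = [
        [0, 1, -1, 0],
        [1, 0, 0, 1],
        [-1, 1, 0, 0],
        [0, 0, 1, -1],
    ]
    state = [1, 0, 1, 0]
    for t in range(5):
        state = step(state, weights)
        print(t, state)
```

Whatever exotic hardware runs such a network natively, the interpreter visits the same states in the same order; all that is lost is wall-clock speed.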
“He does not understand universal computability” seems an overstatement; universal computability doesn’t logically imply functionalism, although I agree that it tends to imply that definitions of consciousness which are not invariant under simulation have little epistemic usefulness.
In theory there is no difference between theory and practice. In practice there is.
A physical Turing machine can simulate an iPhone, in theory. Would you like to try to build one? :-D
The only problems would be speed and memory.
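To make the "in theory" part concrete, here is a minimal Turing machine interpreter (the interpreter and the toy transition table below, a unary successor machine, are invented for illustration). Nothing in the model stops the same interpreter from running a vastly larger table encoding an arbitrarily complex device; what blows up is exactly speed and memory.

```python
# Minimal Turing machine interpreter. The example machine below just walks
# right over a unary number and appends one more '1' (i.e. computes n + 1).
# The transition table is invented for illustration; the interpreter is generic.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, new_symbol, move = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# (state, read symbol) -> (next state, write symbol, head move)
successor = {
    ("start", "1"): ("start", "1", "R"),   # skip over the existing 1s
    ("start", "_"): ("halt",  "1", "R"),   # append one more 1, then halt
}

print(run_tm(successor, "111"))   # prints "1111": three 1s in, four 1s out
```

A table encoding a whole phone SoC would be astronomically larger and slower to step through, but not impossible in principle.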
There is a tiny chance that when he said “does not reproduce the causal structure of neural interactions”, what he actually meant was “would simulate the neural interactions extremely slowly”, but if that was the case, he really could have said it better.
My priors are that when people without formal computer science education talk about brains and computers, they usually believe that parallelism is the magical power that gives you much more than merely an increase in speed.
In practice it’s just a matter of computational power. His statement makes it fairly clear that he doesn’t understand the distinction between what can be computed at all and how quickly it can be computed.
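A small sketch of why parallelism changes speed rather than what gets computed (Rule 110 below is just an illustrative stand-in for any massively parallel update rule; real neural dynamics are obviously not a cellular automaton): a synchronous parallel update computed by an ordinary sequential loop over a snapshot of the previous state is bit-identical no matter what order the cells are visited in.

```python
# Rule 110 cellular automaton, used here only as a stand-in for any
# "massively parallel" update rule. Each cell's next value depends only on
# the previous generation, so it does not matter whether the cells are
# updated left to right, in a shuffled order, or genuinely in parallel on
# separate cores: the next generation is bit-identical.

import random

RULE110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def next_gen(cells, order):
    """Update every cell from a snapshot of the previous generation."""
    prev = list(cells)                       # double buffer: read old, write new
    new = [0] * len(cells)
    for i in order:                          # any visiting order works
        left, mid, right = prev[i - 1], prev[i], prev[(i + 1) % len(prev)]
        new[i] = RULE110[(left, mid, right)]
    return new

if __name__ == "__main__":
    cells = [random.randint(0, 1) for _ in range(64)]
    forward  = next_gen(cells, range(len(cells)))
    shuffled = next_gen(cells, random.sample(range(len(cells)), len(cells)))
    assert forward == shuffled               # same result regardless of order
    print("identical:", forward == shuffled)
```

Parallel hardware changes how fast you get the next generation, not which generation you get.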
Circuit-level simulations of advanced microchips certainly exist—this is not just theory. Yes, they are super expensive when run on standard CPUs (real-time simulation of an iPhone CPU naively would require on the order of an exaflop). However, low-level circuit binary logic ops are much simpler than the 32/64-bit ops that CPUs implement, and there are more advanced simulation algorithms. Companies such as Cadence provide general-purpose binary logic emulators that actually work, in practice for reasonable cost, not just theory.
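For a sense of where the exaflop-scale figure comes from (very rough, back-of-the-envelope numbers, only for the order of magnitude): something like 10^9 to 10^10 gates switching at clock rates around 10^9 Hz already implies on the order of 10^18 or more gate evaluations per second for a naive real-time simulation. The sketch below is only meant to show how simple each of those primitive evaluations is: a ripple-carry adder built entirely from NAND gates, checked against ordinary integer addition.

```python
# Gate-level sketch: a ripple-carry adder built entirely out of NAND gates,
# checked against ordinary integer addition. This is only meant to show how
# simple the primitive operations in a circuit-level simulation are; a real
# emulator evaluates billions of such gates with far cleverer scheduling.

def nand(a, b):
    return 0 if (a and b) else 1

def xor(a, b):
    # XOR from four NANDs (standard construction).
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, cin):
    s1 = xor(a, b)
    total = xor(s1, cin)
    carry = nand(nand(a, b), nand(s1, cin))   # OR of the two partial carries
    return total, carry

def add(x, y, bits=8):
    carry, out = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out

if __name__ == "__main__":
    for x in range(0, 256, 17):
        for y in range(0, 256, 23):
            assert add(x, y) == (x + y) % 256
    print("gate-level adder matches integer addition")
```

Commercial emulators obviously do not evaluate gates one interpreted call at a time; the point is just that the primitive operation being counted is trivially simple.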
The problem is, he just—JUST—got done saying that he’s talking about the exact case where it turns out that the simulation’s subject completely encompasses the source of consciousness.
If that were his objection, it wouldn’t matter if it was Von Neumann or not.
To add my own highly anecdotal evidence: my experience is that most people with a background in computer science or physics have no active model of how consciousness maps to brains, but when prodded they indeed usually come up with some form of functionalism*.
My own position is that I’m highly confused by consciousness in general, but I’m leaning slightly towards substance dualism; I have a background in computer science.
*: Though note that quite a few of these people simultaneously believe that it is fundamentally impossible to do accurate natural language parsing with a Turing machine, so their position might not be completely thought through.
This seems a bit like trying to fix a problem by applying a patch that causes a lot more problems. The stunning success of naturalistic explanations so far in predicting the universe (plus Occam’s Razor) alone would be enough to convince me that consciousness is a naturalistic process (and, in fact, they were what convinced me, plus a few other caveats). I’d assign maybe 95% probability to this conclusion. Still, I’d be interested in hearing what led you to your conclusion. Could you expand in more detail?