The algorithmic computations can be instantiated in many different causal structures but only some will…
Any sentence of this form is provably false, due to the universality of computation and multiple realizability.
This is incorrect because the causal structure of a Turing machine simulating a human brain is very different from an actual human brain. Of course, you can redefine causality in terms of “simulation causality” but the underlying causal structure of the respective systems will be very different.
Yes it is—causal structure is just computational structure; there is no difference.
If you accept Wheeler’s “it from bit” argument, then anything can be instantiated with information. But at this point, you’re veering far from science.
This is incorrect because the causal structure of a Turing machine simulating a human brain is very different from an actual human brain.
There are at least two causal structure levels in a computational system: the physical substrate level and the program level (and potentially more, with multiple levels of simulation). A computational system is one that can organize its energy flow (state transitions in the substrate) in a very particular way so as to realize/implement any computable causal structure at the program/simulation level.
The causal structure at the substrate level is literally factored out—it does not matter (beyond performance constraints). Universal computability is not a theory at this point; it is a proven, hard fact.
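To make the substrate/program distinction concrete, here is a minimal sketch (my own toy construction, not part of the original argument): the same program-level causal structure realized on two deliberately different substrates. Both produce identical program-level state histories even though their internal representations, and hence their substrate-level causality, differ completely.

```python
# Toy illustration: one program-level transition rule, two substrates.

def program_step(state):
    """Program-level causal rule: a simple counter with wraparound."""
    return (state + 1) % 5

# Substrate A: the state lives in an integer register.
def run_substrate_a(steps):
    s = 0
    trace = [s]
    for _ in range(steps):
        s = program_step(s)
        trace.append(s)
    return trace

# Substrate B: the state lives in the *length* of a list (a unary
# encoding), rebuilt from scratch each step -- a very different
# substrate-level causal structure.
def run_substrate_b(steps):
    cells = []
    trace = [len(cells)]
    for _ in range(steps):
        target = program_step(len(cells))
        cells = [1] * target
        trace.append(len(cells))
    return trace

# Identical program-level histories despite different substrates.
assert run_substrate_a(12) == run_substrate_b(12)
```

The point of the sketch is only that the program-level trace is invariant under the choice of substrate; that is what "factored out" means here.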
causal structure of a Turing machine simulating a human brain is very different from an actual human brain.
This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).
A brain is just matter, and more specifically it is just an electromechanical biological computer. It is also just a conventional irreversible computer which dissipates energy along its wires and junctions according to the exact same physical constraints that face modern electronic computers. It can be simulated because anything can be simulated!
Let’s cut to the chase: are there any empirical predictions where your viewpoint disagrees with functionalism?
For example, I predict that within a decade or two, computers with about 10^14 ops will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.
Furthermore, you won’t be able to tell the difference between a human controlling a humanoid avatar in virtual reality and an AI controlling a humanoid avatar (imitating human control).
People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.
causal structure of a Turing machine simulating a human brain is very different from an actual human brain.
This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).
My statement does not contravene universal computability since I’m assuming a Turing machine can simulate a human brain. Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation. The causal structures in the space-time diagrams are very different. Yes, you can simulate a causal structure, but this is not the same thing as the causal structure of the underlying physical substrate performing the simulation.
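The comparison can be made concrete with a toy space-time diagram. The sketch below (the machine and its transition rules are my own construction, not from the original post) prints each tape configuration of a tiny Turing machine computing 2 + 3 in unary. The long sequential chain of local tape updates is the kind of causal structure being contrasted with a neuron's single-step analog summation of its inputs.

```python
# Space-time diagram of a Turing machine adding 2 + 3 in unary:
# merge the two blocks of 1s by overwriting '+', then erase one
# surplus '1' at the end. Each printed row is one time step.

def unary_add_spacetime(tape):
    tape = list(tape) + ['_']        # '_' is the blank symbol
    head, state = 0, 'find_plus'
    rows = []
    while state != 'halt':
        rows.append(''.join(tape))   # record this time slice
        sym = tape[head]
        if state == 'find_plus':
            if sym == '+':
                tape[head] = '1'     # merge the two unary numbers
                state = 'find_end'
            head += 1
        elif state == 'find_end':
            if sym == '_':
                head -= 1
                state = 'erase'
            else:
                head += 1
        elif state == 'erase':
            tape[head] = '_'         # remove the surplus '1'
            state = 'halt'
    rows.append(''.join(tape))
    return rows

for row in unary_add_spacetime('11+111'):
    print(row)
# The final row is '11111__': the answer 5 in unary.
```

Each row depends only on the previous row and the head position, so the diagram makes the machine's strictly local, sequential causal structure visible.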
It can be simulated because anything can be simulated!
Anything can be simulated imperfectly. Take the weather or the C. elegans nervous system.
are there any empirical predictions where your viewpoint disagrees with functionalism?
I’m just exhibiting skepticism over claims from machine functionalism relating to Turing (and related) machine consciousness. I’m not promoting a specific viewpoint.
I predict that within a decade or two, computers with about 10^14 ops will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.
Have we rejected solipsism? Certainly panpsychism is consistent with it and this appears untouched in consciousness research.
My statement does not contravene universal computability since I’m assuming a Turing machine can simulate a human brain.
Well, if you assume that, then you are already most of the way to functionalism, but I suspect we may be talking about different types of simulations.
Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation.
Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic (addition over real-number distributions rather than digital addition). My use of the term ‘simulation’ encompasses probabilistic simulation, which entails matching the statistical distribution over state transitions rather than deterministic simulation.
Anything can be simulated imperfectly. Take the weather or the C. elegans nervous system.
Neural analog computational systems can be simulated perfectly in a probabilistic sense when you can recreate the exact conditional probability distributions that govern spike events. You can’t necessarily predict the exact actions the brain will output (due to noise effects), but you can—in theory—predict actions from the exact correct distribution. At the limits of simulation we can predict exact samples from our multiverse distribution, rather than predict the exact future of our particular (unknowable) branch.
Simulation of intelligent minds is fundamentally different from weather simulation—for the weather we are interested in the exact outcome in our specific universe. That would be comparable to simulating the exact thoughts of a particular human mind in some situation—which in general is computationally intractable (and unimportant for AI).
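A minimal sketch of this distribution-matching notion of simulation (the logistic rate function and all parameters below are assumptions for illustration, not from the post): two stochastic neurons share the same conditional spike probability, so their individual spike trains differ, but any statistic of their behavior agrees in the limit.

```python
# "Perfect probabilistic simulation": match p(spike | input), not the
# exact spike train. Both neurons draw independent noise, yet their
# firing-rate statistics converge to the same conditional distribution.

import math
import random

def spike_prob(drive):
    """Assumed conditional spike probability given input drive (logistic)."""
    return 1.0 / (1.0 + math.exp(-drive))

def firing_rate(drive, trials, rng):
    spikes = sum(rng.random() < spike_prob(drive) for _ in range(trials))
    return spikes / trials

rng_bio = random.Random(1)   # stands in for biological noise
rng_sim = random.Random(2)   # independent noise in the simulator

trials = 100_000
rate_bio = firing_rate(0.5, trials, rng_bio)
rate_sim = firing_rate(0.5, trials, rng_sim)

# Different samples, matching statistics.
assert abs(rate_bio - rate_sim) < 0.01
assert abs(rate_bio - spike_prob(0.5)) < 0.01
```

This is the sense in which one can "predict actions from the exact correct distribution" without predicting the particular sample path.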
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
Science is concerned with objective reality. A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worst.
In common usage the term consciousness refers to objective reality. Sentences of the form “I was conscious of X”, “Y rendered Bob unconscious”, or “Perhaps at a subconscious level” all suggest a common meaning involving objectively verifiable computations.
We know that consciousness is the particular mental state arising from various computations coordinated across some hundreds of major brain regions. We know that certain drugs can cause loss of consciousness even while neural activity persists. Consciousness depends on precise synchronized coordination between major brain circuits—a straightforward result of the brain being a hybrid digital/analog computer.
We aren’t so far away from being able to objectively detect consciousness via brain scanning and some form of statistical inference—see this interesting work for example (using a clever compressibility or k-complexity perturbation measure).
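A toy illustration of the compressibility idea just referenced (the sketch below is only an illustrative stand-in using zlib, not the actual published method, which operates on binarized brain responses to perturbation): a stereotyped, low-complexity response pattern compresses far better than a spatially differentiated one, so normalized compressed size can serve as a crude complexity index.

```python
# Crude compression-based complexity index over a binary response
# pattern. All data here is synthetic and purely illustrative.

import random
import zlib

def complexity(bits):
    """Normalized compressed size of a binary response pattern."""
    raw = bytes(bits)
    return len(zlib.compress(raw, 9)) / len(raw)

rng = random.Random(0)

# Stereotyped, low-complexity response (e.g., one global slow wave):
stereotyped = [1, 0] * 2000

# Spatially differentiated, hard-to-compress response:
differentiated = [rng.randint(0, 1) for _ in range(4000)]

# The differentiated pattern resists compression; the stereotyped
# pattern does not.
assert complexity(stereotyped) < complexity(differentiated)
```

The inference step would then be a threshold or classifier on such an index, which is what makes the measure objectively testable in principle.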
Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic
Surely you realize that quibbling over the use of analog vs digital neural summation in my toy example does not address my main argument.
Neural analog computational systems can be simulated perfectly in a probabilistic sense
Anything can be simulated perfectly (and trivially) in a probabilistic sense.
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worst.
If we knew the basis for consciousness, we would have objective tests. It’s possible that studying the brain’s structural and connectional organization in detail will provide the clues we need to develop better informed opinions about the basis of consciousness.
This is my final post and I would like to thank everyone for the discussion. If anyone is interested in developing autotracing and autosegmentation programs for connectomics and neural circuit reconstruction in whole-brain volume electron microscopy datasets, please email me at brainmaps at gmail dot com or visit http://connectomes.org for more information. Thanks again.