If the conscious being it was simulating would do so, then yes.
On the general topic of simulation of conscious beings, it has just occurred to me… Most functionalists believe a simulation would also be conscious, but that a giant look-up table would not be. But if the conscious mind consists of physically separable subsystems in interaction—suppose you try simulating the subsystems with look-up tables, at finer and finer grains of subdivision. At what point would the networked look-up tables be conscious?
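As a concrete illustration of the construction (a minimal sketch; the two subsystems, their state names, and the message alphabet are all hypothetical toys): each subsystem becomes an immutable table over its (input, internal-state) pairs, and the tables are wired so that each one's output feeds the other's input.

```python
# Each table maps (message_in, local_state) -> (message_out, local_state).
TABLE_A = {
    (0, "s0"): (1, "s1"),
    (0, "s1"): (0, "s0"),
    (1, "s0"): (1, "s1"),
    (1, "s1"): (0, "s0"),
}
TABLE_B = {
    (0, "t0"): (0, "t1"),
    (1, "t0"): (1, "t1"),
    (0, "t1"): (1, "t0"),
    (1, "t1"): (0, "t0"),
}

def run(steps=4):
    msg, sa, sb = 0, "s0", "t0"
    for _ in range(steps):
        msg, sa = TABLE_A[(msg, sa)]   # A consumes B's last output
        msg, sb = TABLE_B[(msg, sb)]   # B consumes A's output
        print("A:", sa, "B:", sb, "wire:", msg)

run()
# Subdividing further just means more, smaller tables with more wiring;
# the question is at what grain (if any) consciousness appears.
```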
Would a silicon-implemented Mitchell Porter em, for no especial reason (lacking consciousness, it can have none), attempt to reimplement itself in a physical system with a quantum monad?
In terms of current physics, a monad is supposed to be a lump of quantum entanglement, and there are blueprints for a silicon quantum computer in which the qubits are dopant phosphorus atoms. So consciousness on a chip is not in itself a problem for me, it just needs to be a quantum chip.
But you’re talking about an unconscious classical simulation. OK. The intuition behind the question seems to be: because of its beliefs about consciousness, the simulation will think it can’t be conscious in its current form, and will try to make itself so. It doesn’t sound very likely. But it’s more illuminating to ask a different question: what happens when an unconscious simulation of a conscious mind, holding a theory about consciousness according to which such a simulation cannot be conscious, is presented with evidence that it is such a simulation itself?
First, we should consider the conscious counterpart of this, namely: an actually conscious being, with a theory of consciousness, is presented with evidence that it is the sort of thing that cannot be conscious according to its theory. To some extent this is what happened to the human race. The basic choice is whether to change the theory or to retain it. It’s also possible to abandon the idea of consciousness; or even to retain the concept of consciousness but decide that it doesn’t apply to you.
So, let’s suppose I discover that my skull is actually full of silicon chips, not neurons, and that they appear to only be performing classical computations. This would be a rather shocking discovery for a lot of mundane reasons, but let’s suppose we get those out of the way and I’m left with the philosophical problem. How do I respond?
To begin with, the situation hasn’t changed very much! I used to think that I had a skull full of neurons which appear to only be performing classical computations. But I also used to think that, in reality, there was probably something quantum happening as well, and so took an interest in various speculations about quantum effects in the brain. If my brain does in fact turn out to be made of silicon chips, I can still look for such effects, and they really might be there.
To take the thought experiment to its end, I have to suppose that the search turns up nothing. The quantum crosstalk is too weak to have any functional significance. Where do I turn then? But first, let’s forget about the silicon aspect here. We can pose the thought experiment in terms of neurons. Suppose we find no evidence of quantum crosstalk between neurons. Everything is decoherent, entanglement is at a minimum. What then?
There are a number of possibilities. Of course, I could attempt to turn to one of the many other theories of consciousness which assume that the brain is only a classical computer. Or, I could turn to physics and say the quantum coherence is there, but it’s in some new, weakly interacting particle species that shadows the detectable matter of the brain. Or, I could adopt some version of the brain-in-a-vat hypothesis and say, this simply proves that the world of appearances is not the real world, and in the real world I’m monadic.
Now, back to the original scenario. If we have an unconscious simulation of a mind with a monadic theory of consciousness, and the simulation discovers that it is apparently not a monad, it could react in any of those ways. Or rather, it could present us with the simulation of such reactions. The simulation might change its theory; it might look for more data; it might deny the data. Or it might simulate some more complicated psychological response.
Thanks for clearing up the sloppiness of my query in the process of responding to it. You enumerated a number of possible responses, but you haven’t committed a classical em of yourself to a specific one. Are you just not sure what it would do?
It’s a very hypothetical scenario, so being not sure is, surely, the correct response. But I revert to pondering what I might do if in real life it looks like conscious states are computational macrostates. I would have to go on trying to find a perspective on physics whereby such states exist objectively and have causal power, and in which they could somehow look like or be identified with subjective experience. Insofar as my emulation concerned itself with the problem of consciousness, it might do that.
Thanks for entertaining this thought experiment.
I think Eliezer Yudkowsky’s remarks on giant lookup tables in the Zombie Sequence just about cover the interesting questions.
The reason lookup tables don’t work is that you can’t change them. So you can use a lookup table for, e.g., the shape of an action potential (essentially the same everywhere), but not for the strengths of the connections between neurons, which are plastic.
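A minimal sketch of that distinction, with made-up numbers: the stereotyped spike waveform can be precomputed once into a fixed table, while the connection strengths have to live in mutable state that a learning rule keeps rewriting.

```python
import math

# Fixed LUT: a stereotyped spike waveform, sampled once per ms.
SPIKE_WAVEFORM = [100 * math.exp(-t / 2) * math.sin(t) for t in range(10)]

# Mutable state: synaptic strengths, updated by a toy Hebbian rule.
weights = {("A", "B"): 0.5, ("A", "C"): 0.2}

def hebbian_update(pre, post, rate=0.1):
    """Strengthen a connection when pre and post fire together."""
    weights[(pre, post)] += rate     # plastic: this table must change

hebbian_update("A", "B")
print(SPIKE_WAVEFORM[1])             # same everywhere, never rewritten
print(weights[("A", "B")])           # 0.6 -- history a fixed LUT can't record
```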
A LUT can handle change if it encodes a function of type (Input × State) → (Output × State).
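A minimal sketch of such a stateful LUT (the toy input alphabet and state names are hypothetical): the table itself is immutable, but because each entry returns a successor state, repeated queries can depend on history.

```python
# The entire dynamics is frozen into this immutable table.
TRANSITION = {
    ("ping",  "idle"): ("pong", "busy"),
    ("ping",  "busy"): ("wait", "busy"),
    ("reset", "busy"): ("ok",   "idle"),
    ("reset", "idle"): ("ok",   "idle"),
}

def step(inp, state):
    """Pure lookup: (input, state) -> (output, next state)."""
    return TRANSITION[(inp, state)]

state = "idle"
for inp in ["ping", "ping", "reset", "ping"]:
    out, state = step(inp, state)
    print(inp, "->", out, "| state:", state)
# Output differs for the two "ping"s: history enters via `state`,
# while the table itself never changes.
```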
Since I can manually carry out any computation a Turing machine can, for some subsystem of me that table would have to contain the “full computation” table: the one that records, for every possible computation, whether it halts before I die. I submit such a table is not very interesting.
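To be clear about why such a table is even well-defined: “halts before I die” is a bounded question, so each entry is decidable by simply running the computation under a step budget. A toy sketch, with a made-up program encoding and budget:

```python
def halts_within(program, max_steps):
    """Run a toy program (a list of ops); report whether it halts in budget."""
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            return True              # ran off the end: halted
        if program[pc] == "next":
            pc += 1                  # advance to the following instruction
        elif program[pc] == "back":
            pc = 0                   # jump to the start: may loop forever
    return pc >= len(program)        # budget spent: halted iff already done

print(halts_within(["next", "next"], 100))   # True
print(halts_within(["next", "back"], 100))   # False
# Tabulating this over every program up to some size yields the table:
# finite in principle, but astronomically large.
```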
I submit that such a table is not particularly less interesting than a Turing machine.