Another thread for answers to specific questions.

Second question: Where is computation?

People like to attribute computational states, not just to computers, but to the brain. And they want to say that thoughts, perceptions, etc., consist in being in a certain computational state. But a physical state does not correspond inherently to any one computational state… To be in a particular cognitive state is to be in a particular computational state. But if the “computational state” of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?
I don’t think your question is well represented by the phrase “where is computation”.
Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer’s hardware.
For example, consider a program that repeatedly increments the value in a register. Now we could alternatively focus on just the lowest bit of the register and see a program that repeatedly complements that bit. Which is right? Or perhaps we can see it as a program that counts through all the even numbers by interpreting the register bits as being concatenated with a 0. There is a famous argument that we can in fact interpret this counting program as enumerating the states of any arbitrarily complex computation.
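To make the ambiguity concrete, here is a minimal Python sketch (my own illustration; the 4-bit width and the three “reader” functions are assumptions made for the example, not part of the argument). One and the same sequence of register states supports all three readings at once:

```python
WIDTH = 4  # a hypothetical 4-bit register

def step(register):
    """The 'physics': increment the register, wrapping around at 2**WIDTH."""
    return (register + 1) % (2 ** WIDTH)

# Three observers attribute three different computations to the same states.
def as_counter(register):
    return register          # a program counting 0, 1, 2, ...

def as_bit_flipper(register):
    return register & 1      # a program complementing its one bit: 0, 1, 0, ...

def as_even_counter(register):
    return register << 1     # the register bits with a 0 appended: 0, 2, 4, ...

state = 0
for _ in range(6):
    print(as_counter(state), as_bit_flipper(state), as_even_counter(state))
    state = step(state)
```

Nothing on the “physical” side (the step function) differs between the three columns of output; only the observer’s reading differs.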
Chalmers, in the previous link, aims to resolve the ambiguity with certain rules: basically, some interpretations count and some don’t. And maybe there is an unresolved ambiguity in the end. But in practice it seems likely that we could take brain activity and create a neural network simulation which runs accurately and produces the same behavioral outputs as the brain: the same speech, the same movements. At least, if you were to deny this possibility, that would be interesting.
In summary, although one can theoretically map any computation onto any physical system, for a system such as we believe the brain to be, with its simultaneous complexity and organizational unity, it seems likely that one could come up with a computational program that would capture the brain’s behavior, claim to have qualia, and pose the same hard questions about where the color blue lies among the electronic circuits.
> I don’t think your question is well represented by the phrase “where is computation”.
If people want to say that consciousness is computation, they had better be able to say what computation is, in physical terms. Part of the problem is that computational properties often have a representational or functional element, but that’s the problem of meaning. The other part of the problem is that computational states are typically vague, from a microphysical perspective. Using the terminology from thermodynamics of microstates and macrostates—a microstate is a complete and exact description of all the microphysical details, a macrostate is an incomplete description—computational states are macrostates, and there is an arbitrariness in how the microstates are grouped into macrostates. There is also a related but distinct sorites problem: what defines the physical boundary of the macro-objects possessing these macrostates? How do you tell whether a given elementary particle needs to be included, or not?
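A toy Python illustration of that arbitrariness (the voltages and thresholds here are invented for the example): the convention separating “logical 0” from “logical 1” is supplied by the interpreter, not by the microphysics.

```python
# Made-up "microstates": voltage readings on five wires.
voltages = [0.1, 0.7, 1.4, 2.3, 3.1]

def macrostate(voltages, threshold):
    """Group continuous microstates into discrete computational macrostates."""
    return [1 if v >= threshold else 0 for v in voltages]

print(macrostate(voltages, threshold=1.0))  # [0, 0, 1, 1, 1]
print(macrostate(voltages, threshold=2.0))  # [0, 0, 0, 1, 1]
# The same microstate yields two different "computational states",
# depending on an arbitrary choice of grouping.
```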
I don’t detect much sympathy for my insistence that aspects of consciousness cannot be identified with vague entities or properties (and possibly it’s just not understood), so I will try to say why I insist on it. It follows from insisting that consciousness and its phenomena do actually exist. To be is to be something, something in particular. Vaguely defined entities are not particular enough. Every perception that ever occurs is an actual thing that briefly exists. (Just to be clear, I’m not saying that the object of every perception exists—if that were true, there would be no such thing as perceptual error—but I am saying that perceptions themselves exist.) But computational macrostates are not exactly defined at the micro level. So either they are incompletely specified, or else the fuzziness must be filled in to complete the specification, in a way that is necessarily arbitrary and could be done in many ways. The definitional criteria for computational or functional states are simply not strict enough to compel a unique micro-level completion.
Also, macrostates have no causal power—all causality is micro—and yet the whole point of functionalism is to make mental states causally efficacious.
You didn’t say any of this, Hal, but I want to provide some context for the question.
> Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer’s hardware.
You can call it that, but it is an attribution made by an observer, not a property intrinsic to the purely physical reality. The relationship between the objective physical facts and the attributed computational properties is that the former constrain but do not determine the latter. As Chalmers observes, Putnam’s argument is a little excessive. But it is definitely a fact that any complex state machine can also be described in simpler terms, by defining new states which are equivalence classes of the old states; and it is also a fact that we choose to ignore many of the strictly physical properties of our computers when we conceive of them as computational devices. Any complex physical object admits a very large number of interpretations as a state machine, none of which is intrinsically more real than any other, and this rules out the identification of such states with conscious states, whose existence does not depend on the whim of an external observer.
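For what it’s worth, here is a minimal Python sketch of that coarse-graining point (the mod-4 counter and the parity classes are my own toy assumptions, not anything from Chalmers or Putnam):

```python
# Fine-grained machine: a counter modulo 4, given as a transition table.
fine_next = {0: 1, 1: 2, 2: 3, 3: 0}

# Group the old states into equivalence classes by parity.
classes = {0: "even", 1: "odd", 2: "even", 3: "odd"}

# The induced coarse machine is well defined because the grouping respects
# the dynamics: every even state maps to an odd one, and vice versa.
coarse_next = {classes[s]: classes[t] for s, t in fine_next.items()}

print(coarse_next)  # {'even': 'odd', 'odd': 'even'}
```

Both tables are equally consistent descriptions of the same transition structure; nothing in the dynamics marks one of them as the machine’s “real” state space.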
> …in practice it seems likely that we could take brain activity and create a neural network simulation which runs accurately and produces the same behavioral outputs as the brain: the same speech, the same movements. At least, if you were to deny this possibility, that would be interesting.
Yes, I do think you should be able to have a brain simulation which would not be conscious and yet do all those things. It’s already clear that we can have incomplete “simulations” which claim to be thinking or feeling something, but don’t. The world is full of chatbots, lifelike artificial characters, hardware and software constructed to act or communicate anthropomorphically, and so on. There is going to be some boundary, defined by the detail and method of simulation, on one side of which you actually have consciousness, and on the other side of which you do not.
> To be is to be something, something in particular. Vaguely defined entities are not particular enough. Every perception that ever occurs is an actual thing that briefly exists.

In other words, ontologically fundamental mental entities. Could we move on please?

> In other words, ontologically fundamental mental entities. Could we move on please?

A thing doesn’t have to be fundamental in order to be exact. If individual electrons are fundamental, an “entity” consisting of one electron in one definite location and another electron in another definite location is not a vague entity.

The problem is not reduction per se. The problem discussed here is the attempt to identify definitely existing entities with vaguely defined entities.