We don’t need to bother ourselves over whether a transistor halfway between a 0 state and a 1 state is “really” in one state or the other, because the ultimate criterion of semantics here is behavior...
I don’t think that this is why we don’t bother ourselves with intermediate states in computers.
To say that we can model a physical system as a computer is not to say that we have a many-to-one map sending every possible microphysical state to a computational state. Rather, we are saying that there is a subset Σ′ of the entire space Σ of microstates for the physical system, and a state machine M, such that:
(1) as the system evolves according to physical laws under the conditions where we wish to apply our computational model, states in Σ′ evolve only into other states in Σ′, never into states in the complement of Σ′;
(2) there is a many-to-one map f sending states in Σ′ to computational states of M (i.e., states in Σ′ correspond to unambiguous states of M); and
(3) if the laws of physics say that the microphysical state σ ∈ Σ′ evolves into the state σ′ ∈ Σ′, then the definition of the state machine M says that the state f(σ) transitions to the state f(σ′).
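The three conditions above can be illustrated with a toy example (the numbers and names here are my own, purely for illustration): a one-bit "device" whose microstates are discrete voltage levels 0–9, with a restoring dynamics that pushes voltages toward the nearest rail standing in for the laws of physics.

```python
# Toy illustration (not from the original text): Σ is voltage levels 0..9,
# Σ′ is the near-rail states, and M is a trivial one-bit state machine.

def evolve(v):
    # Stand-in for physical evolution: voltages relax toward the rails.
    return max(0, v - 1) if v <= 4 else min(9, v + 1)

SIGMA = set(range(10))          # Σ: all microstates
SIGMA_PRIME = {0, 1, 8, 9}      # Σ′: the states our model covers

def f(v):
    # The many-to-one map f: Σ′ → states of M (condition 2).
    assert v in SIGMA_PRIME, "f is defined only on Σ′"
    return 0 if v <= 1 else 1

def M_step(s):
    # The state machine M; for this static bit, each state maps to itself.
    return s

# Condition (1): Σ′ is closed under physical evolution.
assert all(evolve(v) in SIGMA_PRIME for v in SIGMA_PRIME)

# Condition (3): the diagram commutes — f(evolve(σ)) == M_step(f(σ)).
assert all(f(evolve(v)) == M_step(f(v)) for v in SIGMA_PRIME)
```

Note that f is never applied to the intermediate voltages 2–7; the model simply declines to classify them, rather than forcing them into a computational state.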
But, in general, Σ′ is a proper subset of Σ. If a physical system, under the operating conditions that we care about, could really evolve into any arbitrary state in Σ, then most of the states that the system reached would be homogeneous blobs. In that case, we probably wouldn’t be tempted to model the physical system as a computer.
I propose that physical systems are properly modeled as computers only when the proper subset Σ′ is a union of “isolated islands” in the larger state-space Σ, with each isolated island mapping to a distinct computational state. The isolated islands are separated by “broad channels” of states in the complement of Σ′. To the extent that states in the “islands” could evolve into states in the “channels”, then, to that extent, the system shouldn’t be modeled as a computer. Conversely, insofar as a system is validly modeled as a computer, that system never enters “vague” computational states.
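Continuing the toy one-bit device from the conditions above (again with made-up numbers): the "islands" are the near-rail voltage clusters, and the "channels" are the intermediate voltages where f is undefined.

```python
# Toy "islands and channels" picture for a one-bit device with
# voltage microstates 0..9 (illustrative numbers only).
islands = [{0, 1}, {8, 9}]            # each island → one computational state
sigma_prime = set().union(*islands)   # Σ′ is the union of the islands
channels = set(range(10)) - sigma_prime

# The islands are separated by a broad channel of unmapped states.
assert channels == {2, 3, 4, 5, 6, 7}

def perturb(v, kick):
    # A cosmic-ray-style disturbance outside normal operating conditions.
    return v + kick

# A sufficiently large kick lands an island state in the channel, where
# no computational state is defined — exactly the extent to which the
# system fails to be a computer.
assert perturb(1, 3) in channels
```

The point of the "broad channels" condition is that small perturbations stay within an island, so ordinary noise never produces a vague computational state.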
The computational theory of mind amounts to the claim that the brain can be modeled as a state machine in the above sense.
But suppose that a confluence of cosmic rays knocked your brain into some random state in the “channels”. Well, most such states correspond to no qualia at all. Your brain would just be an inert mush. But some of the states in the channels do correspond to qualia. So long as this is possible, why doesn’t your vagueness problem reappear here?
If this were something that we expected would ever really happen, then we would be in a world where we shouldn’t be modeling the brain as a computer, except perhaps as a computer in which many qualia states each correspond to a unique microphysical state, so that a single microphysical change sometimes makes for a different qualia state. In practice, that would probably mean that we should think of our brains as more like a bowl of soup than a computer. But insofar as this just doesn’t happen, we don’t need to worry about the vagueness problem you propose.