“One way to argue for type-A materialism is to argue that there is some intermediate X such that (i) explaining functions suffices to explain X, and (ii) explaining X suffices to explain consciousness. One possible X here is representation: it is often held both that conscious states are representational states, representing things in the world, and that we can explain representation in functional terms. If so, it may seem to follow that we can explain consciousness in functional terms. On examination, though, this argument appeals to an ambiguity in the notion of representation. There is a notion of functional representation, on which P is represented roughly when a system responds to P and/or produces behavior appropriate for P. In this sense, explaining functioning may explain representation, but explaining representation does not explain consciousness. There is also a notion of phenomenal representation, on which P is represented roughly when a system has a conscious experience as if P. In this sense, explaining representation may explain consciousness, but explaining functioning does not explain representation. Either way, the epistemic gap between the functional and the phenomenal remains as wide as ever. Similar sorts of equivocation can be found with other X’s that might be appealed to here, such as “perception” or “information.””
The function of the brain is not merely to take in phenomena from the environment and automatically output a behavior, as the functional “representations” of a simple circuit do; on such a model, one could not say that functional representation explains phenomenal, volitional consciousness. Rather, the brain’s function is a sophisticated, active physical process involving complex, dynamic representations suited to reason: representations of the environment, of one’s imagination, thoughts, and deliberation, and of one’s volitional actions. These just are the phenomenal, volitional experiences of consciousness. In this way, explaining functioning explains representation, and explaining representation explains consciousness. Hence there is no epistemic gap between physical and phenomenal truths, and no “hard problem” of explaining consciousness remains once one has solved the easy problems of explaining the various cognitive, behavioral, and environmental functions.
How would you respond to this?
I would defend type-A materialism:
http://consc.net/papers/nature.html