Wedrifid, yes, if Schwitzgebel’s conjecture were true, then farewell to reductive physicalism and the ontological unity of science. The USA is a “zombie”. Its functionally interconnected but skull-bound minds are individually conscious; and sometimes the behaviour of the USA as a whole is amenable to functional description; but the USA is not a unitary subject of experience. However, the problem with relying on this intuitive response is that the phenomenology of our own minds seems to entail exactly the sort of strong ontological emergence we’re excluding for the USA. Let’s assume, as microelectrode studies tentatively confirm, that individual neurons can support rudimentary experience. How can we rigorously derive bound experiential objects, let alone the fleeting synchronic unity of the self, from discrete, distributed, membrane-bound classical feature processors? Dreamless sleep aside, why aren’t we mere patterns of “mind dust”?
None of this might seem relevant to ChrisHallquist’s question. Computationally speaking, who cares whether Deep Blue, Watson, or Alpha Dog (etc.) are unitary subjects of experience? But anyone who wants to save reductive physicalism should at least consider why quantum mind theorists are prepared to contemplate a role for macroscopic quantum coherence in the CNS. Max Tegmark hasn’t refuted quantum mind; he’s made a plausible but unargued assumption, namely that sub-picosecond decoherence timescales are too short to do any computational and/or phenomenological work. Maybe so; but this assumption remains to be empirically tested. If all we find is “noise”, then I don’t see how reductive physicalism can be saved.