My fundamental problem with any quantum-based theory like this is that since quantum systems (as far as we can tell) can always be modeled by computationally equivalent (but slower) classical systems, such theories necessarily end up hypothesizing the possibility of zombies: non-conscious entities that simulate conscious ones perfectly.
This is extremely unlikely, for several different reasons.
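To make the computational premise here concrete: below is a minimal state-vector sketch in Python (my illustration, not part of the original exchange) showing how a small quantum system's dynamics can be reproduced exactly by classical arithmetic, at a cost in memory and time that grows exponentially with the number of qubits: computationally equivalent, but slower.

```python
import numpy as np

# Classical state-vector simulation of an n-qubit quantum system.
# The state is a vector of 2**n complex amplitudes, so memory and
# time grow exponentially: equivalent to the quantum system, but slower.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)                                 # single-qubit identity

def apply_gate(state, gate, target, n):
    """Apply a 2x2 gate to the target qubit of an n-qubit state vector."""
    full = np.array([[1.0]])
    for q in range(n):
        full = np.kron(full, gate if q == target else I2)
    return full @ state

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                        # start in |000>
for q in range(n):                    # put every qubit in superposition
    state = apply_gate(state, H, q, n)

print(np.abs(state)**2)               # Born probabilities: uniform 1/8
```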
In a monadology such as I propose, you cannot have zombies in the classical sense, but you should be able to have unconscious simulations of consciousness.
The zombie as described by Chalmers is implicitly proposed within the framework of property dualism: you have a causally closed physical world, an epiphenomenal world of consciousness linked to it by a psychophysical bridging law, and a zombie world that results from subtracting consciousness from the picture. In his book The Conscious Mind he has a chapter on “the paradox of phenomenal judgment”: zombies, being behaviorally identical to their conscious counterparts, talk about consciousness and even philosophize about it, without having it.

In my monadology, the conscious state is identical with the state of the monadic self, so it is causally efficacious and you cannot simply subtract it from the world while preserving the causal structure. There is also no paradox of phenomenal judgment, because phenomenal judgments (judgments about the experience you are having) are in fact caused by consciousness. However, there is no obvious barrier within monadology to the creation of a black-box simulation of consciousness whose interior mechanism is made up of many simple monads rather than a single complex conscious monad, and which is therefore not conscious itself.
However, there is no obvious barrier within monadology to the creation of a black-box simulation of consciousness whose interior mechanism is made up of many simple monads rather than a single complex conscious monad, and which is therefore not conscious itself.
If the conscious monad’s internal dynamics are uncomputable (even if its outward behavior is computable), so that any such simulation must have a radically different internal structure, then perhaps it is indeed not conscious. But if a simulation can be made that is structurally similar enough to the conscious monad, then it z-talks about consciousness for the same reason (at the appropriate level of abstraction) as the conscious monad, and the standard anti-zombie arguments return.
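The distinction being leaned on here, between a system's observable behavior and its internal dynamics, can be pictured with a toy sketch (mine, not part of the original exchange): two systems with identical input-output behavior over a fixed horizon, one driven by evolving internal state transitions, the other a bare replay table with no analogous internals.

```python
# Toy illustration: identical observable behavior,
# radically different internal structure.

class RichSystem:
    """Produces outputs via evolving internal state transitions."""
    def __init__(self):
        self.state = 0

    def step(self, x):
        self.state = (self.state * 31 + x) % 97   # internal dynamics
        return self.state % 2                     # observable behavior

class ReplaySystem:
    """Matches RichSystem's behavior on a known input horizon by
    replaying a precomputed table; it has no analogous internals."""
    def __init__(self, inputs):
        ref = RichSystem()
        self.table = [ref.step(x) for x in inputs]
        self.t = 0

    def step(self, x):
        out = self.table[self.t]   # the input is not even consulted
        self.t += 1
        return out

inputs = [3, 1, 4, 1, 5, 9, 2, 6]
a, b = RichSystem(), ReplaySystem(inputs)
assert [a.step(x) for x in inputs] == [b.step(x) for x in inputs]
```

Nothing here settles the philosophical question; it only shows that behavioral equivalence underdetermines internal structure, which is the hinge the argument above turns on.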
it z-talks about consciousness for the same reason (at the appropriate level of abstraction) as the conscious monad
“At the appropriate level of abstraction” is pretty broad. Part of the reason that a conscious monad talks about seeing colors is that it does see colors, whereas its simulation (let us suppose) talks about seeing colors only because it contains computational tokens imitating the causal role that colors play in the conscious monad’s internal state transitions. I don’t see any contradiction. How would you employ a standard anti-zombie argument here?
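For concreteness, here is one way to picture the “computational token” idea (a construction of mine, not the author's): a token enters the same transitions and drives the same verbal report that the experience would, and on the view above, that shared causal role is all the simulation has.

```python
# Toy sketch: a token imitating the causal role of a color experience.
# The token triggers the same state transitions and the same report
# that seeing red would; nothing here experiences anything.

REPORTS = {"RED_TOKEN": "I am seeing red."}

class BlackBoxSimulation:
    def __init__(self):
        self.workspace = []              # functional stand-in for a mind

    def receive(self, token):
        self.workspace.append(token)     # the token plays the causal role

    def report(self):
        return [REPORTS[t] for t in self.workspace]   # and drives the talk

sim = BlackBoxSimulation()
sim.receive("RED_TOKEN")
print(sim.report())   # ['I am seeing red.'] : color talk without color seen
```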