Finding yourself to be a conscious being is anthropically necessary. If the universe contains quantum-computational conscious beings and classical-computational zombies, and only the first are conscious, then you can only ever be the first kind of being, and you can only ever find that you had an evolutionary history that managed to produce such beings as yourself. (ETA: Also, you can only find yourself to exist in a universe where consciousness can exist, no matter how exotic an ontology that requires.)
Obviously I believe in the possibility of unconscious simulations of conscious beings. All it should require is implementing the state machine of a conscious being on a distributed substrate. But I have no idea how likely it is that evolution would produce something like that. Consciousness does have survival value, and since I take genuine conscious states to be something relatively fundamental, some fairly fundamental laws are probably implicated in the details of their internal causality. I simply don’t know whether a naturally evolved unconscious intelligence would be likely to have a causal architecture isomorphic to that of a conscious intelligence, or whether it would more likely implement useful functions like self-monitoring in a computationally dissimilar way.
What I say about the internal causality of genuine consciousness may sound mysterious, so I will try to give an example; I emphasize that this is not even speculation, just an ontology of consciousness that allows me to make a point.
One of the basic features of conscious states is intentionality—they’re about something. So let us say that a typical conscious state contains two sorts of relations—“being aware of” a quale, and “paying attention to” a quale. Unreflective consciousness is all awareness and no attention, while a reflective state of consciousness will consist of attending to certain qualia, amid a larger background of qualia which are just at the level of awareness.
Possible states of consciousness would be specified by listing the qualia and by listing whether the subject is attending to them or just aware of them. (The whole idea is that when attending, you’re aware that you are aware.) Now that we have a state space, we can talk about dynamics. There will be a “physical law” governing transitions in the conscious state, whereby the next state after the current one is a function of the current state and of various external conditions.
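To make the state space and the transition rule concrete, here is a minimal Python sketch. Everything specific in it is hypothetical: the quale labels, the `salience` inputs, and the threshold rule are mine, standing in for whatever exact law such an ontology would actually posit.

```python
from enum import Enum

class Level(Enum):
    AWARE = 1      # quale present at the level of bare awareness
    ATTENDING = 2  # subject is aware of being aware of the quale

# A conscious state: a map from qualia to their level.
State = dict[str, Level]

def next_state(state: State, salience: dict[str, float],
               threshold: float = 0.7) -> State:
    """Toy transition law: a quale jumps from awareness to attention
    when an external condition (here, a made-up 'salience' score)
    crosses a threshold. The real law would state an exact
    psycho-phenomenological condition; this is only a placeholder."""
    return {
        quale: Level.ATTENDING
        if level is Level.AWARE and salience.get(quale, 0.0) >= threshold
        else level
        for quale, level in state.items()
    }

# Unreflective consciousness: all awareness, no attention.
s0 = {"A": Level.AWARE, "B": Level.AWARE, "C": Level.AWARE}
s1 = next_state(s0, salience={"A": 0.9})
# s1 == {"A": Level.ATTENDING, "B": Level.AWARE, "C": Level.AWARE}
```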
An example of a transition that might be of interest is the one from the state “aware of A, aware of B, aware of C...” to the state “attending to A, aware of B, aware of C...”. What are the conditions under which we start attending to something, i.e. the conditions under which we become aware of being aware of it? In this hypothetical ontology, there would be a fundamental law describing the exact conditions that cause such a transition. We can go further and think about embedding this model of mind into a formal ontology of monads whose mathematical states are drawn from, say, Hilbert spaces with nested graded subspaces of varying dimensionality, and which reproduces quantum mechanics in some limit. We might be able to represent the recursive nature of iterated reflection (being aware of being aware of being aware of A) by using this subspace structure.
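As a purely illustrative formalization (my own guess at how the nesting might be written down, not something the model commits to), iterated reflection could be indexed by depth in a chain of graded subspaces:

```latex
% Hypothetical sketch: reflection depth n indexes a nested chain of
% subspaces of growing dimension,
\mathcal{H}_0 \subset \mathcal{H}_1 \subset \mathcal{H}_2 \subset \cdots,
\qquad \dim \mathcal{H}_n < \dim \mathcal{H}_{n+1},
% with "aware of A" a depth-0 state and each further level of
% "being aware of being aware" lifting the state one subspace up:
T_{\mathrm{attend}} \colon \mathcal{H}_n \longrightarrow \mathcal{H}_{n+1}.
```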
We are then to think of the world as consisting mostly of “monads”, or tensor factors, drawn from the subspaces of smallest dimensionality; sometimes, though, they evolve into states of arbitrarily high dimensionality, which corresponds to the formation of entangled states in conventional quantum mechanics. But this is all just mathematical formalism. We are to understand that the genuine ontology of the complex monadic states is this business about a subject perceiving a set of qualia under a mixture of the two aspects (awareness versus attention), and that the dynamical laws of nature pertaining to monads in reflective states are actually statements of the form “A quale jumps from awareness level to attention level if… [some psycho-phenomenological condition is met]”.
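In conventional notation, the distinction being gestured at is just factorizability. The quantum mechanics below is standard; reading the non-factorizable case as “one complex monad” is this model’s gloss on it.

```latex
% Two simple monads: the joint state factorizes,
|\Psi\rangle = |\psi_1\rangle \otimes |\psi_2\rangle,
% whereas a complex monad would be an entangled state with no such
% factorization, e.g. the singlet
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(\,|0\rangle|1\rangle - |1\rangle|0\rangle\,\bigr).
```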
Furthermore, it would be possible to simulate complex individual monads with appropriately organized clusters of simple monads, but ontologically you wouldn’t actually have the complex states of awareness and attention being present, you would just have lots of simple monads being used like dots in a painting or bits in a computer.
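One way to picture that claim in code (my analogy, continuing the Python sketch above, not anything the model specifies): the same dynamics can be run on a flat string of bits, where the structured state exists only in how we read the bits, not in the bits themselves.

```python
import json

# Continues the earlier sketch (State, Level, next_state, s0).
# A 'cluster of simple monads' is modeled as a flat bit string: like
# dots in a painting, the complex state is in the interpretation.
def encode(state: State) -> str:
    raw = json.dumps({q: lvl.name for q, lvl in state.items()}).encode()
    return "".join(f"{byte:08b}" for byte in raw)

def decode(bits: str) -> State:
    raw = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return {q: Level[name] for q, name in json.loads(raw).items()}

# The simulation reproduces the same transitions, one decode away,
# without (on this ontology) any awareness or attention being present:
bits = encode(s0)
assert next_state(decode(bits), salience={"A": 0.9}) == next_state(s0, salience={"A": 0.9})
```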
I really do expect that the truth about how consciousness works is going to sound this weird and this concrete, even if this specific fancy is way off in its details.
Sorry, I think I was unclear. When I was wondering about the causal history of human qualia, I didn’t mean the causal history of a particular quale in a human, but rather the causal history of why humans have qualia.
I don’t think anthropics are a sufficient answer to that question; if there exist no plausible causal histories of humans with qualia, then either the humans or the qualia have to go.
If the universe contains quantum-computational conscious beings and classical-computational zombies, and only the first are conscious, then you can only ever be the first kind of being, and you can only ever find that you had an evolutionary history that managed to produce such beings as yourself.
If zombies are possible, why can’t this “you” you are talking to be a zombie? Zombies should be capable of reasoning correctly in the Sleeping Beauty problem, or about waking up in blue or red rooms, etc.
Suppose you make a zombie clone of a human (not necessarily a perfect copy, merely one similar enough that it can’t tell whether it’s the zombie), and have them both play a game: each is shown a button and chooses whether to press it. If neither presses, they get $1,000; if both press, they get nothing; and if only the human presses, they get $1,000,000 (in all cases, the money is split between the copies). In such a scenario, you’d better hope the zombie doesn’t follow your advice and reason that it must be the human.
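To spell out why “reason that you must be the human” backfires, here is a quick check of the payoffs in Python, as stated in the game. The case where only the zombie presses is not specified in the original; I assume it pays nothing.

```python
# Total prize as a function of (human presses, zombie presses).
payoff = {
    (False, False): 1_000,      # neither presses
    (True,  True):  0,          # both press
    (True,  False): 1_000_000,  # only the human presses
    (False, True):  0,          # only the zombie presses (assumed; not stated)
}

# If both copies run the same reasoning ("I must be the human, so press"),
# they act identically, so the asymmetric jackpot is unreachable:
both_press   = payoff[(True, True)]    # 0
neither_does = payoff[(False, False)]  # 1_000
jackpot      = payoff[(True, False)]   # 1_000_000, requires the copies to act differently
```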