Doesn’t it mean that consciousness is an epiphenomenon? Since all quantum algorithms can be expressed as equivalent classical algorithms, we could have an unconscious computer which is functionally equivalent to a human brain.
ETA: I can’t see any reason to associate consciousness with some particular kind of physical object/process, as that undermines the functional significance of consciousness as the brain’s high-level coordination, decision-making and self-representation system.
No, it would just mean that you can have unconscious simulations of consciousness. Think of it like this. We say that the things in the universe which have causal power are “quantum tensor factors”, and consciousness always inhabits a single big tensor factor, but we can simulate it with lots of little ones interacting appropriately. More precisely, consciousness is some sort of structure which is actually present in the big tensor factor, but which is not actually present in any of the small ones. However, its dynamics and interactions can be simulated by the small ones collectively. Also, if you took a small tensor factor and made it individually “big” somehow (evolved it into a big state), it might individually be able to acquire consciousness. But the hypothesis is that consciousness as such is only ever found in one tensor factor, not in sets of them. It’s a slightly abstract conception when so many details are lacking, but it should be possible to understand the idea: the world is made of Xs, an individual X can have property Y, a set of Xs cannot, but a set of Xs can imitate the property.
What would really make consciousness epiphenomenal is if we persisted with property dualism, so we have the Xs, their “physical properties”, and then their correlated “subjective properties”. But the whole point of this exercise is to be able to say that the subjective properties (which we know to exist in ourselves) are the “physical properties” of a “big” X. That way, they can enter directly into cause and effect.
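As a loose numerical sketch of that last point (only an analogy borrowed from standard quantum information, not the hypothesized theory itself): a four-dimensional Hilbert space can be read either as one “big” tensor factor or as the product of two two-dimensional “small” factors, and an entangled state is a definite structure of the big factor that is not a product of any states of the small ones.

```python
import numpy as np

# Toy analogy only: a Bell state is a definite state of the joint ("big")
# factor, but it cannot be written as a product of states of the two
# "small" 2-dimensional factors -- its Schmidt rank is 2, not 1.

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)              # (|00> + |11>)/sqrt(2)
product = np.kron(np.array([1.0, 0.0]), np.array([0.0, 1.0]))   # |0> (x) |1>

def schmidt_rank(state, d_a=2, d_b=2, tol=1e-12):
    """Count the nonzero singular values of the state reshaped as a d_a x d_b matrix."""
    singular_values = np.linalg.svd(state.reshape(d_a, d_b), compute_uv=False)
    return int(np.sum(singular_values > tol))

print(schmidt_rank(product))  # 1: fully expressible as two small factors in definite states
print(schmidt_rank(bell))     # 2: a structure belonging to the joint factor only
```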
No, it would just mean that you can have unconscious simulations of consciousness.
Doesn’t this undermine the entire philosophical basis of your argument, which rests on the experience of consciousness being real? If your system allows such an unconscious classical simulation, then why believe you are one of the actual conscious entities? This seems P-zombie-ish.
If your system allows such an unconscious classical simulation, then why believe you are one of the actual conscious entities?
It’s like asking, why do you think you exist, when there are books with fictional characters in them? I don’t know exactly what is happening when I confirm by inspection that some reality exists or that I have consciousness. But I don’t see any reason to doubt the reality or efficacy of such epistemic processes, just because there should also be unconscious state machines that can mimic their causal structure.
I understand you. Your definition is that “real consciousness” is a quantum tensor factor belonging to a particular class of quantum tensor factors, because we can find them in human brains, we know that at least one human brain is conscious, and consciousness must be a physical entity in order to participate in a causal chain. All other quantum tensor factors and their sets are not consciousness, by definition.
My questions are:
1. How do we define said class without fuzziness, when it is not yet known what is not “real consciousness”? Should we include dolphins’ tensor factors, great apes’, and so on?
2. Is it always necessary for something to exist as a physical entity in order to participate in a causal chain? Does temperature exist as a physical entity? Does the “thermostatousness” of a refrigerator exist as a physical entity?
Of course, temperature and “thermostatousness” are our high-level descriptions of physical systems; they don’t exist in your sense. So it seems that you see a contradiction between the subjectively apparent existence of consciousness and the apparent nonexistence of a physical representation of consciousness as a high-level description of brain functions. Don’t you see a flaw in that contradiction?
Causality for statistical or functional properties mostly reduces to generalizations about the behavior of exact microstates. (“Microstate” means a physical state completely specified in its microscopic detail. A purely thermodynamic or macroscopic description is a “macrostate”.) Entropy goes up because most microstate trajectories go from the small phase-space volume into the large phase-space volume. Macroscopic objects have persistent traits because most microstate trajectories for those objects stay in the same approximate region of state space.
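A minimal toy simulation of that point (my own illustration, with arbitrary numbers): a hundred particles in a two-chamber box, where the microstate is the exact list of chamber assignments and the macrostate is only the count in the left chamber. There is exactly one microstate with everything on the left, but about 10^29 microstates near the 50/50 split, so random microscopic motion almost always carries the system toward the larger macrostate.

```python
import random

# Toy model: each step, one randomly chosen particle hops to the other chamber.
# The exact list of chamber assignments is the microstate; the left-chamber
# count is the macrostate.  Starting in the tiny all-left region of phase
# space, the trajectory drifts into the vastly larger half-and-half region.

random.seed(0)
n_particles, n_steps = 100, 2000
in_left = [True] * n_particles            # initial microstate: all particles on the left

for step in range(n_steps + 1):
    if step % 500 == 0:
        print(f"step {step:4d}: particles in left chamber = {sum(in_left)}")
    mover = random.randrange(n_particles)  # one particle hops at random
    in_left[mover] = not in_left[mover]
```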
So the second question is about the ontology of macrostate causation. I say it is fundamentally statistical. Cause and effect in its elemental form only operates locally in the microstate, between and within fundamental entities, whatever they are. Macrostate tendencies are like thermodynamic laws or Zipf’s law: they are really statements about the statistics of very large and complex chains of exact microscopic causal relations.
The usual materialist idea of consciousness is that it is also just a macrostate phenomenon and process. But as I explained, the macrostate definition is a little fuzzy, and this runs against the hypothesis that consciousness exists objectively. I will add that because these “monads” or “tensor factors” containing consciousness are necessarily very complex, there should be a sort of internal statistical dynamics. The laws of folk psychology might just be statistical mechanics of exact conscious states. But it is conceptually incoherent to say that consciousness is purely a high-level description if you think it exists objectively; it is the same fallacy as when some Buddhists say “everything only exists in the mind”, which then implies that the mind only exists in the mind. A “high-level description” is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.
The first question is about how a theory like this would develop in detail. I can’t say ahead of time. The physical premise is that the world is a web of tensor factors of various sizes, mostly small but a few of them big, and consciousness inhabits one of these big factors which exists during the lifetime of a brain. If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness. In principle, such a physical theory should itself tell you whether these big factors arise dynamically in a particular physical entity, given a specification of the entity.
Does this answer the final remark about contradiction? Each tensor factor exists completely objectively. The individual tensor factor which is complex enough to have consciousness also exists objectively and has its properties objectively, and such properties include all aspects of its subjectivity. The rest of the brain consists of the small tensor factors (which we would normally call uncorrelated or weakly correlated quantum particles), whose dynamics provide unconscious computation to supplement the conscious dynamics of the big tensor factor. I think it is a self-consistent ontology in which consciousness exists objectively, fundamentally, and exactly, and I think we need such an ontology because of the paradox of saying otherwise, “the mind only exists in the mind”.
If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness.
What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition?
If we feed the internal states of a classical brain simulation into a quantum box (outputs discarded) containing 10^2 or 10^20 entangled particles/quasi-particles, will that make the simulation conscious? How, in principle, can we determine whether it will or will not?
A “high-level description” is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.
The interesting thing is that the mind, as a high-level description of the brain’s workings, is mind-dependent on that same mind (it’s not a paradox, but a recursion), not on some other mind. Different observers will agree on the content of the high-level model of the brain’s workings present in the same brain, as that model is unambiguously determined by the structure of the brain. Thus the mind is subjective in the sense that it is a conceptual description of the brain’s workings (including concepts of self, mind and so on), and the mind is objective in the sense that its content can be reconstructed from the structure of the brain.
I think we need such an ontology because of the paradox of saying otherwise, “the mind only exists in the mind”.
It isn’t a paradox, really.
I can’t help imagining the procedure for accepting works on the philosophy of mind: “Please show your tensor factor. … Zombies and simulations are not allowed. Next.”
What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition?
The difference is between conscious and not conscious. This will translate mathematically into the presence or absence of some particular structure in the “tensor factor”. I can’t tell you what structure, because I don’t have the theory, of course. I’m just sketching how a theory of this kind might work. But the difference between small and big is the number of internal degrees of freedom. It is reasonable to suppose that among the objects containing the consciousness structure, there is a nontrivial lower bound on the number of degrees of freedom. Here is where we can draw a line between small and big, since the small tensor factors by definition can’t contain the special structure and so truly cannot be conscious. However, being above the threshold would just be necessary, not sufficient, for the presence of consciousness.
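Schematically (with an entirely made-up threshold, and with the crucial structure test left as a stub, since that is exactly what the completed theory would have to supply), the necessary-but-not-sufficient point looks like this:

```python
THRESHOLD = 10**6   # hypothetical lower bound on internal degrees of freedom

def has_consciousness_structure(factor) -> bool:
    # The special structure whose presence would constitute consciousness.
    raise NotImplementedError("this is what a completed theory would have to define")

def could_be_conscious(internal_degrees_of_freedom: int) -> bool:
    # Below the threshold ("small" factors): the structure cannot fit, so definitely
    # not conscious.  At or above it ("big" factors): merely a candidate, since size
    # is necessary but not sufficient.
    return internal_degrees_of_freedom >= THRESHOLD

print(could_be_conscious(1))      # a single qubit: definitely not conscious
print(could_be_conscious(10**9))  # a "big" factor: only a candidate for consciousness
```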
How, in principle, can we determine whether [something] will or will not [be conscious]?
If you have a completed theory of consciousness, then you answer this question just as you would answer any other empirical question in a domain where you have a well-tested theory: You evaluate the data using the theory. If the theory tells you all the tensor factors in the box are below the magic threshold, there’s definitely no consciousness there. If there might be some big tensor factors present, it will be more complicated, but it will still be standard reasoning.
If you are still developing the theory, you should focus just on the examples which will help you finish it, e.g. Roko’s example of general anesthesia. That might be an important clue to how biology, phenomenology, and physical reality go together. Eventually you have a total theory and then you can apply it to other organisms, artificial quantum systems like in your thought experiment, and so on.
Different observers will agree on the content of the high-level model of the brain’s workings present in the same brain, as that model is unambiguously determined by the structure of the brain.
Any causal model using macrostates leaves out some micro information. For any complex physical system, there is a hierarchy of increasingly coarse-grained macrostate models. At the bottom of the hierarchy is exact physical fact: one model state for each exact physical microstate. At the top of the hierarchy is the trivial model with no dynamics: the same macrostate for all possible microstates. In between are many possible coarse-grainings, in which microstates are combined into macrostates. (A macrostate is therefore a region in the microscopic state space.)
So there is no single macrostate model of the brain determined by its structure. There is always a choice of which coarse-graining to use. Maybe now you can see the problem: if conscious states are computational macrostates, then they are not objectively grounded, because every macrostate exists in the context of a particular coarse-graining, and other ones are always possible.
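A toy version of that non-uniqueness (my own illustration): take the sixteen 4-bit microstates and treat a coarse-graining simply as a function from microstates to macrostate labels. One and the same exact microstate then belongs to a different macrostate, i.e. a different region of state space, under each equally lawful choice, from the finest graining down to the trivial one.

```python
from itertools import product

# The microstates are all 4-bit configurations.  Each coarse-graining assigns
# every microstate a macrostate label; a macrostate is the set of microstates
# sharing that label.  Nothing about the system itself singles out one choice.
microstates = list(product([0, 1], repeat=4))

coarse_grainings = {
    "exact (finest)":     lambda m: m,            # one macrostate per microstate
    "count of ones":      lambda m: sum(m),        # thermodynamics-style graining
    "parity":             lambda m: sum(m) % 2,    # a different, equally valid graining
    "trivial (coarsest)": lambda m: "anything",    # all microstates lumped together
}

m = (1, 0, 1, 1)   # one and the same exact microstate...
for name, grain in coarse_grainings.items():
    # ...sits inside a different-sized region of state space under each graining
    region_size = sum(1 for x in microstates if grain(x) == grain(m))
    print(f"{name:20s} macrostate = {grain(m)!r:14} ({region_size} microstates)")
```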
So there is no single macrostate model of the brain determined by its structure. There is always a choice of which coarse-graining to use. Maybe now you can see the problem: if conscious states are computational macrostates, then they are not objectively grounded, because every macrostate exists in the context of a particular coarse-graining, and other ones are always possible.
Here’s the point of divergence. There is one peculiar coarse-graining: namely, the conceptual self-model that consciousness operates with (as I wrote earlier, it uses concepts of self, mind, desire, intention, emotion, memory, feeling, etc.; when I think “I want to know more”, my consciousness uses the concepts of that model to (crudely) represent the actual state of (part of) the brain, including the parts which represent the model itself). Thus, to find a consciousness in a system it is necessary (though not sufficient) to find a coarse-graining such that the corresponding macrostate of the system is isomorphic to the physical state of a part of the system. Or, in the map-territory analogy, to find a part of the territory that is isomorphic to a (crude) map of the territory.
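A very rough toy of that condition (my own construction, with an invented bit layout and coarse-graining, and subject to the reservations in the edits below): the “territory” is a 12-bit system whose last two bits are supposed to be a crude “map” of the whole; the self-modelling check asks whether the contents of the map region equal the coarse-grained description of the whole system (map region included, which is the recursion mentioned above).

```python
# Territory: 12 bits.  Candidate coarse-graining: (majority of bits 0-5,
# majority of bits 6-11).  Candidate map: bits 10-11.  The toy condition is
# that the map's bits equal the coarse-grained macrostate of the whole system.

def coarse_grain(state):
    first_half, second_half = state[:6], state[6:]
    return (int(sum(first_half) > 3), int(sum(second_half) > 3))

def contains_self_model(state, map_region=slice(10, 12)):
    return tuple(state[map_region]) == coarse_grain(state)

territory = [1, 1, 1, 0, 1, 1,   0, 0, 1, 0, 1, 0]
#            mostly ones         mostly zeros; the last two bits are the "map"
print(coarse_grain(territory))         # (1, 0)
print(contains_self_model(territory))  # True: a part of the territory matches the crude map
```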
Edit: Well, it seems that a lower bound on the information content of the map is necessary for this approach too. However, this approach doesn’t require adding fundamental ontological concepts.
Edit: The isomorphism condition is too limiting; it would require another level of coarse-graining to hold. I’ll try to come up with another definition.