What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition?
The difference is between conscious and not conscious. This will translate mathematically into the presence or absence of some particular structure in the “tensor factor”. I can’t tell you what structure, because I don’t have the theory, of course; I’m just sketching how a theory of this kind might work. But the difference between small and big is the number of internal degrees of freedom. It is reasonable to suppose that, among the objects containing the consciousness structure, there is a nontrivial lower bound on the number of degrees of freedom. Here is where we can draw a line between small and big, since the small tensor factors by definition can’t contain the special structure and so truly cannot be conscious. However, being above the threshold would be necessary but not sufficient for the presence of consciousness.
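To make the logical shape of that claim explicit, here is a minimal formalization; the predicates and the bound N_min are placeholders for a theory that doesn’t exist yet, not established notation:

```latex
% C(F): tensor factor F is conscious
% S(F): F contains the (unknown) consciousness structure
% dim(F): number of internal degrees of freedom of F
C(F) \Rightarrow S(F), \qquad S(F) \Rightarrow \dim(F) \ge N_{\min}
% hence \dim(F) < N_{\min} \Rightarrow \neg C(F): small factors cannot be conscious,
% while \dim(F) \ge N_{\min} \not\Rightarrow C(F): the threshold is necessary, not sufficient.
```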
How in principle can we determine that [something] will or will not [be conscious]?
If you have a completed theory of consciousness, then you answer this question just as you would answer any other empirical question in a domain where you have a well-tested theory: You evaluate the data using the theory. If the theory tells you all the tensor factors in the box are below the magic threshold, there’s definitely no consciousness there. If there might be some big tensor factors present, it will be more complicated, but it will still be standard reasoning.
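As a toy illustration of that decision procedure, a sketch in Python; `N_MIN` and `contains_consciousness_structure` are hypothetical stand-ins for whatever a completed theory would actually supply:

```python
# Toy decision rule for "is there consciousness in the box?",
# assuming a completed theory supplies the threshold and the structure test.
N_MIN = 10**6  # hypothetical lower bound on internal degrees of freedom

def contains_consciousness_structure(factor):
    """Placeholder for the theory's structure test on a big tensor factor."""
    raise NotImplementedError("requires the completed theory")

def assess(tensor_factors):
    """tensor_factors: list of (degrees_of_freedom, factor) pairs."""
    big = [f for dof, f in tensor_factors if dof >= N_MIN]
    if not big:
        return "definitely no consciousness"  # every factor is below the threshold
    if any(contains_consciousness_structure(f) for f in big):
        return "consciousness present"
    return "no consciousness: big factors present, but none contain the structure"
```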
If you are still developing the theory, you should focus just on the examples which will help you finish it, e.g. Roko’s example of general anesthesia. That might be an important clue to how biology, phenomenology, and physical reality go together. Eventually you have a complete theory, and then you can apply it to other organisms, artificial quantum systems like the one in your thought experiment, and so on.
Different observers will agree on the content of the high-level model of brain workings present in the same brain, as that model is unambiguously determined by the structure of the brain.
Any causal model using macrostates leaves out some micro information. For any complex physical system, there is a hierarchy of increasingly coarse-grained macrostate models. At the bottom of the hierarchy is exact physical fact: one model state for each exact physical microstate. At the top of the hierarchy is the trivial model with no dynamics: the same macrostate for all possible microstates. In between are many possible coarse-grainings, in which microstates are combined into macrostates. (A macrostate is therefore a region in the microscopic state space.)
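In symbols (a standard way of putting it, not something specific to this comment): a coarse-graining is a surjection from microstates to macrostates, or equivalently a partition of the microscopic state space:

```latex
% X: microscopic state space, M: macrostate space, \pi: the coarse-graining map
\pi : X \twoheadrightarrow M, \qquad
\text{macrostate } m \;\leftrightarrow\; \pi^{-1}(m) \subseteq X \quad (\text{a region of microstate space})
% bottom of the hierarchy: \pi = \mathrm{id}_X, one macrostate per microstate
% top of the hierarchy: |M| = 1, the same macrostate for every microstate
```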
So there is no single macrostate model of the brain determined by its structure. There is always a choice of which coarse-graining to use. Maybe now you can see the problem: if conscious states are computational macrostates, then they are not objectively grounded, because every macrostate exists in the context of a particular coarse-graining, and other ones are always possible.
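A tiny invented example of that ambiguity: both coarse-grainings below are equally legitimate partitions of the same four-microstate system, and the macrostate of a given microstate differs between them.

```python
# Two equally valid coarse-grainings of the same four-microstate system.
microstates = ["00", "01", "10", "11"]

# Coarse-graining A: group by the first bit.
cg_a = {"00": "A0", "01": "A0", "10": "A1", "11": "A1"}
# Coarse-graining B: group by the parity of the two bits.
cg_b = {"00": "even", "01": "odd", "10": "odd", "11": "even"}

s = "01"
print(cg_a[s], cg_b[s])  # A0 vs. odd: the macrostate depends on the chosen coarse-graining
```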
Here’s the point of divergence. There is a peculiar coarse-graining: specifically, the conceptual self-model that consciousness uses to operate on (as I wrote earlier, it uses the concepts of self, mind, desire, intention, emotion, memory, feeling, etc. When I think “I want to know more”, my consciousness uses concepts of that model to (crudely) represent the actual state of (part of) the brain, including the parts which represent the model itself). Thus, to find consciousness in a system it is necessary to find a coarse-graining such that the corresponding macrostate of the system is isomorphic to the physical state of a part of the system (it is not sufficient, however). Or, in the map-territory analogy, to find a part of the territory that is isomorphic to a (crude) map of the territory.
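Here is a very rough sketch of that search, under toy assumptions: a system state is a dict of variables, a candidate coarse-graining is just a function on full states, and “isomorphic” is crudely replaced by equality. None of the names below come from the comment; they only make the shape of the criterion concrete.

```python
from itertools import combinations

def find_self_model(system_state, coarse_grainings):
    """
    Toy check: does some coarse-graining of the WHOLE system's state reproduce
    (here: equal, as a crude stand-in for 'is isomorphic to') the physical state
    of a PROPER PART of the system, i.e. a part of the territory that looks like
    a crude map of the territory?

    system_state: dict variable -> value (the territory).
    coarse_grainings: dict name -> function(full_state) -> dict variable -> value.
    Returns (coarse_graining_name, part) for the first candidate self-model found,
    or None. Passing this check is meant to be necessary, not sufficient.
    """
    variables = sorted(system_state)
    for name, cg in coarse_grainings.items():
        macrostate = cg(system_state)                  # the crude "map"
        for k in range(1, len(variables)):             # proper parts only
            for part in combinations(variables, k):
                part_state = {v: system_state[v] for v in part}
                if macrostate == part_state:           # toy isomorphism check
                    return name, part
    return None
```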
Edit: Well, it seems that a lower bound on the information content of the map is necessary for this approach too. However, this approach doesn’t require adding fundamental ontological concepts.
Edit: The isomorphism condition is too limiting; it would require another level of coarse-graining to hold. I’ll try to come up with another definition.