A review of “Don’t forget the boundary problem...”
Introduction
This is a review of “Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness”, by Andrés Gómez-Emilsson. This article seeks to address two problems with theories of consciousness, using an electromagnetic theory:
The binding problem asks how a unified first person perspective (1PP) can bind experiences across multiple physically distinct activities, whether billions of individual neurons firing or some other underlying phenomenon. To a first approximation, the boundary problem asks why we experience hard boundaries around those unified 1PPs and why the boundaries operate at their apparent spatiotemporal scale.
The basics of the binding problem are fairly easy to understand: it’s about how “our consciousness somehow binds multiple discrete features into a single unified awareness.” The boundary problem is elaborated on as follows:
At the same time, our consciousness does not “bind” features without limit—what we experience varies over time and is thus always strictly a subset of what could be experienced. There is an edge to our awareness, a boundary around us that is generally felt to exist at the human-scale of experience, rather than at the cellular or societal level.
To me this does not seem to be a problem for my preferred functionalist-ish theories of consciousness. A single brain mainly processes information from itself, and processes information from outside through fairly limited channels such as sensory perception. Therefore, it is unsurprising that no one experiences everything, because no one knows everything.
One could speculate about there being super-entities, such as corporations, that have consciousness spanning many humans. This is not a major challenge to functionalism. If McDonalds experiences things its employees do not experience, it does not follow that any McDonalds employee writing papers about consciousness would report experiencing whatever McDonalds knows. That information is simply not present within the physical sub-system of the McDonalds employee writing a paper about consciousness.
One can also speculate about the anthropic question of why we experience being a human rather than a corporation. Here a few things can be said:
Some level of complexity of mind is required to be conscious in the relevant meta-aware sense (see HOT theories of consciousness).
The information-transmission connections within a single brain are much denser than those between brains, making the brain a relevant relatively-independent unit of mind.
While some social structures may know things comparable to a human, this is not at all clear (especially because of the low bandwidth between humans within a social structure), and even if true, the number of such structures possessing meta-knowledge comparable to a single human’s is low compared to the number of humans, such that it is unsurprising, anthropically, not to be one of them.
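To make that last point concrete, here is a minimal self-sampling-style sketch (my own formalization, not from the article; the observer counts are assumptions, not measurements):

```latex
% Minimal self-sampling sketch (my own formalization; the counts are assumptions).
% N_h : number of human-level observers with the relevant meta-awareness
% N_s : number of super-entities (corporations, etc.) with comparable meta-knowledge
\[
  P(\text{human} \mid \text{observer}) \;=\; \frac{N_h}{N_h + N_s} \;\approx\; 1
  \qquad \text{whenever } N_s \ll N_h,
\]
% so, under uniform self-sampling over such observers, finding oneself to be a
% human rather than a super-entity is anthropically unsurprising.
```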
Andrés does take issue with functionalist theories dealing with the boundary problem, however:
For instance, information or causality driven solutions to the binding problem, e.g., functional or computational theories of mind (discussion in Gómez-Emilsson and Percy, 2022), might define phenomenal binding as occurring when two items interact causally or are associated with each other in a database. The challenge is that there is no neat boundary where the causal interactions or informational associations should stop—the solution over-delivers and everything ends up bound together.
I believe what I’ve already written is mostly sufficient to address this criticism. I do, however, think there are details to be worked out, in line with Andrés’s statement that “in a continually changing, interacting environmental topology of connectivity strengths, it is necessary to address the challenge of separating systems from subsystems in a disciplined manner, particularly if a hard boundary is desired.” While the dense informational connections within the brain are an important feature, a precise mathematical theory of this has not been worked out.
Much of what Andrés seeks to address is the relation between conscious discreteness and physical continuity: “the wheel regarded as a wheel is discrete, but regarded as a piece of matter, it is continuous.” I’ve written before about the difficulties continuous theories such as many-worlds have in explaining discrete observers and observer experiences, and some physicists such as Jess Riedel have tried working out the problem of deriving discrete branches from the continuous Schrödinger equation.
To get simpler than many-worlds, one may consider identifying discrete features in the state of a dynamical system. For example, consider a sinusoidal hills-and-valleys system defined as dx/dt = sin(x). The x values for which sin(x) = 0 experience no change over time. However, x=0 is unstable (x slightly higher than 0 will tend to increase, and x slightly lower than 0 will tend to decrease), while x=pi is stable, and is an attractor. Thus, almost all x values will fall into one of the “valleys”, corresponding to the odd multiples of pi.
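As a quick numerical illustration (a minimal sketch of my own, not from the article), integrating dx/dt = sin(x) from a spread of initial conditions shows trajectories collapsing onto the discrete attractors at odd multiples of pi:

```python
import numpy as np

# dx/dt = sin(x): fixed points at x = k*pi; the odd multiples of pi are stable attractors.
def simulate(x0, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x = x + dt * np.sin(x)  # forward Euler integration
    return x

initial_conditions = np.linspace(-10, 10, 21)
final_states = np.array([simulate(x0) for x0 in initial_conditions])

# Each trajectory ends near an odd multiple of pi (a discrete "valley"),
# except for initial conditions sitting exactly on an unstable fixed point (here x0 = 0).
print(np.round(final_states / np.pi, 2))
```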
One could also consider implementing a finite state machine with a dynamical system. The dynamical system’s state space can be divided into regions, where a point in one region either stays in that region or moves to one of some set of other regions. The evolution of the point over time, at the level of regions, would therefore be described by a deterministic or non-deterministic finite state machine (assuming the number of regions is finite).
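A minimal sketch of this idea (my own illustration; the region boundaries are an arbitrary choice): coarse-grain the same dx/dt = sin(x) system into intervals and record which region-to-region transitions one time step actually realizes, giving a finite, possibly non-deterministic transition relation:

```python
import numpy as np

# Coarse-grain the state space [0, 2*pi) into a finite set of regions and record
# the transition relation induced by one time step of dx/dt = sin(x).
edges = np.linspace(0.0, 2 * np.pi, 9)  # 8 equal-width regions (arbitrary choice)

def region(x):
    return int(np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(edges) - 2))

def step(x, dt=0.1):
    return x + dt * np.sin(x)  # one Euler step of the flow

transitions = set()
for x0 in np.linspace(0.0, 2 * np.pi, 2000, endpoint=False):
    transitions.add((region(x0), region(step(x0))))

# Each pair (r, r') means some state in region r moves to region r' in one step.
# A region with more than one successor makes the induced machine non-deterministic.
for r, r_next in sorted(transitions):
    print(f"region {r} -> region {r_next}")
```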
Electronic computers must implement digital signals on an analog substrate, e.g. using analog-to-digital converters, ensuring that continuous states fall within a discrete set of regions that behave according to discrete dynamics over time. The general theory of factoring a dynamical system into a discrete set of states that evolve over time is, however, underspecified, at least to my knowledge.
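As a toy version of how an analog substrate keeps a digital state within discrete regions (a minimal sketch of my own, not a model of any particular circuit), consider a comparator with hysteresis: the stored bit flips only when the continuous value crosses a wide margin, so small noise cannot change the discrete state:

```python
import random

# Toy Schmitt-trigger-style quantizer: the digital bit flips only when the analog
# value crosses a high or low threshold, so noise within the band cannot flip it.
HIGH, LOW = 0.7, 0.3

def digitize(samples, bit=0):
    out = []
    for v in samples:
        if bit == 0 and v > HIGH:
            bit = 1
        elif bit == 1 and v < LOW:
            bit = 0
        out.append(bit)
    return out

random.seed(0)
noisy = [0.5 + 0.1 * random.uniform(-1, 1) for _ in range(20)]  # hovers mid-range
print(digitize(noisy))                              # stays 0: noise never crosses the band
print(digitize([0.1, 0.9, 0.85, 0.75, 0.2, 0.1]))   # flips only on decisive crossings
```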
Electromagnetic theories of consciousness
Electromagnetic theories of consciousness come up in the problem of demarcating individuals:
Fekete et al. (2016) suggest a solution, asking researchers to look for properties that offer a principled mechanism for singling out intrinsic systems, demarcating systems as a matter of fact rather than being a matter of interpretation from different observers’ viewpoints. Later discussions in this section on phase transitions and the field topology in section “4. Topological segmentation of EM fields as a resolution direction” are suggested as examples of such intrinsic mechanisms.
To me this seems to be jumping to conclusions. While the boundaries of what constitutes a mind are imperfect (since brains interact with, and are part of, the rest of the world), and under-defined, it does not follow that electromagnetic connections are the only relevant ones to pay attention to. One can imagine, for example, building a mind out of mechanical components such as rolling balls.
This suggests that information processing can happen without electromagnetism. On the other hand, electromagnetism can happen without information processing. One could imagine a lot of brains that are connected with each other, but only in ways that are irrelevant to the functioning of core systems controlling sensory input, behavior, memory, and so on. For example, the brains may relay information between each other that gets processed by a pseudorandom number generator and fed back in as a source of noise (to the extent that the brain, algorithmically, utilizes noise for its functioning, which seems likely). These electromagnetic connections would be irrelevant (compared to “default” noise generators) with respect to the functioning of mental sub-systems corresponding to memory, motor control, and so on.
Electromagnetic connection, like IIT, may be a good proxy for consciousness in the empirical world (such as by labeling individual brains as discrete entities), but the theory is not robust to very different situations.
Andrés repeatedly mentions conscious entities at larger scales than humans as a problem for theories: “It is possible that all nested levels of resonance represent separate levels of consciousness, as Hunt (2020) suggests, but then we must explain why the human experience is so consistently at the meso level.” As before, I emphasize that, even if such larger conscious entities exist, this would neither imply that humans experience the experiences of these larger entities, nor that anthropic observers are likely to be them.
EM theories may have problems with explaining why humans do not experience each others’ experience:
A closer discussion of boundary problem issues comes in Jones (2013) discussion of how EM fields can be consistent with mental privacy, e.g., no telepathy even when our brains are close together such that some of the EM fields might overlap or merge. His solution is to identify consciousness of the relevant scale only in highly localised fields, unlike for instance the larger, more brain-wide fields of McFadden’s ToC. By requiring more local, stronger fields created in ion currents, the rapid decline in EM field strength with distance is sufficient that nothing gets past the boundaries of the physical brain and thus into telepathic territory. This perhaps resolves the “macro” side of Rosenberg’s problem, but not the “micro” side. There must be many candidates for boundaries within the brain that enclose sufficiently strong EM fields; why does it appear—at least most of the time—that we only experience one? Which one is it and why?
The difficulty of telepathy is broadly explained by conventional theories of mind, which emphasize that the brain processes information from a limited set of channels, such as eyesight and hearing. Whether or not additional multi-human observers exist in some ontological sense, they aren’t relevantly in control of human motor functions.
This is not such an issue for functionalism, which can note that core mental sub-systems of humans, such as memory, do not receive information from nearby other humans, except through a limited set of sensory channels. The segmentation comes from the identification of a set of connected mental functions, with information-theoretic connections rather than electromagnetic connections being the primary object of attention.
The addition of a locality factor seems rather ad-hoc, and like it would fail to generalize to minds that span across space. One could imagine an alternative universe where evolution created organisms that use something similar to WiFi to produce minds that span across space. Functionalism would say that, given the informational connections, the distance is irrelevant, while the locality factor would exclude these connections. In that alternative universe, would EM theorists of consciousness have suggested not including a locality factor, so as to continue believing that their space-spanning consciousness exists?
In general, I do not consider the possibility of group consciousness to be a major problem for theories of consciousness; as stated before, the existence of a group consciousness does not imply that any member of that group would have experience/knowledge corresponding to that group consciousness. The harder problem is to identify densely connected individual minds (e.g. a single brain, or perhaps a brain hemisphere) as minds, while not blowing up the number of minds too badly for anthropic purposes.
Five boundary problems
Andrés goes on to discuss five different sub-problems of the boundary problem: the hard boundary problem, the lower-levels boundary problem, the higher-levels boundary problem, the private boundary problem, and the temporal boundary problem.
The hard boundary problem “observes that our phenomenal field, or our first person perspective, appears to be enclosed by a firm, absolute boundary. There is a qualitative difference between the things that enter that phenomenal field and things that do not”. This seems to be an odd formulation. A theory of consciousness should say something about what different minds experience. It is implicit in having a theory of consciousness at all that not everything is experienced. To the extent that the brain utilizes something like analog-to-digital converters, an analog signal must be strong enough to be picked up as a digital signal (e.g. a neuron firing; neurotransmitters are discrete molecules). A memory sub-system, such as the hippocampus, has a limited space (perhaps corresponding to bits, perhaps not) with which to store information.
The lower-levels boundary problem is about why consciousness is not at a low level of physical abstraction: “consider an alien observer who does not rely on photons to construct a model of the outside world, but instead relies solely on senses of sound waves or gravitational waves. The boundaries of the human system relative to the lower/higher levels around it no longer necessarily look as unique.” The functionalist answer is that, below a certain level of abstraction, not enough mental sub-systems (such as memory and higher-order thought) are implemented. However, there could still be functionalist micro-consciousnesses, e.g. brain hemispheres, that do implement these mental functions. This is not a huge problem for that theory, as these consciousnesses would not control human behavior in the direct sense required for writing about consciousness to be from their perspectives.
The higher-levels boundary problem is similar, about why our experience is not of larger minds, and has already been discussed.
The private boundary problem is about why we do not experience what others experience, and has likewise already been discussed.
The temporal boundary problem is about how experience is constituted over time: “The temporal binding problem asks how the moments are knitted together over time to feel like part of the same experience. The temporal boundary problem asks how, once we have a boundary around a static experience or a particular moment of 1PP, that boundary can shift mostly contiguously to have different shapes in future moments”. It seems to me that the emphasis in answering this question must be on the memory sub-systems of the brain, which do the work of knitting experience together over time. Focusing on other levels of abstraction, such as EM fields, would fail to match reports of experience, e.g. by assuming there is more connection in experience over time than would actually be remembered and reported on.
Topological segmentation of EM fields
Andrés discusses how these problems may be resolved with field topology. Field topology is about what one would expect from the name: “Field topology refers to the geometric properties of an EM field object that are preserved under continuous transformations, such as stretching, bending, or twisting”. This can be used to find patterns in the electromagnetic fields of the brain.
Some features of EM fields prevent field transmission under some circumstances: “Closed EM structures with certain durations can be understood as enclosing electromagnetic space so as to temporarily prevent the transit of energy with that same EM spectral range outside of the space”. These enclosures may affect the field topology.
EM theories of consciousness may deal badly with special relativity, to the extent that they assume a naive notion of simultaneity: “To the extent proposed consciousness generating mechanisms rely on synchronicity or in-phase frequencies at different locations (necessary if they are to resolve the binding problem between those locations, for instance), those consciousness would not be bound together from the perspective of anyone in any reference frame moving relative to the first”. Topology may, on the other hand, be relatively unaffected by this, being Lorentz-invariant.
Andrés discusses epiphenomenalism: “Epiphenomenalism… is the weaker claim that the 1PP experience is a by-product of particular physical processes in the human system that does not directly causally interact with those particular processes.” On the general understanding that consciousness theories look at a physical system and identify observers within it, it is hard to avoid these “observers” being epiphenomenal, as one is imagining identifying observers “after” a physical system is already defined and in motion. I believe alternative views of consciousness, such as ones where first-person experience is intrinsic to the constitution of physics theories (hinted at in a previous review, some background discussed in a previous article), may shed more light on epiphenomenalism.
Andrés believes EM theories may avoid epiphenomenalism: “In other words, EM fields are not merely a side-effect of electrical activity in the brain, but in fact influence the activity of individual neurons and their parts”. It seems that, by identifying consciousness with a physical property, consciousness would have physical effects. This is not unique to EM theories, however. For example, a functionalist could identify consciousness with a high-level property of a physical system, implying that if consciousness were different, then the physical system would behave differently; e.g. if bits in a computer register were different, then the computation and subsequent physical output of the computer would change.
How might topological EM theories address the boundary problem? Regarding the first problem: “The complexity of the contents of unified experience, i.e., often containing multiple shapes or features, is explained because the field contains all the information of the EM activity that gives rise to it, including computational insights and assessments from relevant diverse brain modules operating via a neuronal architecture”. It seems that, while EM fields are more unified than, say, individual neurons, they still don’t contain the entire information processing of the brain, e.g. neurotransmitters. It is the system of EM fields and other parts (like neurotransmitters) that implements the set of mental functions, such as memory, that is causal in reports about conscious experience.
Regarding the second and third problems: “The second and third problems are resolved by accepting all well-bounded 4D topological pockets to have their own 1PP, potentially of a very rudimentary and short-lasting nature. Depending on the mechanisms involved, there may be dozens or billions of these in any one system, both smaller than and larger than the meso-level humans typically experience and discuss with each other.” I agree that it is not a huge problem for a theory of consciousness to accept conscious entities at different levels of abstraction than human minds. However, multiplying the number of conscious entities too much may create problems for anthropics, as one would expect to “probably be” a lower-level entity if there are many of those.
Regarding the fifth problem: “Of all the well-bounded 4D topological pockets that might exist in the brain, we suggest that only one of them bounds a field that encloses (and hence integrates) EM activity emerging from the brain’s immediate memory modules”. EM fields exist across space and time, hence being 4D. It is correct to focus on the brain’s memory modules, as those are what have the computational power to bind experience over time. However, if one is accepting “memory modules” as a well-defined unit of analysis, it is unclear why adding EM fields helps, compared to looking at information transmission between the memory module and other computational modules.
Regarding the fourth problem: “The merging of two 1PPs into a single 1PP is only a relevant question for a single 4D pocket in any case, existing for a very short period of time. This may be possible in principle but extremely difficult, since any attempt to bring the necessary modules close enough would likely destroy the physical mechanisms that generate a pocket with the right hard boundaries to enclose a 1PP with any temporal persistence (even microseconds).” I don’t have much to say on the physics of this. It seems that a group mind would by default fail to have its own memory sub-system, and one can analyze entities that have memory by studying connections of the memory sub-system to whatever other systems it is connected to, whether they are in the same human brain or not.
Conclusion
Overall I’m probably not the target audience for this article. It seems to be intended for people who have already rejected a functionalist theory of consciousness and are looking for a theory that explains consciousness in terms of relatively low-level physical properties of systems, such as EM fields. The article details problems with EM theories and how topology may ameliorate these problems, but is more suggestive of an area in which to look for solutions than a proposal in itself (as it acknowledges).
There is a way in which much of the theory-building discussed in the paper is “working backwards”: it looks at high-level experiences which closely resemble reports of consciousness, and seeks to find low-level physical entities with properties corresponding to these experiences. The problem is that, if one is doing something like “curve fitting” to experiences that closely resemble human reports about experience, without trying to make one’s explanations match the actual causal chain that produces reports of consciousness, then many epicyclic contortions (such as the locality criterion) will be needed to get things to match up.
If one is seeking to explain reports of consciousness, understanding the functioning of the mind and brain is a cognitive science problem that is not especially metaphysically difficult. If one is seeking to define a metaphysical entity of “consciousness” that need not match these reports, but could be used in anthropics, one can do metaphysical study of consciousness. But EM theories seem to be trying to do both at the same time: they’re trying to find a metaphysically sensible entity corresponding to “consciousness” which also matches empirical reports of consciousness, without paying much attention to the multi-layered system that generates those reports.
Attending primarily to the functioning of this system would by default lead to a functionalist theory of consciousness: it is very unsurprising, for example, that human mental sub-systems do not have access to telepathic information. However, one runs into problems when using metaphysical considerations to select consciousness-bearing entities such as EM fields, while also trying to match human reports of conscious experience.
As such, I don’t find this overall line of research to be all that useful or compelling, and remain mostly a functionalist who thinks about the first-personal metaphysics of science to supplement theories of consciousness.