Some brief attempted translation for the last part:
A “monad”, in Mitchell Porter’s usage, is supposed to be a somewhat isolatable quantum state machine, with states and dynamics factorizable somewhat as if it were a quantum analogue of a classical dynamic graphical model such as a dynamic Bayesian network (e.g., in the linked physics paper, a quantum cellular automaton). (I guess, unlike graphical models, it could also be supposed to not necessarily have a uniquely best natural decomposition of its Hilbert space for all purposes, like how with an atomic lattice you can analyze it either in terms of its nuclear positions or its phonons.) For a monad to be a conscious mind, the monad must also at least be complicated and [this is a mistaken guess] capable of certain kinds of evolution toward something like equilibria of tensor-product-related quantum operators having to do with reflective state representation[/mistaken guess]. His expectation that this will work out is based partly on intuitive parallels between some imaginable combinatorially composable structures in the kind of tensor algebra that shows up in quantum mechanics and the known composable grammar-like structures that tend to show up whenever we try to articulate concepts about representation (I guess mostly the operators of modal logic).
(Disclaimer: I know only just enough quantum physics to get into trouble.)
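(To make the “no uniquely best decomposition of its Hilbert space” point concrete, here is a toy numerical sketch, entirely my own construction and not from the linked paper: one and the same four-dimensional state vector counts as entangled or as a product state depending on how the space is carved into two qubit factors.)

```python
import numpy as np

# Toy illustration (my own construction, not from the linked paper): the same
# vector in a 4-dimensional Hilbert space can look entangled or product-like
# depending on how the space is factored into two qubit subsystems.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # a Bell-like state

# Cut 1: the obvious factorization C^4 = C^2 (x) C^2. Reshaping psi into a
# 2x2 matrix and taking its singular values gives the Schmidt coefficients.
print(np.linalg.svd(psi.reshape(2, 2), compute_uv=False))
# -> [0.707, 0.707]: maximally entangled relative to this cut.

# Cut 2: relabel the global basis first (a different, equally legitimate
# choice of what counts as "the two subsystems"), then reshape.
perm = [0, 3, 1, 2]
print(np.linalg.svd(psi[perm].reshape(2, 2), compute_uv=False))
# -> [1.0, 0.0]: a product state relative to this cut.
```

(The analogy: “nuclear positions” and “phonons” are two different tensor decompositions of the same lattice state space, and nothing in the global state singles one out as the real one.)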
A monadic mind would be a state machine, but ontologically it would be different from the same state machine running on a network of a billion monads.
Not all your readers will understand that “network of a billion monads” is supposed to refer to things like classical computing machinery (or quantum computing machinery?).
His expectation that this will work out is based partly on [...]
(It’s also based on an intuition I don’t understand that says that classical states can’t evolve toward something like representational equilibrium the way quantum states can—e.g. you can’t have something that tries to come up with an equilibrium of anticipation/decisions, like neural approximate computation of Nash equilibria, but using something more like representations of starting states of motor programs that, once underway, you’ve learned will predictably try to search combinatorial spaces of options and/or redo a computation like the current one but with different details—or that, even if you can get this sort of evolution in classical states, it’s still knowably irrelevant. Earlier he invoked bafflingly intense intuitions about the obviously compelling ontological significance of the lack of spatial locality cues attached to subjective consciousness, such as “this quale is experienced in my anterior cingulate cortex, and this one in Wernicke’s area”, to argue that experience is necessarily nonclassically replicable. (As compared with, what, the spatial cues one would expect a classical simulation of the functional core of a conscious quantum state machine to magically become able to report experiencing?) He’s now willing to spontaneously talk about non-conscious classical machines that simulate quantum ones (including not magically manifesting p-zombie subjective reports of spatial cues relating to its computational hardware), so I don’t know what the causal role of that earlier intuition is in his present beliefs; but his reference to a “sweet spot”, rather than a sweet protected quantum subspace of a space of network states or something, is suggestive, unless that’s somehow necessary for the imagined tensor products to be able to stack up high enough.)
bafflingly intense intuitions about the obviously compelling ontological significance of the lack of spatial locality cues attached to subjective consciousness
Let’s go back to the local paradigm for explaining consciousness: “how it feels from the inside”. On one side of the equation, we have a particular configuration of trillions of particles, on the other side we have a conscious being experiencing a particular combination of sensations, feelings, memories, and beliefs. The latter is supposed to be “how it feels to be that configuration”.
If I ontologically analyze the configuration of particles, I’ll probably do so in terms of nested spatial structures—particles in atoms in molecules in organelles in cells in networks. What if I analyze the other side of the equation, the experience, or even the conscious being having the experience? This is where phenomenology matters. Whenever materialists talk about consciousness, they keep interjecting references to neurons and brain computations even though none of this is evident in the experience itself. Phenomenology is the art of characterizing the experience solely in terms of how it presents itself.
So let’s look for the phenomenological “parts” of an experience. One way to divide it up is into the different sensory modalities, e.g. that which is being seen versus that which is being heard. We can also distinguish objects that may be known multimodally, so there can be some cross-classification here, e.g. I see you but I also hear you. This synthesis of a unified perception from distinct sensations seems to be an intellectual activity, so I might say that there are some visual sensations, some auditory sensations, a concept of you, and a belief that the two types of sensations are both caused by the same external entity.
The analysis can keep going in many directions from here. I can focus just on vision and examine the particular qualities that make up a localized visual sensation (e.g. the classic three-dimensional color spaces). I can look at concepts and thoughts and ask how they are generated and compounded. When I listen to my own thinking, what exactly is going on, at the level of appearance? Do I situate my thoughts and/or my self as occurring at a particular place in the overall environment of appearances, and if so, from where does the sense that I am located there arise?
I emphasize that, if one is doing phenomenology, these questions are to be answered, not by introducing one’s favorite scientific guess as to the hidden neural mechanism responsible, but on the basis of introspection and consciously available evidence. If you can’t identify a specific activity of your conscious mind as responsible for the current object of inquiry, then that is the phenomenological datum: no cause was apparent, no cause was identified. Of course you can go back to speculation and science later.
The important perspective to develop here is the capacity to think like a systematic solipsist. Suppose that appearances are all there is, including the thoughts passing through your mind. If that is all there is, then what is there, exactly? This is one way to approach the task of analyzing the “nature” or “internal structure” of consciousness, and a reasonably effective one if habitual hypotheses about the hidden material underpinnings of everything keep interfering. Just suppose for a moment that appearances don’t arise from atoms and neurons, but that they arise from something else entirely, or that they arise from nothing at all. Either way, they’re still there and you can still say something about them.
Having said that, you can then go back to your science. So let’s do that now. What we have established is that there is structure on both sides of the alleged equation between a configuration of atoms and a conscious experience. The configuration of atoms has a spatial structure. The “structure” of a conscious experience is something more abstract; for example, it includes the fact that there are distinct sensory continua which are then conceptually synthesized into a world of perceived objects. The set of all sensations coming from the same sense also has a structure which is not exactly spatial, not in the physical sense. There is an “auditory space”, a “kinesthetic space”, and even the “apparent visual space” is not the same thing as physical space.
On both sides we have many things connected by specific structural relations. On the physical side, we have particles in configurations that can be defined by distances and angles. On the phenomenological side, we have elementary qualia of diverse types which are somehow conceptually fused into sense objects, which in turn become part of intentional states whose objects can also include the self and other intentional states.
Since we have a lot of structure and a lot of relation on both sides, it might seem we have a good chance of developing a detailed theory of how physical structure relates to phenomenological structure. But even before we begin that analysis, I have to note that we are working with really different things on both sides of the equation. On one side we have colors, thoughts, etc.; on the other side we have particles. On one side we have connecting relations like “form part of a common perception” and “lie next to each other in a particular sensory modality”, on the other side we have connecting relations like “located 3 angstroms apart”. This is when it becomes obvious to me that any theory we make out of this is going to involve property dualism. There is no way you can say the two sides of the equation are the same thing.
The business with the monads is about saying that the physical description in terms of configurations in space is wrong anyway; physical reality is instead a causal network of objects with abstractly complex internal states. That’s very unspecified, but it also gives the phenomenological structure a chance to be the ontological structure, without any dualism. The “physical description” of the conscious mind is then just the mathematical characterization of the phenomenology, bleached of all ontological specifics, so it’s just “entity X with properties a,b,c, in relation R to entity Y”. If we can find a physics in which there are objects with states whose internal structure can be directly mapped onto the structure of phenomenological states, and only if we can do that, then we can have a nondualistic physical ontology.
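(A minimal sketch of what such an ontologically bleached description might look like, with every name invented purely for illustration: the schema records only entities, properties, and relations, and stays silent about whether the entities are particles or qualia.)

```python
from dataclasses import dataclass

# Deliberately ontology-neutral schema (all names invented for illustration):
# it records only structure, not what the entities intrinsically are.
@dataclass(frozen=True)
class Entity:
    name: str
    properties: frozenset

@dataclass(frozen=True)
class Relation:
    label: str
    source: Entity
    target: Entity

# The same schema can host a phenomenological description...
red_patch = Entity("red-patch", frozenset({"red", "upper-left"}))
percept = Entity("percept-of-apple", frozenset({"unified"}))
r1 = Relation("part-of-common-perception", red_patch, percept)

# ...or a physical one; structurally, both are instances of one schema.
atom_a = Entity("atom-a", frozenset({"carbon"}))
atom_b = Entity("atom-b", frozenset({"oxygen"}))
r2 = Relation("3-angstroms-from", atom_a, atom_b)
```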
I don’t know where you got the part about representational equilibria from.
My conception of a monad is that it is “physically elementary” but can have “mental states”. Mental states are complex so there’s some sort of structure there, but it’s not spatial structure. The monad isn’t obtained by physically concatenating simpler objects; its complexity has some other nature.
Consider the Game of Life cellular automaton. The cells are the “physically elementary objects” and they can have one of two states, “on” or “off”.
Now imagine a cellular automaton in which the state space of each individual cell is a set of binary trees of arbitrary depth. So the sequence of states experienced by a single cell, rather than being like 0, 1, 1, 0, 0, 0,… might be more like (X(XX)), (XX), ((XX)X), (X(XX)), (X(X(XX)))… There’s an internal combinatorial structure to the state of the single entity, and ontologically some of these states might even be phenomenal or intentional states.
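To make the tree-state automaton concrete, here is a toy one-dimensional version. The update rule (graft the neighbours’ trees together, then prune to a bounded depth) is invented purely for illustration; nothing above commits to any particular rule.

```python
# A toy 1-D automaton whose cell states are binary trees rather than bits.
# A tree is either the leaf "X" or a pair of subtrees (left, right).

def show(t):
    """Render a tree in the (X(XX)) notation used above."""
    return "X" if t == "X" else "(" + show(t[0]) + show(t[1]) + ")"

def prune(t, depth):
    """Truncate a tree to a bounded depth so the state space stays finite."""
    if depth == 0 or t == "X":
        return "X"
    return (prune(t[0], depth - 1), prune(t[1], depth - 1))

def step_cell(left, me, right):
    """Invented local rule: graft the neighbours' trees together, then prune."""
    return prune((left, (me, right)), depth=3)

def step(cells):
    n = len(cells)
    return [step_cell(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

cells = ["X", ("X", "X"), "X", (("X", "X"), "X")]
for _ in range(3):
    print(" ".join(show(c) for c in cells))
    cells = step(cells)
```

The single entity’s state sequence printed for any one cell then looks like the (X(XX)), (XX), … sequence above: combinatorially structured, but still the state of one cell.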
Finally, if you get this dynamics as a result of something like the changing tensor decomposition of one of those quantum CAs, then you would have a causal system which mathematically is an automaton of “tree-state” cells, ontologically is a causal grid of monads capable of developing internal intentionality, and physically is described by a Hamiltonian built out of Pauli matrices, such as might describe a many-body quantum system.
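For the “Hamiltonian built out of Pauli matrices” part, here is one standard many-body example (my choice of model purely for illustration; the quantum-CA paper referenced above has its own dynamics): a small transverse-field Ising chain.

```python
import numpy as np

# A standard Hamiltonian built out of Pauli matrices: a small
# transverse-field Ising chain (illustrative model choice only).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(site_op, site, n):
    """Embed a single-site operator into an n-site tensor product."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, site_op if k == site else I)
    return out

def ising_hamiltonian(n, J=1.0, h=0.5):
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H -= J * op_at(Z, i, n) @ op_at(Z, i + 1, n)  # nearest-neighbour ZZ
    for i in range(n):
        H -= h * op_at(X, i, n)                       # transverse field
    return H

print(np.linalg.eigvalsh(ising_hamiltonian(4))[:3])  # lowest energy levels
```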
Furthermore, since the states of the individual cell can have great or even arbitrary internal complexity, it may be possible to simulate the dynamics of a single grid-cell in complex states, using a large number of grid-cells in simple states. The simulated complex tree-states would actually be a concatenation of simple tree-states. This is the “network of a billion simple monads simulating a single complex monad”.
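A sketch of that simulation idea, with the encoding again invented for illustration: serialize one deep tree-state into a row of cells that each hold just one primitive token.

```python
# Encoding scheme invented for illustration: one complex tree-state becomes
# a concatenation of simple states, one primitive token per simple cell.

def flatten(t):
    if t == "X":
        return ["X"]
    return ["("] + flatten(t[0]) + flatten(t[1]) + [")"]

def unflatten(tokens):
    def parse(i):
        if tokens[i] == "X":
            return "X", i + 1
        left, i = parse(i + 1)       # consume "("
        right, i = parse(i)
        return (left, right), i + 1  # consume ")"
    tree, _ = parse(0)
    return tree

complex_state = (("X", "X"), ("X", ("X", "X")))
row_of_simple_cells = flatten(complex_state)   # one token per simple cell
assert unflatten(row_of_simple_cells) == complex_state
print(row_of_simple_cells)
```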