I’ve been trying to find a way to empathically emulate people who talk about quantum consciousness for a while, so far with only moderate success. Mitchell, I’m curious if you’re aware of the work of Christof Koch and Giulio Tononi, and if so, could you speak to their approach?
For reference (if people aren’t familiar with the work already) Koch’s team is mostly doing experiments… and seems to be somewhat close to having mice that have genes knocked out so that they “logically would seem” to lack certain kinds of qualia that normal mice “logically would seem” to have. Tononi collaborates with him and has proposed a way to examine a thing that computes and calculates that thing’s “amount of consciousness” using a framework he called Integrated Information Theory. I have not sat down and fully worked out the details of IIT such that I could explain it to a patient undergrad at a chalkboard, but the reputation of the people involved is positive (I’ve seen Koch’s dog and pony show a few times and it has improved substantially over the years and he is pimping Tononi pretty effectively)… basically the content “smells promising” but I’m hoping I can hear someone else’s well informed opinion to see if I should spend more time on it.
Also, it seems to be relevant to this philosophic discussion? Or not? That’s what I’m wondering. Opinions appreciated :-)
It bugs me when people talk about “quantum consciousness”, given that classical computers can do anything quantum computers can do, only sometimes slower.
IIT’s measure of “information integration”, phi, is still insufficiently exact to escape the “functionalist sorites problem”. It could be relevant for a state-machine analysis of the brain, but I can’t see it being enough to specify the mapping between physical and phenomenological states. Also, Tononi’s account of conscious states seems to be just at the level of sensation. But this is an approach which could converge with mine if the right extra details were added.
I’ve been trying to find a way to empathically emulate people who talk about quantum consciousness
“We” are a heterogeneous group. Chopra and Penrose—not much in common. Besides, even if you believe consciousness can arise from classical computation but you also believe in many worlds, then quantum concepts do play a role in your theory of mind, in that you say that the mind consists of interactions between distinct states of decohered objects. Figure out how Tononi’s “phi” could be calculated for the distinct branches of a quantum computer, and lots of people will want to be your friend.
If I understand what you’re calling the “functionalist sorites problem”, it seems to me that Integrated Information Theory is meant to address almost exactly that issue, with its “phi” parameter being a measure of something like the degree (in bits) which an input is capable of exerting influence over a behavioral outcome.
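To make “degree in bits” concrete, here is a toy calculation in the spirit of (but much cruder than) Tononi’s phi: just the mutual information between two halves of a system, which is zero when the parts carry no information about each other and grows with coupling. Real IIT minimizes a quantity of roughly this kind over all partitions of the system; everything below is my own illustrative stand-in, not Tononi’s actual formalism.

```python
from math import log2

def mutual_information(joint):
    """Mutual information (in bits) between the row and column
    variables of a joint probability table (list of lists)."""
    px = [sum(row) for row in joint]          # marginal of part X
    py = [sum(col) for col in zip(*joint)]    # marginal of part Y
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * log2(p / (px[i] * py[j]))
    return mi

# Two toy two-part systems, each part binary.
# Independent parts: knowing one tells you nothing about the other.
independent = [[0.25, 0.25],
               [0.25, 0.25]]

# Tightly coupled parts: each part's state fixes the other's.
coupled = [[0.5, 0.0],
           [0.0, 0.5]]

print(mutual_information(independent))  # 0.0 bits of "integration"
print(mutual_information(coupled))      # 1.0 bit
```

The independent table yields zero bits, the perfectly coupled one a full bit, which is the sort of graded, non-binary quantity I mean.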
Moreover, qualia, at least as I seem to experience them, are non-binary. Merely hearing the word “red” causes aspects of my present environment to leap to salience in a way that I associate with those facets of the world being more able to influence my subsequent behavior… or to put it much more prosaically: reminders can, in fact, bring reminded content to my attention and thereby actually work. Equally, however, I frequently notice my output having probably been influenced by external factors that were in my consciousness to only a very minor degree such that it would fall under the rubric of priming. Maybe this is ultimately a problem due to generalizing from one example? Maybe I have many gradations of conscious awareness and you have binary awareness and we’re each assuming homogeneity where none exists?
Solving a fun problem and lots of people wanting to be my friend sounds neat… like a minor goad to working on the problem in my spare time and seeing if I can get a neat paper on it? But I suspect you’re overestimating people’s interest, and I still haven’t figured out the trick of being paid well to play with ideas, so until then, schema inference software probably pays the bills more predictably than trying to rid the world of quantum woo. There are about 1000 things I could spend the next few years on, and I only get to do maybe 2-5 of them, and then only in half-assed ways unless I settle on ONLY one of them. Hobby quantum consciousness research is ~8 on the list and unlikely to actually get many brain cycles in the next year :-P
I posed the functionalist sorites problem in the form of existence vs nonexistence of a specific quale, but it can equally be posed in the form of one state of consciousness vs another, where the difference may be as blatant or as subtle as you wish.
The question is, what are the exact physical conditions under which a completely specific quale or state of consciousness exists? And we can highlight the need for exactness, by asking at the same time what the exact conditions are, under which no quale occurs, or under which the other state of consciousness occurs; and then considering edge cases, where the physical conditions are intermediate between one vague specification and another vague specification.
For the argument to work, you must be clear on the principle that any state of consciousness is exactly something, even if we are not totally aware of it or wouldn’t know how to completely describe it. This principle—which amounts to saying that there is no such thing as entities which are objectively vague—is one that we already accept when discussing physics, I hope.
Suppose we are discussing what the position of an unmeasured electron is. I might say that it has a particular position; I might say that it has several positions or all positions, in different worlds; I might say that it has no position at all, that it just isn’t located in space right now. All of those are meaningful statements. But to say that it has a position, but it doesn’t have a particular position, is conceptually incoherent. It doesn’t designate a possibility. It most resembles “the electron has no position at all”, but then you don’t get to talk as if the electron nonetheless has a (nonspecific) position at the same time as not actually having a position.
The same principle applies to conscious experience. The quale is always a particular quale, even if you aren’t noticing its particularities.
Now let us assume for the moment that this principle of non-vagueness is true for all physical states and all phenomenological states. That means that when we try to understand the conditions under which physical states and phenomenological states are related, we are trying to match up two sets of definite “things”.
The immediate implication is that any definite physical state will be matched with a definite phenomenology (or with no phenomenology at all). Equally it implies that any definite phenomenological state will correspond to a definite physical state or to a set of definite physical states. The boundary between “physical states corresponding to one phenomenological state”, and “physical states corresponding to another phenomenological state”, must be sharp. The only way to avoid a sharp boundary is if there’s a continuum on both sides—a continuum of physical states, and a continuum of phenomenological states—but again there must be an exact mapping between them, because of non-vagueness.
IIT does not provide an exact mapping because it doesn’t really concern itself with exact microphysical facts, like exact microphysical states, or exact microscopic boundaries between the physical systems that are coupled to each other. Everything is just being described in a coarse-grained fashion; which is fine for computational or other practical causal analyses.
I don’t think I would find many people willing to defend the position that conscious states are objectively vague. I also wouldn’t find many willing to say that any law of correspondence between physical and phenomenological states must be exact on the microphysical level. But this is the implication of the principle of ontological non-vagueness, applied to both sides of the equation.
Someone downvoted you, but I upvoted you to correct it. I only downvote when I think there is (1) bad faith communication or (2) an issue above LW’s sanity line is being discussed tactlessly. Neither seems to apply here.
That said, I think you just made a creationist “no transitional forms” move in your argument? A creationist might deny that 200-million-year-separated organisms, seemingly obviously related by descent, are “the same” magically/essentially distinct “kind”. There’s a gap between them! When pressed (say, by being shown some intermediate forms that have been found given the state of the scientific excavation of the crust) a creationist could point in between each intermediate form to more gaps, which might naively seem to strengthen their “gaps exist” point against the general notion of “evolution by natural selection”. But it doesn’t. It’s not a stronger argument thereby, but a weaker one.
Similarly, you seem to have a rhetorical starting point where you verbally deploy the law of the excluded middle to say that either a quale “is or is not” experienced due to a given micro-physical configuration state (notice the similarity of focusing on simplistic verbal/propositional/logical modeling of rigid “kinds” or “sets” with magically perfect inclusion/exclusion criteria). I pushed on that and you backed down. So it seems like you’ve retreated to a position where each verbally distinguishable level of conscious awareness should probably have a different physical configuration, and in fact this is what we seem to observe with things like fMRI… if you squint your eyes and acknowledge limitations in observation and theory that are being rectified by science even as we write. We haven’t nanotechnologically consumed the entire crust of the earth to find every fossil, and we haven’t simulated a brain yet, but these things may both be on the long-term path of “the effecting of all things possible”.
My hope in trying to empathically emulate people who take quantum consciousness seriously is that I’ll gain a new insight… but it is hard because mostly what I see is things I find very easy to interpret as second rate thinking (like getting confused in abstractions of philosophical handwaving) while ignoring the immanent physical vastness of the natural world (with its trillions of moving parts that have had billions of years of selection to become optimized) in the manner of Penrose and Searle and so on. I want there to be something interesting to understand. Some “aha moment” when it all snaps into place, and I don’t want that moment to be a final decisive insight into what’s going wrong in your heads that makes you safe to write off…
The only way to avoid a sharp boundary is if there’s a continuum on both sides—a continuum of physical states, and a continuum of phenomenological states—but again there must be an exact mapping between them, because of non-vagueness.
This just sorta sounds to me like you’ve been infected with dualism and have no other metaphysical theories to play off against the dualistic metaphysics in your head. I might try to uncharitably translate you to be saying something like “I have a roughly verbalizable model of my phenomenological experience of my brain states for a brain that is self-monitoring, self-regulating, world-modeling, agents-in-world-modeling, self-as-agent-modeling, and behavior-generating (and btw, I promoted my model to ‘ontological realness’ and started confusing my experience of my belief in ontologically non-physical mental states with there being a platonic ghost in my head or something), but brains and my ghost model are both really complicated, and it seems hard to map them into each other with total fidelity… and this means that brains must be very magic, you might say quantumly magic, in order to match how confused I am about the lack of perfect match between my ghost model and my understanding of the hardware that might somehow compute the ghost model… and since my ghost model is ontologically real this means there are ghosts… in my brain… because it’s a quantum brain… or something… I’m not sure...”
I want something to fall out of conversation with (and reading of) quantum consciousness theorists that shows that something like a quantum Fourier transform is running on our neurons, allowing “such and such super powers” to be demonstrated by humans with a run time in our brains that beats what would be possible for a classical Turing machine. What would classical-Turing-zombies look like, as distinct from quantum-soulful-people? All I can hear is mediocre philosophy of mind. I think? I don’t intend meanness.
I’m just trying to communicate the problem I’m having hearing whatever it is that you’re really trying to say that makes sense to you. I’m aware of inferential distances and understand that I might need to spend 200 weekends (which would make it a four year hobby project) reading traditionally-understood-as-boring non-fiction to understand what you’re saying, but my impression is that no such course of reading exists for you to point me towards… which would be weak but distinct evidence for you being confused rather than me being ignorant.
Is there something I should read? What am I missing?
ETA: I re-read this and find my text to be harsher than I’d like. I really don’t want this to be harsh, but actually want enlightenment here and find myself groping for words that will get you to engage with my vocabulary and replace an accessible but uncharitable interpretation in my head with a better theory. If you’d like to not respond in public, PM me your email and I’ll respond via that medium? Maybe IRC would be better, to reduce the latency on vocabulary development?
I think you just made a creationist “no transitional forms” move in your argument?
No, I explicitly mentioned the idea that there might be a continuum of possible quale states; you even quoted the sentence where I brought it up. But it is irrelevant to my argument, which is that for a proposed mapping between physical and phenomenological states to have any chance of being true, it must possess an extension to an exact mapping between fundamental microphysical states and phenomenological states (not necessarily a 1-to-1 mapping), because the alternative is “objective vagueness” about which conscious state is present in certain physical configurations. And this requirement is very problematic for standard functionalism based on vaguely defined mesoscopic states, since any specification of how all the edge cases correspond to the functional states will be highly arbitrary.
Let me ask you this directly: do you think it would be coherent to claim that there are physical configurations in which there is a state of consciousness present, but it’s not any particular state of consciousness? It doesn’t have to be a state of consciousness that we presently know how to completely characterize, or a state of consciousness that we can subjectively discriminate from all other possible states of consciousness; it just has to be a definite, particular state of consciousness.
If we agree that ontological definiteness of physical state implies ontological definiteness in any accompanying state of consciousness (again I’ll emphasize that this is ontological definiteness, not phenomenological definiteness; I must allow for the fact that states of consciousness have details that aren’t noticed by the experiencer), then that immediately implies the existence of an exact mapping from microphysically exact states to ontologically definite states of consciousness. Which implies an inverse mapping from ontologically definite states of consciousness, to a set of exact microphysical states, which are the physical states (or state, there might only be one) in which that particular state of consciousness is realized.
OK, I hope I’m starting to get it. Are you looking for a basis to power a pigeonhole argument about equivalence classes?
If we’re going to count things, then a potential source of confusion is that there are probably more ontologically distinct states of “being consciously depressed” than can be detected from the inside, because humans just aren’t very good at internal monitoring and stuff, but that doesn’t mean there aren’t differences that a Martian with Awesome Scanning Equipment could detect. So a mental patient could be phenomenologically depressed in a certain way and say “that feeling I just felt was exactly the same feeling as in the past, modulo some mental trivia about vaguely knowing it is Tuesday rather than Sunday”, and the Martian anthropologist might check the scanner logs and truthfully agree, but more likely the Martian might truthfully say, “Technically no: you were more consciously obsessed about your ex-boyfriend than about your cellulite, which is the opposite ordering of every time in the past, though until I said this you were not aware of this difference in your awareness”, and then the patient might introspect based on the statement and say “Huh, yeah, I guess you’re right, curious that I didn’t notice that from the inside while it was happening… oh well, time for more crying now...” And in general, absent some crazy sort of phenomenological noise source, there are almost certainly fewer phenomenologically distinct states than ontologically distinct states.
So then the question arises as to how the Martian’s “ontology monitoring” scanner worked.
It might have measured physical brain states via advanced but ultimately prosaic classical-Turing-neuron technology, or it might have used some sort of quantum-chakra-scanner that detects qualia states directly. Perhaps it has both and can run either or both scanners and compare their results over time? One of them can report that a stray serotonin molecule was different, and the other can identify an ontologically distinct feeling of satisfaction. Which leads to a second question of number: can the quantum chakra scanner detect exactly the same cardinality of qualia states as the classical Turing scanner can detect brain states? If I’m reconstructing/interpreting your claim properly, this starts to get at the heart of a sort of “quantum qualia pigeonhole puzzle”?
Except even if this is what you’re proposing, I don’t see how it implies quantum stuff is likely to be very important...
If the scanners give exactly the same counts, that would be surprising and probably very few people expect this outcome because there are certainly unconscious mental processes and those are presumably running on “brain tissue” and hence contribute to brain state counts but not qualia state counts.
So the likely answer is that there are fewer qualia states than brain states. Conversely, if somehow there were more qualia states than brain states, then I think that would be evidence for “mind physics” above and beyond “particle physics”, and upon learning of the existence of a physics that includes ontologically real cartesian mental entities that runs separately from but affects raw brain matter… well, then I guess my brain would explode… and right afterwards I’d get curious about how “computational chakronics” work :-)
Assuming the Martian’s scanners came out with more brain-states than qualia-states, this would confirm my expectations, and would also confirm the (already dominant?) theory that there was something interesting about the operation, interconnection, and/or embodied-embedding of certain kinds of brain tissue in the relatively boring way that is the obvious target of research for computationally-inspired neuro-physical science. This is what all the fMRIs and Halle Berry neuron probing and face/chalice experiments are for.
A result of |brainstates| > |qualiastates| would be consistent with the notion that consciousness was “substrate independent” in potentially two ways, first it might allow us to port the “adaptively flexible self monitoring conscious architecture dynamic” to a better medium by moving the critical patterns of interaction to microchips or something (allowing us to drop all the slimy proteins and ability to be denatured at 80 Celsius and so on). Second, it might allow us to replace significant chunks of nervous tissue (spinal tissue and retina and so on) with completely different and better stuff without even worrying because they probably aren’t even involved in “consciousness” except as simple data pipes.
Which implies an inverse mapping from ontologically definite states of consciousness, to a set of exact microphysical states, which are the physical states (or state, there might only be one) in which that particular state of consciousness is realized.
This would be pretty spooky to me if it were possible. My current expectations (call this B>Q>P) are that the number of distinct brain states exceeds the number of ontologically distinct qualia states, which in turn exceeds the number of phenomenologically distinguishable states.
If my expected ordering is right, then an inverse mapping from qualiastates to brain states should be impossible by the pigeonhole principle… and then substrate independence probably “goes through”. Quantum mechanics, in this model, could totally be “just a source of noise”, with some marginal value as a highly secure random number generator to use in mixed strategies, but this result would be perfectly consistent with quantum effects mostly existing as a source of error that makes it harder to build a classical computation above it that actually does cognitive work rather than merely thrashing around doing “every possible thing”.
I mean… quantum stuff could still matter if B>Q>P is true. Like it might be involved in speedup tricks for some neural algorithms that we haven’t yet understood? But it doesn’t seem like it would be an obvious source of “magical qualia chakras” that make the people who have them more conscious in a morally-important ghost-in-the-brain way that would be lost from porting brain processes to a faster and more editable substrate. If it does, then that result is probably really really important (hence my interest)… it just seems very unlikely to me at the present time.
Are we closer to coherent now? Do we have a simple disagreement of expectations that can be expressed in a mutually acceptable vocabulary? That seems like it would be progress, if true :-)
Which implies an inverse mapping from ontologically definite states of consciousness, to a set of exact microphysical states, which are the physical states (or state, there might only be one) in which that particular state of consciousness is realized.
This would be pretty spooky to me if it was possible. My current expectations (call this B>Q>P) are:
If my expected ordering is right, then an inverse mapping from qualiastates to brain states should be impossible by the pigeonhole principle...
I think that there was a miscommunication here. To be strictly correct, Mitchell should have written “Which implies an inverse mapping from ontologically definite states of consciousness, to sets of exact microphysical states...”. His additional text makes it clear that he’s talking about a map f sending every qualia state q to a set f(q) of brain states, namely, the set of brain states b such that being in brain state b implies experiencing qualia state q. This is consistent with the ordering B>Q>P that you expect.
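For what it’s worth, the set-valued inverse reads naturally as code. The brain-state and qualia-state labels below are entirely made up for illustration; the point is just that when several brain states realize each qualia state, the inverse of the realization map is a map to sets, not to single states:

```python
from collections import defaultdict

# Hypothetical toy labels: several brain states realize each qualia state,
# consistent with |brain states| > |qualia states| (the B > Q ordering).
realizes = {
    "b1": "red",
    "b2": "red",
    "b3": "blue",
    "b4": "blue",
    "b5": "blue",
}

# The set-valued inverse f: qualia state -> set of brain states.
f = defaultdict(set)
for b, q in realizes.items():
    f[q].add(b)

print(dict(f))
# A single-valued inverse (one brain state per qualia state) would need
# |Q| >= |B|, which the pigeonhole principle rules out here
# (2 qualia states, 5 brain states).
```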
This is not about counting the number of states. It is about disallowing vagueness at the fundamental level, and then seeing the implications of that for functionalist theories of consciousness.
A functionalist theory of consciousness says that a particular state of consciousness occurs, if and only if the physical object is in a particular “functional state”. If you classify all the possible physical states into functional states, there will be borderline cases. But if we disallow vagueness, then every one of those borderline cases must correspond to a specific state of consciousness.
Someone with no hair is bald, someone with a head full of hair is not bald, yet we don’t have a non-arbitrary criterion for where the exact boundary between bald and not-bald lies. This doesn’t matter because baldness is a rough judgment and not an objective property. But states of consciousness are objective, intrinsic attributes of the conscious being. So objective vagueness isn’t allowed, and there must be a definite fact about which conscious state, if any, is present, for every possible physical state.
If we are employing the usual sort of functionalist theory, then the physical variables defining the functional states will be bulk mesoscopic quantities, there will be borderline areas between one functional state and another, and any line drawn through a borderline area, demarcating an exact boundary, just for the sake of avoiding vagueness, will be completely arbitrary at the finest level. The difference between experiencing one shade of red and another will be that you have 4000 color neurons firing rather than 4001 color neurons, and a cell will count as a color neuron if it has 10 of the appropriate receptors but not if it only has 9, and a state of this neuron will count as firing if the action potential manages to traverse the whole length of the axon, but not if it’s just a localized fizzle…
The arbitrariness of the distinctions that would need to be made, in order to refine this sort of model of consciousness all the way to microphysical exactness, is evidence that it’s the wrong sort of model. This sort of inexact functionalism can only apply to unconscious computational states. It would seem that most of the brain is an unconscious coprocessor of the conscious part. We can think about the computational states of the unconscious part of the brain in the same rough-and-ready way that we think about the computational states of an ordinary digital computer—they are regularities in the operation of the “device”. We don’t need to bother ourselves over whether a transistor halfway between a 0 state and a 1 state is “really” in one state or the other, because the ultimate criterion of semantics here is behavior, and a transistor—or a neuron—in a computational “halfway state” is just one whose behavior is unpredictable, and unreliable compared to the functional role it is supposed to perform.
This is not an option when thinking about conscious states, because states of consciousness are possessed intrinsically, and not just by ascription on the basis of behavior. Therefore I deduce that the properties defining the physical correlate of a state of consciousness, are not fuzzy ones like “number of neurons firing in a particular ganglion”, but are instead properties that are microphysically exact.
The counterargument might be made, what about electrons in a transistor? There doesn’t have to be an exact answer to the question, how many electrons is enough for the transistor to really be in the “1” state rather than the “0” state. But the reason there doesn’t have to be an exact answer, is that we only care about the transistor’s behavior, and then only its behavior under conditions that the device might encounter during its operational life. If under most circumstances there are only 0 electrons or 1000 electrons present, and if those numbers reliably produce “0 behavior” or “1 behavior” from the transistor, then that is enough for the computer to perform its function as a computational device. Maybe a transistor with 569 electrons is in an unstable state that functionally is neither definitely 0 nor definitely 1, but if those conditions almost never come up in the operation of the device, that’s OK.
With any theory about the presence of qualia, we do not have the luxury of this escape via functional pragmatism. A theory about the presence of qualia needs to have definite implications for every physically possible state—it needs to say whether the qualia are present or not in that state—or else we end up with situations as in the reductio, where we have people who allegedly neither have the quale nor don’t have the quale.
I agree that any final “theory of qualia” should say, for every physically possible state, whether that state bears qualia or not. I take seriously the idea that such a final theory of qualia is possible, meaning that there really is an objective fact of the matter about what the qualia properties of any physically possible state are. I don’t have quite the apodeictic certainty that you seem to have, but I take the idea seriously. At any rate, I feel at least some persuasive force in your argument that we shouldn’t be drawing arbitrary boundaries around the microphysical states associated with different qualia states.
But even granting the objective nature of qualia properties, I’m still not getting why vagueness or arbitrariness is an inevitable consequence of any assignment of qualia states to microphysical states.
Why couldn’t the property of bearing qualia be something that can, in general, be present with various degrees of intensity, ranging from intensely present to entirely absent? Perhaps the “isolated islands” normally traversed by our brains are always at one extreme or another of this range. In that case, it would be impossible for us to imagine what it would “be like” to “bear qualia” in only a very attenuated sense. Nonetheless, perhaps a sufficiently powerful nano-manipulator could rearrange the particles in your brain into such a state.
To be clear, I’m not talking about states that experience specific qualia — a patch of red, say — very dimly. I’m talking about states that just barely qualify as bearing qualia at all. I’m trying to understand how you rule out the possibility that “bearing qualia” is a continuous property, like the geometrical property of “being longer than a given unit”. Just as a geometrical figure can have a length varying from not exceeding, to just barely exceeding, to greatly exceeding that of a given unit, why might not the property of bearing qualia be one that can vary from entirely absent, to just barely present, to intensely present?
It’s not obviously enough to point out, as you did to Jennifer, that I feel myself to be here, full stop, rather than just barely here or partly here, and that I can’t even imagine myself feeling otherwise. That doesn’t rule out the possibility that there are possible states, which my brain never normally enters, in which I would just barely be a bearer of qualia.
why might not the property of bearing qualia be one that can vary from entirely absent, to just barely present, to intensely present?
There are two problems here. First, you need to make the idea of “barely having qualia” meaningful. Second, you need to explain how that can solve the arbitrariness problem for a microphysically exact psychophysical correspondence.
Are weak qualia a bridge across the gap between having qualia and not having qualia? Or is the axis intense-vs-weak, orthogonal to the axis there-vs-not-there-at-all? In the latter case, even though you only have weak qualia, you still have them 100%.
The classic phenomenological proposition regarding the nature of consciousness, is that it is essentially about intentionality. According to this, even perception has an intentional structure, and you never find sense-qualia existing outside of intentionality. I guess that according to the later Husserl, all possible states of consciousness would be different forms of a fundamental ontological structure called “transcendental intentionality”; and the fundamental difference between a conscious entity and a non-conscious entity is the existence of that structure “in” the entity.
There are mathematical precedents for qualitative discontinuity. If you consider a circle versus a line interval, there’s no topological property such as “almost closed”. In the context of physics, you can’t have entanglement in a Hilbert space with less than four dimensions. So it’s conceivable that there is a discontinuity in nature, between states of consciousness and states of non-consciousness.
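The four-dimensional case can even be made concrete: whether a two-qubit pure state factorizes into a product of one-qubit states is a strict yes/no matter, decided by whether a single determinant vanishes (the standard Schmidt-rank criterion), even though states can come arbitrarily close to the product-state boundary. A sketch, with illustrative amplitudes:

```python
from math import sqrt

def is_product_state(a, b, c, d):
    """A two-qubit pure state a|00> + b|01> + c|10> + d|11>
    factorizes into two one-qubit states iff a*d - b*c == 0;
    otherwise it is entangled (Schmidt rank 2)."""
    return abs(a * d - b * c) < 1e-12

# Product state |0> (x) |+>: not entangled.
print(is_product_state(1 / sqrt(2), 1 / sqrt(2), 0, 0))  # True

# Bell state (|00> + |11>) / sqrt(2): entangled.
print(is_product_state(1 / sqrt(2), 0, 0, 1 / sqrt(2)))  # False
```

There is no state whose Schmidt rank is “almost 2”, which is the kind of qualitative discontinuity I mean.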
Twisty distinctions may need to be made. At least verbally, I can distinguish between (1) an entity whose state just is a red quale (2) an entity whose state is one of awareness of the red quale (3) an entity which is aware that it is aware of the red quale. The ontological position I described previously would say that (3) is what we call self-awareness; (2) is what we might just call awareness; there’s no such thing as (1), and intentionality is present in (2) as well as in (3). I’m agnostic about the existence of something like (1), as a bridge between having-qualia and not-having-qualia. Also, even looking for opportunities for continuity, it’s hard not to think that there’s another discontinuity between awareness and self-awareness.
If I was a real phenomenologist, I would presumably have a reasoned position on such questions. Or at least I could state the options with much more rigor. I’ll excuse the informality of my exposition by saying that one has to start somewhere.
On the arbitrariness problem: I think this is most apparent when it’s arbitrariness of the physical boundary of the conscious entity. Consider a single specific microphysical state that has an observer in it. I don’t see how you could have an exact principle determining the presence and nature of an observer from such a state, if you thought that observers don’t have exact and unique physical boundaries, as you were suggesting in another comment. It seems to involve a one-to-many-to-one mapping, where you go from one exact physical state, to many possible observer-boundaries, to just one exact conscious state. I don’t see how the existence of a conscious-to-nonconscious continuum of states deals with that.
There are two problems here. First, you need to make the idea of “barely having qualia” meaningful. Second, you need to explain how that can solve the arbitrariness problem for a microphysically exact psychophysical correspondence.
I’m still not sure where this arbitrariness problem comes from. I’m supposing that the bearing of qualia is an objective structural property of certain physical systems. Another mathematical analogy might be the property of connectivity in graphs. A given graph is either connected or not, though connectivity is also something that exists in degrees, so that there is a difference between being highly connected and just barely connected.
On this view, how does arbitrariness get in?
Are weak qualia a bridge across the gap between having qualia and not having qualia? Or is the intense-vs-weak axis orthogonal to the there-vs-not-there-at-all axis? In the latter case, even though you only have weak qualia, you still have them 100%.
I’m suggesting something more like your “bridge across the gap” option. Analogously, one might say that the barely connected graphs are a bridge between disconnected graphs and highly connected graphs. Or, to repeat my analogy from the grandparent, the geometrical property of “being barely longer than a given unit” is a bridge across the gap between “being shorter than the given unit” and “being much longer than the given unit”.
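The graph analogy can be made concrete with a minimal sketch (the helper names `is_connected` and `edge_connectivity` are mine, not standard library functions): edge connectivity counts the fewest edge deletions that disconnect a graph, so a path graph is connected “just barely” (connectivity 1) while a complete graph is connected “intensely” (connectivity n−1), yet both are connected 100%.

```python
from itertools import combinations

def is_connected(nodes, edges):
    """True if every node is reachable from every other (via depth-first search)."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return seen == set(nodes)

def edge_connectivity(nodes, edges):
    """Smallest number of edges whose removal disconnects the graph
    (brute force, fine for tiny graphs)."""
    if not is_connected(nodes, edges):
        return 0
    for k in range(1, len(edges) + 1):
        for removed in combinations(edges, k):
            kept = [e for e in edges if e not in removed]
            if not is_connected(nodes, kept):
                return k
    return len(edges)

nodes = {1, 2, 3, 4}
path = [(1, 2), (2, 3), (3, 4)]                          # "barely" connected
complete = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]  # "highly" connected
print(edge_connectivity(nodes, path))      # 1
print(edge_connectivity(nodes, complete))  # 3
```

Both graphs are connected full stop; the degree of connectivity varies along an orthogonal axis, which is exactly the distinction at issue.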
On the arbitrariness problem: I think this is most apparent when it’s arbitrariness of the physical boundary of the conscious entity. Consider a single specific microphysical state that has an observer in it. I don’t see how you could have an exact principle determining the presence and nature of an observer from such a state, if you thought that observers don’t have exact and unique physical boundaries, as you were suggesting in another comment. It seems to involve a one-to-many-to-one mapping, where you go from one exact physical state, to many possible observer-boundaries, to just one exact conscious state. I don’t see how the existence of a conscious-to-nonconscious continuum of states deals with that.
I’m afraid that I’m not seeing the difficulty. I am suggesting that the possession of a given qualia state is a certain structural property of physical systems, of the sort that can be possessed by a variety of different physical systems in a variety of different states. Why couldn’t various parts be added to or removed from the system while leaving intact the structural property corresponding to the given qualia state?
I’m not sure that I understand the question. Would you agree with the following? A given physical system in a given state satisfies certain structural properties, in virtue of which the system is in that state and not some other state.
I just want a specific example, first. You’re “supposing that the bearing of qualia is an objective structural property of certain physical systems”. So please give me one entirely concrete example of “an objective structural property”.
A sentence giving such a property would have to be in the context of a true and complete theory of physics, which I do not possess.
I expect that such a theory will provide a language for describing many such structural properties. I have this expectation because every theory that has been offered in the past, had it been literally true, would have provided such a language. For example, suppose that the universe were in fact a collection of indivisible particles in Euclidean 3-space governed by Newtonian mechanics. Then the distances separating the centers of mass of the various particles would have determinate ratios, triples of particles would determine line segments meeting at determinate angles, etc.
Since Newtonian mechanics isn’t an accurate description of physical reality, the properties that I can describe within the framework of Newtonian mechanics don’t make sense for actual physical systems. A similar problem bedevils any physical theory that is not literally true. Nonetheless, all of the false theories so far describe structural properties for physical systems. I see no reason to expect that the true theory of physics differs from its predecessors in this regard.
suppose that the universe were in fact a collection of indivisible particles in Euclidean 3-space governed by Newtonian mechanics. Then the distances separating the centers of mass of the various particles would have determinate ratios, triples of particles would determine line segments meeting at determinate angles, etc.
Let’s use this as an example (and let’s suppose that the main force in this universe is like Newtonian gravitation). It’s certainly relevant to functionalist theories of consciousness, because it ought to be possible to make universal Turing machines in such a universe. A bit might consist in the presence or absence of a medium-sized mass orbiting a massive body at a standard distance, something which is tested for by the passage of very light probe-bodies and which can be rewritten by the insertion of an object into an unoccupied orbit, or by the perturbation of an object out of an occupied orbit.
I claim that any mapping of these physical states onto computational states is going to be vague at the edges, that it can only be made exact by the delineation of arbitrary exact boundaries in physical state space with no functional consequence, and that this already exemplifies all the problems involved in positing an exact mapping between qualia-states and physics as we know it.
Let’s say that functionally, the difference between whether a given planetary system encodes 0 or 1 is whether the light probe-mass returns to its sender or not. We’re supposing that all the trajectories are synchronized such that, if the orbit is occupied, the probe will swing around the massive body, do a 180-degree turn, and go back from whence it came—that’s a “1”; but otherwise it will just sail straight through.
If we allow ourselves to be concerned with the full continuum of possible physical configurations, we will run into edge cases. If the probe does a 90-degree turn, probably that’s not “return to sender” and so can’t count as a successful “read-out” that the orbit is occupied. What about a 179.999999-degree turn? That’s so close to 180 degrees, that if our orrery-computer has any robustness-against-perturbation in its dynamics, at all, it still ought to get the job done. But somewhere in between that almost-perfect turn and the 90-degree turn, there’s a transition between a functional “1” and a functional “0”.
Now the problem is, if we are trying to say that computational properties are objectively possessed by this physical system, there has to be an exact boundary. (Or else we simply don’t consider a specific range of intermediate states; but then we are saying that the exact boundary does exist, in the form of a discontinuity between one continuum of physically realizable states, and another continuum of physically realizable states.) There is some exact angle-of-return for the probe-particle which marks the objective difference between “this gravitating system is in a 1-state” and “this gravitating system is in a 0-state”.
To specify such an angle is to “delineate an arbitrary exact boundary in physical state space with no functional consequence”. Consider what it means, functionally, for a gravitating system in this toy universe to be in a 1-state. It means that a probe-mass sent into the system at the appropriate time will return to sender, indicating that the orbit is occupied. But since we are talking about a computational mechanism made out of many systems, “return to sender” can’t mean that the returning probe-particle just heads off to infinity in the right direction. The probe must have an appropriate causal impact on some other system, so that the information it conveys enters into the next stage of the computation.
But because we are dealing with a physics in which, by hypothesis, distances and angles vary on a continuum, the configuration of the system to which the probe returns can also be counterfactually varied, and once again there are edge cases. Some specific rearrangement of masses and orbits has to happen in that system for the probe’s return to count as having registered, and whether a specific angle-of-return leads to the required rearrangement depends on the system’s configuration. Some configurations will capture returning probes on a broad range of angles, others will only capture it for a narrow range.
I hope this is beginning to make sense. The ascription of computational states as an objective property of a physical system requires that the mapping from physics to computation must be specific and exact for all possible physical states, even the edge cases, but in a physics based on continua, it’s just not possible to specify an exact mapping in a way that isn’t arbitrary in its details.
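The arbitrariness claim can be put in a toy sketch (the cutoff angles are entirely made up for illustration): two readout rules that agree on every state the mechanism actually reaches, yet disagree on edge cases in the continuum between them, with nothing functional to choose one cutoff over the other.

```python
# Two equally defensible mappings from probe return angle (in degrees) to a bit.
# The cutoffs 135.0 and 150.0 are arbitrary choices with no functional consequence.
readout_a = lambda angle: 1 if angle > 135.0 else 0
readout_b = lambda angle: 1 if angle > 150.0 else 0

# On the states the mechanism actually reaches, the two rules agree...
for angle in [0.0, 90.0, 179.999999, 180.0]:
    assert readout_a(angle) == readout_b(angle)

# ...but on edge cases in between, they disagree, and nothing about the
# computation's behavior picks out either boundary as the "objective" one.
assert readout_a(140.0) != readout_b(140.0)
```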
We don’t need to bother ourselves over whether a transistor halfway between a 0 state and a 1 state is “really” in one state or the other, because the ultimate criterion of semantics here is behavior...
I don’t think that this is why we don’t bother ourselves with intermediate states in computers.
To say that we can model a physical system as a computer is not to say that we have a many-to-one map sending every possible microphysical state to a computational state. Rather, we are saying that there is a subset Σ′ of the entire space Σ of microstates for the physical system, and a state machine M, such that,
(1) as the system evolves according to physical laws under the conditions where we wish to apply our computational model, states in Σ′ will only evolve into other states in Σ′, but never into states in the complement of Σ′;
(2) there is a many-to-one map f sending states in Σ′ to computational states of M (i.e., states in Σ′ correspond to unambiguous states of M); and
(3) if the laws of physics say that the microphysical state σ ∈ Σ′ evolves into the state σ′ ∈ Σ′, then the definition of the state machine M says that the state f(σ) transitions to the state f(σ′).
But, in general, Σ′ is a proper subset of Σ. If a physical system, under the operating conditions that we care about, could really evolve into any arbitrary state in Σ, then most of the states that the system reached would be homogeneous blobs. In that case, we probably wouldn’t be tempted to model the physical system as a computer.
I propose that physical systems are properly modeled as computers only when the proper subset Σ′ is a union of “isolated islands” in the larger state-space Σ, with each isolated island mapping to a distinct computational state. The isolated islands are separated by “broad channels” of states in the complement of Σ′. To the extent that states in the “islands” could evolve into states in the “channels”, then, to that extent, the system shouldn’t be modeled as a computer. Conversely, insofar as a system is validly modeled as a computer, that system never enters “vague” computational states.
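Conditions (1)–(3) and the “islands” picture can be sketched in a few lines (the dynamics, island boundaries, and names here are invented purely for illustration): microstates are points in [0, 1], two islands map to the states of a one-bit NOT machine, and the physical evolution commutes with the machine’s transition.

```python
# Toy illustration: microstates are floats in [0, 1]. Sigma-prime is two
# "islands", one around 0.2 (machine state "0") and one around 0.8 ("1"),
# separated by a "channel" of unmapped states.
ISLAND_0 = (0.15, 0.25)
ISLAND_1 = (0.75, 0.85)

def in_sigma_prime(x):
    return ISLAND_0[0] <= x <= ISLAND_0[1] or ISLAND_1[0] <= x <= ISLAND_1[1]

def f(x):
    """The many-to-one map from microstates in Sigma-prime to machine states."""
    assert in_sigma_prime(x)
    return "0" if x < 0.5 else "1"

def evolve(x):
    """Toy physical dynamics implementing a NOT gate on the islands."""
    return 1.0 - x

def machine_step(s):
    """The abstract state machine M: a one-bit NOT."""
    return "1" if s == "0" else "0"

# Check the three conditions on sample microstates inside the islands.
for x in [0.16, 0.2, 0.24, 0.76, 0.8, 0.84]:
    assert in_sigma_prime(evolve(x))             # (1): islands evolve into islands
    assert f(evolve(x)) == machine_step(f(x))    # (3): the diagram commutes
```

Condition (2) holds by construction: every point in an island maps to exactly one machine state, and the channel between the islands is simply outside the model.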
The computational theory of mind amounts to the claim that the brain can be modeled as a state machine in the above sense.
But suppose that a confluence of cosmic rays knocked your brain into some random state in the “channels”. Well, most such states correspond to no qualia at all. Your brain would just be an inert mush. But some of the states in the channels do correspond to qualia. So long as this is possible, why doesn’t your vagueness problem reappear here?
If this were something that we expected would ever really happen, then we would be in a world where we shouldn’t be modeling the brain as a computer, except perhaps as a computer where many qualia states correspond to unique microphysical states, so that a single microphysical change sometimes makes for a different qualia state. In practice, that would probably mean that we should think of our brains as more like a bowl of soup than a computer. But insofar as this just doesn’t happen, we don’t need to worry about the vagueness problem you propose.
This is not working. I keep trying to get you to think in E-Prime for simplicity’s sake and you keep emitting words that seem to me to lack any implication for what I should expect to experience. I can think of a few ways to proceed from this state of affairs that might work.
One idea is for you to restate the bit I’m about to quote while tabooing the words “attribute”, “property”, “trait”, “state”, “intrinsic”, “objective”, “subjective”, and similar words.
Someone with no hair is bald, someone with a head full of hair is not bald, yet we don’t have a non-arbitrary criterion for where the exact boundary between bald and not-bald lies. This doesn’t matter because baldness is a rough judgment and not an objective property. But states of consciousness are objective, intrinsic attributes of the conscious being. So objective vagueness isn’t allowed, and there must be a definite fact about which conscious state, if any, is present, for every possible physical state.
...states of consciousness are possessed intrinsically, and not just by ascription on the basis of behavior. Therefore I deduce that the properties defining the physical correlate of a state of consciousness, are not fuzzy ones like “number of neurons firing in a particular ganglion”, but are instead properties that are microphysically exact.
If I translate this I hear this statement as being confused about the way to properly use abstraction in the course of reasoning, and insisting on pedantic precision whenever logical abstractions come up. Pushing all the squirrelly words into similar form for clarity, it sounds roughly like this:
Someone with no hair is bald, someone with a head full of hair is not bald and we don’t have a non-arbitrary criterion for where the exact boundary between bald and not-bald lies. This doesn’t matter because baldness is a rough judgment and not an ethereal feature. But each way of being conscious is an ethereal aspect of a conscious being. Since ethereal vagueness isn’t allowed, there must be ethereal precision for each way of being conscious that is distinct for every possible brain state.
Repeating for emphasis: ways of being conscious are ethereal, and not just inferred by rough judgment on the basis of behavior. Therefore I deduce that the ether relating brain states to ways of being conscious are not fuzzy ones like “number of neurons firing in a particular ganglion”, but are instead ethereally exact.
Do you see how this is a plausible interpretation of what you said? Do you see how the heart of our contention seems to me to have nothing to do with consciousness and everything to do with the language and methods of abstract reasoning?
We don’t have to play taboo. A second way that we might resolve our lack of linguistic/conceptual agreement is by working with the concepts that we don’t seem to use the same way in a much simpler place where all the trivial facts are settled and only the difficult concepts are at stake.
Consider the way that area, width, and height are all “intrinsic properties” of a rectangle in euclidean geometry. For me, this is another way of saying that if a construct defined in euclidean geometry lacks one of these features then it is not a rectangle. Consider another property of rectangles, the “tallness” of the rectangle, defined as the ratio of the height to the width. This is not intrinsic: other than zero and infinity it could be anything, and where you put any cutoff is mostly arbitrary. However, I also know that among the intrinsic properties {width, height, area}, any two are sufficient for defining a euclidean rectangle and thereby exactly constraining the third to have some specific value. From this abstract reasoning, I infer that I could measure a rectangle on a table using a ruler for the width and height, and cutting out felt of known density and thickness to cover the shape and weighing that felt to get the area. This would give me three numbers that agreed with each other, modulo some measurement error and unit conversions.
On the other hand, with a euclidean square the width, height, and area are also intrinsic in the sense of being properties of everything I care to call a square, but I additionally know that the length and width of squares are intrinsically equal. Thus, the tallness of a square is exactly 1, an intrinsically unvarying property. Given this as background, I know that I only need one of the three “variable but intrinsic properties” to exactly specify the other two, which has implications for any measurements of actual square objects that I make with rulers and felt.
Getting more advanced, I know that I can use these properties in pragmatic ways. For example, if I’m trying to build a square out of lumber, I can measure the lengths of wood to be as equal as possible, cut them, and connect them with glue or nails with angles as close to 90 degrees as I can manage, and then I can check the quality of my work by measuring the two diagonals from one corner to another because these are “intrinsically equal” in euclidean squares and the closer the diagonal measurements are to each other the more I can consider my lumber construct to be “like a euclidean square” for other purposes (such as serving as the face of a cube). The diagonals aren’t a perfect proxy (because if my construct is grossly non-planar the diagonals could be perfectly equal even as my construct was not square-like) but they are useful.
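The measurement procedures above can be sketched as code (the tolerance and helper names are my own inventions): one check that three independent rectangle measurements agree, and one diagonal-ratio check for how square-like a physical construct is.

```python
import math

def consistent_rectangle(width, height, area, tol=0.05):
    """Any two of {width, height, area} fix the third for a euclidean
    rectangle, so three independent measurements should agree within
    measurement error (here a 5% relative tolerance, chosen arbitrarily)."""
    return abs(area - width * height) <= tol * area

def squareness(corners):
    """Ratio of the two diagonals of a quadrilateral given as four (x, y)
    corners in order. 1.0 means equal diagonals, as in a euclidean square;
    an imperfect proxy, since a grossly non-planar or non-square figure
    can also have equal diagonals."""
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = corners
    d1 = math.hypot(cx - ax, cy - ay)
    d2 = math.hypot(dx - bx, dy - by)
    return min(d1, d2) / max(d1, d2)

print(consistent_rectangle(2.0, 3.0, 6.1))                 # True: felt weight agrees with ruler
print(squareness([(0, 0), (1, 0), (1, 1), (0, 1)]))        # 1.0: a perfect unit square
```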
Perhaps you could talk about how the properties of euclidean rectangles and squares relate to the properties of “indeterminate rectangles and squares”, and how the status of their properties as “intrinsic” and/or “varying” would relate to issues of measurement and construction in the presence of indeterminacy?
I will try to get across what I mean by calling states of consciousness “intrinsic”, “objectively existing”, and so forth; by describing what it would mean for them to not have these attributes.
It would mean that you only exist by convention or by definition. It would mean that there is no definite fact about whether your life is part of reality. It wouldn’t just be that some models of reality acknowledge your existence and others don’t; it would mean that you are nothing more than a fuzzy heuristic concept in someone else’s model, and that if they switched models, you would no longer exist even in that limited sense.
I would like to think that you personally have a robust enough sense of your own reality to decisively reject such propositions. But by now, nothing would surprise me, coming from a materialist. It’s been amply demonstrated that people can be willing to profess disbelief in anything and everything, if they think that’s the price of believing in science. So I won’t presume that you believe that you exist, I’ll just hope that you do, because if you don’t, it will be hard to have a sensible conversation about these topics.
But… if you do agree that you definitely exist, independently of any “model” that actual or hypothetical observers have, then it’s a short step to saying that you must also have some of your properties intrinsically, rather than through model-dependent attribution. The alternative would be to say that you exist, you’re a “thing”, but not any particular thing; which is the sort of untenable objective vagueness that I was talking about.
The concept of an intrinsic property is arising somewhat differently here, than it does in your discussion of squares and rectangles. The idealized geometrical figures have their intrinsic properties by definition, or by logical implication from the definition. But I can say that you have intrinsic properties, not by definition (or not just by definition), but because you exist, and to be is to be something. (Also known as the “law of identity”.) It would make no sense to say that you are real, but otherwise devoid of ontological definiteness.
For exactly the same reason, it would make no sense to have a fundamentally vague “physical theory of you”. Here I want to define “you” as narrowly as possible—this you, in this world, even just in this moment if necessary. I don’t want the identity issues of a broadly defined “you” to interfere. I hope we have agreed that you-here-now exist, that you exist objectively, that you must have some identifying or individuating properties which are also held objectively and intrinsically; the properties which make you what you are.
If we are going to be ontological materialists about you-here-now, and we are also going to acknowledge you-here-now as completely and independently real, then there also can’t be any vagueness or arbitrariness about which physical object is you-here-now. For every particle—if we have particles in our physical ontology—either it is definitely a part of you-here-now, or it definitely isn’t.
At this point I’m already departing radically from the standard materialist account of personhood, which would say that we can be vague about whether a few atoms are a part of you or not. The reason we can’t do that, is precisely the objectivity of your existence. If you are an objectively existing entity, I can’t at the same time say that you are an entity whose boundaries aren’t objectively defined. For some broader notion, like “your body”, sure, we can be vague about where its boundaries are. But there has to be a core notion of what you are that is correct, exact, fully objective; and the partially objective definitions of “you” come from watering down this core notion by adding inessential extra properties.
Now let’s contrast this situation with the piece of lumber that is close to being a square but isn’t a perfect square. My arguments against fundamental vagueness are not about insisting that the piece of lumber is a perfect square. I am merely insisting that it is what it is, and whatever it is, it is that, exactly and definitely.
The main difference between “you-here-now” and the piece of lumber, is that we don’t have the same reason to think that the lumber has a hard ontological core. It’s an aggregate of atoms, electrons will be streaming off it, and there will be some arbitrariness about when such an electron stops being “part of the lumber”. To find indisputably objective physical facts in this situation, you probably need to talk in terms of immediate relations between elementary particles.
The evidence for a hard core in you-here-now is primarily phenomenological and secondarily logical. The phenomenological evidence is what we call the unity of experience: what’s happening to you in any moment is a gestalt; it’s one thing happening to one person. Your experience of the world may have fuzzy edges to it, but it’s still a whole and hence objectively a unity. The logical “evidence” is just the incoherence of supposing there can be a phenomenological unity without there being an ontological unity at any level. This experiential whole may have parts, but you can’t use the existence of the parts to then turn around and deny the existence of the whole.
The evidence for an ontological hard core to you-here-now does not come from physics. Physically the brain looks like it should be just like the piece of lumber, an aggregate of very many very small things. This presumption is obviously why materialists often end up regarding their own existence as something less than objective, or why the search for a microphysically exact theory of the self sounds like a mistake. Instead we are to be content with the approximations of functionalism, because that’s the most you could hope to do with such an entity.
I hope it’s now very clear where I’m coming from. The phenomenological and ontological arguments for a “hard core” to the self are enough to override any counterargument from physics. They tell us that a mesoscopic theory of what’s going on, like functionalism, is at best incomplete; it cannot be the final word. The task is to understand the conscious brain as a biophysical system, in terms of a physical ontology that can contain “real selves”. And fortunately, it’s no longer the 19th century; we have quantum mechanics and the ingredients for something more sophisticated than classical atomism.
I’m going back and forth on whether to tap out here. On the one hand I feel like I’m making progress in understanding your perspective. On the other hand the progress is clarifying that it would take a large amount of time and energy to derive a vocabulary to converse in a mutually transparent way about material truth claims in this area. It had not occurred to me that pulling on the word “intrinsic” would flip the conversation into a solipsistic zone by way of Cartesian skepticism. Ooof.
Perhaps we could schedule a few hours of IM or IRC to try a bit of very low latency mutual vocabulary development, and then maybe post the logs back here for posterity (raw or edited) if that seems worthwhile to us. (See private message for logistics.) If you want to stick to public essays I recommend taking things up with Tyrrell; he’s a more careful thinker than I am and I generally agree with what he says. He noticed and extended a more generous and more interesting parsing of your claims than I did when I thought you were trying to make a pigeonhole argument in favor of magical entities, and he seems to be interested. Either public essays with Tyrrell, IM with me, or both, or neither… as you like :-)
(And/or Steve of course, but he generally requires a lot of unpacking, and I frequently only really understand why his concepts were better starting places than my own between 6 and 18 months after talking with him.)
It wouldn’t just be that some models of reality acknowledge your existence and others don’t; it would mean that you are nothing more than a fuzzy heuristic concept in someone else’s model, and that if they switched models, you would no longer exist even in that limited sense.
Or in a cascade of your own successive models, including of the cascade.
Or an incentive to keep using that model rather than to switch to another one. The models are made up, but the incentives are real. (To whatever extent the thing subject to the incentives is.)
Not that I’m agreeing, but some clever ways to formulate something close to your objection could be built around the wording “The mind is in the mind, not in reality”.
At this point I’m already departing radically from the standard materialist account of personhood, which would say that we can be vague about whether a few atoms are a part of you or not. The reason we can’t do that, is precisely the objectivity of your existence. If you are an objectively existing entity, I can’t at the same time say that you are an entity whose boundaries aren’t objectively defined.
I have some sympathy for the view that my-here-now qualia are determinate and objective. But I don’t see why that implies that there must be a determinate, objective, unique collection of particles that is experiencing the qualia. Why not say that there are various different boundaries that I could draw, but, no matter which of these boundaries I draw, the qualia being experienced by the contained system of particles would be the same? For example, adding or removing the table in front of me doesn’t change the qualia experienced by the system.
(Here I am supposing that I can map the relevant physical systems to qualia in the manner that I describe in this comment.)
Therefore I deduce that the properties defining the physical correlate of a state of consciousness, are not fuzzy ones like “number of neurons firing in a particular ganglion”, but are instead properties that are microphysically exact.
My subjective conscious experience seems no more exact a thing to me than my experience of distinctions of colours. States of consciousness seem to be a continuous space, and there isn’t even a hard boundary (again, as I perceive things subjectively) between what is conscious and what is not.
But perhaps people vary in this; perhaps it is different for you?
I’ve been trying to find a way to empathically emulate people who talk about quantum consciousness for a while, so far with only moderate success. Mitchell, I’m curious if you’re aware of the work of Christof Koch and Giulio Tononi, and if so, could you speak to their approach?
For reference (if people aren’t familiar with the work already) Koch’s team is mostly doing experiments… and seems to be somewhat close to having mice that have genes knocked out so that they “logically would seem” to lack certain kinds of qualia that normal mice “logically would seem” to have. Tononi collaborates with him and has proposed a way to examine a thing that computes and calculates that thing’s “amount of consciousness” using a framework he called Integrated Information Theory. I have not sat down and fully worked out the details of IIT such that I could explain it to a patient undergrad at a chalkboard, but the reputation of the people involved is positive (I’ve seen Koch’s dog and pony show a few times and it has improved substantially over the years and he is pimping Tononi pretty effectively)… basically the content “smells promising” but I’m hoping I can hear someone else’s well informed opinion to see if I should spend more time on it.
Also, it seems to be relevant to this philosophic discussion? Or not? That’s what I’m wondering. Opinions appreciated :-)
It bugs me when people talk about “quantum consciousness”, given that classical computers can do anything quantum computers can do, only sometimes slower.
IIT’s measure of “information integration”, phi, is still insufficiently exact to escape the “functionalist sorites problem”. It could be relevant for a state-machine analysis of the brain, but I can’t see it being enough to specify the mapping between physical and phenomenological states. Also, Tononi’s account of conscious states seems to be just at the level of sensation. But this is an approach which could converge with mine if the right extra details were added.
“We” are a heterogeneous group. Chopra and Penrose—not much in common. Besides, even if you believe consciousness can arise from classical computation but you also believe in many worlds, then quantum concepts do play a role in your theory of mind, in that you say that the mind consists of interactions between distinct states of decohered objects. Figure out how Tononi’s “phi” could be calculated for the distinct branches of a quantum computer, and lots of people will want to be your friend.
If I understand what you’re calling the “functionalist sorites problem”, it seems to me that Integrated Information Theory is meant to address almost exactly that issue, with its “phi” parameter being a measure of something like the degree (in bits) to which an input is capable of exerting influence over a behavioral outcome.
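To make the “degree (in bits) of influence” reading concrete, here is a toy Python sketch measuring, via mutual information, how much knowing an input constrains an output. To be clear, this is emphatically not Tononi’s actual Φ calculation; the channel probabilities below are made up purely for illustration.

```python
from math import log2

# Toy sketch (NOT Tononi's actual phi): measure, in bits, how much
# knowing an input constrains an output, via mutual information.
# Joint distribution over (input, output) pairs; here the output
# copies the input with probability 0.9 (an invented channel).
joint = {
    (0, 0): 0.45, (0, 1): 0.05,
    (1, 0): 0.05, (1, 1): 0.45,
}

def mutual_information(joint):
    # Marginal distributions of input (px) and output (py).
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    # I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

print(round(mutual_information(joint), 3))  # → 0.531
```

Real Φ involves partitioning a system and minimizing over partitions; this sketch only shows the “bits of constraint” intuition, not the integration part.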
Moreover, qualia, at least as I seem to experience them, are non-binary. Merely hearing the word “red” causes aspects of my present environment to leap to salience in a way that I associate with those facets of the world being more able to influence my subsequent behavior… or to put it much more prosaically: reminders can, in fact, bring reminded content to my attention and thereby actually work. Equally, however, I frequently notice my output having probably been influenced by external factors that were in my consciousness to only a very minor degree such that it would fall under the rubric of priming. Maybe this is ultimately a problem due to generalizing from one example? Maybe I have many gradations of conscious awareness and you have binary awareness and we’re each assuming homogeneity where none exists?
Solving a fun problem and lots of people wanting to be my friend sounds neat… like a minor goad to working on the problem in my spare time and seeing if I can get a neat paper on it? But I suspect you’re overestimating people’s interest, and I still haven’t figured out the trick of being paid well to play with ideas, so until then, schema-inference software probably pays the bills more predictably than trying to rid the world of quantum woo. There are about 1000 things I could spend the next few years on, and I only get to do maybe 2-5 of them, and then only in half-assed ways unless I settle on ONLY one of them. Hobby quantum consciousness research is ~8 on the list and unlikely to actually get many brain cycles in the next year :-P
I posed the functionalist sorites problem in the form of existence vs nonexistence of a specific quale, but it can equally be posed in the form of one state of consciousness vs another, where the difference may be as blatant or as subtle as you wish.
The question is, what are the exact physical conditions under which a completely specific quale or state of consciousness exists? And we can highlight the need for exactness, by asking at the same time what the exact conditions are, under which no quale occurs, or under which the other state of consciousness occurs; and then considering edge cases, where the physical conditions are intermediate between one vague specification and another vague specification.
For the argument to work, you must be clear on the principle that any state of consciousness is exactly something, even if we are not totally aware of it or wouldn’t know how to completely describe it. This principle—which amounts to saying that there is no such thing as entities which are objectively vague—is one that we already accept when discussing physics, I hope.
Suppose we are discussing what the position of an unmeasured electron is. I might say that it has a particular position; I might say that it has several positions or all positions, in different worlds; I might say that it has no position at all, that it just isn’t located in space right now. All of those are meaningful statements. But to say that it has a position, but it doesn’t have a particular position, is conceptually incoherent. It doesn’t designate a possibility. It most resembles “the electron has no position at all”, but then you don’t get to talk as if the electron nonetheless has a (nonspecific) position at the same time as not actually having a position.
The same principle applies to conscious experience. The quale is always a particular quale, even if you aren’t noticing its particularities.
Now let us assume for the moment that this principle of non-vagueness is true for all physical states and all phenomenological states. That means that when we try to understand the conditions under which physical states and phenomenological states are related, we are trying to match up two sets of definite “things”.
The immediate implication is that any definite physical state will be matched with a definite phenomenology (or with no phenomenology at all). Equally it implies that any definite phenomenological state will correspond to a definite physical state or to a set of definite physical states. The boundary between “physical states corresponding to one phenomenological state”, and “physical states corresponding to another phenomenological state”, must be sharp. The only way to avoid a sharp boundary is if there’s a continuum on both sides—a continuum of physical states, and a continuum of phenomenological states—but again there must be an exact mapping between them, because of non-vagueness.
IIT does not provide an exact mapping because it doesn’t really concern itself with exact microphysical facts, like exact microphysical states, or exact microscopic boundaries between the physical systems that are coupled to each other. Everything is just being described in a coarse-grained fashion; which is fine for computational or other practical causal analyses.
I don’t think I would find many people willing to defend the position that conscious states are objectively vague. I also wouldn’t find many willing to say that any law of correspondence between physical and phenomenological states must be exact on the microphysical level. But this is the implication of the principle of ontological non-vagueness, applied to both sides of the equation.
Someone downvoted you, but I upvoted you to correct it. I only downvote when I think there is (1) bad faith communication or (2) an issue above LW’s sanity line is being discussed tactlessly. Neither seems to apply here.
That said, I think you just made a creationist “no transitional forms” move in your argument? A creationist might deny that organisms separated by 200 million years, seemingly obviously related by descent, are the same kind, insisting instead on magically/essentially distinct “kinds”. There’s a gap between them! When pressed (say, by being shown some intermediate forms that have been found given the state of the scientific excavation of the crust) a creationist could point in between each intermediate form to more gaps, which might naively seem to make their “gaps exist” point a stronger argument against the general notion of “evolution by natural selection”. But it doesn’t. It’s not a stronger argument thereby, but a weaker one.
Similarly, you seem to have a rhetorical starting point where you verbally deploy the law of the excluded middle to say that a quale either “is or is not” experienced due to a given micro-physical configuration state (notice the similarity of focusing on simplistic verbal/propositional/logical modeling of rigid “kinds” or “sets” with magically perfect inclusion/exclusion criteria). I pushed on that and you backed down. So it seems like you’ve retreated to a position where each verbally distinguishable level of conscious awareness should probably have a different physical configuration, and in fact this is what we seem to observe with things like fMRI… if you squint your eyes and acknowledge limitations in observation and theory that are being rectified by science even as we write. We haven’t nanotechnologically consumed the entire crust of the earth to find every fossil, and we haven’t simulated a brain yet, but these things may both be on the long term path of “the effecting of all things possible”.
My hope in trying to empathically emulate people who take quantum consciousness seriously is that I’ll gain a new insight… but it is hard because mostly what I see is things I find very easy to interpret as second rate thinking (like getting confused in abstractions of philosophical handwaving) while ignoring the immanent physical vastness of the natural world (with its trillions of moving parts that have had billions of years of selection to become optimized) in the manner of Penrose and Searle and so on. I want there to be something interesting to understand. Some “aha moment” when it all snaps into place, and I don’t want that moment to be a final decisive insight into what’s going wrong in your heads that makes you safe to write off…
This just sorta sounds to me like you’ve been infected with dualism and have no other metaphysical theories to play off against the dualistic metaphysics in your head. I might try to uncharitably translate you to be saying something like “I have a roughly verbalizable model of my phenomenological experience of my brain states for a brain that is self-monitoring, self-regulating, world-modeling, agents-in-world-modeling, self-as-agent-modeling, and behavior-generating (and btw, I promoted my model to ‘ontological realness’ and started confusing my experience of my belief in ontologically non-physical mental states with there being a platonic ghost in my head or something), but brains and my ghost model are both really complicated, and it seems hard to map them into each other with total fidelity… and this means that brains must be very magic, you might say quantumly magic, in order to match how confused I am about the lack of perfect match between my ghost model and my understanding of the hardware that might somehow compute the ghost model… and since my ghost model is ontologically real this means there are ghosts… in my brain… because it’s a quantum brain… or something… I’m not sure...”
I want something to fall out of conversation with (and reading of) quantum consciousness theorists that shows that something like a quantum Fourier transform is running on our neurons, allowing “such and such super powers” to be demonstrated by humans with a run time in our brains that clearly beats what would be possible for a classical Turing machine. What would classical-Turing-zombies look like that is different from how quantum-soulful-people would look? All I can hear is mediocre philosophy of mind. I think? I don’t intend meanness.
I’m just trying to communicate the problem I’m having hearing whatever it is that you’re really trying to say that makes sense to you. I’m aware of inferential distances and understand that I might need to spend 200 weekends (which would make it a four year hobby project) reading traditionally-understood-as-boring non-fiction to understand what you’re saying, but my impression is that no such course of reading exists for you to point me towards… which would be weak but distinct evidence for you being confused rather than me being ignorant.
Is there something I should read? What am I missing?
ETA: I re-read this and find my text to be harsher than I’d like. I really don’t want this to be harsh, but actually want enlightenment here and find myself groping for words that will get you to engage with my vocabulary and replace an accessible but uncharitable interpretation in my head with a better theory. If you’d like to not respond in public, PM me your email and I’ll respond via that medium? Maybe IRC would be a better way to reduce the latency on vocabulary development?
No, I explicitly mentioned the idea that there might be a continuum of possible quale states; you even quoted the sentence where I brought it up. But it is irrelevant to my argument, which is that for a proposed mapping between physical and phenomenological states to have any chance of being true, it must possess an extension to an exact mapping between fundamental microphysical states and phenomenological states (not necessarily a 1-to-1 mapping) - because the alternative is “objective vagueness” about which conscious state is present in certain physical configurations—and this requirement is very problematic for standard functionalism based on vaguely defined mesoscopic states, since any specification of how all the edge cases correspond to the functional states will be highly arbitrary.
Let me ask you this directly: do you think it would be coherent to claim that there are physical configurations in which there is a state of consciousness present, but it’s not any particular state of consciousness? It doesn’t have to be a state of consciousness that we presently know how to completely characterize, or a state of consciousness that we can subjectively discriminate from all other possible states of consciousness; it just has to be a definite, particular state of consciousness.
If we agree that ontological definiteness of physical state implies ontological definiteness in any accompanying state of consciousness (again I’ll emphasize that this is ontological definiteness, not phenomenological definiteness; I must allow for the fact that states of consciousness have details that aren’t noticed by the experiencer), then that immediately implies the existence of an exact mapping from microphysically exact states to ontologically definite states of consciousness. Which implies an inverse mapping from ontologically definite states of consciousness, to a set of exact microphysical states, which are the physical states (or state, there might only be one) in which that particular state of consciousness is realized.
OK, I hope I’m starting to get it. Are you looking for a basis to power a pigeonhole argument about equivalence classes?
If we’re going to count things, then a potential source of confusion is that there are probably more ontologically distinct states of “being consciously depressed” than can be detected from the inside, because humans just aren’t very good at internal monitoring and such, but that doesn’t mean there aren’t differences that a Martian with Awesome Scanning Equipment could detect. So a mental patient could be phenomenologically depressed in a certain way and say “that feeling I just felt was exactly the same feeling as in the past, modulo some mental trivia about vaguely knowing it is Tuesday rather than Sunday” and the Martian anthropologist might check the scanner logs and might truthfully agree, but more likely the Martian might truthfully say, “Technically no: you were more consciously obsessed about your ex-boyfriend than you were consciously obsessed about your cellulite, which is the opposite ordering of every time in the past, though until I said this you were not aware of this difference in your awareness” and then the patient might introspect based on the statement and say “Huh, yeah, I guess you’re right, curious that I didn’t notice that from the inside while it was happening… oh well, time for more crying now...” And in general, absent some crazy sort of phenomenological noise source, there are almost certainly fewer phenomenologically distinct states than ontologically distinct states.
So then the question arises as to how the Martian’s “ontology monitoring” scanner worked.
It might have measured physical brain states via advanced but ultimately prosaic classical-Turing-neuron technology, or it might have used some sort of quantum-chakra-scanner that detects qualia states directly. Perhaps it has both and can run either or both scanners and compare their results over time? One of them can report that a stray serotonin molecule was different, and the other can identify an ontologically distinct feeling of satisfaction. Which leads to a second question of number: can the quantum chakra scanner detect exactly the same cardinality of qualia states as the classical Turing scanner can detect brain states? If I’m reconstructing/interpreting your claim properly, this starts to get at the heart of a sort of “quantum qualia pigeonhole puzzle”?
Except even if this is what you’re proposing, I don’t see how it implies quantum stuff is likely to be very important...
If the scanners give exactly the same counts, that would be surprising and probably very few people expect this outcome because there are certainly unconscious mental processes and those are presumably running on “brain tissue” and hence contribute to brain state counts but not qualia state counts.
So the likely answer is that there are fewer qualia states than brain states. Conversely, if somehow there were more qualia states than brain states, then I think that would be evidence for “mind physics” above and beyond “particle physics”, and upon learning of the existence of a physics that includes ontologically real cartesian mental entities that runs separately from but affects raw brain matter… well, then I guess my brain would explode… and right afterwards I’d get curious about how “computational chakronics” works :-)
Assuming the Martian’s scanners came out with more brain-states than qualia-states, this would confirm my expectations, and would also confirm the (already dominant?) theory that there was something interesting about the operation, interconnection, and/or embodied-embedding of certain kinds of brain tissue in the relatively boring way that is the obvious target of research for computationally-inspired neuro-physical science. This is what all the fMRIs and Halle Berry neuron probing and face/chalice experiments are for.
A result of |brainstates| > |qualiastates| would be consistent with the notion that consciousness is “substrate independent” in potentially two ways. First, it might allow us to port the “adaptively flexible self monitoring conscious architecture dynamic” to a better medium by moving the critical patterns of interaction to microchips or something (allowing us to drop all the slimy proteins and the ability to be denatured at 80 Celsius and so on). Second, it might allow us to replace significant chunks of nervous tissue (spinal tissue and retina and so on) with completely different and better stuff without even worrying, because they probably aren’t even involved in “consciousness” except as simple data pipes.
This would be pretty spooky to me if it was possible. My current expectations (call this B>Q>P) are:
|brainstates| > |qualiastates| > |phenomenologystates|
If my expected ordering is right, then an inverse mapping from qualiastates to brain states should be impossible by the pigeonhole principle… and then substrate independence probably “goes through”. Quantum mechanics, in this model, could totally be “just a source of noise”, with some marginal value as a highly secure random number generator to use in mixed strategies, but this result would be perfectly consistent with quantum effects mostly existing as a source of error that makes it harder to build a classical computation above it that actually does cognitive work rather than merely thrashing around doing “every possible thing”.
I mean… quantum stuff could still matter if B>Q>P is true. Like it might be involved in speedup tricks for some neural algorithms that we haven’t yet understood? But it doesn’t seem like it would be an obvious source of “magical qualia chakras” that make the people who have them more conscious in a morally-important ghost-in-the-brain way that would be lost from porting brain processes to a faster and more editable substrate. If it does, then that result is probably really really important (hence my interest)… it just seems very unlikely to me at the present time.
Are we closer to coherent now? Do we have a simple disagreement of expectations that can be expressed in a mutually acceptable vocabulary? That seems like it would be progress, if true :-)
I think that there was a miscommunication here. To be strictly correct, Mitchell should have written “Which implies an inverse mapping from ontologically definite states of consciousness, to sets of exact microphysical states...”. His additional text makes it clear that he’s talking about a map f sending every qualia state q to a set f(q) of brain states, namely, the set of brain states b such that being in brain state b implies experiencing qualia state q. This is consistent with the ordering B>Q>P that you expect.
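A minimal sketch of that map-and-preimage structure, in Python (all the state names here are invented placeholders): the forward map from brain states to qualia states is many-to-one, so the “inverse” is a map from each qualia state to a set of brain states.

```python
# Toy illustration of the pigeonhole point: if there are more brain
# states than qualia states, the map brain -> qualia cannot be
# inverted as a function, only as preimage sets. Names are made up.
brain_to_qualia = {
    "b1": "q_red", "b2": "q_red",   # two brain states, one quale
    "b3": "q_blue",
    "b4": None, "b5": None,         # unconscious states: no quale
}

# The "inverse" f(q) assigns each qualia state q the SET of brain
# states that realize it.
preimage = {}
for b, q in brain_to_qualia.items():
    if q is not None:
        preimage.setdefault(q, set()).add(b)

# q_red maps back to two brain states; q_blue to one; the
# unconscious states b4 and b5 appear in no preimage at all.
print(preimage)
```

This is exactly consistent with |brainstates| > |qualiastates|: the forward map loses information, the preimage map does not pretend to recover it.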
This is not about counting the number of states. It is about disallowing vagueness at the fundamental level, and then seeing the implications of that for functionalist theories of consciousness.
A functionalist theory of consciousness says that a particular state of consciousness occurs, if and only if the physical object is in a particular “functional state”. If you classify all the possible physical states into functional states, there will be borderline cases. But if we disallow vagueness, then every one of those borderline cases must correspond to a specific state of consciousness.
Someone with no hair is bald, someone with a head full of hair is not bald, yet we don’t have a non-arbitrary criterion for where the exact boundary between bald and not-bald lies. This doesn’t matter because baldness is a rough judgment and not an objective property. But states of consciousness are objective, intrinsic attributes of the conscious being. So objective vagueness isn’t allowed, and there must be a definite fact about which conscious state, if any, is present, for every possible physical state.
If we are employing the usual sort of functionalist theory, then the physical variables defining the functional states will be bulk mesoscopic quantities, there will be borderline areas between one functional state and another, and any line drawn through a borderline area, demarcating an exact boundary, just for the sake of avoiding vagueness, will be completely arbitrary at the finest level. The difference between experiencing one shade of red and another will be that you have 4000 color neurons firing rather than 4001 color neurons, and a cell will count as a color neuron if it has 10 of the appropriate receptors but not if it only has 9, and a state of this neuron will count as firing if the action potential manages to traverse the whole length of the axon, but not if it’s just a localized fizzle…
The arbitrariness of the distinctions that would need to be made, in order to refine this sort of model of consciousness all the way to microphysical exactness, is evidence that it’s the wrong sort of model. This sort of inexact functionalism can only apply to unconscious computational states. It would seem that most of the brain is an unconscious coprocessor of the conscious part. We can think about the computational states of the unconscious part of the brain in the same rough-and-ready way that we think about the computational states of an ordinary digital computer—they are regularities in the operation of the “device”. We don’t need to bother ourselves over whether a transistor halfway between a 0 state and a 1 state is “really” in one state or the other, because the ultimate criterion of semantics here is behavior, and a transistor—or a neuron—in a computational “halfway state” is just one whose behavior is unpredictable, and unreliable compared to the functional role it is supposed to perform.
This is not an option when thinking about conscious states, because states of consciousness are possessed intrinsically, and not just by ascription on the basis of behavior. Therefore I deduce that the properties defining the physical correlate of a state of consciousness, are not fuzzy ones like “number of neurons firing in a particular ganglion”, but are instead properties that are microphysically exact.
I see that you already addressed precisely the points that I made here. You wrote
I agree that any final “theory of qualia” should say, for every physically possible state, whether that state bears qualia or not. I take seriously the idea that such a final theory of qualia is possible, meaning that there really is an objective fact of the matter about what the qualia properties of any physically possible state are. I don’t have quite the apodeictic certainty that you seem to have, but I take the idea seriously. At any rate, I feel at least some persuasive force in your argument that we shouldn’t be drawing arbitrary boundaries around the microphysical states associated with different qualia states.
But even granting the objective nature of qualia properties, I’m still not getting why vagueness or arbitrariness is an inevitable consequence of any assignment of qualia states to microphysical states.
Why couldn’t the property of bearing qualia be something that can, in general, be present with various degrees of intensity, ranging from intensely present to entirely absent? Perhaps the “isolated islands” normally traversed by our brains are always at one extreme or another of this range. In that case, it would be impossible for us to imagine what it would “be like” to “bear qualia” in only a very attenuated sense. Nonetheless, perhaps a sufficiently powerful nano-manipulator could rearrange the particles in your brain into such a state.
To be clear, I’m not talking about states that experience specific qualia — a patch of red, say — very dimly. I’m talking about states that just barely qualify as bearing qualia at all. I’m trying to understand how you rule out the possibility that “bearing qualia” is a continuous property, like the geometrical property of “being longer than a given unit”. Just as a geometrical figure can have a length varying from not exceeding, to just barely exceeding, to greatly exceeding that of a given unit, why might not the property of bearing qualia be one that can vary from entirely absent, to just barely present, to intensely present?
It’s not obviously enough to point out, as you did to Jennifer, that I feel myself to be here, full stop, rather than just barely here or partly here, and that I can’t even imagine myself feeling otherwise. That doesn’t rule out the possibility that there are possible states, which my brain never normally enters, in which I would just barely be a bearer of qualia.
There are two problems here. First, you need to make the idea of “barely having qualia” meaningful. Second, you need to explain how that can solve the arbitrariness problem for a microphysically exact psychophysical correspondence.
Are weak qualia a bridge across the gap between having qualia and not having qualia? Or is the axis intense-vs-weak, orthogonal to the axis there-vs-not-there-at-all? In the latter case, even though you only have weak qualia, you still have them 100%.
The classic phenomenological proposition regarding the nature of consciousness, is that it is essentially about intentionality. According to this, even perception has an intentional structure, and you never find sense-qualia existing outside of intentionality. I guess that according to the later Husserl, all possible states of consciousness would be different forms of a fundamental ontological structure called “transcendental intentionality”; and the fundamental difference between a conscious entity and a non-conscious entity is the existence of that structure “in” the entity.
There are mathematical precedents for qualitative discontinuity. If you consider a circle versus a line interval, there’s no topological property such as “almost closed”. In the context of physics, you can’t have entanglement in a Hilbert space with less than four dimensions. So it’s conceivable that there is a discontinuity in nature, between states of consciousness and states of non-consciousness.
Twisty distinctions may need to be made. At least verbally, I can distinguish between (1) an entity whose state just is a red quale (2) an entity whose state is one of awareness of the red quale (3) an entity which is aware that it is aware of the red quale. The ontological position I described previously would say that (3) is what we call self-awareness; (2) is what we might just call awareness; there’s no such thing as (1), and intentionality is present in (2) as well as in (3). I’m agnostic about the existence of something like (1), as a bridge between having-qualia and not-having-qualia. Also, even looking for opportunities for continuity, it’s hard not to think that there’s another discontinuity between awareness and self-awareness.
If I was a real phenomenologist, I would presumably have a reasoned position on such questions. Or at least I could state the options with much more rigor. I’ll excuse the informality of my exposition by saying that one has to start somewhere.
On the arbitrariness problem: I think this is most apparent when it’s arbitrariness of the physical boundary of the conscious entity. Consider a single specific microphysical state that has an observer in it. I don’t see how you could have an exact principle determining the presence and nature of an observer from such a state, if you thought that observers don’t have exact and unique physical boundaries, as you were suggesting in another comment. It seems to involve a one-to-many-to-one mapping, where you go from one exact physical state, to many possible observer-boundaries, to just one exact conscious state. I don’t see how the existence of a conscious-to-nonconscious continuum of states deals with that.
I’m still not sure where this arbitrariness problem comes from. I’m supposing that the bearing of qualia is an objective structural property of certain physical systems. Another mathematical analogy might be the property of connectivity in graphs. A given graph is either connected or not, though connectivity is also something that exists in degrees, so that there is a difference between being highly connected and just barely connected.
On this view, how does arbitrariness get in?
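The graph-connectivity analogy can be made concrete with a short sketch (the particular graphs below are arbitrary illustrations): connectivity is an all-or-nothing structural property, yet it still admits degrees, since a path graph is “barely” connected while a complete graph is highly connected.

```python
from collections import deque

def is_connected(n, edges):
    """Binary structural property: is the n-node graph connected at all?"""
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    # Breadth-first search from node 0; connected iff we reach everything.
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

# The property is all-or-nothing, yet admits degrees: the path is
# "barely" connected (removing any one edge disconnects it), while
# the complete graph tolerates many edge removals.
path = [(0, 1), (1, 2), (2, 3)]
complete = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(is_connected(4, path), is_connected(4, complete))  # True True
print(is_connected(4, path[1:]))  # False: node 0 is cut off
```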
I’m suggesting something more like your “bridge across the gap” option. Analogously, one might say that the barely connected graphs are a bridge between disconnected graphs and highly connected graphs. Or, to repeat my analogy from the grandparent, the geometrical property of “being barely longer than a given unit” is a bridge across the gap between “being shorter that the given unit” and “being much longer than the given unit”.
I’m afraid that I’m not seeing the difficulty. I am suggesting that the possession of a given qualia state is a certain structure property of physical systems. I am suggesting that this structure property is of the sort that can be possessed by a variety of different physical systems in a variety of different states. Why couldn’t various parts be added or removed from the system while leaving intact the structure property corresponding to the given qualia state?
Give me an example of an “objective structural property” of a physical system. I expect that it will either be “vague” or “arbitrary”…
I’m not sure that I understand the question. Would you agree with the following? A given physical system in a given state satisfies certain structural properties, in virtue of which the system is in that state and not some other state.
I just want a specific example, first. You’re “supposing that the bearing of qualia is an objective structural property of certain physical systems”. So please give me one entirely concrete example of “an objective structural property”.
A sentence giving such a property would have to be in the context of a true and complete theory of physics, which I do not possess.
I expect that such a theory will provide a language for describing many such structural properties. I have this expectation because every theory that has been offered in the past, had it been literally true, would have provided such a language. For example, suppose that the universe were in fact a collection of indivisible particles in Euclidean 3-space governed by Newtonian mechanics. Then the distances separating the centers of mass of the various particles would have determinate ratios, triples of particles would determine line segments meeting at determinate angles, etc.
Since Newtonian mechanics isn’t an accurate description of physical reality, the properties that I can describe within the framework of Newtonian mechanics don’t make sense for actual physical systems. A similar problem bedevils any physical theory that is not literally true. Nonetheless, all of the false theories so far describe structural properties for physical systems. I see no reason to expect that the true theory of physics differs from its predecessors in this regard.
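As a toy sketch of such “structural properties” under the hypothetical Newtonian picture (the particle coordinates below are invented for illustration): given three point particles in Euclidean 3-space, the distance ratios and the angles they determine come out perfectly determinate.

```python
from math import dist, acos, degrees

# Three point particles in a would-be Newtonian universe
# (positions are made-up illustrative values).
a, b, c = (0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (3.0, 4.0, 0.0)

# Determinate distance ratio: |ac| / |ab| = 5 / 3.
ratio = dist(a, c) / dist(a, b)

# Determinate angle at vertex b, between segments ba and bc.
ba = tuple(x - y for x, y in zip(a, b))
bc = tuple(x - y for x, y in zip(c, b))
cos_angle = sum(p * q for p, q in zip(ba, bc)) / (dist(a, b) * dist(b, c))
angle = degrees(acos(cos_angle))

print(round(ratio, 3), round(angle, 1))  # 1.667 90.0
```

Every such quantity is exact, with no vagueness anywhere; the question in the thread is whether the true physics supplies an analogous exact language.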
Let’s use this as an example (and let’s suppose that the main force in this universe is like Newtonian gravitation). It’s certainly relevant to functionalist theories of consciousness, because it ought to be possible to make universal Turing machines in such a universe. A bit might consist in the presence or absence of a medium-sized mass orbiting a massive body at a standard distance, something which is tested for by the passage of very light probe-bodies and which can be rewritten by the insertion of an object into an unoccupied orbit, or by the perturbation of an object out of an occupied orbit.
I claim that any mapping of these physical states onto computational states is going to be vague at the edges, that it can only be made exact by the delineation of arbitrary exact boundaries in physical state space with no functional consequence, and that this already exemplifies all the problems involved in positing an exact mapping between qualia-states and physics as we know it.
Let’s say that functionally, the difference between whether a given planetary system encodes 0 or 1 is whether the light probe-mass returns to its sender or not. We’re supposing that all the trajectories are synchronized such that, if the orbit is occupied, the probe will swing around the massive body, do a 180-degree turn, and go back from whence it came—that’s a “1”; but otherwise it will just sail straight through.
If we allow ourselves to be concerned with the full continuum of possible physical configurations, we will run into edge cases. If the probe does a 90-degree turn, probably that’s not “return to sender” and so can’t count as a successful “read-out” that the orbit is occupied. What about a 179.999999-degree turn? That’s so close to 180 degrees that, if our orrery-computer has any robustness-against-perturbation in its dynamics at all, it still ought to get the job done. But somewhere in between that almost-perfect turn and the 90-degree turn, there’s a transition between a functional “1” and a functional “0”.
Now the problem is, if we are trying to say that computational properties are objectively possessed by this physical system, there has to be an exact boundary. (Or else we simply don’t consider a specific range of intermediate states; but then we are saying that the exact boundary does exist, in the form of a discontinuity between one continuum of physically realizable states, and another continuum of physically realizable states.) There is some exact angle-of-return for the probe-particle which marks the objective difference between “this gravitating system is in a 1-state” and “this gravitating system is in a 0-state”.
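For concreteness, the arbitrariness of the boundary can be sketched in a few lines of code. (This is a toy illustration of my own; the cutoff value and the function name are made up, and any particular cutoff would serve equally well, which is exactly the problem.)

```python
# Mapping a continuous physical outcome (the probe's angle of return)
# onto a discrete computational bit forces an exact, arbitrary cutoff.

RETURN_ANGLE_CUTOFF = 135.0  # degrees; any value here is defensible, none is forced

def read_bit(angle_of_return: float) -> int:
    """Classify the probe's angle of return as a bit.

    A 180-degree turn is a perfect return-to-sender ("1"); a 90-degree
    turn clearly is not ("0"). Every intermediate angle must fall on one
    side of an exact line with no functional significance.
    """
    return 1 if angle_of_return > RETURN_ANGLE_CUTOFF else 0

# A robustly perturbed return still reads as "1":
assert read_bit(179.999999) == 1
# A clear miss reads as "0":
assert read_bit(90.0) == 0
# But near the cutoff, a physically negligible difference flips the bit:
assert read_bit(135.0001) != read_bit(134.9999)
```

The last assertion is the point: the transition from “1” to “0” happens somewhere, and wherever we put it, a vanishingly small physical difference straddles it.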
To specify such an angle is to “delineate an arbitrary exact boundary in physical state space with no functional consequence”. Consider what it means, functionally, for a gravitating system in this toy universe to be in a 1-state. It means that a probe-mass sent into the system at the appropriate time will return to sender, indicating that the orbit is occupied. But since we are talking about a computational mechanism made out of many systems, “return to sender” can’t mean that the returning probe-particle just heads off to infinity in the right direction. The probe must have an appropriate causal impact on some other system, so that the information it conveys enters into the next stage of the computation.
But because we are dealing with a physics in which, by hypothesis, distances and angles vary on a continuum, the configuration of the system to which the probe returns can also be counterfactually varied, and once again there are edge cases. Some specific rearrangement of masses and orbits has to happen in that system for the probe’s return to count as having registered, and whether a specific angle-of-return leads to the required rearrangement depends on the system’s configuration. Some configurations will capture returning probes over a broad range of angles; others will do so only for a narrow range.
I hope this is beginning to make sense. The ascription of computational states as an objective property of a physical system requires that the mapping from physics to computation must be specific and exact for all possible physical states, even the edge cases, but in a physics based on continua, it’s just not possible to specify an exact mapping in a way that isn’t arbitrary in its details.
I don’t think that this is why we don’t bother ourselves with intermediate states in computers.
To say that we can model a physical system as a computer is not to say that we have a many-to-one map sending every possible microphysical state to a computational state. Rather, we are saying that there is a subset Σ′ of the entire space Σ of microstates for the physical system, and a state machine M, such that,
(1) as the system evolves according to physical laws under the conditions where we wish to apply our computational model, states in Σ′ will only evolve into other states in Σ′, but never into states in the complement of Σ′;
(2) there is a many-to-one map f sending states in Σ′ to computational states of M (i.e., states in Σ′ correspond to unambiguous states of M); and
(3) if the laws of physics say that the microphysical state σ ∈ Σ′ evolves into the state σ′ ∈ Σ′, then the definition of the state machine M says that the state f(σ) transitions to the state f(σ′).
But, in general, Σ′ is a proper subset of Σ. If a physical system, under the operating conditions that we care about, could really evolve into any arbitrary state in Σ, then most of the states that the system reached would be homogeneous blobs. In that case, we probably wouldn’t be tempted to model the physical system as a computer.
I propose that physical systems are properly modeled as computers only when the proper subset Σ′ is a union of “isolated islands” in the larger state-space Σ, with each isolated island mapping to a distinct computational state. The isolated islands are separated by “broad channels” of states in the complement of Σ′. To the extent that states in the “islands” could evolve into states in the “channels”, then, to that extent, the system shouldn’t be modeled as a computer. Conversely, insofar as a system is validly modeled as a computer, that system never enters “vague” computational states.
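Conditions (1)–(3) and the “isolated islands” picture can be sketched concretely. (A toy construction of mine, not anything from the thread: microstates are numbers in [0, 1], Σ′ is a union of two disjoint intervals, and the physical dynamics implement a NOT gate.)

```python
# Sketch of the three conditions: microstates in Σ′ evolve only into Σ′,
# a many-to-one map f sends Σ′ onto machine states, and physical
# evolution commutes with the machine's transition function.

# Σ′ is a union of two "isolated islands", each mapping to one
# computational state; the "channel" (0.2, 0.8) lies outside Σ′.
ISLANDS = {0: (0.0, 0.2), 1: (0.8, 1.0)}

def in_sigma_prime(x: float) -> bool:
    return any(lo <= x <= hi for lo, hi in ISLANDS.values())

def f(x: float) -> int:
    """Many-to-one map from microstates in Σ′ to computational states."""
    for state, (lo, hi) in ISLANDS.items():
        if lo <= x <= hi:
            return state
    raise ValueError("microstate outside Σ′ has no computational state")

def physics(x: float) -> float:
    """Toy physical dynamics: reflect the microstate across 0.5."""
    return 1.0 - x

def machine_step(state: int) -> int:
    """The state machine M that the physics implements: a NOT gate."""
    return 1 - state

for x in [0.0, 0.1, 0.2, 0.8, 0.95, 1.0]:
    assert in_sigma_prime(physics(x))           # condition (1)
    assert f(physics(x)) == machine_step(f(x))  # condition (3)
```

On this picture, the computational description is exact precisely because it is only claimed to hold on Σ′; nothing is asserted about microstates in the channel between the islands.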
The computational theory of mind amounts to the claim that the brain can be modeled as a state machine in the above sense.
But suppose that a confluence of cosmic rays knocked your brain into some random state in the “channels”. Well, most such states correspond to no qualia at all. Your brain would just be an inert mush. But some of the states in the channels do correspond to qualia. So long as this is possible, why doesn’t your vagueness problem reappear here?
If this were something that we expected would ever really happen, then we would be in a world where we shouldn’t be modeling the brain as a computer, except perhaps as a computer where many qualia states correspond to unique microphysical states, so that a single microphysical change sometimes makes for a different qualia state. In practice, that would probably mean that we should think of our brains as more like a bowl of soup than a computer. But insofar as this just doesn’t happen, we don’t need to worry about the vagueness problem you propose.
This is not working. I keep trying to get you to think in E-Prime for simplicity’s sake and you keep emitting words that seem to me to lack any implication for what I should expect to experience. I can think of a few ways to proceed from this state of affairs that might work.
One idea is for you to restate the bit I’m about to quote while tabooing the words “attribute”, “property”, “trait”, “state”, “intrinsic”, “objective”, “subjective”, and similar words.
When I translate it, I hear this statement as confusion about how to properly use abstraction in the course of reasoning, and as an insistence on pedantic precision whenever logical abstractions come up. Pushing all the squirrelly words into similar form for clarity, it sounds roughly like this:
Do you see how this is a plausible interpretation of what you said? Do you see how the heart of our contention seems to me to have nothing to do with consciousness and everything to do with the language and methods of abstract reasoning?
We don’t have to play taboo. A second way that we might resolve our lack of linguistic/conceptual agreement is by working with the concepts that we don’t seem to use the same way in a much simpler place where all the trivial facts are settled and only the difficult concepts are at stake.
Consider the way that area, width, and height are all “intrinsic properties” of a rectangle in euclidean geometry. For me, this is another way of saying that if a construct defined in euclidean geometry lacks one of these features, then it is not a rectangle. Consider another property of rectangles, the “tallness” of the rectangle, defined as the ratio of the height to the width. This is not intrinsic in the same way: other than zero and infinity, it could be anything, and where you put any cutoff (say, between “tall” and “wide” rectangles) is mostly arbitrary. However, I also know that, among the intrinsic properties {width, height, area}, any two are sufficient for defining a euclidean rectangle, and thereby exactly constrain the third to have some specific value. From this abstract reasoning, I infer that I could measure a rectangle on a table, using a ruler for the width and height, and cutting out felt of known density and thickness to cover the shape and weighing that felt to get the area. This would give me three numbers that agreed with each other, modulo some measurement error and unit conversions.
On the other hand, with a euclidean square, the width, height, and area are also intrinsic in the sense of being properties of everything I care to call a square; but I additionally know that the length and width of a square are intrinsically equal. Thus, the tallness of a square is exactly 1, as an intrinsically unvarying property. Given this as background, I know that I need only one of the three “variable but intrinsic properties” to exactly specify the other two, which has implications for any measurements of actual square objects that I make with rulers and felt.
Getting more advanced, I know that I can use these properties in pragmatic ways. For example, if I’m trying to build a square out of lumber, I can measure the lengths of wood to be as equal as possible, cut them, and connect them with glue or nails with angles as close to 90 degrees as I can manage, and then I can check the quality of my work by measuring the two diagonals from one corner to another because these are “intrinsically equal” in euclidean squares and the closer the diagonal measurements are to each other the more I can consider my lumber construct to be “like a euclidean square” for other purposes (such as serving as the face of a cube). The diagonals aren’t a perfect proxy (because if my construct is grossly non-planar the diagonals could be perfectly equal even as my construct was not square-like) but they are useful.
Perhaps you could talk about how the properties of euclidean rectangles and squares relate to the properties of “indeterminate rectangles and squares”, and how the status of their properties as “intrinsic” and/or “varying” would relate to issues of measurement and construction in the presence of indeterminacy?
I will try to get across what I mean by calling states of consciousness “intrinsic”, “objectively existing”, and so forth; by describing what it would mean for them to not have these attributes.
It would mean that you only exist by convention or by definition. It would mean that there is no definite fact about whether your life is part of reality. It wouldn’t just be that some models of reality acknowledge your existence and others don’t; it would mean that you are nothing more than a fuzzy heuristic concept in someone else’s model, and that if they switched models, you would no longer exist even in that limited sense.
I would like to think that you personally have a robust enough sense of your own reality to decisively reject such propositions. But by now, nothing would surprise me, coming from a materialist. It’s been amply demonstrated that people can be willing to profess disbelief in anything and everything, if they think that’s the price of believing in science. So I won’t presume that you believe that you exist, I’ll just hope that you do, because if you don’t, it will be hard to have a sensible conversation about these topics.
But… if you do agree that you definitely exist, independently of any “model” that actual or hypothetical observers have, then it’s a short step to saying that you must also have some of your properties intrinsically, rather than through model-dependent attribution. The alternative would be to say that you exist, you’re a “thing”, but not any particular thing; which is the sort of untenable objective vagueness that I was talking about.
The concept of an intrinsic property is arising somewhat differently here, than it does in your discussion of squares and rectangles. The idealized geometrical figures have their intrinsic properties by definition, or by logical implication from the definition. But I can say that you have intrinsic properties, not by definition (or not just by definition), but because you exist, and to be is to be something. (Also known as the “law of identity”.) It would make no sense to say that you are real, but otherwise devoid of ontological definiteness.
For exactly the same reason, it would make no sense to have a fundamentally vague “physical theory of you”. Here I want to define “you” as narrowly as possible—this you, in this world, even just in this moment if necessary. I don’t want the identity issues of a broadly defined “you” to interfere. I hope we have agreed that you-here-now exist, that you exist objectively, that you must have some identifying or individuating properties which are also held objectively and intrinsically; the properties which make you what you are.
If we are going to be ontological materialists about you-here-now, and we are also going to acknowledge you-here-now as completely and independently real, then there also can’t be any vagueness or arbitrariness about which physical object is you-here-now. For every particle—if we have particles in our physical ontology—either it is definitely a part of you-here-now, or it definitely isn’t.
At this point I’m already departing radically from the standard materialist account of personhood, which would say that we can be vague about whether a few atoms are a part of you or not. The reason we can’t do that, is precisely the objectivity of your existence. If you are an objectively existing entity, I can’t at the same time say that you are an entity whose boundaries aren’t objectively defined. For some broader notion, like “your body”, sure, we can be vague about where its boundaries are. But there has to be a core notion of what you are that is correct, exact, fully objective; and the partially objective definitions of “you” come from watering down this core notion by adding inessential extra properties.
Now let’s contrast this situation with the piece of lumber that is close to being a square but isn’t a perfect square. My arguments against fundamental vagueness are not about insisting that the piece of lumber is a perfect square. I am merely insisting that it is what it is, and whatever it is, it is that, exactly and definitely.
The main difference between “you-here-now” and the piece of lumber, is that we don’t have the same reason to think that the lumber has a hard ontological core. It’s an aggregate of atoms, electrons will be streaming off it, and there will be some arbitrariness about when such an electron stops being “part of the lumber”. To find indisputably objective physical facts in this situation, you probably need to talk in terms of immediate relations between elementary particles.
The evidence for a hard core in you-here-now is primarily phenomenological and secondarily logical. The phenomenological evidence is what we call the unity of experience: what’s happening to you in any moment is a gestalt; it’s one thing happening to one person. Your experience of the world may have fuzzy edges to it, but it’s still a whole and hence objectively a unity. The logical “evidence” is just the incoherence of supposing there can be a phenomenological unity without there being an ontological unity at any level. This experiential whole may have parts, but you can’t use the existence of the parts to then turn around and deny the existence of the whole.
The evidence for an ontological hard core to you-here-now does not come from physics. Physically the brain looks like it should be just like the piece of lumber, an aggregate of very many very small things. This presumption is obviously why materialists often end up regarding their own existence as something less than objective, or why the search for a microphysically exact theory of the self sounds like a mistake. Instead we are to be content with the approximations of functionalism, because that’s the most you could hope to do with such an entity.
I hope it’s now very clear where I’m coming from. The phenomenological and ontological arguments for a “hard core” to the self are enough to override any counterargument from physics. They tell us that a mesoscopic theory of what’s going on, like functionalism, is at best incomplete; it cannot be the final word. The task is to understand the conscious brain as a biophysical system, in terms of a physical ontology that can contain “real selves”. And fortunately, it’s no longer the 19th century: we have quantum mechanics, and the ingredients for something more sophisticated than classical atomism.
I’m going back and forth on whether to tap out here. On the one hand I feel like I’m making progress in understanding your perspective. On the other hand the progress is clarifying that it would take a large amount of time and energy to derive a vocabulary to converse in a mutually transparent way about material truth claims in this area. It had not occurred to me that pulling on the word “intrinsic” would flip the conversation into a solipsistic zone by way of Cartesian skepticism. Ooof.
Perhaps we could schedule a few hours of IM or IRC to try a bit of very low latency mutual vocabulary development, and then maybe post the logs back here for posterity (raw or edited) if that seems worthwhile to us. (See private message for logistics.) If you want to stick to public essays I recommend taking things up with Tyrrell; he’s a more careful thinker than I am and I generally agree with what he says. He noticed and extended a more generous and more interesting parsing of your claims than I did when I thought you were trying to make a pigeonhole argument in favor of magical entities, and he seems to be interested. Either public essays with Tyrrell, IM with me, or both, or neither… as you like :-)
(And/or Steve of course, but he generally requires a lot of unpacking, and I frequently only really understand why his concepts were better starting places than my own between 6 and 18 months after talking with him.)
Or in a cascade of your own successive models, including of the cascade.
Or an incentive to keep using that model rather than to switch to another one. The models are made up, but the incentives are real. (To whatever extent the thing subject to the incentives is.)
Not that I’m agreeing, but some clever ways to formulate almost your objection could be built around the wording “The mind is in the mind, not in reality”.
Crap. I had not thought of quines in reference to simulationist metaphysics before.
I have some sympathy for the view that my-here-now qualia are determinate and objective. But I don’t see why that implies that there must be a determinate, objective, unique collection of particles that is experiencing the qualia. Why not say that there are various different boundaries that I could draw, but, no matter which of these boundaries I draw, the qualia being experienced by the contained system of particles would be the same? For example, adding or removing the table in front of me doesn’t change the qualia experienced by the system.
(Here I am supposing that I can map the relevant physical systems to qualia in the manner that I describe in this comment.)
My subjective conscious experience seems no more exact a thing to me than my experience of distinctions of colours. States of consciousness seem to be a continuous space, and there isn’t even a hard boundary (again, as I perceive things subjectively) between what is conscious and what is not.
But perhaps people vary in this; perhaps it is different for you?