It’s easy enough for us (leaving aside edge cases about animals, the unborn, and the brain dead, which in fact people find hard, or at least persistently disagree on). How do we do it? By any means other than our ordinary senses?
I would argue that humans are not very good at this, if by “good” you mean a high success rate and a low false positive rate for detecting consciousness. It seems to me that the only reason we have a high success rate for detecting consciousness is that our false positive rate is also high (e.g. religion, ghosts, fear of the dark, etc.).
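To make what I mean by “good” concrete, here is a toy calculation with made-up numbers (purely illustrative, not data): a detector that attributes minds to almost everything rarely misses a real mind, but only because it also fires on ghosts and shadows.

```python
# Toy illustration with invented numbers: a trigger-happy "consciousness
# detector" gets a high hit rate only because its false positive rate is
# also high.

def rates(tp, fn, fp, tn):
    """Return (sensitivity, false_positive_rate) from a 2x2 outcome table."""
    sensitivity = tp / (tp + fn)
    false_positive_rate = fp / (fp + tn)
    return sensitivity, false_positive_rate

# Human-like intuition: almost never misses a real mind, but also sees minds
# in religion, ghosts, the dark, etc.
print(rates(tp=99, fn=1, fp=40, tn=60))   # (0.99, 0.4)

# A stricter detector: far fewer false positives, but it now misses minds.
print(rates(tp=80, fn=20, fp=5, tn=95))   # (0.8, 0.05)
```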
We have evolved moral intuitions, such as empathy and compassion, that underlie what we consider right or wrong. These intuitions only work because we consciously internalize another agent’s subjective experience and identify with it. In other words, without the various qualia we experience, we would have no foundation for acting ethically. An unconscious AI that does not experience these qualia could, in theory, act the way we think it should by mimicking behaviors from a repertoire of rules (and ways to create further rules) that we give it, but that is a brittle and complicated route, and it is the route SIAI has been taking because they have discounted qualia, which is what this post is really about.
A human being does it by presuming that the observed similarities between themselves and the other humans around them extend to the common possession of inner states. You could design an AI to employ a similar heuristic, though perhaps it would be pattern-matching against a designated model human rather than against itself. But the edge cases show that you need better heuristics than that, and in any case one would expect the AI to seek consistency between its ontology of agents worth caring about and its overall ontology, which will lead it down one of the forking paths in philosophy of mind. If it arrives at the wrong terminus…
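A minimal sketch of the kind of heuristic I mean, with invented feature names and an arbitrary threshold (illustration only, not a proposed design): compare the observed agent against a designated model human and attribute inner states above a cutoff. The edge cases are exactly the ones that land near the cutoff, where a rule like this stops being informative.

```python
# Sketch of a similarity-based "presume inner states" heuristic. The feature
# encoding and threshold are hypothetical; the point is only the shape of
# the rule.
import math

def cosine_similarity(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def presumed_to_have_inner_states(agent: dict, model_human: dict,
                                  threshold: float = 0.8) -> bool:
    return cosine_similarity(agent, model_human) >= threshold

model_human = {"language": 1.0, "facial_expression": 1.0, "self_report": 1.0}
dog = {"language": 0.1, "facial_expression": 0.7, "self_report": 0.0}

print(presumed_to_have_inner_states(dog, model_human))  # False under this arbitrary cutoff
```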
I don’t see how this is different from having the AI recognise a teacup. We don’t actually know how we do it. That’s why it is difficult to make a machine do it. We also don’t know how we recognise people. “Presuming that observed similarities etc.” isn’t a useful description of how we do it, and I don’t think any amount of introspection about our experience of doing it will help, any more than that sort of thinking has helped to develop machine vision, or indeed any of the modest successes that AI has had.
Firstly, I honestly don’t see how you came to the conclusion that the qualia you and I (as far as you know) experience are not part of a computational process. It doesn’t seem to be a belief that makes testable predictions.
Since the qualia of others are not accessible to you, you can’t know that any particular arrangement of matter and information doesn’t have them, including people, plants, and computers. You also cannot know whether my qualia feel anything like your own when subjected to the same stimuli. If you have any reason to believe they do (for your model of empathy to make sense), what reason do you have to believe it is due to something non-computable?
It seems intuitively appealing that someone who is kind to you feels similarly to you and is therefore similar to you. It helps you like them, and reciprocate the kindness, which has advantages of its own. But ultimately, your experience of another’s kindness is about the consequences to you, not their intentions or mental model of you.
If a computer with unknowable computational qualia is successfully kind to me, I’ll take that over a human with unknowable differently-computational qualia doing what they think would be best for me and fucking it up because they aren’t very good at evaluating the possible consequences.
Qualia are part of some sort of causal process. If that process is cognition, maybe it deserves the name of a computational process. It certainly ought to be a computable process, in the sense that it could be simulated by a computer.
My position is that a world described in terms of purely physical properties or purely computational properties does not contain qualia. Such a description itself would contain no reference to qualia. The various attempts of materialist philosophers of mind to define qualia solely in terms of physical or computational properties do not work. The physical and computational descriptions are black-box descriptions of “things with states”, and you need to go into more detail about those states in order to be talking about qualia. Those more detailed descriptions will contain terms whose meaning one can only know by having been conscious and thereby being familiar with the relevant phenomenological realities, like pain. Otherwise, these terms will just be formal properties p, q, r… known only by how they enter into causal relations.
Moving one step up in controversialness: I also don’t believe that a computational simulation of qualia will itself produce qualia. This is because the current theories about the physical correlates of conscious states already require an implausible sort of correspondence between mesoscopic “functional” states (defined e.g. by the motions of large numbers of ions) and the elementary qualia which together make up an overall state of consciousness. The theory that any good enough simulation of this will also have qualia requires that the correspondence be extended in ways that no one anywhere can specify (see the debates about simulations running on giant look-up tables, or the “dust theory” of simulations whose sequential conscious states are scattered across the multiverse, causally disconnected in space and time).
The whole situation looks intellectually pathological to me, and it’s a lot simpler to suppose that you get a detailed conscious experience, a complex-of-qualia, if and only if a specific sort of physical entity is realized. One state of that entity causes the next state, so the qualia have causal consequences and a computational model of the entity could exist, but the computational model of the entity is not an instance of the entity itself. I have voiced ideas about a hypothetical locus of quantum entanglement in the brain as the conscious entity. That idea may be right or wrong. It is logically independent of the claims that standard theories of consciousness are implausible, and that you can’t define consciousness just in terms of physics or computation.
How is that simpler? If there is a theory that qualia can only occur in a specific sort of physical entity, then that theory must delineate all the complicated boundary conditions and exceptions explaining why similar processes in entities that differ in various ways don’t count as giving rise to qualia.
It must be simpler to suppose that qualia are informational processes that have certain (currently unknown) mathematical properties.
When you can identify and measure qualia in a person’s brain and truly understand what they are, THEN you can say whether they can or can’t happen on a semiconductor and WHY. Until then, words are wind.
Physically, an “informational process” involves bulk movements of microphysical entities, like electrons within a transistor or ions across a cell membrane.
So let’s suppose that we want to know the physical conditions under which a particular quale occurs in a human being (something like a flash of red in your visual field), and that the physical correlate is some bulk molecular process, where N copies of a particular biomolecule participate. And let’s say that we’re confident that the quale does not occur when N=0 or 1, and that it does occur when N=1000.
All I have to do is ask: for what magic value of N does the quale start happening? People characteristically evade such questions; they wave their hands and say that it doesn’t matter, that there doesn’t have to be a definite answer to that question. (Just as most MWI advocates do when asked exactly when you go from having one world to two worlds.)
But let’s suppose we have 1000 people, numbered from 1 to 1000, and in each one the potentially quale-inducing process is occurring, with as many copies of the biomolecule participating as the person’s number. We can say that person number 1 definitely doesn’t have the quale and person number 1000 definitely does, but what about the people in between? The handwaving non-answer, “there is no definite threshold”, means that for people in the middle, with maybe 234 or 569 molecules taking part, the answer to the question “Are they having this experience or not?” is “none of the above”. There is supposed to be no exact fact about whether they have that flash of red or not.
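To state the dilemma mechanically (a toy sketch; the molecule counts come from the thought experiment above, and the cutoff is arbitrary, which is the point): either the theory commits to a sharp threshold, in which case some particular N is “magic” for no reason the theory supplies, or it declines to answer in the middle, which is the “none of the above” position I am criticising.

```python
# Toy restatement of the dilemma for a bulk-process theory of the quale.
# Counts are from the thought experiment above; the cutoff of 500 is
# arbitrary, which is the point.

def has_quale_sharp(n_molecules: int, cutoff: int = 500) -> bool:
    # Option 1: commit to a sharp threshold. Then 499 vs. 500 molecules is
    # the difference between no experience and the flash of red, and nothing
    # in the theory says why the magic number is 500 rather than 499 or 501.
    return n_molecules >= cutoff

def has_quale_vague(n_molecules: int):
    # Option 2: refuse to answer in the middle. Then for person 234 or 569
    # the question "are they having this experience or not?" gets neither
    # yes nor no.
    if n_molecules <= 1:
        return False
    if n_molecules >= 1000:
        return True
    return None  # "no exact fact of the matter"

print(has_quale_sharp(569), has_quale_vague(569))  # True None
```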
There is absolutely no reason to take that seriously as an intellectual position about the nature of qualia. It’s actually a reductio ad absurdum of a commonly held view.
The counterargument might be made: what about electrons in a transistor? There doesn’t have to be an exact answer to the question of how many electrons are enough for the transistor to really be in the “1” state rather than the “0” state. But the reason there doesn’t have to be an exact answer is that we only care about the transistor’s behavior, and then only its behavior under conditions that the device might encounter during its operational life. If under most circumstances there are only 0 electrons or 1000 electrons present, and if those numbers reliably produce “0 behavior” or “1 behavior” from the transistor, then that is enough for the computer to perform its function as a computational device. Maybe a transistor with 569 electrons is in an unstable state that functionally is neither definitely 0 nor definitely 1, but if those conditions almost never come up in the operation of the device, that’s OK.
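The same kind of sketch shows why that vagueness is harmless for the transistor (electron counts illustrative only, not real device physics): the undefined middle band is tolerated because the circuit is engineered never to be read while it sits there.

```python
# Functional pragmatism for a transistor: an undefined band between "0" and
# "1" is acceptable because normal operation never reads the device there.
# The electron counts are illustrative only.

def logic_level(n_electrons: int):
    if n_electrons < 100:
        return 0       # reliably behaves as 0
    if n_electrons > 900:
        return 1       # reliably behaves as 1
    return None        # unstable, neither definitely 0 nor definitely 1

# In operation the device only visits the well-defined ends of the range,
# so the undefined band never matters to the computation.
for n in (0, 1000, 0, 1000):
    assert logic_level(n) in (0, 1)
```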
With any theory about the presence of qualia, we do not have the luxury of this escape via functional pragmatism. A theory about the presence of qualia needs to have definite implications for every physically possible state—it needs to say whether the qualia are present or not in that state—or else we end up with situations as in the reductio, where we have people who allegedly neither have the quale nor don’t have the quale.
This argument is simple and important enough that it deserves to have a name, but I’ve never seen it in the philosophy literature. So I’ll call it the sorites problem for functionalist theories of qualia. Any materialist theory of qualia which identifies them with bulk microphysical processes faces this sorites problem.
Why?
I don’t seem to experience qualia as all-or-nothing. I doubt that you do either. I don’t see a problem with the amount of qualia experienced being a real number between 0 and 1 in response to varying stimuli of pain or redness.
Therefore I don’t see a problem with qualia being measurable on a similar scale across different informational processes with more or fewer neurons or other computing elements involved in the structure that generates them.
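As a toy sketch of what I mean (the curve and its parameters are arbitrary, chosen purely for illustration): the intensity of the experience could vary smoothly with the number of participating elements, rather than switching on at a threshold, so the people in the middle of the earlier example simply have fainter versions of the experience.

```python
# A graded picture of quale "intensity": a smooth function of the number of
# participating elements, not an on/off threshold. The logistic curve and
# its parameters are arbitrary illustrations, not a proposed law.
import math

def quale_intensity(n_elements: int, midpoint: float = 500.0,
                    steepness: float = 0.01) -> float:
    """Intensity in [0, 1], rising smoothly with n_elements."""
    return 1.0 / (1.0 + math.exp(-steepness * (n_elements - midpoint)))

for n in (1, 234, 569, 1000):
    print(n, round(quale_intensity(n), 3))
```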
Do you think that there is a slightly different quale for each difference in the physical state, no matter how minute that physical difference is?
I don’t know. But I don’t think so, not in the sense that it would feel like a different kind of experience. More or less intense, more definite or more ambiguous perhaps. And of course there could always be differences too small to be noticeable.
As a wild guess based on no evidence, I suppose that different kinds of qualia have different functions (in the sense of uses, not mathematical mappings) in a consciousness, and equivalent functions can be performed by different structures and processes.
I am aware of qualia (or they wouldn’t be qualia), but I am not aware of the mechanism by which they are generated, so I have no reason to believe that mechanism could not be implemented differently and still have the same outputs, and feel the same to me.
I have just expanded on the argument that any mapping between “physics” and “phenomenology” must fundamentally be an exact one. This does not mean that a proposed mapping that is inexact by microphysical standards is necessarily false; it just means that it is necessarily incomplete.
The argument for exactness still goes through even if you allow for gradations of experience. For any individual gradation, it’s still true that it is what it is, and that’s enough to imply that the fundamental mapping must be exact, because the alternative would lead to incoherent statements like “an exact physical configuration has a state of consciousness associated with it, but not a particular state of consciousness”.
The requirement that any “law” of psychophysical correspondence must be microphysically exact in its complete form, including for physical configurations that we would otherwise regard as edge cases, is problematic for conventional functionalism, precisely because conventional functionalism adopts the practical rough-and-ready philosophy used by circuit designers. Circuit designers don’t care whether states intermediate between “definitely 0” and “definitely 1” are really 0, 1, or neither; they just want to make sure that these states don’t show up during the operation of their machine, because functionally they are unpredictable; that is why their semantics would be unclear.
Scientists and ontologists of consciousness have no such option, because the principle of ontological non-vagueness (mentioned in the other comment) applies to consciousness. Consciousness objectively exists; it is not just a useful heuristic concept, and so any theory of how it relates to physics must admit of a similarly objective completion. That means there must be a specific answer to the question of exactly what state(s) of consciousness, if any, are present in a given physical configuration, and there must be such an answer for every possible physical configuration.
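Stated compactly (the symbols are just shorthand for the claim, not a new theory): let P be the set of all physically possible configurations, let Q be the set of possible states of consciousness (Q may well be a continuum, so gradations of experience are allowed), and let ∅ stand for “no consciousness present”. The claim is that the psychophysical mapping Ψ : P → Q ∪ {∅} must be a total function: it assigns some definite value, possibly ∅, to every configuration in P, with no third option of “indeterminate”.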
But the usual attitude of functionalists is that they can be fuzzy about microscopic details; that there is no need, not even in principle, for their ideas to possess a microphysically exact completion.
In these monad theories that I push, the “Cartesian theater”, where consciousness comes together into a unitary experience, is defined by a set of exact microphysical properties, e.g. a set of topological quantum numbers (a somewhat arbitrary example, but I need to give an example). For a theory like that, the principle associating physical and phenomenological states could be both functional and exact, but that’s not the sort of theory that today’s functionalists are discussing.
What predictions does your theory make?
The idea, more or less, is that there is a big ball of quantum entanglement somewhere in the brain, and that’s the locus of consciousness. It might involve phonons in the microfilaments, anyons in the microtubules, both or neither of these; it’s presumably tissue-specific, involving particular cell types where the relevant structures are optimized for this role; and it must be causally relevant for conscious cognition, which should do something to pin down its anatomical location.
You could say that one major prediction is just that there will be such a thing as respectable quantum neurobiology and cognitive quantum neuroscience. From a quantum-physical and condensed-matter perspective, biomolecules and cells are highly nontrivial objects. By now “quantum biology” has a long history, and it’s a topic that is beloved of thinkers who are, shall we say, more poetic than scientific, but we’re still at the very beginning of that subject.
We basically know nothing about the dynamics of quantum coherence and decoherence in living matter. It’s not something that’s easily measured, and the handful of models that have been employed to calculate these dynamics are “spherical cow” models; they’re radically oversimplified for the sake of calculability, and just a first step into the unknown.
What I write on this subject is speculative, and it’s idiosyncratic even when compared to “well-known” forms of quantum-mind discourse. I am more interested in establishing the possibility of a genuinely alternative view, and in highlighting implausibilities of the conventional view that go unnoticed, or that are tolerated because the conventional picture of the brain appears to require them.
If this is an argument whose premise is that a purely physical or computational description of the world “itself would contain no reference to qualia”, it’s a non sequitur. I can give you a description of the 1000 brightest objects in the night sky without mentioning the Evening Star; but that does not mean that the night sky lacked the Evening Star or that my description was incomplete.
The rest of the paragraph covers the case of indirect reference to qualia. It’s sketchy because I was outlining an argument rather than making it, if you know what I mean. I had to convey that this is not about “non-computability”.