I’m splitting up my response to this into several pieces because it got long. Some other stuff:
Presumably a consequence that itself has consequences: experiences can be used in causal explanations?
I expect so, sure. For example, I report having experiences; one explanation of that (though hardly the only possible one) starts with my actually having experiences and progresses forward in a causal fashion.
But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat.
Sure, there are many causal explanations of many phenomena, including but not limited to how bats use echolocation, that don’t posit subjective experience as part of their causal chain. For example, humans do all kinds of things without the subjective experience of doing them.
And that we could determine how well people distinguish wavelengths of light, etc., without knowing what the colour looks like to them.
Certainly.
It seems subjective experience is just being ignored
In the examples you give, yes, it is being ignored. So? Lots of things are being ignored in those examples… mass, electrical conductivity, street address, level of fluency in Russian, etc. If these things aren’t necessary to explain the examples, there’s nothing wrong with ignoring them.
On the other hand, if we look at an example for which experience ought to be part of the causal chain (for example, as I note above, reporting having those experiences), subjective experience is not ignored. X happens; as a consequence of X, a subjective experience Y arises; as a consequence of Y, a report Z arises; and so forth. (Of course, for some reports we do have explanations that don’t presume Y… e.g., confabulation, automatic writing, etc. But that needn’t be true for all reports. Indeed, it would be surprising if it were.)
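If it helps, here’s a purely illustrative toy sketch (in Python; entirely my own construction, not a claim about how minds actually work) of those two kinds of causal chain: one where the report is a consequence of an experience, and one where it isn’t:

```python
# Toy model of two causal chains ending in the same report Z:
# chain 1 routes through a subjective-experience step Y; chain 2
# (cf. confabulation) skips it. All names here are illustrative.

def stimulus() -> str:
    return "X"                      # some event X happens

def experience(x: str) -> str:
    return f"Y(experience of {x})"  # a subjective experience Y arises from X

def report(cause: str) -> str:
    return "Z: 'I saw something'"   # a verbal report Z arises from its cause

# Chain 1: X -> Y -> Z (the report is a consequence of the experience)
z1 = report(experience(stimulus()))

# Chain 2: X -> Z (the report arises without the experience step)
z2 = report(stimulus())

print(z1 == z2)  # True: the report alone doesn't reveal which chain produced it
```

The point of the toy is only structural: the same Z can sit at the end of either chain, which is why explanations that skip Y are available for some reports without thereby being right for all of them.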
“But we don’t know what Xes give rise to the Y of subjective experience, so we don’t fully understand subjective experience!” Well, yes, that’s true. We don’t fully understand fluency in Russian, either. But we don’t go around as a consequence positing some mysterious essence of Russian fluency that resists neurobiological explanation… though two centuries ago, we might have done so. Nor should we. Neither should we posit some mysterious essence of subjective experience.
“But subjective experience is different! I can imagine what a mechanical explanation of Russian fluency would be like, but I can’t imagine what a mechanical explanation of subjective experience would be like.” Sure, I understand that. Two centuries ago, the notion of a mechanical explanation of Russian fluency would have raised similar incredulity… how could a machine speak Russian? I’m not sure how I could go about answering such incredulity convincingly, but I don’t thereby conclude that machines can’t speak Russian. My incredulity may be resistant to my reason, but it doesn’t therefore compel or override my reason.
I have a lot of sympathy for this. The most plausible position for reductive materialism is simply that at some future scientific point this will become clear. But that is inevitably a statement of faith rather than an acknowledgement of current achievement. It’s very hard to compare current apparent mysteries to solved mysteries—I do get that. Having said that, I can’t even see what the steps on the way to explaining consciousness would be, and claiming there is no such thing seems not to be an option (unlike ‘life’, ‘free will’, etc.), whereas in most other cases you rely on saying that you can’t see how the full extent could be achieved: a machine might speak crap Russian in some circumstances, etc.
Also, if a machine can speak Russian, you can check that. I don’t know how we’d check a machine was conscious.
BTW, when I said ‘it seems subjective experience is just being ignored’, I meant ignored in your and Dennett’s arguments, not in specific explanations. I have nothing against analysing things in ways that ignore consciousness, if they work.
I don’t know what the mechanical explanation would look like, either. But I’m sufficiently aware of how ignorant my counterparts two centuries ago would have been of what a mechanical explanation for speaking Russian would look like that I don’t place too much significance on my ignorance.
I agree that testing whether a system is conscious or not is a tricky problem. (This doesn’t just apply to artificial systems.)
Indeed: though artificial systems are more intuitively difficult, as we don’t have as clear an intuitive expectation about them.
You can take an outside view and say ‘this will dissolve like the other mysteries’. I just genuinely find this implausible, if only because you can take steps towards the other mysteries (speaking bad Russian occasionally) and because you have a clear empirical standard (Russians). Whereas for consciousness I don’t have any standard for identifying another’s consciousness: I do it only by analogy with myself, and by the implausibility of my having an apparently causal element that others who act similarly to me lack.
I agree that the “consciousness-detector” problem is a hard problem. I just can’t think of a better answer than the generalizing-from-commonalities strategy I discussed previously, so that’s the approach I go with. It seems capable of making progress for now.
And I understand that you find it implausible. That said, I suspect that if we solve the “soft” problem of consciousness well enough that a typical human is inclined to treat an artificial system as though it were conscious, it will start to seem more plausible.
Perhaps it will be plausible and incorrect, and we will happily go along treating computers as conscious when they are no such thing. Perhaps we’re currently going along treating dogs and monkeys and 90% of humans as conscious when they are no such thing.
Perhaps not.
Either way, plausibility (or the absence of it) doesn’t really tell us much.
Yes. This is what worries me: I can see more advances making everyone sure that computers are conscious, but my suspicion is that this will not be logical. Take the same processor, and I suspect the chances of it being seen as conscious will rise sharply if it’s put in a moving machine, rise sharply again for a humanoid body, again for a face and voice, and again for one physically indistinguishable from a human.
The problem with generalising from commonalities is that I have precisely one direct example of consciousness. Having said that, I don’t find epiphenomenal accounts convincing, so it’s reasonable for me to think that, as my statements about qualia seem to follow causally from experiencing said qualia, other people don’t have a totally separate framework for their statements about qualia. I wouldn’t be that confident though, and it gets harder with artificial consciousness.
Sure. By the same token, if you take me, remove my ability to communicate, and encase me in an opaque cylinder, nobody will recognize me as a being with subjective experience. Or, for that matter, as a being with the ability to construct English sentences.
We are bounded intellects reasoning under uncertainty in a noisy environment. We will get stuff wrong. Sometimes it will be important stuff.
it’s reasonable for me to think that, as my statements about qualia seem to follow causally from experiencing said qualia, other people don’t have a totally separate framework for their statements about qualia.
I agree. And, as I said initially, I apply the same reasoning not only to the statements I make in English, but to all manner of behaviors that “seem to rise from my qualia,” as you put it… all of it is evidence in favor of other organisms also having subjective experience, even organisms that don’t speak English.
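To make that “evidence in favor” talk concrete, here’s a toy Bayesian sketch (Python; the features and likelihood ratios are invented for illustration, not measurements or anyone’s actual proposal) of the generalizing-from-commonalities strategy, where each shared feature multiplies the odds that another system has subjective experience:

```python
# Toy odds-form Bayesian updating: posterior odds = prior odds * likelihood ratio.
# Every number below is made up purely for illustration.

def update(prior_odds: float, likelihood_ratio: float) -> float:
    """One Bayesian update in odds form."""
    return prior_odds * likelihood_ratio

# Hypothetical likelihood ratios:
# P(feature | has subjective experience) / P(feature | lacks it)
shared_features = {
    "similar neuroanatomy": 5.0,
    "reports having experiences": 4.0,
    "pain/pleasure-consistent behavior": 3.0,
}

odds = 1.0  # start at even odds, again purely for illustration
for feature, lr in shared_features.items():
    odds = update(odds, lr)
    print(f"after observing {feature!r}: odds {odds:.0f}:1")

print(f"final credence: {odds / (1 + odds):.2f}")  # ~0.98 with these toy numbers
```

One nice feature of that framing is that it also says why artificial systems are harder: they share fewer of the relevant features with me, so the multipliers I can honestly justify are weaker.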
I wouldn’t be that confident though,
How confident are you that I possess subjective experience? Would that confidence rise significantly if we met in person and you verified that I have a typical human body?
Consciousness does seem different in that we can have a better and better understanding of all the various functional elements but that we’re
1) left with a sort of argument from analogy for others having qualia
2) even if we can resolve (1), I can’t see how we can start to know whether my green is your red, etc., etc.
I can’t think of many comparable cases: certainly I don’t think containership is comparable. You and I could end up looking at the AI in the moment before it destroys/idealises/both the world and say ‘gosh, I wonder if it’s conscious’. This is nothing like the casuistic ‘but what about this container gives it its containerness’. I think we’re on the same point here, though?
I’m intuitively very confident you’re conscious: and yes, seeing you were human would help (in that one of the easiest ways I can imagine you weren’t conscious is that you’re actually a computer designed to post about things on Less Wrong. This would also explain why you like Dennett—I’ve always suspected he’s a qualia-less robot too! ;-))
Yes, I agree that we’re much more confused about subjective experience than we are about containership.
We’re also more confused about subjective experience than we are about natural language, about solving math problems, about several other aspects of cognition. We’re not _un_confused about those things, but we’re less confused than we used to be. I expect us to grow still less confused over time.
I disagree about the lack of comparable cases. I agree about containers; that’s just an intuition pump. But the issues that concern you here arise for any theoretical construct for which we have only indirect evidence. The history of science is full of such things. Electrons. Black holes. Many worlds. Fibromyalgia. Phlogiston. Etc.
What makes subjective experience different is not that we lack the ability to perceive it directly; that’s pretty common. What makes it different is that we can perceive it directly in one case, as opposed to the other stuff where we perceive it directly in zero cases.
Of course, it’s also different from many of them in that it matters to our moral reasoning in many cases. I can’t think of a moral decision that depends on whether phlogiston exists, but I can easily think of a moral decision that depends on whether cows have subjective experiences. OTOH, it still isn’t unique; some people make moral decisions that depend on the actuality of theoretical constructs like many worlds and PTSD.
Fair enough. As an intuition pump, for me at least, it’s unhelpful: it gave the impression that you thought that consciousness was merely a label being mistaken for a thing (like ‘life’ as something beyond its parts).
Only having indirect evidence isn’t the problem. For a black hole, I care about the observable functional parts. I wouldn’t be being sucked towards it and being crushed while going ‘but is it really a black hole?’ A black hole is like a container here: what matter are the functional bits that make it up. For consciousness, I care if a robot can reason and can display conscious-type behaviour, but I also care if it can experience and feel.
Many worlds could be comparable if there were evidence implying that there are ‘many worlds’ while people remained vague about whether those worlds actually exist. And you’re right, this is also a potentially morally relevant point.
Insofar as people infer from the fact of subjective experience that there is some essence of subjective experience that is, as you say, “beyond its parts” (and their patterns of interaction), I do in fact think they are mistaking a label for a thing.
I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and be left with an argument from analogy to say “they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky”.
You can observe all the externally observable, measurable things that a black hole or container can do, and then if someone argues about essences you wonder if they’re actually referring to anything: it’s a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict it for all useful purposes, and still not know if it’s conscious. This is bothersome. But it’s not to do with essences, necessarily.
Insofar as people don’t infer something else, beyond the parts of (for example) my body and their pattern of interactions, that accounts for (for example) my subjective experience, I don’t think they are mistaking a label for a thing.
Well, until we know how to identify if something/someone is conscious, it’s all a bit of a mystery: I couldn’t rule out consciousness being some additional thing. I have an inclination to do so because it seems unparsimonious, but that’s it.