Consciousness does seem different in that we can gain a better and better understanding of all the various functional elements, yet we’re:
1) left with a sort of argument from analogy for others having qualia, and
2) even if we can resolve (1), I can’t see how we can begin to know whether my green is your red, etc.
I can’t think of many comparable cases: certainly I don’t think containership is comparable. You and I could end up looking at the AI in the moment before it destroys the world, idealises it, or both, and say ‘gosh, I wonder if it’s conscious’. This is nothing like the casuistic ‘but what about this container gives it its containerness?’ I think we’re on the same point here, though?
I’m intuitively very confident you’re conscious: and yes, seeing that you were human would help, in that one of the easiest ways I can imagine you weren’t conscious is that you’re actually a computer designed to post about things on Less Wrong. This would also explain why you like Dennett; I’ve always suspected he’s a qualia-less robot too! ;-)
Yes, I agree that we’re much more confused about subjective experience than we are about containership.
We’re also more confused about subjective experience than we are about natural language, about solving math problems, and about several other aspects of cognition. We’re not _un_confused about those things, but we’re less confused than we used to be. I expect us to grow still less confused over time.
I disagree about the lack of comparable cases. I agree about containers; that’s just an intuition pump. But the issues that concern you here arise for any theoretical construct for which we have only indirect evidence. The history of science is full of such things. Electrons. Black holes. Many worlds. Fibromyalgia. Phlogiston. Etc.
What makes subjective experience different is not that we lack the ability to perceive it directly; that’s pretty common. What makes it different is that we can perceive it directly in one case, as opposed to the other stuff where we perceive it directly in zero cases.
Of course, it’s also different from many of them in that it matters to our moral reasoning in many cases. I can’t think of a moral decision that depends on whether phlogiston exists, but I can easily think of a moral decision that depends on whether cows have subjective experiences. OTOH, it still isn’t unique; some people make moral decisions that depend on the actuality of theoretical constructs like many worlds and PTSD.
Fair enough. As an intuition pump, for me at least, it’s unhelpful: it gave the impression that you thought that consciousness was merely a label being mistaken for a thing (like ‘life’ as something beyond its parts).
Only having indirect evidence isn’t the problem. For a black hole, I care about the observable functional parts. I wouldn’t be sucked towards it, getting crushed, while going ‘but is it really a black hole?’ A black hole is like a container here: what matter are the functional bits that make it up. For consciousness, I care whether a robot can reason and display conscious-type behaviour, but I also care whether it can experience and feel.
Many worlds could be comparable if there were evidence implying that there are ‘many worlds’ while people remained vague about whether those worlds actually exist. And you’re right, this is also a potentially morally relevant point.
Insofar as people infer from the fact of subjective experience that there is some essence of subjective experience that is, as you say, “beyond its parts” (and their patterns of interaction), I do in fact think they are mistaking a label for a thing.
I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and still be left with only an argument from analogy: “they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky”.
You can observe all the externally measurable things that a black hole or container can do, and then, if someone argues about essences, you wonder if they’re actually referring to anything: it’s a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict its behaviour for all useful purposes, and still not know whether it’s conscious. This is bothersome. But it’s not necessarily to do with essences.
Insofar as people don’t infer something else, beyond the parts of (for example) my body and their patterns of interaction, that accounts for (for example) my subjective experience, I don’t think they are mistaking a label for a thing.
Well, until we know how to identify whether something or someone is conscious, it’s all a bit of a mystery: I couldn’t rule out consciousness being some additional thing. I’m inclined to rule it out because it seems unparsimonious, but that’s it.