I’ve had that conversation with a few people over the years, and I conclude that it does for some people and not others. The ones for whom it doesn’t generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It’s not entirely clear to me what question they think he answers instead.)
That said, it’s a pretty fun read. If the subject interests you, I’d recommend sitting down and writing out as clearly as you can what it is you find mysterious about subjective experience, and then reading the book and seeing if it answers, or at least addresses, that question.
The ones for whom it doesn’t generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It’s not entirely clear to me what question they think he answers instead.)
He seems to answer the question of why humans feel and report that they are conscious; why, in fact, they are conscious. But I don’t know how to translate that into an explanation of why I am conscious.
The problem that many people (including myself) feel to be mysterious is qualia. I know indisputably that I have qualia, or subjective experience. But I have no idea why that is, or what that means, or even what it would really mean for things to be otherwise (other than a total lack of experience, as in death).
A perfect and complete explanation of the behavior of humans still doesn’t seem to bridge the gap from “objective” to “subjective” experience.
I don’t claim to understand the question. Understanding it would mean having some idea of what possible answers or explanations might be like, and how to judge whether they are right or wrong. And I have no idea. But what Dennett writes doesn’t seem to answer the question or dissolve it.
Here’s how I got rid of my gut feeling that qualia are both real and ineffable.
First, phrasing the problem:
Even David Chalmers thinks there are some things about qualia that are effable. Some of the structural properties of experience (for example, why colour qualia can be represented in a 3-dimensional space of hue, saturation, and brightness) might be explained by structural properties of light and the brain, and might be susceptible to third-party investigation.
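To make the structural half of that concession concrete, here is a minimal sketch in Python, using only the standard-library colorsys module; the sample colours and their names are made up for illustration. It shows the kind of thing that is effable: physical RGB intensities map, by a perfectly ordinary and third-party-inspectable function, into a 3-dimensional hue/saturation/value space.

```python
# A minimal sketch of the "structural" claim above: colour stimuli really can be
# laid out in a 3-dimensional space (hue, saturation, value), and the mapping
# from physical RGB intensities into that space is a plain, publicly
# inspectable function. Standard library only; the sample colours are invented.
import colorsys

samples = {
    "firetruck red": (1.0, 0.0, 0.0),
    "grass green":   (0.0, 0.8, 0.2),
    "sky blue":      (0.4, 0.7, 1.0),
}

for name, (r, g, b) in samples.items():
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name:14s} -> hue={h:.2f} saturation={s:.2f} value={v:.2f}")

# Note what this does and does not show: the structure of colour space is
# effable and objective; nothing here says anything about an alleged intrinsic
# "what it is like" of RED vs. GREEN, which is the residue discussed next.
```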
What he would call ineffable is the intrinsic properties of experience. With regards to colour-space, think of spectrum inversion. When we look at a firetruck, the quale I see is the one you would call “green” if you could access it, but since I learned my colour words by looking at firetrucks, I still call it “red”.
If you think this is coherent, you believe in ineffable qualia: even though our colour-spaces are structurally identical, the “atoms” of experience additionally have intrinsic natures (I’ll call these, e.g., RED and GREEN) which are non-causal and cannot be objectively discovered.
You can show that ineffable qualia (experiential intrinsic natures, independent of experiential structure) aren’t real by showing that spectrum inversion (changing the intrinsic natures, keeping the structure) is incoherent.
An attempt at a solution:
Take another experiential “spectrum”: pleasure vs. displeasure. Spectrum inversion is harder, I’d say impossible, to take seriously in this case. If someone seeks out P, tells everyone P is wonderful, laughs and smiles when P happens, and even herself believes (by means of mental representations or whatever) that P is pleasant, then it makes no sense to me to imagine P really “ultimately” being UNPLEASANT for her.
Anyway, if pleasure-displeasure can’t be noncausally inverted, then neither can colour-qualia. The three colour-space dimensions aren’t really all you need to represent colour experience. Colour experience doesn’t, and can’t, ever occur isolated from other cognition.
For example: seeing a lot of red puts monkeys on edge. So imagine putting a spectrum-inverted monkey in a (to us) red room, and another in a (to us) green room.
If the monkey in the green (to it, RED’) room gets antsy, or the monkey in the red (to it, GREEN’) room doesn’t, then that means the spectrum-inversion was causal and ineffable qualia don’t exist.
But if the monkey in the green room doesn’t get antsy, or the monkey in the red room does, then it hasn’t been a full spectrum inversion. RED’ without antsiness is not the same quale as RED with antsiness. If all the other experiential spectra remain uninverted, it might even look surprisingly like GREEN. But to make the inversion successfully, you’d have to flip all the other experiential spectra that connect with colour, including antsiness vs. serenity, and through that, pleasure vs. displeasure.
I’m not sure pleasure/pain is that useful, because
1) they have such an intuitive link to reaction/function
2) they might be meta-qualities: a similar sensation of pain can be strongly unpleasant, entirely tolerable or even enjoyable depending on other factors.
What you’ve done with colours is take what feels like a somewhat arbitrary/ineffable quale and declare it inextricably associated with one that has direct behavioural terms involved. Your talk of what’s required to ‘make the inversion successfully’ is misleading: what if the monkey has GREEN and antsiness rather than RED and antsiness?
It seems intuitive to assume ‘red’ and ‘green’ remain the same in normal conditions: but I’m left totally lost as to what ‘red’ would look like to a creature that could see a far wider or narrower spectrum than the one we can see. Or, for that matter, to someone with limited colour-blindness. There seems to me to be the Nagel ‘what is it like to be a bat’ problem here, and I’ve never understood how that dissolves.
It’s been a long time since I read Dennett, but I was in the camp of ‘not answering the question, while being fascinating around the edges and giving people who think qualia are straightforward pause for thought’. No-one’s ever been able to clearly explain how his arguments work to me, to the point that I suggest that either I or they are fundamentally missing something.
If the hard problem of consciousness has really been solved I’d really like to know!
Consider the following dialog:
A: “Why do containers contain their contents?”
B: “Well, because they are made out of impermeable materials arranged in such a fashion that there is no path between their contents and the rest of the universe.”
A: “Yes, of course, I know that, but why does that lead to containment?”
B: “I don’t quite understand. Are you asking what properties of materials make them impermeable, or what properties of shapes preclude paths between inside and outside? That can get a little technical, but basically it works like this—”
A: “No, no, I understand that stuff. I’ve been studying containment for years; I understand the simple problem of containment quite well. I’m asking about the hard problem of containment: how does containment arise from those merely mechanical things?”
B: “Huh? Those ‘merely mechanical things’ are just what containment is. If there’s no path X can take from inside Y to outside Y, X is contained by Y. What is left to explain?”
A: “That’s an admirable formulation of the hard problem of containment, but it doesn’t solve it.”
Would you expect that reply to convince A? Or would you just accept that A might go on believing that there’s something important and ineffable left to explain about containment, and there’s not much you can do about it? Or something else?
If you were a container, you would understand the wonderful feeling of containment, the insatiable longing to contain, the sweet anticipation of the content being loaded, the ultimate reason for containing and other incomparably wonderful and torturous qualia no non-container can enjoy. Not being one, all you can understand is the mechanics of containment, a pale shadow of the rich and true containing experience.
It is for A to state what the remaining problem actually is. And qualiaphiles can do that:
D: I can explain how conscious entities respond to their environments, process information and behave. What more is there?
C: How it all looks from the inside—the qualia.
That’s funny, David again and the other David arguing about the hard versus the “soft” problem of consciousness. Have you two lost your original?
I think A and B are sticking different terminology on a similar thing. A laments that the “real” problem hasn’t been solved, B points out that it has, to the extent that it can be solved. Yet in a way they tread common ground:
A believes there are aspects of the problem of con(tainment|sciousness) that didn’t get explained away by a “mechanistic” model.
B believes that a (probably reductionist) model suffices, “this configuration of matter/energy can be called ‘conscious’” is not fundamentally different from “this configuration of matter/energy can be called ‘a particle’”. If you’re content with such an explanation for the latter, why not the former? …
However, with many Bs I find that even a matter-of-fact workable definition of “these states correspond to consciousness” is used as a stop sign more than as a starting point.
Just as A insists that further questions exist, so should B, and many of those questions would be quite similar, to the point of practically dissolving the initial difference.
Off the top of my head: If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form? Is it just that the qualia we experience are modulated and processed by virtue of the relevant matter (brain) being in a state which can organize memories, reflect on its experiences etc.?
Anthropic considerations apply: Even if everything had a “value” for “subjective experience”, we would know only about our own, and probably only ascribe that property to similar ‘things’ (other humans or highly developed mammals). But is it just because those can reflect upon that property? Are waterfalls conscious, even if not sentient? “What an algorithm feels like on the inside”—any natural phenomenon is executing algorithms just the same as our neurons and glial cells do. Is it because we can ascribe correspondences between structure in our brain and external structures, i.e. models? We can find the same models within a waterfall, simply by finding another mapping function.
So is it the difference between us and a waterfall that enables the capacity for qualia, something to do with communication, memory, planning? It’s not clear why qualia should depend on something like “only things that can communicate can experience qualia”, for example. That sounds more like an anthropic concern: of course we can understand another human relating its qualia experience better than a waterfall could relate it, if the waterfall experienced anything at all. Occam’s Razor may prefer “everything can experience” to “only very special configurations of matter can experience”, keeping in mind that the internal structure of a waterfall is just as complex as a human brain.
It seems to me that A is better in tune with the many questions that remain, while B has more of an engineer mindset, a la “I can work with that, what more do I want?”. “Here be dragons” is what follows even the most dissolv-y explanation of qualia, and trying to stay out of those murky waters isn’t a reason to deny their existence.
I can no longer remember if there was actually an active David when I joined, or if I just picked the name on a lark. I frequently introduce myself in real life as “Dave—no, not that Dave, the other one.”
Sure, I agree that there may be systems that have subjective experience but do not manifest that subjective experience in any way we recognize or understand. Or, there may not.
In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don’t see any value to asking the question. If it makes you feel better if I don’t deny their existence, well, OK, I don’t deny their existence, but I really can’t see why anyone should care one way or the other.
In any case, I don’t agree that the B’s studying conscious experience fail to explore further questions. Quite the contrary, they’ve made some pretty impressive progress in the last five or six decades towards understanding just how the neurobiological substrate of conscious systems actually works. They simply don’t explore the particular questions you’re talking about here.
And it’s not clear to me that the A’s exploring those questions are accomplishing anything.
If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?
So, A asks “If containment is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?” How would you reply to A?
My response is something like “We know that certain configurations of physical objects give rise to containment. Sure, it’s not impossible that “unprocessed containment” exists in other systems, and we just haven’t ever noticed it, but why are you even asking that question?”
But I don’t think conscious experience (qualia if you like) has been explained. I think we have some pretty good explanations of how people act, but I don’t see how they pierce through to consciousness as experienced, or to linked questions such as ‘what is it like to be a bat?’ or ‘how do I know my green isn’t your red?’.
It would help if you could sum up the merely mechanical things that are ‘just what consciousness is’ in Dennett’s (or your!) sense. I’ve never been clear on what confident materialists are saying on this: I’m sometimes left with the impression that they’re denying that we have subjective experience, sometimes that they’re saying it’s somehow an inherent quality of other things, sometimes that it’s an incidental byproduct. All of these seem problematic to me.
It would help if you could sum up the merely mechanical things that are ‘just what consciousness is’ in Dennett’s (or your!) sense.
I don’t think it would, actually.
The merely mechanical things that are ‘just what consciousness is’ in Dennett’s sense are the “soft problem of consciousness” in Chalmers’ sense; I don’t expect any amount of summarizing or detailing the former to help anyone feel like the “hard problem of consciousness” has been addressed, any more than I expect any amount of explanation of materials science or topology to help A feel like the “hard problem of containment” has been addressed.
But, since you asked: I’m not denying that we have subjective experiences (nor do I believe Dennett is), and I am saying that those experiences are a consequence of our neurobiology (as I believe Dennett does). If you’re looking for more details of things like how certain patterns of photons trigger increased activation levels of certain neural structures, there are better people to ask than me, but I don’t think that’s what you’re looking for.
As for whether they are an inherent quality or an incidental byproduct of that neurobiology, I’m not sure I even understand the question. Is being a container an inherent quality of being composed of certain materials and having certain shape, or an incidental byproduct? How would I tell?
I may not remember Chalmers’ soft problem well enough for that reference to help!
If experiences are a consequence of our neurobiology, fine. Presumably a consequence that itself has consequences: experiences can be used in causal explanations? But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat. And that we could distinguish how well people distinguish wavelengths of light etc. without knowing what the colour looks like to them.
It seems subjective experience is just being ignored: we could identify that an AI could carry out all sorts of tasks that we associate with consciousness, but I have no idea when we’d say ‘it now has conscious experiences’. Or whether we’d talk about degrees of conscious experience, or whatever. This is obviously ethically quite important, if not that directly pertinent to me, and it bothers me that I can’t respond to it.
With a container, you describe various qualities and that leaves the question ‘can it contain things’: do things stay in it when put there. You’re adding a sort of purpose-based functional classification to a physical object. When we ask ‘is something conscious’, we’re not asking about a function that it can perform. On a similar note, I don’t think we’re trying to reify something (as with the case where we have a sense of objects having ongoing identity, which we then treat as a fundamental thing and end up asking if a ship is the same after you replace every component of it one by one). We’re not chasing some over-abstracted ideal of consciousness, we’re trying to explain an experienced reality.
So to answer A, I’d say ‘there is no fundamental property of ‘containment’. It’s just a word we use to describe one thing surrounded by another in circumstances X and Y. You’re over-idealising a useful functional concept’. The same is not true of consciousness, because it’s not (just) a function.
It might help if you could identify what, in light of a Dennett-type approach, we can identify as conscious or not. I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks…
I’m splitting up my response to this into several pieces because it got long. Some other stuff:
what, in light of a Dennett-type approach, we can identify as conscious or not.
The process isn’t anything special, but OK, since you ask.
Let’s assert for simplicity that “I” has a relatively straightforward and consistent referent, just to get us off the ground. Given that, I conclude that I am at least sometimes capable of subjective experience, because I’ve observed myself subjectively experiencing.
I further observe that my subjective experiences reliably and differentially predict certain behaviors. I do certain things when I experience pain, for example, and different things when I experience pleasure. When I observe other entities (E2) performing those behaviors, that’s evidence that they, too, experience pain and pleasure. Similar reasoning applies to other kinds of subjective experience.
I look for commonalities among E2 and I generalize across those commonalities. I notice certain biological structures are common to E2 and that when I manipulate those structures, I reliably and differentially get changes in the above-referenced behavior. Later, I observe additional entities (E3) that have similar structures; that’s evidence that E3 also demonstrates subjective experience, even though E3 doesn’t behave the way I do.
Later, I build an artificial structure (E4) and I observe that there are certain properties (P1) of E2 which, when I reproduce them in E4 without reproducing other properties (P2), reproduce the behavior of E2. I conclude that P1 is an important part of that behavior, and P2 is not.
I continue this process of observation and inference and continue to draw conclusions based on it. And at some point someone asks “is X conscious?” for various Xes:
I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks...
If I interpret “conscious” as meaning having subjective experience, then for each X I observe it carefully and look for the kinds of attributes I’ve associated with subjective experience… behaviors, anatomical structures, formal structures, etc… and compare it to my accumulated knowledge to make a decision.
Isn’t that how you answer such questions as well?
If not, then I’ll ask you the same question: what, in light of whatever non-Dennett-type approach you prefer, can we identify as conscious or not?
OK, well given that responses to pain/pleasure can equally be explained by more direct evolutionary reasons, I’m not sure that the inference from action to experience is very useful. Why would you ever connect these things with experience rather than other, more directly measurable things?
But the point is definitely not that I have a magic bullet or easy solution: it’s that I think there’s a real and urgent question—are they conscious—which I don’t see how information about responses etc. can answer. Compare to the cases of containment, or heat, or life—all the urgent questions are already resolved before those issues are even raised.
As I say, the best way I know of to answer “is it conscious?” about X is to compare X to other systems about which I have confidence-levels about its consciousness and look for commonalities and distinctions.
If there are alternative approaches that you think give us more reliable answers, I’d love to hear about them.
I have no reliable answers! And I have low meta-confidence levels (in that it seems clear to me that people and most other creatures are conscious, but I have no confidence in why I think this).
If the Dennett position still sees this as a complete bafflement but thinks it will be resolved along with the so-called ‘soft’ problem, I have less of an issue than I thought I did. Though I’d still regard the view that the issue will become clear as one of hope rather than evidence.
I’m splitting up my response to this into several pieces because it got long. Some other stuff:
Presumably a consequence that itself has consequences: experiences can be used in causal explanations?
I expect so, sure. For example, I report having experiences; one explanation of that (though hardly the only possible one) starts with my actually having experiences and progresses forward in a causal fashion.
But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat.
Sure, there are many causal explanations of many phenomena, including but not limited to how bats use echolocation, that don’t posit subjective experience as part of their causal chain. For example, humans do all kinds of things without the subjective experience of doing them.
And that we could distinguish how well people distinguish wavelengths of light etc. without knowing what the colour looks like to them.
Certainly.
It seems subjective experience is just being ignored
In the examples you give, yes, it is being ignored. So? Lots of things are being ignored in those examples… mass, electrical conductivity, street address, level of fluency in Russian, etc. If these things aren’t necessary to explain the examples, there’s nothing wrong with ignoring these things.
On the other hand, if we look at an example for which experience ought to be part of the causal chain (for example, as I note above, reporting having those experiences), subjective experience is not ignored. X happens, as a consequence of X a subjective experience Y arises, as a consequence of Y a report Z arises, and so forth. (Of course, for some reports we do have explanations that don’t presume Y… e.g., confabulation, automatic writing, etc. But that needn’t be true for all reports. Indeed, it would be surprising if it were.)
“But we don’t know what Xes give rise to the Y of subjective experience, so we don’t fully understand subjective experience!” Well, yes, that’s true. We don’t fully understand fluency in Russian, either. But we don’t go around as a consequence positing some mysterious essence of Russian fluency that resists neurobiological explanation… though two centuries ago, we might have done so. Nor should we. Neither should we posit some mysterious essence of subjective experience.
“But subjective experience is different! I can imagine what a mechanical explanation of Russian fluency would be like, but I can’t imagine what a mechanical explanation of subjective experience would be like.” Sure, I understand that. Two centuries ago, the notion of a mechanical explanation of Russian fluency would raise similar incredulity… how could a machine speak Russian? I’m not sure how I could go about answering such incredulity convincingly, but I don’t thereby conclude that machines can’t speak Russian. My incredulity may be resistant to my reason, but it doesn’t therefore compel or override my reason.
I have a lot of sympathy for this. The most plausible position of reductive materialism is simply that at some future scientific point this will become clear. But this is inevitably a statement of faith, rather than an acknowledgement of current achievement. It’s very hard to compare current apparent mysteries to solved mysteries—I do get that. Having said that, I can’t even see what the steps on the way to explaining consciousness would be, and claiming there is no such thing doesn’t seem to be an option (unlike ‘life’, ‘free will’, etc.), whereas in most other cases the objection relies on saying that you can’t see how the full extent could be achieved: a machine might speak crap Russian in some circumstances, etc.
Also, if a machine can speak Russian, you can check that. I don’t know how we’d check a machine was conscious.
BTW, when I said ‘it seems subjective experience is just being ignored’, I meant ignored in your and Dennett’s arguments, not in specific explanations. I have nothing against analysing things in ways that ignore consciousness, if they work.
I don’t know what the mechanical explanation would look like, either. But I’m sufficiently aware of how ignorant my counterparts two centuries ago would have been of what a mechanical explanation for speaking Russian would look like that I don’t place too much significance on my ignorance.
I agree that testing whether a system is conscious or not is a tricky problem. (This doesn’t just apply to artificial systems.)
Indeed: though artificial systems are more intuitively difficult as we don’t have as clear an intuitive expectation.
You can take an outside view and say ‘this will dissolve like the other mysteries’. I just genuinely find this implausible, if only because you can take steps towards the other mysteries (speaking bad Russian occasionally) and because you have a clear empirical standard (Russians). Whereas for consciousness I don’t have any standard for identifying another’s consciousness: I do it only by analogy with myself and by the implausibility of my having an apparently causal element that others who act similarly to me lack.
I agree that the “consciousness-detector” problem is a hard problem. I just can’t think of a better answer than the generalizing-from-commonalities strategy I discussed previously, so that’s the approach I go with. It seems capable of making progress for now.
And I understand that you find it implausible. That said, I suspect that if we solve the “soft” problem of consciousness well enough that a typical human is inclined to treat an artificial system as though it were conscious, it will start to seem more plausible.
Perhaps it will be plausible and incorrect, and we will happily go along treating computers as conscious when they are no such thing. Perhaps we’re currently going along treating dogs and monkeys and 90% of humans as conscious when they are no such thing.
Perhaps not.
Either way, plausibility (or the absence of it) doesn’t really tell us much.
Yes. This is what worries me: I can see more advances making everyone sure that computers are conscious, but my suspicion is that this will not be logical. Take the same processor and I suspect the chances of it being seen as conscious will rise sharply if it’s put in a moving machine, rise sharply again for a humanoid, again for face/voice and again for physically indistinguishable.
The problem with generalising from commonalities is that I have precisely one direct example of consciousness. Although having said that, I don’t find epiphenomenal accounts convincing, so it’s reasonable for me to think that as my statements about qualia seem to follow causally from experiencing said qualia, that other people don’t have a totally separate framework for their statements about qualia. I wouldn’t be that confident though, and it gets harder with artificial consciousness.
Sure. By the same token, if you take me, remove my ability to communicate, and encase me in an opaque cylinder, nobody will recognize me as a being with subjective experience. Or, for that matter, as a being with the ability to construct English sentences.
We are bounded intellects reasoning under uncertainty in a noisy environment. We will get stuff wrong. Sometimes it will be important stuff.
it’s reasonable for me to think that as my statements about qualia seem to follow causally from experiencing said qualia, that other people don’t have a totally separate framework for their statements about qualia.
I agree. And, as I said initially, I apply the same reasoning not only to the statements I make in English, but to all manner of behaviors that “seem to rise from my qualia,” as you put it… all of it is evidence in favor of other organisms also having subjective experience, even organisms that don’t speak English.
I wouldn’t be that confident though,
How confident are you that I possess subjective experience? Would that confidence rise significantly if we met in person and you verified that I have a typical human body?
Consciousness does seem different in that we can have a better and better understanding of all the various functional elements but that we’re
1) left with a sort of argument from analogy for others having qualia
2) even if we can resolve (1), I can’t see how we can start to know whether my green is your red etc. etc.
I can’t think of many comparable cases: certainly I don’t think containership is comparable. You and I could end up looking at the AI in the moment before it destroys/idealises/both the world and say ‘gosh, I wonder if it’s conscious’. This is nothing like the casuistic ‘but what about this container gives it its containerness’. I think we’re on the same point here, though?
I’m intuitively very confident you’re conscious: and yes, seeing you were human would help (in that one of the easiest ways I can imagine you weren’t conscious is that you’re actually a computer designed to post about things on Less Wrong. This would also explain why you like Dennett—I’ve always suspected he’s a qualia-less robot too! ;-)
Yes, I agree that we’re much more confused about subjective experience than we are about containership.
We’re also more confused about subjective experience than we are about natural language, about solving math problems, about several other aspects of cognition. We’re not _un_confused about those things, but we’re less confused than we used to be. I expect us to grow still less confused over time.
I disagree about the lack of comparable cases. I agree about containers; that’s just an intuition pump. But the issues that concern you here arise for any theoretical construct for which we have only indirect evidence. The history of science is full of such things. Electrons. Black holes. Many worlds. Fibromyalgia. Phlogiston. Etc.
What makes subjective experience different is not that we lack the ability to perceive it directly; that’s pretty common. What makes it different is that we can perceive it directly in one case, as opposed to the other stuff where we perceive it directly in zero cases.
Of course, it’s also different from many of them in that it matters to our moral reasoning in many cases. I can’t think of a moral decision that depends on whether phlogiston exists, but I can easily think of a moral decision that depends on whether cows have subjective experiences. OTOH, it still isn’t unique; some people make moral decisions that depend on the actuality of theoretical constructs like many worlds and PTSD.
Fair enough. As an intuition pump, for me at least, it’s unhelpful: it gave the impression that you thought that consciousness was merely a label being mistaken for a thing (like ‘life’ as something beyond its parts).
Only having indirect evidence isn’t the problem. For a black hole, I care about the observable functional parts. I wouldn’t be being sucked towards it and being crushed while going ‘but is it really a black hole?’ A black hole is like a container here: what matter are the functional bits that make it up. For consciousness, I care if a robot can reason and can display conscious-type behaviour, but I also care if it can experience and feel.
Many worlds could be comparable if there is evidence that means that there are ‘many worlds’ but people are vague about if these worlds actually exist. And you’re right, this is also a potentially morally relevant point.
Insofar as people infer from the fact of subjective experience that there is some essence of subjective experience that is, as you say, “beyond its parts” (and their patterns of interaction), I do in fact think they are mistaking a label for a thing.
I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and be left with an argument from analogy to say “they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky”.
You can observe all the externally observable, measurable things that a black hole or container can do, and then if someone argues about essences you wonder if they’re actually referring to anything: it’s a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict it for all useful purposes, and still not know if it’s conscious. This is bothersome. But it’s not to do with essences, necessarily.
Insofar as people don’t infer something else, beyond the parts of (for example) my body and their pattern of interactions, that accounts for (for example) my subjective experience, I don’t think they are mistaking a label for a thing.
Well, until we know how to identify if something/someone is conscious, it’s all a bit of a mystery: I couldn’t rule out consciousness being some additional thing. I have an inclination to do so because it seems unparsimonious, but that’s it.
I’m splitting up my response to this into several pieces because it got long.
The key bit, IMHO:
So to answer A, I’d say ‘there is no fundamental property of ‘containment’. It’s just a word we use to describe one thing surrounded by another in circumstances X and Y.
And I would agree with you.
and that leaves the question ‘can it contain things’: [..] The same is not true of consciousness, because it’s not (just) a function.
“No,” replies A, “you miss the point completely. I don’t ask whether a container can contain things; clearly it can, I observe it doing so. I ask how it contains things. What is the explanation for its demonstrated ability to contain things? Containership is not just a function,” A insists, “though I understand you want to treat it as one. No, containership is a fundamental essence. You can’t simply ignore the hard question of “is X a container?” in favor of thinking about simpler, merely functional questions like “can X contain Y?”. And, while we’re at it,” A continues, “what makes you think that an artificial container, such as we build all the time, is actually containing anything rather than merely emulating containership? Sure, perhaps we can’t tell the difference, but that doesn’t mean there isn’t a difference.”
I take it you don’t find A’s argument convincing, and neither do I, but it’s not clear to me what either of us could say to A that A would find at all compelling.
Maybe we couldn’t, but A is simply asserting that containership is a concept beyond its parts, whereas I’m appealing directly to experience: the relevance of this is that whether something has experience matters. Ultimately for any case, if others just express bewilderment in your concepts and apparently don’t get what you’re talking about, you can’t prove it’s an issue. But at any rate, most people seem to have subjective experience.
Being conscious isn’t a label I apply to certain conscious-type systems that I deem ‘valuable’ or ‘true’ in some way. Rather, I want to know which systems should be associated with the clearly relevant and important category of ‘conscious’.
My thoughts about how I go about associating systems with the expectation of subjective experience are elsewhere and I have nothing new to add to it here.
As regards you and A… I realize that you are appealing directly to experience, whereas A is merely appealing to containment, and I accept that it’s obvious to you that experience is importantly different from containment in a way that makes your position importantly non-analogous to A’s.
I have no response to A that I expect A to find compelling… they simply don’t believe that containership is fully explained by the permeability and topology of containers. And, you know, maybe they’re right… maybe some day someone will come up with a superior explanation of containerhood that depends on some previously unsuspected property of containers and we’ll all be amazed at the realization that containers aren’t what we thought they were. I don’t find it likely, though.
I also have no response to you that I expect you to find compelling. And maybe someday someone will come up with a superior explanation of consciousness that depends on some previously unsuspected property of conscious systems, and I’ll be amazed at the realization that such systems aren’t what I thought they were, and that you were right all along.
Are you saying you don’t experience qualia and find them a bit surprising (in a way you don’t for containerness)? I find it really hard not to see arguments of this kind as a little disingenuous: is the issue genuinely not difficult for some people, or is this a rhetorical stance intended to provoke better arguments, or awareness of the weakness of current arguments?
I have subjective experiences. If that’s the same thing as experiencing qualia, then I experience qualia.
I’m not quite sure what you mean by “surprising” here… no, it does not surprise me that I have subjective experiences, I’ve become rather accustomed to it over the years. I frequently find the idea that my subjective experiences are a function of the formal processes my neurobiology implements a challenging idea… is that what you’re asking?
Then again, I frequently find the idea that my memories of my dead father are a function of the formal processes my neurobiology implements a challenging idea as well. What, on your view, am I entitled to infer from that?
Yes, I meant surprising in light of other discoveries/beliefs.
On memory: is it the conscious experience that’s challenging (in which case it’s just a sub-set of the same issue) or do you find the functional aspects of memory challenging? Even though I know almost nothing about how memory works, I can see plausible models and how it could work, unlike consciousness.
Isn’t our objection to A’s position that it doesn’t pay rent in anticipated experience? If one thinks there is a “hard problem of consciousness” such that different answers would cause one to behave differently, then one must take up the burden of identifying what the difference would look like, even if we can’t create a measuring device to find it just now.
Sure, perhaps we can’t tell the difference, but that doesn’t mean there isn’t a difference.
If A means that we cannot determine the difference in principle, then there’s nothing we should do differently. If A means that a measuring device does not currently exist, he needs to identify the range of possible outputs of the device.
Isn’t our objection to A’s position that it doesn’t pay rent in anticipated experience?
This may be a situation where that’s a messy question. After all, qualia are experience. I keep expecting experiences, and I keep having experiences. Do experiences have to be publicly verifiable?
If two theories both lead me to anticipate the same experience, the fact that I have that experience isn’t grounds for choosing among them.
So, sure, the fact that I keep having experiences is grounds for preferring a theory of subjective-experience-explaining-but-otherwise-mysterious qualia over a theory that predicts no subjective experience at all, but not necessarily grounds for preferring it to a theory of subjective-experience-explaining-neural-activity.
Different answers to the HP would undoubtedly change our behaviour, because they would indicate that different classes of entity have feelings, which bears on morality. Indeed, it is pretty hard to think of anything with more impact.
The measuring device for conscious experience is consciousness, which is the whole problem.
Sure. But in this sense, false believed answers to the HP are no different from true believed answers.… that is, they would both potentially change our behavior the way you describe.
Sure. But in this sense, false believed answers to the HP are no different from true believed answers.… that is, they would both potentially change our behavior the way you describe.
That is the case for most any belief you hold (unless you mean “in the exact same way”, not as “change behavior”). You may believe there’s a burglar in your house, and that will impact your actions, whether it be false or true. Say you believe that it’s more likely there is a burglar: you are correct in acting upon that belief even if it turns out to be incorrect. It’s not AIXI’s fault if it believes the wrong thing for the right reasons.
In that sense, you can choose an answer for example based on complexity considerations. In the burglar example, the answer you choose (based on data such as crime rate, cat population etc.) can potentially be further experimentally “verified” (the probability increased) as true or false, but even before such verification, your belief can still be strong enough to act upon.
After all, you do act upon your belief that “I am not living in a simulation which will eventually judge and reward me only for the amount of cheerios I’ve eaten”. It doesn’t lead to different expected experiences at the present time, yet you still choose to act as if it were true. A prior based on complexity considerations alone, yet strong enough to act upon. Same when thinking about whether the sun has qualia (“hot hot hot hot hot”).
(Bit of a hybrid fusion answer also meant to refer to our neighboring discussion branch.)
In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don’t see any value to asking the question. If it makes you feel better if I don’t deny their existence, well, OK, I don’t deny their existence, but I really can’t see why anyone should care one way or the other.
Well, in the case of “do landslides have qualia”, Occam’s Razor could be used to assign probabilities just the same as we assign probabilities in the “cheerio simulation” example. So we’ve got methodology, we’ve got impact, enough to adopt a stance on the “psychic unity of the cosmos”, no?
My best guess is that you’re suggesting that, with respect to systems that do not manifest subjective experience in any way we recognize or understand, Occam’s Razor provides grounds to be more confident that they have subjective experience than that they don’t. If that’s what you mean, I don’t see why that should be. If that’s not what you mean, can you rephrase the question?
I think it’s conceivable if not likely that Occam’s Razor would favor or disfavor qualia as a property of more systems than just those that seem to show or communicate them in terms we’re used to. I’m not sure which, but it is a question worth pondering, with an impact on how we view the world, and accessible through established methodology, to a degree.
I’m not advocating assigning a high probability to “landslides have raw experience”, I’m advocating that it’s an important question, the probability of which can be argued. I’m an advocate of the question, not the answer, so to speak. And as such opposed to “I really can’t see why anyone should care one way or the other”.
So, I stand by my assertion that in the absence of evidence one way or the other, I really can’t see why anyone should care.
But I agree that to the extent that Occam’s Razor type reasoning provides evidence, that’s a reason to care.
And if it provided strong evidence one way or another (which I don’t think it does, and I’m not sure you do either) that would provide a strong reason to care.
I stand by my assertion that in the absence of evidence one way or the other,
I have evidence in the form of my personal experience of qualia. Granted, I have no way of showing you that evidence, but that doesn’t mean I don’t have it.
Agreed that the ability to share evidence with others is not a necessary condition of having evidence. And to the extent that I consider you a reliable evaluator of (and reporter of) evidence, your report is evidence, and to that extent I have a reason to care.
Moral implications of a proposition in the absence of evidence one way or another for that proposition are insufficient to justify caring. If I actually care about the experiences of minds capable of experiences, I do best to look for evidence for the presence or absence of such experiences. Failing such evidence, I do best to concentrate my attention elsewhere.
It’s possible to have both a strong reason to care and weak evidence, i.e. due to the moral hazard being dependent on some doubtful proposition. People often adopt precautionary principles in such scenarios.
It’s possible to have both a strong reason to care and weak evidence, i.e. due to the moral hazard being dependent on some doubtful proposition.
I don’t think that’s the situation here, though. That sounds like a description of this situation: (imagine) we have weak evidence that 1) snakes are sapient, and we grant that 2) sapience is morally significant. Therefore (perhaps) we should avoid wanton harm to snakes.
Part of why this argument might make sense is that (1) and (2) are independent. Our confidence in (2) is not contingent on the small probability that (1) is true: whether or not snakes are sapient, we’re all agreed (let’s say) that sapience is morally significant.
On the other hand, the situation with qualia is one where we have weak evidence (suppose) that A) qualia are real, and we grant that B) qualia are morally significant.
The difference here is that (B) is false if (A) is false. So the fact that we have weak evidence for (A) means that we can have no stronger (and likely, we must have yet weaker) evidence for (B).
Does the situation change significantly if “the situation with qualia” is instead framed as A) snakes have qualia and B) qualia are morally significant?
Yes, if the implication of (A) is that we’re agreed on the reality of qualia but are now wondering whether or not snakes have them. No, if (A) is just a specific case of the general question ‘are qualia real?’. My point was probably put in a confusing way: all I mean to say was that Juno seemed to be arguing as if it were possible to be very confident about the moral significance of qualia while being only marginally confident about their reality.
What I think of the case for qualia is beside the point, I was just commenting on your ‘moral hazard’ argument. There you said that even if we assume that we have only weak evidence for the reality of qualia, we should take the possibility seriously, since we can be confident that qualia are morally significant. I was just pointing out that this argument is made problematic by the fact that our confidence in the moral significance of qualia can be no stronger than our confidence in their reality, and therefore by assumption must be weak.
I am guessing that Juno_Watt means that strong evidence for our own perception of qualia makes them real enough to seriously consider their moral significance, whether or not they are “objectively real”.
Yes, they often do. On your view, is there a threshold of doubtfulness of a proposition below which it is justifiable to not devote resources to avoiding the potential moral hazard of that proposition being true, regardless of the magnitude of that moral hazard?
I don’t think it’s likely my house will catch fire, but I take out fire insurance. OTOH, if I don’t set a lower bound I will be susceptible to Pascal’s muggings.
He may have meant something like “Qualiaphobia implies we would have no experiences at all”. However, that all depends on what you mean by experience. I don’t think the Expected Experience criterion is useful here (or anywhere else).
I realize that non-materialistic “intrinsic qualities” of qualia, which we perceive but which aren’t causes of our behavior, are incoherent. What I don’t fully understand is why have I any qualia at all. Please see my sibling comment.
If it’s accepted that GREEN and RED are structurally identical, and that in virtue of this they are phenomenologically identical, why think that phenomenology involves anything, beyond structure, which needs explaining?
I think this is the gist of Dennett’s dissolution attempts. Once you’ve explained why your brain is in a seeing-red brain-state, why this causes a believing-that-there-is-red mental representation, onto a meta-reflection-about-believing-there-is-red functional process, etc., why think there’s anything else?
(nods) Yes, that’s consistent with what I’ve heard others say.
Like you, I don’t understand the question and have no idea of what an answer to it might look like, which is why I say I’m not entirely clear what question you/they claim is being answered. Perhaps it would be more correct to say I’m not clear how it differs from the question you/they want answered.
Mostly I suspect that the belief that there is a second question to be answered that hasn’t been is a strong, pervasive, sincere, compelling confusion, akin to “where does the bread go?”. But I can’t prove it.
Relatedly: I remember, many years ago, attending a seminar where a philosophy student protested to Dennett that he didn’t feel like the sort of process Dennett described. Dennett replied “How can you tell? Maybe this is exactly what the sort of process I’m describing feels like!”
I recognize that the traditional reply to this is “No! The sort of process Dennett describes doesn’t feel like anything at all! It has no qualia, it has no subjective experience!”
To which my response is mostly “Why should I believe that?” An acceptable alternative seems to be that subjective experience (“qualia”, if you like) is simply a property of certain kinds of computation, just as the ability to predict the future location of a falling object (“prescience”, if you like) is a property of certain kinds of computation.
To which one is of course free to reply “but how could prescience—er, I mean qualia—possibly be an aspect of computation??? It just doesn’t make any sense!!!” And I shrug.
Sure, if I say in English “prescience is an aspect of computation,” that sounds like a really weird thing to say, because “prescience” and “computation” are highly charged words with opposite framings. But if I throw out the English words and think about computing the state of the world at some future time, it doesn’t seem mysterious at all, and such computations have become so standard a part of our lives we no longer give it much thought.
When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.
How can you tell? Maybe this is exactly what the sort of process I’m describing feels like!
I agree. We already know what we feel like. Once we know empirically what kind of process we are, we can indeed conclude that “that’s what that kind of process feels like”.
What I don’t understand is why being some kind of process feels like anything at all. Why it seems to myself that I have qualia in the first place.
I do understand why it makes sense for an evolved human to have such beliefs. I don’t know if there is a further question beyond that. As I said, I don’t know what an answer would even look like.
Perhaps I should just accept this and move on. Maybe it’s just the case that “being mystified about qualia” is what the kind of process that humans are is supposed to feel like! As an analogy, humans have religious feelings with apparently dedicated neurological underpinnings. Some humans feel the numinous strongly, and they ask for an answer to the Mystery of God, which to them appears as obvious as any qualia.
However, an answer that would be more satisfactory (if possible) would be an exploration and an explanation of mind-space and its accompanying qualia. Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away.
Does being like some other kind of process “feel like” anything? Like what? Would it be meaningful for me to experience something else without becoming something else? Are the qualia of a cat separate from being a cat? Or would I have to have a cat-mind and forget all about being human and verbal and DanArmak to experience the qualia of a cat, at which point I’d be no different than any existing cat, and which I wouldn’t remember on becoming human again?
When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.
I agree. To clarify, I believe all of these propositions:
Full materialism
Humans are physical systems that have self-awareness (“consciousness”) and talk about it
That isn’t a separate fact that could be otherwise (p-zombies); it’s highly entangled with how human brains operate
Other beings, completely different physically, would still behave the same if they instantiated the same computation (this is pretty much tautological)
If the computation that is myself is instantiated differently (as in an upload or em), it would still be conscious and report subjective experience (if it didn’t, it would be a very poor emulation!)
If I am precisely cloned, I should anticipate either clone’s experience with 50% probability; but after finding out which clone I am, I would not expect to suddenly “switch” to experiencing being the other clone. I also would not expect to somehow experience being both clones, or anything else. (I’m less sure about this because it’s never happened yet. And I don’t understand quantum mechanics, so I can’t properly appreciate the arguments that say we’re already being split all the time anyway. Nevertheless, I see no sensible alternative, so I still accept this.)
What I meant is that some time after the cloning, the clones’ lives would become distinguishable. One of them would experience X, while the other would experience ~X. Then I would anticipate experiencing X with 50% probability.
If they live identical lives forever, then I can anticipate “being either clone” or as I would call it, “not being able to tell which clone I am”.
My first instinctive response is “be wary of theories of personal identity where your future depends on a coin flip”. You’re essentially saying “one of the clones believes that it is your current ‘I’ experiencing ‘X’, and it has a 50% chance of being wrong”. That seems off.
I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability. The problem is that the way anticipation works with probability depends implicitly on there only being one future self that things can happen to.
You’re essentially saying “one of the clones believes that it is your current ‘I’ experiencing ‘X’, and it has a 50% chance of being wrong”.
No, I’m not saying that.
I’m saying: first both clones believe “anticipate X with 50% probability”. Then one clone experiences X, and the other ~X. After that they know what they experienced, so of course one updates to believe “I experienced X with ~1 probability” and the other “I experienced ~X with ~1 probability”.
I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability.
I think we need to unpack “experiencing” here.
I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.
If X takes nontrivial time, such that one can experience “X is going on now”, then I anticipate ever experiencing that with 50% probability.
I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.
What I meant is that some time after the cloning, the clones’ lives would become distinguishable. One of them would experience X, while the other would experience ~X.
But that means there is always (100%) a future state of you that has experienced X, and a separate future state that has always (100%) experienced ~X. I think there’s some similarity here to the problem of probability in a many-worlds universe, except in this case both versions can still interact. I’m not sure how that affects things myself.
You’re right, there’s a contradiction in what I said. Here’s how to resolve it.
At time T=1 there is one of me, and I go to sleep.
While I sleep, a clone of me is made and placed in an identical room.
At T=2 both clones wake up.
At T=3 one clone experiences X. The other doesn’t (and knows that he doesn’t).
So, what should my expected probability for experiencing X be?
At T=3 I know for sure, so it goes to 1 for one clone and 0 for the other.
At T=2, the clones have woken up, but each doesn’t know which he is yet. Therefore each expects X with 50% probability.
At T=1, before going to sleep, there isn’t a single number that is the correct expectation. This isn’t because probability breaks down, but because the concept of “my future experience” breaks down in the presence of clones. Neither 50% nor 100% is right.
50% is wrong for the reason you point out. 100% is also wrong, because X and ~X are symmetrical. Assigning 100% to X means 0% to ~X.
So in the presence of expected future clones, we shouldn’t speak of “what I expect to experience” but “what I expect a clone of mine to experience”—or “all clones”, or “p proportion of clones”.
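To make the “p proportion of clones” framing concrete, here is a toy simulation; the setup (exactly one of two clones experiences X after the split) is just the scenario above, and the code itself is only an illustrative sketch, not an argument.

```python
# Toy sketch of the clone scenario above: after the split there are two
# successor observers, and X happens to exactly one of them.
import random

def run_trial():
    x_clone = random.choice([0, 1])                  # which clone gets X
    return [clone == x_clone for clone in (0, 1)]    # per-clone "experienced X?"

trials = [run_trial() for _ in range(10_000)]
observers = [outcome for trial in trials for outcome in trial]

# "Some future state of me experiences X" happens in every trial:
print(sum(any(trial) for trial in trials) / len(trials))   # -> 1.0
# ...but only half of the clone-observers are that state:
print(sum(observers) / len(observers))                      # -> 0.5
```

Both numbers are well-defined; what has no single well-defined value before the split is “the probability that I will experience X”, which is the point above.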
Suppose I’m ~100% confident that, while we sleep tonight, someone will paint a blue dot on either my forehead or my husband’s but not both. In that case, I am ~50% confident that I will see a blue dot, ~100% confident that one of us will see a blue dot, and ~100% confident that one of us will not see a blue dot.
If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to “one of us will see a blue dot” means assigning ~0% to “one of us will not see a blue dot”, I would reply that they are deeply confused. The noun phrase “one of us” simply doesn’t behave that way.
In the scenario you describe, the noun phrase “I” doesn’t behave that way either.
I’m ~100% confident that I will experience X, and I’m ~100% confident that I will not experience X.
In your example, you anticipate your own experiences, but not your husband’s experiences. I don’t see how this is analogous to a case of cloning, where you equally anticipate both.
If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to “one of us will see a blue dot” means assigning ~0% to “one of us will not see a blue dot”, I would reply that they are deeply confused.
I’m not saying that “[exactly] one of us will see a blue dot” and “[neither] one of us will not see a blue dot” are symmetrical; that would be wrong. What I was saying was that “I will see a blue dot” and “I will not see a blue dot” are symmetrical.
I’m ~100% confident that I will experience X, and I’m ~100% confident that I will not experience X.
All the terminologies that have been proposed here—by me, and you, and FeepingCreature—are just disagreeing over names, not real-world predictions.
I think the quoted statement is at the very least misleading because it’s semantically different from other grammatically similar constructions. Normally you can’t say “I am ~1 confident that [Y] and also ~1 confident that [~Y]”. So “I” isn’t behaving like an ordinary object. That’s why I think it’s better to be explicit and not talk about “I expect” at all in the presence of clones.
My comment about “symmetrical” was intended to mean the same thing: that when I read the statement “expect X with 100% probability”, I normally parse it as equivalent to “expect ~X with 0% probability”, which would be wrong here. And X and ~X are symmetrical by construction in the sense that every person, at every point in time, should expect X and ~X with the same probability (whether you call it “both 50%” like I do, or “both 100%” like FeepingCreature prefers), until of course a person actually observes either X or ~X.
In your example, you anticipate your own experiences, but not your husband’s experiences. I don’t see how this is analogous to a case of cloning, where you equally anticipate both.
In my example, my husband and I are two people, anticipating the experience of two people. In your example, I am one person, anticipating the experience of two people. It seems to me that what my husband and I anticipate in my example is analogous to what I anticipate in your example.
But, regardless, I agree that we’re just disagreeing about names, and if you prefer the approach of not talking about “I expect” in such cases, that’s OK with me.
One thing you seem to know but keep forgetting is the distinction between your current state and recorded memories. Memories use extreme amounts of lossy and biased compression, and some of your confusion seems to come from looking at your current experience while explicitly thinking about this stuff and then generalizing it as something continuous over time and applicable to a wider range of mental states than it actually is.
Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away.
Sure, that makes sense.
As far as I know, current understanding of neuroanatomy hasn’t identified the particular circuits responsible for that experience, let alone the mechanism whereby the latter cause the former. (Of course, the same could be said for speaking English.)
But I can certainly see how having such an explanation handy might help if I were experiencing the kind of insistent sense of mysteriousness you describe (for subjective experience or for speaking English).
Well, I’m not sure. I’m not confident there are any neural circuits, strictly speaking. But I suppose I don’t have anything much more specific than ‘loop’ in mind: it would have to be something like a path that returns to an origin.
In the sense of the experience not happening if that circuit doesn’t work, yes. In the sense of being able to give a soup-to-nuts story of how events in the world result in a subjective experience that has that specific character, no.
I am having trouble knowing how to answer your question, because I’m not sure what you’re asking. We have identified neural structures that are implicated in various specific things that brains do. Does that answer your question?
I’m not very up to date on neurobiology, and so when I saw your comment that we had not found the specific circuits for some experience I was surprised by the implication that we had found that there are neural circuits at all. To my knowledge, all we’ve got is fMRI captures showing changes in blood flow which we assume to be correlated in some way with synaptic activity. I wondered if you were using ‘circuit’ literally, or if you intended a reference to the oft-used brain-computer metaphor. I’m quite interested to know how appropriate that metaphor is.
I’ve had that conversation with a few people over the years, and I conclude that it does for some people and not others. The ones for whom it doesn’t generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It’s not entirely clear to me what question they think he answers instead.)
That said, it’s a pretty fun read. If the subject interests you, I’d recommend sitting down and writing out as clearly as you can what it is you find mysterious about subjective experience, and then reading the book and seeing if it answers, or at least addresses, that question.
He seems to answer the question of why humans feel and report that they are conscious; why, in fact, they are conscious. But I don’t know how to translate that into an explanation of why I am conscious.
The problem that many people (including myself) feel to be mysterious is qualia. I know indisputably that I have qualia, or subjective experience. But I have no idea why that is, or what that means, or even what it would really mean for things to be otherwise (other than a total lack of experience, as in death).
A perfect and complete explanation of the behavior of humans still doesn’t seem to bridge the gap from “objective” to “subjective” experience.
I don’t claim to understand the question. Understanding it would mean having some idea of what possible answers or explanations might be like, and how to judge if they are right or wrong. And I have no idea. But what Dennett writes doesn’t seem to answer the question or dissolve it.
Here’s how I got rid of my gut feeling that qualia are both real and ineffable.
First, phrasing the problem:
Even David Chalmers thinks there are some things about qualia that are effable. Some of the structural properties of experience (for example, the fact that colour qualia can be represented in a 3-dimensional space of hue, saturation, and brightness) might be explained by structural properties of light and the brain, and might be susceptible to third-party investigation.
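As a minimal sketch of that structural, third-party-investigable facet: the three coordinates of a colour can be computed mechanically. The RGB values below are invented for illustration, and nothing in the sketch bears on the disputed intrinsic-nature question.

```python
# Sketch: colour's 3-dimensional structure is publicly computable.
# Uses only the Python standard library (colorsys).
import colorsys

samples = {
    "firetruck red": (1.0, 0.0, 0.0),   # illustrative RGB values
    "grass green": (0.0, 0.8, 0.0),
}

for name, (r, g, b) in samples.items():
    h, s, v = colorsys.rgb_to_hsv(r, g, b)   # hue, saturation, value (brightness)
    print(f"{name}: hue={h:.2f}, saturation={s:.2f}, value={v:.2f}")
```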
What he would call ineffable is the intrinsic properties of experience. With regards to colour-space, think of spectrum inversion. When we look at a firetruck, the quale I see is the one you would call “green” if you could access it, but since I learned my colour words by looking at firetrucks, I still call it “red”.
If you think this is coherent, you believe in ineffable qualia: even though our colour-spaces are structurally identical, the “atoms” of experience additionally have intrinsic natures (I’ll call these, e.g., RED and GREEN) which are non-causal and cannot be objectively discovered.
You can show that ineffable qualia (experiential intrinsic natures, independent of experiential structure) aren’t real by showing that spectrum inversion (changing the intrinsic natures, keeping the structure) is incoherent.
An attempt at a solution:
Take another experiential “spectrum”: pleasure vs. displeasure. Spectrum inversion is harder, I’d say impossible, to take seriously in this case. If someone seeks out P, tells everyone P is wonderful, laughs and smiles when P happens, and even herself believes (by means of mental representations or whatever) that P is pleasant, then it makes no sense to me to imagine P really “ultimately” being UNPLEASANT for her.
Anyway, if pleasure-displeasure can’t be noncausally inverted, then neither can colour-qualia. The three colour-space dimensions aren’t really all you need to represent colour experience. Colour experience doesn’t, and can’t, ever occur isolated from other cognition.
For example: seeing a lot of red puts monkeys on edge. So imagine putting a spectrum-inverted monkey in a (to us) red room, and another in a (to us) green room.
If the monkey in the green (to it, RED’) room gets antsy, or the monkey in the red (to it, GREEN’) room doesn’t, then that means the spectrum-inversion was causal and ineffable qualia don’t exist.
But if the monkey in the green room doesn’t get antsy, or the monkey in the red room does, then it hasn’t been a full spectrum inversion. RED’ without antsiness is not the same quale as RED with antsiness. If all the other experiential spectra remain uninverted, it might even look surprisingly like GREEN. But to make the inversion successfully, you’d have to flip all the other experiential spectra that connect with colour, including antsiness vs. serenity, and through that, pleasure vs. displeasure.
This isn’t knockdown, but it convinced me.
I’m not sure pleasure/pain is that useful, because 1) they have such an intuitive link to reaction/function 2) they might be meta-qualities: a similar sensation of pain can be strongly unpleasant, entirely tolerable or even enjoyable depending on other factors.
What you’ve done with colours is take what feels like a somewhat arbitrary/ineffable quale and declare it inextricably associated with one that has direct behavioural terms involved. Your talk of what’s required to ‘make the inversion successfully’ is misleading: what if the monkey has GREEN and antsiness rather than RED and antsiness?
It seems intuitive to assume ‘red’ and ‘green’ remain the same in normal conditions, but I’m left totally lost as to what ‘red’ would look like to a creature that could see a far wider or narrower spectrum than the one we can see. Or, for that matter, to someone with limited colour-blindness. There seems to me to be the Nagel ‘what is it like to be a bat’ problem, and I’ve never understood how that dissolves.
It’s been a long time since I read Dennett, but I was in the camp of ‘not answering the question, while being fascinating around the edges and giving people who think qualia are straightforward pause for thought’. No-one’s ever been able to clearly explain how his arguments work to me, to the point that I suggest that either I or they are fundamentally missing something.
If the hard problem of consciousness has really been solved I’d really like to know!
Consider the following dialog:
A: “Why do containers contain their contents?”
B: “Well, because they are made out of impermeable materials arranged in such a fashion that there is no path between their contents and the rest of the universe.”
A: “Yes, of course, I know that, but why does that lead to containment?”
B: “I don’t quite understand. Are you asking what properties of materials make them impermeable, or what properties of shapes preclude paths between inside and outside? That can get a little technical, but basically it works like this—”
A: “No, no, I understand that stuff. I’ve been studying containment for years; I understand the simple problem of containment quite well. I’m asking about the hard problem of containment: how does containment arise from those merely mechanical things?”
B: “Huh? Those ‘merely mechanical things’ are just what containment is. If there’s no path X can take from inside Y to outside Y, X is contained by Y. What is left to explain?”
A: “That’s an admirable formulation of the hard problem of containment, but it doesn’t solve it.”
How would you reply to A?
There’s nothing left to explain about containment. There’s something left to explain about consciousness.
Would you expect that reply to convince A?
Or would you just accept that A might go on believing that there’s something important and ineffable left to explain about containment, and there’s not much you can do about it?
Or something else?
If you were a container, you would understand the wonderful feeling of containment, the insatiable longing to contain, the sweet anticipation of the content being loaded, the ultimate reason for containing and other incomparable wonderful and tortuous qualia no non-container can enjoy. Not being one, all you can understand is the mechanics of containment, a pale shadow of the rich and true containing experience.
OK, maybe I’m getting a bit NSFW here...
It is for A to state what the remaining problem actually is. And qualiphiles can do that.
D: I can explain how conscious entities respond to their environments, process information and behave. What more is there?
C: How it all looks from the inside—the qualia.
That’s funny, David again and the other David arguing about the hard versus the “soft” problem of consciousness. Have you two lost your original?
I think A and B are sticking different terminology on a similar thing. A laments that the “real” problem hasn’t been solved; B points out that it has, to the extent that it can be solved. Yet in a way they tread common ground:
A believes there are aspects of the problem of con(tainment|sciousness) that didn’t get explained away by a “mechanistic” model.
B believes that a (probably reductionist) model suffices: “this configuration of matter/energy can be called ‘conscious’” is not fundamentally different from “this configuration of matter/energy can be called ‘a particle’”. If you’re content with such an explanation for the latter, why not the former? …
However, with many Bs I find that even accepting a matter-of-fact workable definition of “these states correspond to consciousness” is used as a stop sign more so than as a starting point.
Just as A insists that further questions exist, so should B, and many of those questions would be quite similar, to the point of practically dissolving the initial difference.
Off the top of my head: If the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form? Is it just that the qualia we experience are modulated and processed by virtue of the relevant matter (brain) being in a state which can organize memories, reflect on its experiences etc.?
Anthropic considerations apply: Even if anything had a “value” for “subjective experience”, we would know only about our own, and probably only ascribe that property to similar ‘things’ (other humans or highly developed mammals). But is it just because those can reflect upon that property? Are waterfalls conscious, even if not sentient? “What an algorithm feels like on the inside”—any natural phenomenon is executing algorithms just the same as our neurons and glial cells do. Is it because we can ascribe correspondences between structure in our brain and external structures, i.e. models? We can find the same models within a waterfall, simply by finding another mapping function.
So is it the difference between us and a waterfall that enables the capacity for qualia, something to do with communication, memory, planning? It’s not clear why qualia should depend on “only things that can communicate can experience qualia”, for example. That sounds more like an anthropic concern: Of course we can understand another human relate its qualia experience better than a waterfall could—if it did experience it. Occam’s Razor may prefer “everything can experience” to “only very special configurations of matter can experience”, keeping in mind that the internal structure of a waterfall is just as complex as a human brain.
It seems to me that A is better in tune with the many questions that remain, while B has more of an engineer mindset, a la “I can work with that, what more do I want?”. “Here be dragons” is what follows even the most dissolv-y explanation of qualia, and trying to stay out of those murky waters isn’t a reason to deny their existence.
I can no longer remember if there was actually an active David when I joined, or if I just picked the name on a lark. I frequently introduce myself in real life as “Dave—no, not that Dave, the other one.”
I always assumed that the name was originally to distinguish you from David Gerard.
Sure, I agree that there may be systems that have subjective experience but do not manifest that subjective experience in any way we recognize or understand.
Or, there may not.
In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don’t see any value to asking the question. If it makes you feel better if I don’t deny their existence, well, OK, I don’t deny their existence, but I really can’t see why anyone should care one way or the other.
In any case, I don’t agree that the Bs studying conscious experience fail to explore further questions. Quite the contrary, they’ve made some pretty impressive progress in the last five or six decades towards understanding just how the neurobiological substrate of conscious systems actually works. They simply don’t explore the particular questions you’re talking about here.
And it’s not clear to me that the As exploring those questions are accomplishing anything.
So, A asks “If containment is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?”
How would you reply to A?
My response is something like “We know that certain configurations of physical objects give rise to containment. Sure, it’s not impossible that ‘unprocessed containment’ exists in other systems, and we just haven’t ever noticed it, but why are you even asking that question?”
But I don’t think conscious experience (qualia if you like) has been explained. I think we have some pretty good explanations of how people act, but I don’t see how it pierces through to consciousness as experienced, and linked questions such as ‘what is it like to be a bat?’ or ‘how do I know my green isn’t your red?’
It would help if you could sum up the merely mechanical things that are ‘just what consciousness is’ in Dennett’s (or your!) sense. I’ve never been clear on what confident materialists are saying on this: I’m sometimes left with the impression that they’re denying that we have subjective experience, sometimes that they’re saying it’s somehow an inherent quality of other things, sometimes that it’s an incidental byproduct. All of these seem to be problematic to me.
I don’t think it would, actually.
The merely mechanical things that are ‘just what consciousness is’ in Dennett’s sense are the “soft problem of consciousness” in Chalmers’ sense; I don’t expect any amount of summarizing or detailing the former to help anyone feel like the “hard problem of consciousness” has been addressed, any more than I expect any amount of explanation of materials science or topology to help A feel like the “hard problem of containment” has been addressed.
But, since you asked: I’m not denying that we have subjective experiences (nor do I believe Dennett is), and I am saying that those experiences are a consequence of our neurobiology (as I believe Dennett does). If you’re looking for more details of things like how certain patterns of photons trigger increased activation levels of certain neural structures, there are better people to ask than me, but I don’t think that’s what you’re looking for.
As for whether they are an inherent quality or an incidental byproduct of that neurobiology, I’m not sure I even understand the question. Is being a container an inherent quality of being composed of certain materials and having certain shape, or an incidental byproduct? How would I tell?
And: how would you reply to A?
That’s such a broad statement, it could cover some forms of dualism.
Agreed.
I may not remember Chalmers’ soft problem well enough either for a reference to it to help!
If experiences are a consequence of our neurobiology, fine. Presumably a consequence that itself has consequences: experiences can be used in causal explanations? But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat. And that we could distinguish how well people distinguish wavelengths of light etc. without knowing what the colour looks like to them.
It seems subjective experience is just being ignored: we could identify that an AI could carry out all sorts of tasks that we associate with consciousness, but I have no idea when we’d say ‘it now has conscious experiences’. Or whether we’d talk about degrees of conscious experience, or whatever. This is obviously ethically quite important, if not that directly pertinent to me, and it bothers me that I can’t respond to it.
With a container, you describe various qualities and that leaves the question ‘can it contain things’: do things stay in it when put there. You’re adding a sort of purpose-based functional classification to a physical object. When we ask ‘is something conscious’, we’re not asking about a function that it can perform. On a similar note, I don’t think we’re trying to reify something (as with the case where we have a sense of objects having ongoing identity, which we then treat as a fundamental thing and end up asking if a ship is the same after you replace every component of it one by one). We’re not chasing some over-abstracted ideal of consciousness, we’re trying to explain an experienced reality.
So to answer A, I’d say ‘there is no fundamental property of ‘containment’. It’s just a word we use to describe one thing surrounded by another in circumstances X and Y. You’re over-idealising a useful functional concept’. The same is not true of consciousness, because it’s not (just) a function.
It might help if you could identify what, in light of a Dennett-type approach, we can identify as conscious or not. I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks…
I’m splitting up my response to this into several pieces because it got long. Some other stuff:
The process isn’t anything special, but OK, since you ask.
Let’s assert for simplicity that “I” has a relatively straightforward and consistent referent, just to get us off the ground. Given that, I conclude that I am at least sometimes capable of subjective experience, because I’ve observed myself subjectively experiencing.
I further observe that my subjective experiences reliably and differentially predict certain behaviors. I do certain things when I experience pain, for example, and different things when I experience pleasure. When I observe other entities (E2) performing those behaviors, that’s evidence that they, too, experience pain and pleasure. Similar reasoning applies to other kinds of subjective experience.
I look for commonalities among E2 and I generalize across those commonalities. I notice certain biological structures are common to E2 and that when I manipulate those structures, I reliably and differentially get changes in the above-referenced behavior. Later, I observe additional entities (E3) that have similar structures; that’s evidence that E3 also demonstrates subjective experience, even though E3 doesn’t behave the way I do.
Later, I build an artificial structure (E4) and I observe that there are certain properties (P1) of E2 which, when I reproduce them in E4 without reproducing other properties (P2), reproduce the behavior of E2. I conclude that P1 is an important part of that behavior, and P2 is not.
I continue this process of observation and inference and continue to draw conclusions based on it. And at some point someone asks “is X conscious?” for various Xes:
If I interpret “conscious” as meaning having subjective experience, then for each X I observe it carefully and look for the kinds of attributes I’ve attributed to subjective experience… behaviors, anatomical structures, formal structures, etc… and compare it to my accumulated knowledge to make a decision.
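As a deliberately toy sketch of that comparison step (the attribute list, the overlap score, and every name below are my own illustrative inventions, not anything proposed in this thread):

```python
# Toy sketch: score a candidate system by how many of the attributes
# I've come to associate with subjective experience it shares.
REFERENCE_ATTRIBUTES = {
    "pain-avoidance behaviour",
    "pleasure-seeking behaviour",
    "reports of subjective experience",
    "nociception-like structures",
    "reproduces the P1 properties in an artificial substrate",
}

def evidence_score(candidate_attributes: set) -> float:
    """Fraction of the reference attributes the candidate shares (0.0 to 1.0)."""
    return len(candidate_attributes & REFERENCE_ATTRIBUTES) / len(REFERENCE_ATTRIBUTES)

print(evidence_score({"pain-avoidance behaviour", "reports of subjective experience"}))  # 0.4
```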
Isn’t that how you answer such questions as well?
If not, then I’ll ask you the same question: what, in light of whatever non-Dennett-type approach you prefer, can we identify as conscious or not?
OK, well, given that responses to pain/pleasure can equally be explained by more direct evolutionary reasons, I’m not sure that the inference from action to experience is very useful. Why would you ever connect these things with experience rather than other, more directly measurable things?
But the point is definitely not that I have a magic bullet or easy solution: it’s that I think there’s a real and urgent question—are they conscious—which I don’t see how information about responses etc. can answer. Compare to the cases of containment, or heat, or life—all the urgent questions are already resolved before those issues are even raised.
As I say, the best way I know of to answer “is it conscious?” about X is to compare X to other systems whose consciousness I already have confidence levels about, and to look for commonalities and distinctions.
If there are alternative approaches that you think give us more reliable answers, I’d love to hear about them.
I have no reliable answers! And I have low meta-confidence levels (in that it seems clear to me that people and most other creatures are conscious but I have no confidence in why I think this)
If the Dennett position still sees this as a complete bafflement but thinks it will be resolved with the so-called ‘soft’ problem, I have less of an issue than I thought I did. Though I’d still regard the view that the issue will become clear as one of hope rather than evidence.
I’m splitting up my response to this into several pieces because it got long. Some other stuff:
I expect so, sure. For example, I report having experiences; one explanation of that (though hardly the only possible one) starts with my actually having experiences and progresses forward in a causal fashion.
Sure, there are many causal explanations of many phenomena, including but not limited to how bats use echolocation, that don’t posit subjective experience as part of their causal chain. For example, humans do all kinds of things without the subjective experience of doing them.
Certainly.
In the examples you give, yes, it is being ignored. So? Lots of things are being ignored in those examples… mass, electrical conductivity, street address, level of fluency in Russian, etc. If these things aren’t necessary to explain the examples, there’s nothing wrong with ignoring these things.
On the other hand, if we look at an example for which experience ought to be part of the causal chain (for example, as I note above, reporting having those experiences), subjective experience is not ignored. X happens, as a consequence of X a subjective experience Y arises, as a consequence of Y a report Z arises, and so forth. (Of course, for some reports we do have explanations that don’t presume Y… e.g., confabulation, automatic writing, etc. But that needn’t be true for all reports. Indeed, it would be surprising if it were.)
“But we don’t know what Xes give rise to the Y of subjective experience, so we don’t fully understand subjective experience!” Well, yes, that’s true. We don’t fully understand fluency in Russian, either. But we don’t go around as a consequence positing some mysterious essence of Russian fluency that resists neurobiological explanation… though two centuries ago, we might have done so. Nor should we. Neither should we posit some mysterious essence of subjective experience.
“But subjective experience is different! I can imagine what a mechanical explanation of Russian fluency would be like, but I can’t imagine what a mechanical explanation of subjective experience would be like.” Sure, I understand that. Two centuries ago, the notion of a mechanical explanation of Russian fluency would raise similar incredulity… how could a machine speak Russian? I’m not sure how I could go about answering such incredulity convincingly, but I don’t thereby conclude that machines can’t speak Russian. My incredulity may be resistant to my reason, but it doesn’t therefore compel or override my reason.
I have a lot of sympathy for this. The most plausible position of reductive materialism is simply that at some future scientific point this will become clear. But this is inevitably a statement of faith, rather than an acknowledgement of current achievement. It’s very hard to compare current apparent mysteries to solved mysteries—I do get that. Having said that, I can’t even see what the steps on the way to explaining consciousness would be, and claiming there is no such thing seems not to be an option (unlike ‘life’, ‘free will’ etc.), whereas in most other cases you rely on saying that you can’t see how the full extent could be achieved: a machine might speak crap Russian in some circumstances, etc.
Also, if a machine can speak Russian, you can check that. I don’t know how we’d check a machine was conscious.
BTW, when I said ‘it seems subjective experience is just being ignored’, I meant ignored in your and Dennett’s arguments, not in specific explanations. I have nothing against analysing things in ways that ignore consciousness, if they work.
I don’t know what the mechanical explanation would look like, either. But I’m sufficiently aware of how ignorant my counterparts two centuries ago would have been of what a mechanical explanation for speaking Russian would look like that I don’t place too much significance on my ignorance.
I agree that testing whether a system is conscious or not is a tricky problem. (This doesn’t just apply to artificial systems.)
Indeed: though artificial systems are more intuitively difficult as we don’t have as clear an intuitive expectation.
You can take an outside view and say ‘this will dissolve like the other mysteries’. I just genuinely find this implausible, if only because you can take steps towards the other mysteries (speaking bad Russian occasionally) and because you have a clear empirical standard (Russians). Whereas for consciousness I don’t have any standard for identifying another’s consciousness: I do it only by analogy with myself and by the implausibility of me having an apparently causal element that others who act similarly to me lack.
I agree that the “consciousness-detector” problem is a hard problem. I just can’t think of a better answer than the generalizing-from-commonalities strategy I discussed previously, so that’s the approach I go with. It seems capable of making progress for now.
And I understand that you find it implausible. That said, I suspect that if we solve the “soft” problem of consciousness well enough that a typical human is inclined to treat an artificial system as though it were conscious, it will start to seem more plausible.
Perhaps it will be plausible and incorrect, and we will happily go along treating computers as conscious when they are no such thing. Perhaps we’re currently going along treating dogs and monkeys and 90% of humans as conscious when they are no such thing.
Perhaps not.
Either way, plausibility (or the absence of it) doesn’t really tell us much.
Yes. This is what worries me: I can see more advances making everyone sure that computers are conscious, but my suspicion is that this will not be logical. Take the same processor and I suspect the chances of it being seen as conscious will rise sharply if it’s put in a moving machine, rise sharply again for a humanoid, again for face/voice and again for physically indistinguishable.
The problem with generalising from commonalities is that I have precisely one direct example of consciousness. Although having said that, I don’t find epiphenomenal accounts convincing, so it’s reasonable for me to think that as my statements about qualia seem to follow causally from experiencing said qualia, that other people don’t have a totally separate framework for their statements about qualia. I wouldn’t be that confident though, and it gets harder with artificial consciousness.
Sure. By the same token, if you take me, remove my ability to communicate, and encase me in an opaque cylinder, nobody will recognize me as a being with subjective experience. Or, for that matter, as a being with the ability to construct English sentences.
We are bounded intellects reasoning under uncertainty in a noisy environment. We will get stuff wrong. Sometimes it will be important stuff.
I agree. And, as I said initially, I apply the same reasoning not only to the statements I make in English, but to all manner of behaviors that “seem to rise from my qualia,” as you put it… all of it is evidence in favor of other organisms also having subjective experience, even organisms that don’t speak English.
How confident are you that I possess subjective experience?
Would that confidence rise significantly if we met in person and you verified that I have a typical human body?
Agreed.
Consciousness does seem different in that we can have a better and better understanding of all the various functional elements, but we’re 1) left with a sort of argument from analogy for others having qualia, and 2) even if we can resolve (1), I can’t see how we can start to know whether my green is your red, etc.
I can’t think of many comparable cases: certainly I don’t think containership is comparable. You and I could end up looking at the AI in the moment before it destroys/idealises/both the world and say ‘gosh, I wonder if it’s conscious’. This is nothing like the casuistic ‘but what about this container gives it its containerness’. I think we’re on the same point here, though?
I’m intuitively very confident you’re conscious; and yes, seeing you were human would help (in that one of the easiest ways I can imagine you weren’t conscious is that you’re actually a computer designed to post about things on Less Wrong. This would also explain why you like Dennett—I’ve always suspected he’s a qualia-less robot too! ;-)
Yes, I agree that we’re much more confused about subjective experience than we are about containership.
We’re also more confused about subjective experience than we are about natural language, about solving math problems, about several other aspects of cognition. We’re not _un_confused about those things, but we’re less confused than we used to be. I expect us to grow still less confused over time.
I disagree about the lack of comparable cases. I agree about containers; that’s just an intuition pump. But the issues that concern you here arise for any theoretical construct for which we have only indirect evidence. The history of science is full of such things. Electrons. Black holes. Many worlds. Fibromyalgia. Phlogiston. Etc.
What makes subjective experience different is not that we lack the ability to perceive it directly; that’s pretty common. What makes it different is that we can perceive it directly in one case, as opposed to the other stuff where we perceive it directly in zero cases.
Of course, it’s also different from many of them in that it matters to our moral reasoning in many cases. I can’t think of a moral decision that depends on whether phlogiston exists, but I can easily think of a moral decision that depends on whether cows have subjective experiences. OTOH, it still isn’t unique; some people make moral decisions that depend on the actuality of theoretical constructs like many worlds and PTSD.
Fair enough. As an intuition pump, for me at least, it’s unhelpful: it gave the impression that you thought that consciousness was merely a label being mistaken for a thing (like ‘life’ as something beyond its parts).
Only having indirect evidence isn’t the problem. For a black hole, I care about the observable functional parts. I wouldn’t be being sucked towards it and being crushed while going ‘but is it really a black hole?’ A black hole is like a container here: what matter are the functional bits that make it up. For consciousness, I care if a robot can reason and can display conscious-type behaviour, but I also care if it can experience and feel.
Many worlds could be comparable if there is evidence that means that there are ‘many worlds’ but people are vague about if these worlds actually exist. And you’re right, this is also a potentially morally relevant point.
Insofar as people infer from the fact of subjective experience that there is some essence of subjective experience that is, as you say, “beyond its parts” (and their patterns of interaction), I do in fact think they are mistaking a label for a thing.
I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and be left with an argument from analogy to say “they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky”.
You can observe all the externally observable, measurable things that a black hole or container can do, and then if someone argues about essences you wonder if they’re actually referring to anything: it’s a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict it for all useful purposes, and still not know if it’s conscious. This is bothersome. But it’s not to do with essences, necessarily.
Insofar as people don’t infer something else beyond the parts of (for example) my body and their pattern of interactions that account for (for example) my subjective experience, I don’t think they are mistaking a label for a thing.
Well, until we know how to identify if something/someone is conscious, it’s all a bit of a mystery: I couldn’t rule out consciousness being some additional thing. I have an inclination to do so because it seems unparsimonious, but that’s it.
I’m splitting up my response to this into several pieces because it got long.
The key bit, IMHO:
And I would agree with you.
“No,” replies A, “you miss the point completely. I don’t ask whether a container can contain things; clearly it can, I observe it doing so. I ask how it contains things. What is the explanation for its demonstrated ability to contain things? Containership is not just a function,” A insists, “though I understand you want to treat it as one. No, containership is a fundamental essence. You can’t simply ignore the hard question of ‘is X a container?’ in favor of thinking about simpler, merely functional questions like ‘can X contain Y?’. And, while we’re at it,” A continues, “what makes you think that an artificial container, such as we build all the time, is actually containing anything rather than merely emulating containership? Sure, perhaps we can’t tell the difference, but that doesn’t mean there isn’t a difference.”
I take it you don’t find A’s argument convincing, and neither do I, but it’s not clear to me what either of us could say to A that A would find at all compelling.
Maybe we couldn’t, but A is simply asserting that containership is a concept beyond its parts, whereas I’m appealing directly to experience: the relevance of this is that whether something has experience matters. Ultimately for any case, if others just express bewilderment in your concepts and apparently don’t get what you’re talking about, you can’t prove it’s an issue. But at any rate, most people seem to have subjective experience.
Being conscious isn’t a label I apply to certain conscious-type systems that I deem ‘valuable’ or ‘true’ in some way. Rather, I want to know what systems should be associated with the clearly relevant and important category of ‘conscious’.
My thoughts about how I go about associating systems with the expectation of subjective experience are elsewhere and I have nothing new to add to it here.
As regards you and A… I realize that you are appealing directly to experience, whereas A is merely appealing to containment, and I accept that it’s obvious to you that experience is importantly different from containment in a way that makes your position importantly non-analogous to A’s.
I have no response to A that I expect A to find compelling… they simply don’t believe that containership is fully explained by the permeability and topology of containers. And, you know, maybe they’re right… maybe some day someone will come up with a superior explanation of containerhood that depends on some previously unsuspected property of containers and we’ll all be amazed at the realization that containers aren’t what we thought they were. I don’t find it likely, though.
I also have no response to you that I expect you to find compelling. And maybe someday someone will come up with a superior explanation of consciousness that depends on some previously unsuspected property of conscious systems, and I’ll be amazed at the realization that such systems aren’t what I thought they were, and that you were right all along.
Are you saying you don’t experience qualia and find them a bit surprising (in a way you don’t for containerness)? I find it really hard not to see arguments of this kind as a little disingenuous: is the issue genuinely not difficult for some people, or is this a rhetorical stance intended to provoke better arguments, or awareness of the weakness of current arguments?
I have subjective experiences. If that’s the same thing as experiencing qualia, then I experience qualia.
I’m not quite sure what you mean by “surprising” here… no, it does not surprise me that I have subjective experiences, I’ve become rather accustomed to it over the years. I frequently find the idea that my subjective experiences are a function of the formal processes my neurobiology implements a challenging idea… is that what you’re asking?
Then again, I frequently find the idea that my memories of my dead father are a function of the formal processes my neurobiology implements a challenging idea as well. What, on your view, am I entitled to infer from that?
Yes, I meant surprising in light of other discoveries/beliefs.
On memory: is it the conscious experience that’s challenging (in which case it’s just a sub-set of the same issue) or do you find the functional aspects of memory challenging? Even though I know almost nothing about how memory works, I can see plausible models and how it could work, unlike consciousness.
Isn’t our objection to A’s position that it doesn’t pay rent in anticipated experience? If one thinks there is a “hard problem of consciousness” such that different answers would cause one to behave differently, then one must take up the burden of identifying what the difference would look like, even if we can’t create a measuring device to find it just now.
If A means that we cannot determine the difference in principle, then there’s nothing we should do differently. If A means that a measuring device does not currently exist, he needs to identify the range of possible outputs of the device.
This may be a situation where that’s a messy question. After all, qualia are experience. I keep expecting experiences, and I keep having experiences. Do experiences have to be publicly verifiable?
If two theories both lead me to anticipate the same experience, the fact that I have that experience isn’t grounds for choosing among them.
So, sure, the fact that I keep having experiences is grounds for preferring a theory of subjective-experience-explaining-but-otherwise-mysterious qualia over a theory that predicts no subjective experience at all, but not necessarily grounds for preferring it to a theory of subjective-experience-explaining-neural-activity.
They don’t necessarily once you start talking about uploads, or the afterlife for that matter.
Different answers to the HP would undoubtedly change our behaviour, because they would indicate that different classes of entity have feelings, which impacts morality. Indeed, it is pretty hard to think of anything with more impact.
The measuring device for conscious experience is consciousness, which is the whole problem.
Sure. But in this sense, false believed answers to the HP are no different from true believed answers… that is, they would both potentially change our behavior the way you describe.
I suspect that’s not what TimS meant.
That is the case for most any belief you hold (unless you mean “in the exact same way”, not as “change behavior”). You may believe there’s a burglar in your house, and that will impact your actions, whether it be false or true. Say you believe it’s more likely that there is a burglar; you are then correct in acting upon that belief even if it turns out to be incorrect. It’s not AIXI’s fault if it believes in the wrong thing for the right reasons.
In that sense, you can choose an answer for example based on complexity considerations. In the burglar example, the answer you choose (based on data such as crime rate, cat population etc.) can potentially be further experimentally “verified” (the probability increased) as true or false, but even before such verification, your belief can still be strong enough to act upon.
After all, you do act upon your belief that “I am not living in a simulation which will eventually judge and reward me only for the amount of cheerios I’ve eaten”. It also doesn’t lead to different expected experiences at the present time, yet you also choose to act thus as if it were true. Prior based on complexity considerations alone, yet strong enough to act upon. Same when thinking about whether the sun has qualia (“hot hot hot hot hot”).
(Bit of a hybrid fusion answer also meant to refer to our neighboring discussion branch.)
Cheerio!
Yes, I agree with all of this.
Well, in the case of “do landslides have qualia”, Occam’s Razor could be used to assign probabilities just the same as we assign probabilities in the “cheerio simulation” example. So we’ve got methodology, we’ve got impact, enough to adopt a stance on the “psychic unity of the cosmos”, no?
I’m having trouble following you, to be honest.
My best guess is that you’re suggesting that, with respect to systems that do not manifest subjective experience in any way we recognize or understand, Occam’s Razor provides grounds to be more confident that they have subjective experience than that they don’t.
If that’s what you mean, I don’t see why that should be.
If that’s not what you mean, can you rephrase the question?
I think it’s conceivable if not likely that Occam’s Razor would favor or disfavor qualia as a property of more systems than just those that seem to show or communicate them in terms we’re used to. I’m not sure which, but it is a question worth pondering, with an impact on how we view the world, and accessible through established methodology, to a degree.
I’m not advocating assigning a high probability to “landslides have raw experience”, I’m advocating that it’s an important question, the probability of which can be argued. I’m an advocate of the question, not the answer, so to speak. And as such opposed to “I really can’t see why anyone should care one way or the other”.
Ah, I see.
So, I stand by my assertion that in the absence of evidence one way or the other, I really can’t see why anyone should care.
But I agree that to the extent that Occam’s Razor type reasoning provides evidence, that’s a reason to care.
And if it provided strong evidence one way or another (which I don’t think it does, and I’m not sure you do either) that would provide a strong reason to care.
I have evidence in the form of my personal experience of qualia. Granted, I have no way of showing you that evidence, but that doesn’t mean I don’t have it.
Agreed that the ability to share evidence with others is not a necessary condition of having evidence. And to the extent that I consider you a reliable evaluator of (and reporter of) evidence, your report is evidence, and to that extent I have a reason to care.
The point has been made that we should care because qualia have moral implications.
Moral implications of a proposition in the absence of evidence one way or another for that proposition are insufficient to justify caring.
If I actually care about the experiences of minds capable of experiences, I do best to look for evidence for the presence or absence of such experiences.
Failing such evidence, I do best to concentrate my attention elsewhere.
It’s possible to have both a strong reason to care and weak evidence, i.e. due to the moral hazard depending on some doubtful proposition. People often adopt precautionary principles in such scenarios.
I don’t think that’s the situation here though. That sounds like a description of this situation: (imagine) we have weak evidence that 1) snakes are sapient, and we grant that 2) sapience is morally significant. Therefore (perhaps) we should avoid wanton harm to snakes.
Part of why this argument might make sense is that (1) and (2) are independent. Our confidence in (2) is not contingent on the small probability that (1) is true: whether or not snakes are sapient, we’re all agreed (let’s say) that sapience is morally significant.
On the other hand, the situation with qualia is one where we have weak evidence (suppose) that A) qualia are real, and we grant that B) qualia are morally significant.
The difference here is that (B) is false if (A) is false. So the fact that we have weak evidence for (A) means that we can have no stronger (and likely, we must have yet weaker) evidence for (B).
Does the situation change significantly if “the situation with qualia” is instead framed as A) snakes have qualia and B) qualia are morally significant?
Yes, if the implication of (A) is that we’re agreed on the reality of qualia but are now wondering whether or not snakes have them. No, if (A) is just a specific case of the general question ‘are qualia real?’. My point was probably put in a confusing way: all I mean to say was that Juno seemed to be arguing as if it were possible to be very confident about the moral significance of qualia while being only marginally confident about their reality.
(nods) Makes sense.
What? Are you saying we have weak evidence for qualia even in ourselves?
What I think of the case for qualia is beside the point, I was just commenting on your ‘moral hazard’ argument. There you said that even if we assume that we have only weak evidence for the reality of qualia, we should take the possibility seriously, since we can be confident that qualia are morally significant. I was just pointing out that this argument is made problematic by the fact that our confidence in the moral significance of qualia can be no stronger than our confidence in their reality, and therefore by assumption must be weak.
But of course it can. I can be much more confident in
(P → Q)
than I am in P. For instance, I can be highly confident that if I won the lottery, I could buy a yacht.
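A quick worked check of that point, with numbers invented purely for illustration: if “if P then Q” is read as the material conditional, confidence in it is bounded below by confidence in not-P, so it can be high even when P itself is very unlikely.

```python
# Sketch: P(P -> Q) = P(not P) + P(P and Q), so it can far exceed P(P).
p_P = 0.01          # e.g. "I win the lottery" (illustrative number)
p_Q_given_P = 0.90  # e.g. "given a win, I could buy a yacht"

p_conditional = (1 - p_P) + p_P * p_Q_given_P
print(p_conditional)  # 0.999, even though P(P) is only 0.01
```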
I am guessing that Juno_Watt means that strong evidence for our own perception of qualia makes them real enough to seriously consider their moral significance, whether or not they are “objectively real”.
Yes, they often do.
On your view, is there a threshold of doubtfulness of a proposition below which it is justifiable to not devote resources to avoiding the potential moral hazard of that proposition being true, regardless of the magnitude of that moral hazard?
I don’t think it’s likely my house will catch fire, but I take out fire insurance. OTOH, if I don’t set a lower bound I will be susceptible to Pascal’s muggings.
He may have meant something like “Qualiaphobia implies we would have no experiences at all”. However, that all depends on what you mean by experience. I don’t think the Expected Experience criterion is useful here (or anywhere else).
I realize that non-materialistic “intrinsic qualities” of qualia, which we perceive but which aren’t causes of our behavior, are incoherent. What I don’t fully understand is why have I any qualia at all. Please see my sibling comment.
Tentatively:
If it’s accepted that GREEN and RED are structurally identical, and that in virtue of this they are phenomenologically identical, why think that phenomenology involves anything, beyond structure, which needs explaining?
I think this is the gist of Dennett’s dissolution attempts. Once you’ve explained why your brain is in a seeing-red brain-state, why this causes a believing-that-there-is-red mental representation, onto a meta-reflection-about-believing-there-is-red functional process, etc., why think there’s anything else?
Phenomenology doesn’t involve anything beyond structure. But my experience seems to.
(nods) Yes, that’s consistent with what I’ve heard others say.
Like you, I don’t understand the question and have no idea of what an answer to it might look like, which is why I say I’m not entirely clear what question you/they claim is being answered. Perhaps it would be more correct to say I’m not clear how it differs from the question you/they want answered.
Mostly I suspect that the belief that there is a second question to be answered that hasn’t been is a strong, pervasive, sincere, compelling confusion, akin to “where does the bread go?”. But I can’t prove it.
Relatedly: I remember, many years ago, attending a seminar where a philosophy student protested to Dennett that he didn’t feel like the sort of process Dennett described. Dennett replied “How can you tell? Maybe this is exactly what the sort of process I’m describing feels like!”
I recognize that the traditional reply to this is “No! The sort of process Dennett describes doesn’t feel like anything at all! It has no qualia, it has no subjective experience!”
To which my response is mostly “Why should I believe that?” An acceptable alternative seems to be that subjective experience (“qualia”, if you like) is simply a property of certain kinds of computation, just as the ability to predict the future location of a falling object (“prescience”, if you like) is a property of certain kinds of computation.
To which one is of course free to reply “but how could prescience—er, I mean qualia—possibly be an aspect of computation??? It just doesn’t make any sense!!!” And I shrug.
Sure, if I say in English “prescience is an aspect of computation,” that sounds like a really weird thing to say, because “prescience” and “computation” are highly charged words with opposite framings. But if I throw out the English words and think about computing the state of the world at some future time, it doesn’t seem mysterious at all, and such computations have become so standard a part of our lives we no longer give it much thought.
When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.
Thanks for your reply and engagement.
I agree. We already know what we feel like. Once we know empirically what kind of process we are, we can indeed conclude that “that’s what that kind of process feels like”.
What I don’t understand is why being some kind of process feels like anything at all. Why it seems to me that I have qualia in the first place.
I do understand why it makes sense for an evolved human to have such beliefs. I don’t know if there is a further question beyond that. As I said, I don’t know what an answer would even look like.
Perhaps I should just accept this and move on. Maybe “being mystified about qualia” is just what the kind of process that humans are feels like! As an analogy, humans have religious feelings with apparently dedicated neurological underpinnings. Some humans feel the numinous strongly, and they ask for an answer to the Mystery of God, which to them appears as obvious as any qualia.
However, a more satisfactory answer (if one is possible) would be an exploration and explanation of mind-space and its accompanying qualia. Perhaps if I understood the actual causal link from the kind of process I am to the qualia I have, part of the apparent mystery would go away.
Does being like some other kind of process “feel like” anything? Like what? Would it be meaningful for me to experience something else without becoming something else? Are the qualia of a cat separate from being a cat? Or would I have to have a cat-mind and forget all about being human and verbal and DanArmak to experience the qualia of a cat, at which point I’d be no different from any existing cat, and I wouldn’t remember it on becoming human again?
I agree. To clarify, I believe all of these propositions:
Full materialism
Humans are physical systems that have self-awareness (“consciousness”) and talk about it
That isn’t a separate fact that could be otherwise (p-zombies); it’s highly entangled with how human brains operate
Other beings, completely different physically, would still behave the same if they instantiated the same computation (this is pretty much tautological)
If the computation that is myself is instantiated differently (as in an upload or em), it would still be conscious and report subjective experience (if it didn’t, it would be a very poor emulation!)
If I am precisely cloned, I should anticipate either clone’s experience with 50% probability; but after finding out which clone I am, I would not expect to suddenly “switch” to experiencing being the other clone. I also would not expect to somehow experience being both clones, or anything else. (I’m less sure about this because it’s never happened yet. And I don’t understand quantum mechanics, so I can’t properly appreciate the arguments that say we’re already being split all the time anyway. Nevertheless, I see no sensible alternative, so I still accept this.)
Shouldn’t you anticipate being either clone with 100% probability, since both clones will make that claim and neither can be considered wrong?
What I meant is that some time after the cloning, the clones’ lives would become distinguishable. One of them would experience X, while the other would experience ~X. Then I would anticipate experiencing X with 50% probability.
If they live identical lives forever, then I can anticipate “being either clone” or, as I would call it, “not being able to tell which clone I am”.
My first instinctive response is “be wary of theories of personal identity where your future depends on a coin flip”. You’re essentially saying “one of the clones believes that it is your current ‘I’ experiencing ‘X’, and it has a 50% chance of being wrong”. That seems off.
I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability. The problem is that the way anticipation works with probability depends implicitly on there only being one future self that things can happen to.
No, I’m not saying that.
I’m saying: first both clones believe “anticipate X with 50% probability”. Then one clone experiences X, and the other ~X. After that they know what they experienced, so of course one updates to believe “I experienced X with ~1 probability” and the other “I experienced ~X with ~1 probability”.
I think we need to unpack “experiencing” here.
I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.
If X takes nontrivial time, such that one can experience “X is going on now”, then I anticipate ever experiencing that with 50% probability.
But that means there is always (100%) a future state of you that has experienced X, and always (100%) a separate future state that has experienced ~X. I think there’s some similarity here to the problem of probability in a many-worlds universe, except in this case both versions can still interact. I’m not sure how that affects things myself.
You’re right, there’s a contradiction in what I said. Here’s how to resolve it.
At time T=1 there is one of me, and I go to sleep. While I sleep, a clone of me is made and placed in an identical room. At T=2 both clones wake up. At T=3 one clone experiences X. The other doesn’t (and knows that he doesn’t).
So, what should my expected probability for experiencing X be?
At T=3 I know for sure, so it goes to 1 for one clone and 0 for the other.
At T=2, the clones have woken up, but neither yet knows which he is. Therefore each expects X with 50% probability.
At T=1, before going to sleep, there isn’t a single number that is the correct expectation. This isn’t because probability breaks down, but because the concept of “my future experience” breaks down in the presence of clones. Neither 50% nor 100% is right.
50% is wrong for the reason you point out. 100% is also wrong, because X and ~X are symmetrical. Assigning 100% to X means 0% to ~X.
So in the presence of expected future clones, we shouldn’t speak of “what I expect to experience” but “what I expect a clone of mine to experience”—or “all clones”, or “p proportion of clones”.
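As a toy illustration of that bookkeeping (a sketch of my own, with invented names, not anything proposed in the thread): the well-defined quantity is the proportion of clones that experience X, and an awake-but-uninformed clone at T=2 can use that proportion as its credence.

```python
# Toy sketch of the clone scenario: replace "what I expect to experience"
# with "what proportion of my clones experiences X".

def proportion_experiencing(outcomes):
    """outcomes: one boolean per clone, True if that clone experiences X."""
    return sum(outcomes) / len(outcomes)

# T=1: one original goes to sleep; a clone is made while he sleeps.
# T=3: one clone experiences X, the other experiences ~X.
outcomes = [True, False]

print(proportion_experiencing(outcomes))  # 0.5
# At T=2 each awake clone, not yet knowing which it is, can adopt 0.5 as its
# credence in X. At T=1 "the probability that I will experience X" has no
# single well-defined value, but this proportion does.
```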
Suppose I’m ~100% confident that, while we sleep tonight, someone will paint a blue dot on either my forehead or my husband’s but not both. In that case, I am ~50% confident that I will see a blue dot, I am ~100% confident that one of us will see a blue dot, I am ~100% confident that one of us will not see a blue dot.
If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to “one of us will see a blue dot” means assigning ~0% to “one of us will not see a blue dot”, I would reply that they are deeply confused. The noun phrase “one of us” simply doesn’t behave that way.
In the scenario you describe, the noun phrase “I” doesn’t behave that way either.
I’m ~100% confident that I will experience X, and I’m ~100% confident that I will not experience X.
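For what it’s worth, the blue-dot credences can be checked mechanically by enumerating the two equally likely worlds (a sketch of my own; the names are invented for the example):

```python
# Enumerate the two worlds of the blue-dot example and compute the credences.
from fractions import Fraction

# World 1: the dot is painted on me; World 2: it is painted on my husband.
worlds = [{"me": True, "husband": False}, {"me": False, "husband": True}]

def prob(event):
    """Probability of an event over equally likely worlds."""
    return Fraction(sum(event(w) for w in worlds), len(worlds))

print(prob(lambda w: w["me"]))                          # 1/2: I see a dot
print(prob(lambda w: w["me"] or w["husband"]))          # 1: one of us sees a dot
print(prob(lambda w: not w["me"] or not w["husband"]))  # 1: one of us does not
```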
I really find that subscripts help here.
In your example, you anticipate your own experiences, but not your husband’s experiences. I don’t see how this is analogous to a case of cloning, where you equally anticipate both.
I’m not saying that “[exactly] one of us will see a blue dot” and “[neither] one of us will not see a blue dot” are symmetrical; that would be wrong. What I was saying was that “I will see a blue dot” and “I will not see a blue dot” are symmetrical.
All the terminologies that have been proposed here—by me, and you, and FeepingCreature—are just disagreeing over names, not real-world predictions.
I think the quoted statement is at the very least misleading because it’s semantically different from other grammatically similar constructions. Normally you can’t say “I am ~1 confident that [Y] and also ~1 confident that [~Y]”. So “I” isn’t behaving like an ordinary object. That’s why I think it’s better to be explicit and not talk about “I expect” at all in the presence of clones.
My comment about “symmetrical” was intended to mean the same thing: that when I read the statement “expect X with 100% probability”, I normally parse it as equivalent to “expect ~X with 0% probability”, which would be wrong here. And X and ~X are symmetrical by construction in the sense that every person, at every point in time, should expect X and ~X with the same probability (whether you call it “both 50%” like I do, or “both 100%” like FeepingCreature prefers), until of course a person actually observes either X or ~X.
In my example, my husband and I are two people, anticipating the experience of two people. In your example, I am one person, anticipating the experience of two people. It seems to me that what my husband and I anticipate in my example is analogous to what I anticipate in your example.
But, regardless, I agree that we’re just disagreeing about names, and if you prefer the approach of not talking about “I expect” in such cases, that’s OK with me.
One thing you seem to know but keep forgetting is the distinction between your current state and recorded memories. Memories use extreme amounts of lossy and biased compression, and some of your confusion seems to come from looking at your current experience while explicitly thinking about this stuff, and then generalizing it as something continuous over time and applicable to a wider range of mental states than it actually is.
Sure, that makes sense.
As far as I know, current understanding of neuroanatomy hasn’t identified the particular circuits responsible for that experience, let alone the mechanism whereby the latter cause the former. (Of course, the same could be said for speaking English.)
But I can certainly see how having such an explanation handy might help if I were experiencing the kind of insistent sense of mysteriousness you describe (for subjective experience or for speaking English).
Hmm, to your knowledge, has the science of neuroanatomy ever discovered any circuits responsible for any experience?
Quick clarifying question: How small does something need to be for you to consider it a “circuit”?
It’s more a matter of discreteness than smallness: I would say I need to be able to identify the loop.
Second clarifying question, then: Can you describe what ‘identifying the loop’ would look like?
Well, I’m not sure. I’m not confident there are any neural circuits, strictly speaking. But I suppose I don’t have anything much more specific than ‘loop’ in mind: it would have to be something like a path that returns to an origin.
In the sense of the experience not happening if that circuit doesn’t work, yes.
In the sense of being able to give a soup-to-nuts story of how events in the world result in a subjective experience that has that specific character, no.
I guess I mean: has the science of neuroanatomy discovered any circuits whatsoever?
I am having trouble knowing how to answer your question, because I’m not sure what you’re asking.
We have identified neural structures that are implicated in various specific things that brains do.
Does that answer your question?
I’m not very up to date on neurobiology, so when I saw your comment that we had not found the specific circuits for some experience, I was surprised by the implication that we had found that there are neural circuits at all. To my knowledge, all we’ve got is fMRI captures showing changes in blood flow, which we assume to be correlated in some way with synaptic activity. I wondered if you were using ‘circuit’ literally, or if you intended a reference to the oft-used brain-computer metaphor. I’m quite interested to know how appropriate that metaphor is.
Ah! Thanks for the clarification. No, I’m using “circuit” entirely metaphorically.