Hard Problem has no criteria for what would comprise a satisfying explanation; no way to distinguish a correct explanation from an incorrect one.
I feel like most of your comment is unfair, except for this part. Let me attempt to make it more concrete for you.
Suppose a future scientist offers you technological immortality, but the procedure will physically destroy your brain over time, replacing it with synthetic parts. Your new synthetic brain won’t fail from old age and, unlike a biological brain, can be backed up (and reconstructed) to protect against its inevitable accidental destruction over the coming eons. Do you take his offer? What assurances do you need? If you’re wrong about certain details and accept, you die (brain destroyed), so you’d better get it right.
I expect that assurances one could rationally accept would constitute a solution to the Hard Problem. But maybe you’ll surprise me. This scenario is a crux for me (well, one of a few, perhaps): were it addressed, I would either consider the Hard Problem solved, or else decide that I have no reason left to care about it.
The scenario has a number of assumptions that may not hold for you. But I can only guess. Can we agree on the following?
Humans do not have ectoplasmic-ghost souls or the like. Rather, the mind more directly inhabits the brain, and if the brain is destroyed, you have permanently died. Gradually replacing your brain with the wrong sort of synthetic parts (such as plastic) will kill you.
The physical molecules of the brain are completely replaced by natural biological processes over time; i.e., your mind is not your brain’s atoms but rather something about their structure, and therefore a procedure like the one offered could (in principle) work.
There are physical structures, including complex (even biological) ones, that are not alive in the sense of being conscious/aware. I.e., panpsychism is false.
Automatons (chatbots?) can say they are conscious when they are not. I.e., zombies can be constructed, in principle, and a procedure like this could replace you with one. (This is not a Chalmers “p-zombie”. Its brain is synthetic, and thus physically distinguishable from a normal biological human’s.)
Thank you for this reply, I think this helps to pin down where our disagreement comes from.
Technically I don’t disagree with your assumptions, because I think it’s equally valid to say they’re true as to say they’re false, which is exactly the issue I have with them. There doesn’t seem to be a fact of the matter about them (i.e., there’s no way to experimentally distinguish a world in which any of these assumptions holds from one in which it doesn’t), so if the existence of the Hard Problem is derived from them, then that doesn’t alleviate the issue of its unfalsifiability.
The cause of this issue is that (from my point of view) many of the words you’re using don’t have clear definitions in the domain you’re trying to use them in. I don’t mean to be a pedant, but if we’re really trying to use language for extraordinary investigations like these, then I think precision is warranted.

For now, let me just focus on the thought experiment you posed. The way I see it, it’s equivalent to the Ship of Theseus. What we’re ultimately trying to grapple with is how best to model reality, and it seems to me that we already have a perfectly good model for solving both the Ship of Theseus and your thought experiment, namely particle physics. If you look at the Ship of Theseus, or at a person’s brain or body (or a piece of text they wrote), these are collections of particles that create a causal chain to somebody saying “Hey, it’s the Ship of Theseus!” or “Hey, gilch wrote a reply!” Over time, some of those particles may get swapped for others, and we may keep using the same name or we may not. There’s no mystery or contradiction there: it’s a bunch of particles doing their thing, and names are patterns in those particles, for example in the air when we speak them or in silicon when we type them on a phone.
Do we think about the world in terms of fundamental particles? No; that would be wildly impractical, so through our evolution and the evolution of language we’ve been forced to resort to much simpler models and heuristics. Daniel Dennett has this idea of “folk psychology”, which is specifically about how we model other people’s behavior in terms of things like “belief”, “desire”, “fear”, and “hope”. This model works most of the time, but it breaks down when you try to use it to model, for example, the behavior of a schizophrenic person, or the behavior of a dead person. You can extend this idea to a kind of “folk reality”, in which we model the world in terms of “people”, “alive”, “dead”, “conscious”, “justice”, “love”, and pretty much all other words, and which can similarly break down when we try to communicate about things those words are not normally used to communicate about.
If you like, I could go into detail about how this applies to each of your assumptions, but for now I’ll do so just for your last one. Consciousness in normal usage is a word that evolved to mean something like “able to respond appropriately to its surroundings”, so a person who is sleeping or knocked out is basically unconscious; that’s enough for practical, daily usage. Similarly, we say humans normally are conscious, other primates and mammals maybe a little less, insects maybe, and plants not really; i.e., the fewer traits a thing has that we recognize in humans, the less conscious we say it is. This is already a bit less practical and more academic, but it affects how we behave. (For example, vegans claim eating animals is bad while eating plants is okay, even though they’re absurdly glossing over whether plants can feel pain, which is not clear at all.)

Over time, the evolution of language (which is a product of both chance and deliberate human decisions) adapted the meaning of words like “consciousness” to remain a useful part of folk reality. Our intuitions about the meanings of words, and in turn about reality, depend on how we see these words being used as we grow up, even if they don’t model reality correctly; we always end up with somewhat mistaken intuitions, because folk reality does not model reality exactly.

And now, quite suddenly, we’ve ended up in a situation where there are machines that can behave in a way we’re only used to recognizing in humans, so there’s a lot of confusion over whether they are conscious. Again, from a particle physics perspective it’s clear what’s going on: it’s particles doing their thing like they always have. Some particles are arranged in a structure we haven’t seen before; so what? However, our folk-reality model breaks down because it’s imprecise and not adapted to this new situation. That’s also not an issue in itself; language and intuitions just have to adapt. Maybe we’ll come to a consensus that these machines are just as conscious as we are, or maybe we’ll see them as inferior and therefore treat them with greater indifference, even though how we describe them doesn’t actually change their nature, just our perception and treatment of them.
The real problems begin when people assume that their intuitions are true and fail to recognize that our intuitions and language are models of reality (largely inherited from cultures before us that had much less experience with the world) and that they frequently don’t generalize well. So when I encounter something like the Hard Problem, I throw intuitions like “I really feel like I’m experiencing things, so I can’t just be an automaton” out the window, because going down that road just leads to a bunch of useless contradictions. I conclude that whatever is going on must be made possible by particles doing their thing and nothing else, at least until I encounter a better model.
As for whether I would choose to undergo the procedure, I probably would. I don’t see any meaningful difference between my brain being replaced by new synthetic material and its being replaced by new biological material. In fact, according to my intuition (perhaps mistaken), my future counterpart with a 100% biological brain would be just as much a different person from me as my alternative future counterpart with a 100% synthetic brain.