(nods) Yes, that’s consistent with what I’ve heard others say.
Like you, I don’t understand the question and have no idea of what an answer to it might look like, which is why I say I’m not entirely clear what question you/they claim is being answered. Perhaps it would be more correct to say I’m not clear how it differs from the question you/they want answered.
Mostly I suspect that the belief that there is a second question to be answered that hasn’t been answered is a strong, pervasive, sincere, compelling confusion, akin to “where does the bread go?”. But I can’t prove it.
Relatedly: I remember, many years ago, attending a seminar where a philosophy student protested to Dennett that he didn’t feel like the sort of process Dennett described. Dennett replied “How can you tell? Maybe this is exactly what the sort of process I’m describing feels like!”
I recognize that the traditional reply to this is “No! The sort of process Dennett describes doesn’t feel like anything at all! It has no qualia, it has no subjective experience!”
To which my response is mostly “Why should I believe that?” An acceptable alternative seems to be that subjective experience (“qualia”, if you like) is simply a property of certain kinds of computation, just as the ability to predict the future location of a falling object (“prescience”, if you like) is a property of certain kinds of computation.
To which one is of course free to reply “but how could prescience—er, I mean qualia—possibly be an aspect of computation??? It just doesn’t make any sense!!!” And I shrug.
Sure, if I say in English “prescience is an aspect of computation,” that sounds like a really weird thing to say, because “prescience” and “computation” are highly charged words with opposite framings. But if I throw out the English words and think about computing the state of the world at some future time, it doesn’t seem mysterious at all, and such computations have become so standard a part of our lives that we no longer give them much thought.
When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.
Thanks for your reply and engagement.
How can you tell? Maybe this is exactly what the sort of process I’m describing feels like!
I agree. We already know what we feel like. Once we know empirically what kind of process we are, we can indeed conclude that “that’s what that kind of process feels like”.
What I don’t understand is why being some kind of process feels like anything at all. Why it seems to me that I have qualia in the first place.
I do understand why it makes sense for an evolved human to have such beliefs. I don’t know if there is a further question beyond that. As I said, I don’t know what an answer would even look like.
Perhaps I should just accept this and move on. Maybe it’s just the case that “being mystified about qualia” is what the kind of process that humans are is supposed to feel like! As an analogy, humans have religious feelings with apparently dedicated neurological underpinnings. Some humans feel the numinous strongly, and they ask for an answer to the Mystery of God, which to them appears as obvious as any qualia.
However, a more satisfactory answer (if one is possible) would be an exploration and explanation of mind-space and its accompanying qualia. Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away.
Does being some other kind of process “feel like” anything? Like what? Would it be meaningful for me to experience something else without becoming something else? Are the qualia of a cat separate from being a cat? Or would I have to have a cat-mind and forget all about being human and verbal and DanArmak to experience the qualia of a cat, at which point I’d be no different from any existing cat, and I wouldn’t remember any of it on becoming human again?
When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.
I agree. To clarify, I believe all of these propositions:
Full materialism
Humans are physical systems that have self-awareness (“consciousness”) and talk about it
That isn’t a separate fact that could be otherwise (contra p-zombies); it’s highly entangled with how human brains operate
Other beings, completely different physically, would still behave the same if they instantiated the same computation (this is pretty much tautological)
If the computation that is myself is instantiated differently (as in an upload or em), it would still be conscious and report subjective experience (if it didn’t, it would be a very poor emulation!)
If I am precisely cloned, I should anticipate either clone’s experience with 50% probability; but after finding out which clone I am, I would not expect to suddenly “switch” to experiencing being the other clone. I also would not expect to somehow experience being both clones, or anything else. (I’m less sure about this because it’s never happened yet. And I don’t understand quantum mechanics, so I can’t properly appreciate the arguments that say we’re already being split all the time anyway. Nevertheless, I see no sensible alternative, so I still accept this.)
Shouldn’t you anticipate being either clone with 100% probability, since both clones will make that claim and neither can be considered wrong?
What I meant is that some time after the cloning, the clones’ lives would become distinguishable. One of them would experience X, while the other would experience ~X. Then I would anticipate experiencing X with 50% probability.
If they live identical lives forever, then I can anticipate “being either clone” or, as I would call it, “not being able to tell which clone I am”.
My first instinctive response is “be wary of theories of personal identity where your future depends on a coin flip”. You’re essentially saying “one of the clones believes that it is your current ‘I’ experiencing ‘X’, and it has a 50% chance of being wrong”. That seems off.
I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability. The problem is that the way anticipation works with probability depends implicitly on there only being one future self that things can happen to.
You’re essentially saying “one of the clones believes that it is your current ‘I’ experiencing ‘X’, and it has a 50% chance of being wrong”.
No, I’m not saying that.
I’m saying: first both clones believe “anticipate X with 50% probability”. Then one clone experiences X, and the other ~X. After that they know what they experienced, so of course one updates to believe “I experienced X with ~1 probability” and the other “I experienced ~X with ~1 probability”.
I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability.
I think we need to unpack “experiencing” here.
I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.
If X takes nontrivial time, such that one can experience “X is going on now”, then I anticipate ever experiencing that with 50% probability.
I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.
What I meant is that some time after the cloning, the clones’ lives would become distinguishable. One of them would experience X, while the other would experience ~X.
But that means there is always (100%) a future state of you that has experienced X, and always (100%) a separate future state that has experienced ~X. I think there’s some similarity here to the problem of probability in a many-worlds universe, except in this case both versions can still interact. I’m not sure how that affects things myself.
You’re right, there’s a contradiction in what I said. Here’s how to resolve it.
At time T=1 there is one of me, and I go to sleep.
While I sleep, a clone of me is made and placed in an identical room.
At T=2 both clones wake up.
At T=3 one clone experiences X. The other doesn’t (and knows that he doesn’t).
So, what should my expected probability for experiencing X be?
At T=3 I know for sure, so it goes to 1 for one clone and 0 for the other.
At T=2, the clones have woken up, but neither yet knows which one he is. Therefore each expects X with 50% probability.
At T=1, before going to sleep, there isn’t a single number that is the correct expectation. This isn’t because probability breaks down, but because the concept of “my future experience” breaks down in the presence of clones. Neither 50% nor 100% is right.
50% is wrong for the reason you point out. 100% is also wrong, because X and ~X are symmetrical. Assigning 100% to X means 0% to ~X.
So in the presence of expected future clones, we shouldn’t speak of “what I expect to experience” but “what I expect a clone of mine to experience”—or “all clones”, or “p proportion of clones”.
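To make that bookkeeping concrete, here is a minimal sketch in Python (my own illustration, not anything proposed in the thread; the two-clone setup and a single binary experience X are the only assumptions), tracking credences per clone rather than a single “what I will experience” number:

```python
# Illustrative only: two exact copies, exactly one of which will
# experience X at T=3. "Credence" = the probability a copy assigns
# to "I will experience X".

clones = [
    {"room": 1, "experiences_X": True},
    {"room": 2, "experiences_X": False},
]

# T=2: each copy is awake but cannot tell which room it is in, so its
# credence is the proportion of copies that go on to experience X.
credence_t2 = sum(c["experiences_X"] for c in clones) / len(clones)
print(credence_t2)  # 0.5

# T=3: each copy observes its own outcome and updates to 1 or 0.
for c in clones:
    print(c["room"], 1.0 if c["experiences_X"] else 0.0)

# T=1 is the odd one out: "half of my successors experience X" is
# well-defined, but "the probability that I will experience X" is not,
# because "I" has two successors.
```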
Suppose I’m ~100% confident that, while we sleep tonight, someone will paint a blue dot on either my forehead or my husband’s but not both. In that case, I am ~50% confident that I will see a blue dot; ~100% confident that one of us will see a blue dot; and ~100% confident that one of us will not see a blue dot.
If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to “one of us will see a blue dot” means assigning ~0% to “one of us will not see a blue dot”, I would reply that they are deeply confused. The noun phrase “one of us” simply doesn’t behave that way.
In the scenario you describe, the noun phrase “I” doesn’t behave that way either.
I’m ~100% confident that I will experience X, and I’m ~100% confident that I will not experience X.
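As a sanity check on the arithmetic (my own illustration, not part of the original comment), the three confidence claims can be verified by enumerating the two equally likely worlds, each modeled as the set of people who see a dot:

```python
# Each world is the set of people who see a blue dot; by assumption,
# exactly one of us is painted, with equal probability.
worlds = [{"me"}, {"husband"}]

def prob(statement):
    """Fraction of equally likely worlds in which `statement` holds."""
    return sum(statement(w) for w in worlds) / len(worlds)

print(prob(lambda w: "me" in w))   # 0.5  "I will see a blue dot"
print(prob(lambda w: len(w) > 0))  # 1.0  "one of us will see a blue dot"
print(prob(lambda w: len(w) < 2))  # 1.0  "one of us will not see a blue dot"
```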
I really find that subscripts help here.
In your example, you anticipate your own experiences, but not your husband’s experiences. I don’t see how this is analogous to a case of cloning, where you equally anticipate both.
If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to “one of us will see a blue dot” means assigning ~0% to “one of us will not see a blue dot”, I would reply that they are deeply confused.
I’m not saying that “[exactly] one of us will see a blue dot” and “[neither] one of us will see a blue dot” are symmetrical; that would be wrong. What I was saying was that “I will see a blue dot” and “I will not see a blue dot” are symmetrical.
I’m ~100% confident that I will experience X, and I’m ~100% confident that I will not experience X.
All the terminologies that have been proposed here—by me, and you, and FeepingCreature—are just disagreeing over names, not real-world predictions.
I think the quoted statement is at the very least misleading because it’s semantically different from other grammatically similar constructions. Normally you can’t say “I am ~1 confident that [Y] and also ~1 confident that [~Y]”. So “I” isn’t behaving like an ordinary object. That’s why I think it’s better to be explicit and not talk about “I expect” at all in the presence of clones.
My comment about “symmetrical” was intended to mean the same thing: that when I read the statement “expect X with 100% probability”, I normally parse it as equivalent to “expect ~X with 0% probability”, which would be wrong here. And X and ~X are symmetrical by construction in the sense that every person, at every point in time, should expect X and ~X with the same probability (whether you call it “both 50%” like I do, or “both 100%” like FeepingCreature prefers), until of course a person actually observes either X or ~X.
In your example, you anticipate your own experiences, but not your husband’s experiences. I don’t see how this is analogous to a case of cloning, where you equally anticipate both.
In my example, my husband and I are two people, anticipating the experience of two people. In your example, I am one person, anticipating the experience of two people. It seems to me that what my husband and I anticipate in my example is analogous to what I anticipate in your example.
But, regardless, I agree that we’re just disagreeing about names, and if you prefer the approach of not talking about “I expect” in such cases, that’s OK with me.
One thing you seem to know but keep forgetting is the distinction between your current state and recorded memories. Memories use extreme amounts of lossy and biased compression, and some of your confusion seems to come from looking at your current experience while explicitly thinking about this stuff, and then generalizing it as something continuous over time and applicable to a wider range of mental states than it actually is.
Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away.
Sure, that makes sense.
As far as I know, current understanding of neuroanatomy hasn’t identified the particular circuits responsible for that experience, let alone the mechanism whereby those circuits cause that experience. (Of course, the same could be said for speaking English.)
But I can certainly see how having such an explanation handy might help if I were experiencing the kind of insistent sense of mysteriousness you describe (for subjective experience or for speaking English).
Hmm, to your knowledge, has the science of neuroanatomy ever discovered any circuits responsible for any experience?
Quick clarifying question: How small does something need to be for you to consider it a “circuit”?
It’s more a matter of discreteness than smallness: I would say I need to be able to identify the loop.
Second clarifying question, then: Can you describe what ‘identifying the loop’ would look like?
Well, I’m not sure. I’m not confident there are any neural circuits, strictly speaking. But I suppose I don’t have anything much more specific than ‘loop’ in mind: it would have to be something like a path that returns to an origin.
In the sense of the experience not happening if that circuit doesn’t work, yes. In the sense of being able to give a soup-to-nuts story of how events in the world result in a subjective experience that has that specific character, no.
I guess I mean: has the science of neuroanatomy discovered any circuits whatsoever?
I am having trouble knowing how to answer your question, because I’m not sure what you’re asking. We have identified neural structures that are implicated in various specific things that brains do. Does that answer your question?
I’m not very up to date on neurobiology, and so when I saw your comment that we had not found the specific circuits for some experience, I was surprised by the implication that we had found that there are neural circuits at all. To my knowledge, all we’ve got is fMRI captures showing changes in blood flow, which we assume to be correlated in some way with synaptic activity. I wondered if you were using ‘circuit’ literally, or if you intended a reference to the oft-used brain-computer metaphor. I’m quite interested to know how appropriate that metaphor is.
Ah! Thanks for the clarification. No, I’m using “circuit” entirely metaphorically.