I don’t have a model which I believe with certainty, even assuming MWI is true.
I think that, given MWI, your consciousness is in any world in which you exist, so that if you kill yourself in the other worlds, you only exist in worlds in which you didn’t kill yourself. I’m not sure what else could happen; obviously you can’t exist in the worlds you’re dead in.
What happens if you die in a non-MWI world? Pretty much the same as in the MWI case with a random branch choice. If your random branch happens to be a bad one, you cease to exist, and maybe some of your clones in other branches are still alive.
So at time t, the data is already determined from the computer’s perspective, but not from mine. At t+dt, the data is determined from my perspective, as I’ve awoken. In the time between t and t+dt, it’s meaningless to ask what “branch” I’m in; there’s no test I can do to determine that in theory, as I only awaken if I’m in the data=n branch. It’s meaningful to other people, but not to me. I don’t see anywhere that requires non-local laws in this scenario.
Non-locality is required if you claim that you (that copy of you which has your consciousness) will always wake up. Otherwise, it’s just a twisted version of Russian roulette and has nothing to do with quantum mechanics.
At time t, the computer either shoots you, or not. At time t + dt, its bullet kills you (or not). So you say that at time t you will go to the branch where the computer doesn’t kill you. But such a choice of a branch requires information at time t + dt (whether you are alive or not in that branch). So, physical laws have to perform a look-ahead in time to decide in which Everett branch they should put your consciousness.
Now, imagine that your (quantum) computer generates a random number n from the Poisson distribution. Then, it will kill you after n days.
Now n = … what? Well, thanks to thermodynamics, your (and the computer’s) lifespan is limited, so hopefully it will be a finite number—but, look, if the universe allowed an unbounded lifespan, this would be a logical contradiction in the physical laws. Anyway, you see that the look-ahead in time required after the random number generation can be arbitrarily large. That’s what I mean by non-locality here.
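To make the setup concrete, here is a toy sketch in Python; `mean_days` is a hypothetical parameter of the computer’s generator. The point is only that a Poisson distribution’s support is unbounded, so no finite bound on the look-ahead suffices:

```python
import math
import random

def sample_poisson(mean_days: float) -> int:
    """Sample n from a Poisson distribution via Knuth's algorithm."""
    threshold = math.exp(-mean_days)
    n, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return n
        n += 1

def prob_lookahead_exceeds(mean_days: float, limit: int) -> float:
    """P(n > limit): mathematically positive for every finite limit,
    so the look-ahead after the draw has no finite bound."""
    cdf = sum(math.exp(-mean_days) * mean_days**k / math.factorial(k)
              for k in range(limit + 1))
    return 1.0 - cdf
```

(Floating-point rounding will eventually report 0.0 for very large limits, but the true tail probability is always positive.)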
Non-locality is required if you claim that you (that copy of you which has your consciousness)
I deny that this is meaningful. If there are two copies of me, both “have my consciousness”. I fail to see any sense in which my consciousness must move to only one copy.
So you say that at time t you will go to the branch where the computer doesn’t kill you.
I do not claim that. I claim that I exist in both branches, up until one of them no longer contains my consciousness, because I’m dead, and then I only exist in one branch. (In fact, I can consider my sleeping self unconscious, in which case no branches contained my consciousness until I woke up.)
Now, imagine that your (quantum) computer generates a random number n from the Poisson distribution. Then, it will kill you after n days.
Then many copies of my consciousness will exist, some slowly dying each day.
So, physical laws have to perform a look-ahead in time to decide in which Everett branch they should put your consciousness.
I don’t have any look-ahead required in my model at all.
Can you dissolve consciousness? What test can be performed to see which branch my consciousness has moved to, that doesn’t require me to be awake, nor have knowledge of the random data?
OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers? I don’t see how simultaneously being in different branches makes sense from the qualia viewpoint.
Also, let’s remove sleeping from the thought experiment. It is an unnecessary complication; by the way, I don’t think that consciousness flow is interrupted while sleeping.
OK, now imagine that the computer shows you the number n on its screen. What will you see? You say that both copies have your consciousness; will you see a superposition of numbers?
No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout. Until my brain has any info about what the data is, my consciousness hasn’t forked yet. The fact that the info is “out there” in this world is irrelevant; the opposite data is also out there “in this world”. As long as I don’t know, and both actually exist (although that requirement arguably is also irrelevant to the anthropic math), I exist in both worlds. In other words, both copies will be “continuations” of me. If one suddenly disappears, then only the other “continues” me.
Also, let’s remove sleeping from the thought experiment. It is an unnecessary complication; by the way, I don’t think that consciousness flow is interrupted while sleeping.
There’s a reason I included it. I’m more confident that the outcome will be good with it than without. In particular, if I’m not sleeping when killed, I expect to experience death.
But the fact that you think it’s not interrupted when sleeping suggests we’re using different definitions. If it’s because of dreaming, then specify that the person isn’t dreaming. The main point is that I won’t feel pain upon dying (or in fact, won’t feel anything before dying), so putting me under general anesthesia and ensuring that death comes before I begin to feel anything should be enough, in that case.
And no, I’m currently unable to dissolve the hard problem of consciousness.
I meant just enough that I could understand what you mean when you claim that consciousness must only go to one path.
I think the problem with consciousness/qualia discussions is that we don’t have a good set of terms to describe such phenomena, while also being unable to reduce them to other terms.
No, one copy will see 1, another 2, etc. Something like that will fork my consciousness, which has uncertain effects, which is why I proposed being asleep throughout.
I mean, one of the copies would be you (and share your qualia), while others are forks of you. That’s because I think that a) your consciousness is preserved by the branching process and b) you don’t experience living in different branches, at least after you observed their difference. So, if the quantum lottery works when you’re awake, it requires look-ahead in time.
Now about sleeping. My best guess about consciousness is that we are sort-of conscious even while in non-REM sleep phases and under anesthesia; and halting (almost) all electric activity in the brain doesn’t preserve consciousness. That’s derived from the requirement of continuity of experience, which I find plausible. But that’s probably irrelevant to our discussion.
As far as I understand, in your model, one’s conscious experience is halted during the quantum lottery (i.e. sleep is some kind of a temporary death). And then, his conscious experience continues in one of the surviving copies.
Is this a correct description of your model?
I mean, one of the copies would be you (and share your qualia), while others are forks of you.
In my model, all the copies have qualia. Put another way, clearly there’s no way for an outside observer to say about any copy that it doesn’t have qualia, so the only possible meaning here would be subjective. However, each copy subjectively thinks itself to have qualia. (If you deny either point, please elaborate.) Given those, I don’t see any sense that anyone can say that the qualia “only” goes to a single fork, with the others being “other” people.
That’s because I think that a) your consciousness is preserved by the branching process and b) you don’t experience living in different branches, at least after you observed their difference.
I agree with a, but I think your consciousness is forked by the branching process. I agree with b, assuming you mean “no one person observes multiple branches after a fork”. I don’t think those two imply that QL requires look-ahead.
What if I rephrased this in one-world terms? I clone you while you’re asleep. I put you in two separate rooms. I take two envelopes, one with a yes on it, the other with a no, and put one in each room. Someone else goes into each room, looks at the envelope, then kills you iff it says yes, and wakes you iff it says no.
Do you think you won’t awaken in a room with no in the envelope?
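A toy simulation of this setup, assuming the copies are state-identical and the killer follows the envelopes exactly: every copy that wakes up at all observes a no, and nothing in the code performs any look-ahead.

```python
import random

def run_trial(rng):
    """One run of the envelope experiment: two identical copies, one
    per room, envelopes shuffled between the rooms. Returns the
    observations of the copies that wake up at all."""
    envelopes = ["yes", "no"]
    rng.shuffle(envelopes)
    observations = []
    for envelope in envelopes:    # one copy per room
        if envelope == "no":      # killed iff "yes", woken iff "no"
            observations.append(envelope)
    return observations

rng = random.Random(42)
for _ in range(1000):
    # Exactly one copy survives, and it always finds "no" -- no
    # look-ahead was needed to arrange that.
    assert run_trial(rng) == ["no"]
```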
My best guess about consciousness is that we are sort-of conscious even while in non-REM sleep phases and under anesthesia; and halting (almost) all electric activity in the brain doesn’t preserve consciousness.
As long as we aren’t defining consciousness, I can’t really disagree that some plausible definition would make this true.
That’s derived from the requirement of continuity of experience, which I find plausible.
I don’t.
As far as I understand, in your model, one’s conscious experience is halted during the quantum lottery (i.e. sleep is some kind of a temporary death). And then, his conscious experience continues in one of the surviving copies. Is this a correct description of your model?
Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, his conscious experience will continue in multiple copies. His subjective probability of finding himself as any particular copy depends on the relative weightings (i.e. self-locating uncertainty).
There is no “truth” as to which copy they’ll end up in.
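A minimal sketch of what I mean by the weightings, assuming the subjective probabilities are just the branch weights renormalized over the surviving copies (the specific numbers are made up):

```python
def subjective_probabilities(weights, alive):
    """Renormalize branch weights over surviving copies: the subjective
    probability of finding yourself as a given copy (self-locating
    uncertainty). Dead branches get probability zero."""
    total = sum(w for w, a in zip(weights, alive) if a)
    return [w / total if a else 0.0 for w, a in zip(weights, alive)]

# Three branches with Born-rule-like weights; the middle copy dies.
probs = subjective_probabilities([0.5, 0.3, 0.2], [True, False, True])
# The two survivors split the probability in proportion 0.5 : 0.2.
```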
Do you think you won’t awaken in a room with no in the envelope?
I think that I either wake up in a room with no in the envelope, or die, in which case my clone continues to live.
Yes, but I also think conscious experience is halted during regular sleep. Also, should multiple copies survive, his conscious experience will continue in multiple copies. His subjective probability of finding himself as any particular copy depends on the relative weightings (i.e. self-locating uncertainty).
I find this model implausible. Is there any evidence I can update on?
I think that I either wake up in a room with no in the envelope, or die, in which case my clone continues to live.
But this world I described is (or can be) completely deterministic; how can you be uncertain of what will happen? I understand how I can be subjectively uncertain due to self-locating uncertainty, but there should be no possible objective uncertainty in a deterministic world. The only out I see is if you think consciousness requires non-deterministic physical processes.
I find this model implausible. Is there any evidence I can update on?
What exactly do you think would happen when someone is cloned? Why would one copy be “real” and the other not? Would there be any way to detect which was real for outsiders?
OK: whether I wake up in a room with a no envelope or die depends (deterministically) on which envelope you have put in my room.
What exactly happens in the process of cloning certainly depends on the particular cloning technology; the real one is the copy which shares a continuous line of conscious experience with me. The (obvious) way for an outsider to detect which was real is to look at where it came from—if it was built as a clone, then, well, it is a clone.
Note that I’m not saying that it’s the true model, just that I currently find it more plausible; none of the consciousness theories I’ve seen so far is truly satisfactory.
I’ve read the Ebborian posts and wasn’t convinced; a thought experiment is just a thought experiment, there are many ways it can be flawed (that is true for all the thought experiments I proposed in this discussion, btw). But yes, that’s a problem.
OK: whether I wake up in a room with a no envelope or die depends (deterministically) on which envelope you have put in my room.
I hope you realize that you’re just moving the problem into determining which one is “your” room, considering neither room had any of you thinking in it until after one was killed.
What exactly happens in the process of cloning certainly depends on the particular cloning technology; the real one is the copy which shares a continuous line of conscious experience with me. The (obvious) way for an outsider to detect which was real is to look at where it came from—if it was built as a clone, then, well, it is a clone.
The root of our disagreement then seems to be this “continuous” insistence. In particular, you and I would disagree on whether consciousness is preserved with teleportation or stasis.
I could try to break that intuition by appealing to discrete time; does your model imply that time is continuous? It would seem unattractive for a model to postulate something like that.
What arguments/intuitions are causing you to find your model plausible?
I find a model plausible if it isn’t contradicted by evidence and matches my intuitions.
My model doesn’t imply discrete time; I don’t think I can precisely explain why, because I basically don’t know how consciousness works at that level; intuitively, just replace t + dt with t + 1. Needless to say, I’m uncertain of this, too.
Honestly, my best guess is that all these models are wrong.
Now, what arguments cause you to find your model plausible?
I think your model implies the opposite; did you misunderstand me?
Now, what arguments cause you to find your model plausible?
(First of all, you didn’t mention if you agree with my assessment of the root cause of our disagreement. I’ll assume you do, and reply based on that.)
So, why do I think that consciousness doesn’t require continuity? Well, partly because I think sleep disturbs continuity, yet I still feel like I’m mostly the same person as yesterday in important ways. I find it hard to accept that someone could act exactly like me and not be conscious, for reasons mostly similar to those in the zombie sequence. I identify consciousness with physical brain states, which makes it really hard to consider a clone somehow less, if it would have the exact same brain state as me. (For clones, that may not be practical, but for MWI-clones, it is.)
That’s a typo; I meant that my model doesn’t imply continuous time. By the way, does it make sense to call it “my model” if my estimate of the probability of it being true is < 50%?
So, why do I think that consciousness requires continuity?
I guess, you have meant “doesn’t require”?
I’d say that continuity requirement is the main cause for the divergence in our plausibility rankings, at least.
What is your probability estimate of your model being (mostly) true?
p(“your model”) < p(“my model”) < 50% -- that’s how I see things :)
Here is another objection to your consciousness model. You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow “find” your brain after sleep? What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?
The reason why I don’t believe these theories with a significant degree of certainty isn’t that I know some other brilliant consistent theory; rather, I think that all of them are more or less inconsistent.
Actually, I think that it’s probably a mistake to consider consciousness a binary trait; but non-binary consciousness assumption makes it even harder to find out what is actually going on. I hope that the progress in machine learning or neuroscience will provide some insights.
You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow “find” your brain after sleep?
I don’t think it’s meaningful to talk about a “flow” here.
What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?
Then that would contain my consciousness, just as my after-sleep self does. You could try to quantify how similar and dissimilar those states might be, but they’re still close enough to call it the same person.
What would you say to your thought experiment, if I replace “brain” with “computer”, turn off my OS, then start it again? The state of RAM is not the same as it was right before shutdown, so who is to say it’s the same computer? If you make hardware arguments, I’ll tell you the HD was cloned after power-off, then transferred to another computer with identical hardware. If that preserves the state of “my OS”, then the same should be true for “brains”, assuming physicalism.
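The analogy in code, purely as an illustration (not a claim about brains): serialize the session’s state, restore it into a fresh object, and nothing in the state itself records which hardware held it.

```python
import pickle

# A toy "machine state": nothing here depends on which physical
# hardware holds the bytes.
state = {"programs": ["editor", "browser"], "memories": [1, 2, 3]}

snapshot = pickle.dumps(state)     # "power off": serialize the state
restored = pickle.loads(snapshot)  # "power on" on different hardware

# The restored session is state-identical to the original, even
# though no physical continuity connects the two objects.
assert restored == state
assert restored is not state       # distinct physical substrate
```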
OK, suppose I come to you while you’re sleeping, and add/remove a single neuron. Will it still be you who wakes up, in your model? Yes, because while you’re naturally sleeping, many more neurons change. Now imagine that I alter your entire brain. Now, the answer seems to be no. Therefore, there must be some minimal change to your brain to ensure that a different person will wake up (i.e. with different consciousness/qualia). This seems strange.
You don’t assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?
It would be the same computer, but different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.
Now imagine that I alter your entire brain. Now, the answer seems to be no.
Alter how? Do I still have memories of this argument? Do I share any memories with my past self? If I share all memories, then probably it’s still me. If all have gone, then most likely not. (Identifying self with memories has its own problems, but let’s gloss over them for now.) So I’m going to interpret your “remove a neuron” as “remove a memory”, and then your question becomes “how many memories can I lose and still be me”? That’s a difficult question to answer, so I’ll give you the first thing I can think of. It’s still me, just a lower percentage of me. I’m not that confident that it can be put to a linear scale, though.
Therefore, there must be some minimal change to your brain to ensure that a different person will wake up (i.e. with different consciousness/qualia). This seems strange.
This is a bit like the Sorites paradox. The answer is clearly to switch from a binary same-consciousness dichotomy to a graded notion. That doesn’t mean I can’t point to an exact clone and say it’s me.
You don’t assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?
Not sure what you mean. Some things change, so it won’t be exactly the same. It’s still close enough that I’d consider it “me”.
It would be the same computer, but different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.
Such analogies can help if they force you to explain the difference between computer and brain in this regard. You seem to hold a model of computers identical to my model of brains; why isn’t it illogical there?
That’s a difficult question to answer, so I’ll give you the first thing I can think of. It’s still me, just a lower percentage of me. I’m not that confident that it can be put to a linear scale, though.
That is one of the reasons why I think binary-consciousness models are likely to be wrong.
There are many differences between brains and computers; they have different structure, different purpose, different properties; I’m pretty confident (>90%) that my computer isn’t conscious now, and the consciousness phenomenon may have specific qualities which are absent in its image in your analogy. My objection to using such analogies is that you can miss important details. However, they are often useful to illustrate one’s beliefs.
There are many differences between brains and computers; they have different structure, different purpose, different properties; I’m pretty confident (>90%) that my computer isn’t conscious now, and the consciousness phenomenon may have specific qualities which are absent in its image in your analogy. My objection to using such analogies is that you can miss important details. However, they are often useful to illustrate one’s beliefs.
Do you have any of these qualities in mind? It seems strange to reject something because “maybe” it has a quality that distinguishes it from another case. Can you point to any of these details that’s relevant?
I don’t think it’s strange. Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. You choose an analogy which shares the qualities you currently think are relevant; then you do some analysis of the analogy and come to certain conclusions. But it is easy to overlook a step in the analysis which happens to depend on a property of the original model that you previously thought was unimportant, and you can fail to see it, because that property is absent in the analogy. So I think that double-checking results provided by analogy thinking is a necessary safety measure.
As for specific examples: something like quantum consciousness by Penrose (although I don’t actually believe it). Or any other reason why consciousness (not intelligence!) can’t be reproduced in our computer devices (I don’t actually believe it either).
Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. You choose an analogy which shares the qualities you currently think are relevant; then you do some analysis of the analogy and come to certain conclusions. But it is easy to overlook a step in the analysis which happens to depend on a property of the original model that you previously thought was unimportant, and you can fail to see it, because that property is absent in the analogy. So I think that double-checking results provided by analogy thinking is a necessary safety measure.
I’m not saying not to double check them. My problem was that you seemed to have come to a conclusion that requires there to be a relevant difference, but didn’t identify any.
As for specific examples: something like quantum consciousness by Penrose (although I don’t actually believe it). Or any other reason why consciousness (not intelligence!) can’t be reproduced in our computer devices (I don’t actually believe it either).
Even repeating the thought experiment with a quantum computer doesn’t seem to change my intuition.
But this world I described is (or can be) completely deterministic; how can you be uncertain of what will happen? I understand how I can be subjectively uncertain due to self-locating uncertainty, but there should be no possible objective uncertainty in a deterministic world. The only out I see is if you think consciousness requires non-deterministic physical processes.
I’m not sure I understand your reasoning here, so I’m not sure how to answer. Have you read the Ebborian posts in the quantum sequence?
That’s a typo; I meant that my model doesn’t imply continuous time. By the way, does it make sense to call it “my model” if my estimate of the probability of it being true is < 50%?
I guess you meant “doesn’t require”?
I’d say that the continuity requirement is the main cause of the divergence in our plausibility rankings, at least.
What is your probability estimate of your model being (mostly) true?
Fixed. I guess we’re even now :)
You’re criticising other theories based on something you put less than 50% credence in? That’s how this all started.
More than 90%. If I had a consistent alternative that didn’t require anything supernatural, then that would go down.
p(“your model”) < p(“my model”) < 50% -- that’s how I see things :)
Here is another objection to your consciousness model. You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow “find” your brain after sleep? What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?
The reason why I don’t believe these theories with a significant degree of certainty isn’t that I know some other brilliant consistent theory; rather, I think that all of them are more or less inconsistent.
Actually, I think that it’s probably a mistake to consider consciousness a binary trait; but a non-binary consciousness assumption makes it even harder to find out what is actually going on. I hope that progress in machine learning or neuroscience will provide some insights.
I don’t think it’s meaningful to talk about a “flow” here.
Then that would contain my consciousness, just as my own brain does after waking. You could try to quantify how similar and dissimilar those states might be, but they’re still close enough to call it the same person.
What would you say to your thought experiment, if I replace “brain” with “computer”, turn off my OS, then start it again? The state of RAM is not the same as it was right before shutdown, so who is to say it’s the same computer? If you make hardware arguments, I’ll tell you the HD was cloned after power-off, then transferred to another computer with identical hardware. If that preserves the state of “my OS”, then the same should be true for “brains”, assuming physicalism.
OK, suppose I come to you while you’re sleeping, and add/remove a single neuron. Will you wake up in your model? Yes, because during natural sleep many more neurons change than that. Now imagine that I alter your entire brain. Now the answer seems to be no. Therefore, there must be some minimal change to your brain that ensures a different person will wake up (i.e. with different consciousness/qualia). This seems strange.
You don’t assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?
It would be the same computer, but different working session. Anyway, I doubt such analogies are precise and allow for reliable reasoning.
Alter how? Do I still have memories of this argument? Do I share any memories with my past self? If I share all memories, then probably it’s still me. If all have gone, then most likely not. (Identifying self with memories has its own problems, but let’s gloss over them for now.) So I’m going to interpret your “remove a neuron” as “remove a memory”, and then your question becomes “how many memories can I lose and still be me”? That’s a difficult question to answer, so I’ll give you the first thing I can think of. It’s still me, just a lower percentage of me. I’m not that confident that it can be put to a linear scale, though.
This is a bit like the Sorites paradox. The answer is clearly to switch from a binary same-consciousness dichotomy to a non-binary notion. That doesn’t mean I can’t point to an exact clone and say it’s me.
Not sure what you mean. Some things change, so it won’t be exactly the same. It’s still close enough that I’d consider it “me”.
Such analogies can help if they force you to explain the difference between computer and brain in this regard. You seem to hold, for computers, a model identical to my model of brains; why isn’t it illogical there?
That is one of the reasons why I think binary-consciousness models are likely to be wrong.
There are many differences between brains and computers; they have different structure, different purpose, different properties. I’m pretty confident (>90%) that my computer isn’t conscious now, and the consciousness phenomenon may have specific qualities which are absent from its counterpart in your analogy. My objection to using such analogies is that you can miss important details. However, they are often useful for illustrating one’s beliefs.
Do you have any of these qualities in mind? It seems strange to reject something because “maybe” it has a quality that distinguishes it from another case. Can you point to any of these details that’s relevant?
I don’t think it’s strange. Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. So, you choose an analogy which shares the qualities you currently think are relevant; then you do some analysis of your analogy, and come to certain conclusions. But it is easy to overlook a step in the analysis which happens to depend on a property of the original model that you previously thought was irrelevant, and you can fail to see it, because that property is absent from the analogy. So I think that double-checking results provided by analogical thinking is a necessary safety measure.
As for specific examples: something like quantum consciousness by Penrose (although I don’t actually believe it). Or any other reason why consciousness (not intelligence!) can’t be reproduced in our computer devices (I don’t actually believe that either).
I’m not saying not to double-check them. My problem was that you seemed to have come to a conclusion that requires there to be a relevant difference, but you didn’t identify any.
Even repeating the thought experiment with a quantum computer doesn’t seem to change my intuition.