OK, either I wake up in a room with no envelope or I die (deterministically), depending on which envelope you have put in my room.
What exactly happens in the process of cloning certainly depends on the particular cloning technology; the real one is the one that shares a continuous line of conscious experience with me. The (obvious) way for an outsider to detect which one is real is to look at where it came from: if it was built as a clone, then, well, it is a clone.
Note that I’m not saying that it’s the true model, just that I currently find it more plausible; none of the consciousness theories I’ve seen so far is truly satisfactory.
I’ve read the Ebborian posts and wasn’t convinced; a thought experiment is just a thought experiment, and there are many ways it can be flawed (that is true for all the thought experiments I proposed in this discussion, btw). But yes, that’s a problem.
OK, either I wake up in a room with no envelope or I die (deterministically), depending on which envelope you have put in my room.
I hope you realize that you’re just moving the problem to determining which one is “your” room, considering that neither room had any version of you thinking in it until after one was killed.
What exactly happens in the process of cloning certainly depends on the particular cloning technology; the real one is the one that shares a continuous line of conscious experience with me. The (obvious) way for an outsider to detect which one is real is to look at where it came from: if it was built as a clone, then, well, it is a clone.
The root of our disagreement then seems to be this “continuous” insistence. In particular, you and I would disagree on whether consciousness is preserved with teleportation or stasis.
I could try to break that intuition by appealing to discrete time; does your model imply that time is continuous? It would seem unattractive for a model to postulate something like that.
What arguments/intuitions are causing you to find your model plausible?
I find a model plausible if it isn’t contradicted by evidence and matches my intuitions.
My model doesn’t imply discrete time; I don’t think I can precisely explain why, because I basically don’t know how consciousness works at that level; intuitively, just replace t + dt with t + 1. Needless to say, I’m uncertain of this, too.
Honestly, my best guess is that all these models are wrong.
Now, what arguments cause you to find your model plausible?
I think your model implies the opposite; did you misunderstand me?
Now, what arguments cause you to find your model plausible?
(First of all, you didn’t mention if you agree with my assessment of the root cause of our disagreement. I’ll assume you do, and reply based on that.)
So, why do I think that consciousness doesn’t require continuity? Well, partly because I think sleep disturbs continuity, yet I still feel like I’m mostly the same person as yesterday in important ways. I find it hard to accept that someone could act exactly like me and not be conscious, for reasons mostly similar to those in the zombie sequence. I identify consciousness with physical brain states, which makes it really hard to consider a clone somehow lesser if it has the exact same brain state as me. (For clones, that may not be practical, but for MWI-clones, it is.)
That’s a typo; I meant that my model doesn’t imply continuous time. By the way, does it make sense to call it “my model” if my estimate of the probability of it being true is < 50%?
So, why do I think that consciousness requires continuity?
I guess you meant “doesn’t require”?
I’d say that the continuity requirement is the main cause of the divergence in our plausibility rankings, at least.
What is your probability estimate of your model being (mostly) true?
Fixed. I guess we’re even now :)
You’re criticising other theories based on something you put less than 50% credence in? That’s how this all started.
More than 90%. If I had a consistent alternative that didn’t require anything supernatural, then that would go down.
p(“your model”) < p(“my model”) < 50% -- that’s how I see things :)
Here is another objection to your consciousness model. You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow “find” your brain after sleep? What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?
The reason why I don’t believe these theories with a significant degree of certainty isn’t that I know some other brilliant consistent theory; rather, I think that all of them are more or less inconsistent.
Actually, I think that it’s probably a mistake to consider consciousness a binary trait; but the assumption of non-binary consciousness makes it even harder to find out what is actually going on. I hope that progress in machine learning or neuroscience will provide some insights.
You say that you are unconscious while sleeping; so, at the beginning of sleep your consciousness flow disappears, and then appears again when you wake up. But your brain state is different before and after sleep. How does your consciousness flow “find” your brain after sleep?
I don’t think it’s meaningful to talk about a “flow” here.
What if I, standing on another planet many light years away from Earth, build atom-by-atom a brain whose state is closer to your before-sleep brain state than your after-sleep brain state is?
Then that would contain my consciousness, and so would I after waking. You could try to quantify how similar or dissimilar those states might be, but they’re still close enough to call it the same person.
What would you say about your thought experiment if I replace “brain” with “computer”: I turn off my OS, then start it again? The state of RAM is not the same as it was right before shutdown, so who is to say it’s the same computer? If you make hardware arguments, I’ll tell you the HD was cloned after power-off, then transferred to another computer with identical hardware. If that preserves the state of “my OS”, then the same should be true for “brains”, assuming physicalism.
OK, suppose I come to you while you’re sleeping and add/remove a single neuron. On your model, will you still wake up? Yes, because many more neurons than that change during natural sleep. Now imagine that I alter your entire brain. Now, the answer seems to be no. Therefore, there must be some minimal change to your brain to ensure that a different person will wake up (i.e. with different consciousness/qualia). This seems strange.
You don’t assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?
It would be the same computer, but a different working session. Anyway, I doubt such analogies are precise enough to allow for reliable reasoning.
Now imagine that I alter your entire brain. Now, the answer seems to be no.
Alter how? Do I still have memories of this argument? Do I share any memories with my past self? If I share all memories, then probably it’s still me. If they are all gone, then most likely not. (Identifying self with memories has its own problems, but let’s gloss over them for now.) So I’m going to interpret your “remove a neuron” as “remove a memory”, and then your question becomes “how many memories can I lose and still be me”? That’s a difficult question to answer, so I’ll give you the first thing I can think of. It’s still me, just a lower percentage of me. I’m not that confident that it can be put on a linear scale, though.
Therefore, there must be some minimal change to your brain to ensure that a different person will wake up (i.e. with different consciousness/qualia). This seems strange.
This is a bit like the Sorites paradox. The answer is clearly to switch from a binary same-consciousness dichotomy to something graded. That doesn’t mean I can’t point to an exact clone and say it’s me.
You don’t assume that the person who wakes up always has a different consciousness from the person who fell asleep, do you?
Not sure what you mean. Some things change, so it won’t be exactly the same. It’s still close enough that I’d consider it “me”.
It would be the same computer, but a different working session. Anyway, I doubt such analogies are precise enough to allow for reliable reasoning.
Such analogies can help if they force you to explain the difference between computer and brain in this regard. Your model of computers seems identical to my model of brains; why isn’t it illogical there?
That’s a difficult question to answer, so I’ll give you the first thing I can think of. It’s still me, just a lower percentage of me. I’m not that confident that it can be put on a linear scale, though.
That is one of the reasons why I think binary-consciousness models are likely to be wrong.
There are many differences between brains and computers; they have different structure, different purpose, different properties. I’m pretty confident (>90%) that my computer isn’t conscious now, and the phenomenon of consciousness may have specific qualities that are absent from its counterpart in your analogy. My objection to using such analogies is that you can miss important details. However, they are often useful to illustrate one’s beliefs.
There are many differences between brains and computers; they have different structure, different purpose, different properties. I’m pretty confident (>90%) that my computer isn’t conscious now, and the phenomenon of consciousness may have specific qualities that are absent from its counterpart in your analogy. My objection to using such analogies is that you can miss important details. However, they are often useful to illustrate one’s beliefs.
Do you have any of these qualities in mind? It seems strange to reject something because “maybe” it has a quality that distinguishes it from another case. Can you point to any such detail that is relevant?
I don’t think it’s strange. Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. So you choose an analogy that shares the qualities you currently think are relevant; then you do some analysis of the analogy and come to certain conclusions. But it is easy to overlook a step in the analysis that happens to depend on a property you previously thought was irrelevant, and you can fail to see it because that property is absent from the analogy. So I think that double-checking conclusions obtained by reasoning from analogy is a necessary safety measure.
As for specific examples: something like Penrose’s quantum consciousness (although I don’t actually believe in it). Or any other reason why consciousness (not intelligence!) can’t be reproduced in our computing devices (I don’t actually believe that either).
Firstly, it does have distinguishing qualities; the question is whether they are relevant or not. So you choose an analogy that shares the qualities you currently think are relevant; then you do some analysis of the analogy and come to certain conclusions. But it is easy to overlook a step in the analysis that happens to depend on a property you previously thought was irrelevant, and you can fail to see it because that property is absent from the analogy. So I think that double-checking conclusions obtained by reasoning from analogy is a necessary safety measure.
I’m not saying not to double check them. My problem was that you seemed to have come to a conclusion that requires there to be a relevant difference, but didn’t identify any.
As for specific examples: something like Penrose’s quantum consciousness (although I don’t actually believe in it). Or any other reason why consciousness (not intelligence!) can’t be reproduced in our computing devices (I don’t actually believe that either).
Even repeating the thought experiment with a quantum computer doesn’t seem to change my intuition.