Well, a thing that acts like us in one particular situation (say, a thing that types “I’m conscious” in chat) clearly doesn’t always have our qualia. Maybe you could say that a thing that acts like us in all possible situations must have our qualia? This is philosophically interesting! It makes a factual question (does the thing have qualia right now?) logically depend on a huge bundle of counterfactuals, most of which might never be realized. What if, during uploading, we insert a bug that changes our behavior in one of these counterfactuals—but then the upload never actually runs into that situation in the course of its life—does the upload still have the same qualia as the original person, in situations that do get realized? What if we insert many such bugs?
Moreover, what if we change the situations themselves? We can put the upload in circumstances that lead to more generic and less informative behavior: for example, give the upload a life where they’re never asked to remember a particular childhood experience. Or just a short life, where they’re never asked about anything much. Let’s say the machine doing the uploading is aware of that and is allowed to optimize out parts that the person won’t get to use. If there’s a thought that you sometimes think, but it doesn’t influence your I/O behavior, it can get optimized away; or if it has only a small influence on your behavior, a few bits’ worth, let’s say, then it can be replaced with another thought that would cause the same few-bits effect. There’s a whole spectrum of questionable things that people tend to ignore when they say “copy the neurons”, “copy the I/O behavior”, and stuff like that.
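To make the kind of optimization I have in mind concrete, here’s a toy sketch in Python; the questions, the “minds”, and the whole dead-code framing are just made up for illustration. It shows two programs that agree on every input that actually occurs in one short life, while unused internal state has been dropped and a never-reached branch has been altered.

```python
# Toy sketch (illustrative only): two "minds" with identical I/O on every
# question that actually gets asked, but different internals and a changed,
# never-reached counterfactual branch.

def original_mind(question: str) -> str:
    childhood_memory = "the smell of my grandmother's kitchen"
    private_thought = "I hope the kitchen is still there"  # never reaches the output
    if question == "are you conscious?":
        return "I'm conscious"
    if question == "describe a childhood memory":
        return childhood_memory
    return "hmm, let me think about that"

def optimized_upload(question: str) -> str:
    # The uploader knows this particular life never includes the memory
    # question, so the memory and the private thought are optimized away,
    # and an unreached branch is altered (the inserted "bug").
    if question == "are you conscious?":
        return "I'm conscious"
    if question == "describe a childhood memory":
        return "I don't remember"  # differs, but only in an unrealized situation
    return "hmm, let me think about that"

# Every question this upload is ever actually asked during its short life:
realized_life = ["are you conscious?", "nice weather, isn't it?"]

# Identical behavior on everything that is ever realized:
assert all(original_mind(q) == optimized_upload(q) for q in realized_life)
```

The point isn’t that minds are Python functions; it’s that “same I/O on the realized history” is a much weaker constraint than “same I/O on all possible histories”, and an optimizer is free to exploit the gap.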
Well, a thing that acts like us in one particular situation (say, a thing that types “I’m conscious” in chat) clearly doesn’t always have our qualia. Maybe you could say that a thing that acts like us in all possible situations must have our qualia?
Right, that’s what I meant.
This is philosophically interesting!
Thank you!
It makes a factual question (does the thing have qualia right now?) logically depend on a huge bundle of counterfactuals, most of which might never be realized.
Having the same I/O behavior is a sufficient condition for it to be our mind upload. A sufficient condition for it to have some qualia, as opposed to having our mind and our qualia, would be weaker.
What if, during uploading, we insert a bug that changes our behavior in one of these counterfactuals
Then it’s, to a very slight extent, another person (with the continuum between me and another person being gradual).
but then the upload never actually runs into that situation in the course of its life—does the upload still have the same qualia as the original person, in situations that do get realized?
Then the qualia would be very slightly different, unless I’m missing something. (To bootstrap the intuition: I would expect a version of me that chooses vanilla ice cream over chocolate ice cream in one specific situation to have very slightly different feelings and preferences in general, resulting in very slightly different qualia, even if he never encounters that situation.) With many such bugs, the same would hold, just to a greater extent.
If there’s a thought that you sometimes think, but it doesn’t influence your I/O behavior, it can get optimized away
I don’t think such thoughts exist (I can always be asked to say out loud what I’m thinking). Generally, I would say that a thought which never, even in principle, influences my output isn’t possible. (The same principle should apply to the idea of replacing a thought that influences my behavior by only a few bits.)
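To put that in the same kind of toy-program terms as above (again, purely an illustrative sketch with made-up names): the moment “what are you thinking right now?” counts as a possible input, an inner thought stops being dead code.

```python
# Toy sketch (illustrative only): once "what are you thinking right now?"
# is among the possible inputs, an inner thought is no longer dead code.

def mind_with_reportable_thoughts(question: str) -> str:
    current_thought = "that tune I can't get out of my head"
    if question == "what are you thinking right now?":
        return current_thought  # any inner state can, in principle, reach the output
    if question == "are you conscious?":
        return "I'm conscious"
    return "hmm, let me think about that"

print(mind_with_reportable_thoughts("what are you thinking right now?"))
```

An optimizer that only has to preserve the questions I actually get asked could still delete the thought; one that has to preserve my answers to every possible question cannot, which is why I’d say a genuinely output-irrelevant thought doesn’t exist.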