So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them.
Suppose you mean lossless compression. The compressed program has ALL the same outputs to the same inputs as the original program.
Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.
From an evolutionary point of view, can a feature with no output, with absolutely zero effect on the creature's interaction with its environment, ever evolve? There would be no mechanism for it to evolve; there is no basis on which to select for it. It seems to me that to believe in the possibility of p-zombies is to believe in the supernatural: a world of phenomena, such as consciousness, that for some reason is not allowed to be listed among the phenomena of the natural world.
At the moment, I can’t really distinguish how a belief that p-zombies are possible is any different from a belief in the supernatural.
Also this scenario reopens the question of whether uploads are conscious in the first place!
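To make the lossless premise concrete, here is a minimal sketch in Python (zlib chosen purely for illustration): a lossless roundtrip reproduces the original bit for bit, so any behavior determined by those bits is preserved exactly.

```python
import zlib

# Stand-in for a serialized program: lossless compression must let us
# recover exactly these bytes, hence exactly this input-output behavior.
original = b"for every input x, emit output f(x); " * 1000

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original             # the roundtrip is exact
assert len(compressed) < len(original)  # yet the encoding is much smaller
```

If anything observable about the system differs after such a roundtrip, then by definition the compression was not lossless.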
Years ago I thought an interesting experiment in artificial consciousness would be to build an increasingly complex verbal simulation of a human, to the point where you could have conversations involving reflection with the simulation. At that point you could ask it if it was conscious and see what it had to say. Would it say “not so far as I can tell”?
The p-zombie assumption is that it would say “yeah I’m conscious, duhh, what kind of question is that?” But the way a simulation actually gets built is that you have a list of requirements and you keep accreting code until all the requirements are met. If your requirements included a vast array of features but NOT the feature that it answer this question one way or another, conceivably you could elicit an “honest” answer from your sim. If all such sims answer “yes,” you might conclude that somehow, in the collection of features you HAD required, consciousness emerged, and you could do other experiments where you removed features from the sim and kept statistics on how those sims answered that question. You might see the sim saying “no, I don’t think so,” and conclude that whatever it is in us that makes us function as conscious, we hadn’t yet found it and put it in our list of requirements.
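The ablation study described above could be run as a loop over feature sets. A toy sketch, where `build_sim` and the rule for when the sim reports consciousness are entirely made up for illustration:

```python
FEATURES = {"memory", "self_model", "language", "planning"}

# Hypothetical stand-in: pretend the sim reports consciousness only when
# both a self-model and memory made it into the requirements list.
def build_sim(features):
    return {"self_model", "memory"} <= features

def ask_if_conscious(sim):
    return "yes" if sim else "no, don't think so"

# Remove one required feature at a time and keep statistics on the answers.
for removed in sorted(FEATURES):
    kept = FEATURES - {removed}
    print(f"without {removed}: {ask_if_conscious(build_sim(kept))}")
```

The interesting empirical question is which ablations flip the answer; the toy rule above just makes the loop runnable.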
Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.
I haven’t thought about this stuff for a while and my memory is a bit hazy in relation to it, so I could be getting things wrong here, but this comment doesn’t seem right to me.
First, my p-zombie is not just a duplicate of me in terms of my input-output profile. Rather, it’s a perfect physical duplicate of me. So one can deny the possibility of zombies while still holding that a computer with the same input-output profile as me is not conscious. For example, one could hold that only carbon-based life can be conscious, which denies that an identical input-output profile implies consciousness, while still denying the possibility of zombies (that is, denying that a physical duplicate of a conscious carbon-based lifeform could lack consciousness).
Second, if it could be shown that the same input-output profile could exist even when consciousness was removed, this doesn’t show that consciousness can’t play a causal role in guiding behaviour. Rather, it shows that the same input-output profile can exist without consciousness. That doesn’t mean consciousness can’t cause that input-output profile in one system while something else causes it in another system.
Third, it seems that one can deny the possibility of zombies while accepting that consciousness has no causal impact on behaviour (contra the last sentence of the quoted fragment): one could hold that the behaviour causes the conscious experience (or that the thing which causes the behaviour also causes the conscious experience). One could then deny that something could be physically identical to me but lack consciousness (that is, deny the possibility of zombies) while still accepting that consciousness lacks causal influence on behaviour.
Am I confused here or do the three points above seem to hold?
Am I confused here or do the three points above seem to hold?
I think formally you are right.
But if consciousness is essential to how we get important aspects of our input-output map, then I think the chances of there being another mechanism that produces the same input-output map are about the chances that you could program a car to drive from here to Los Angeles without using any feedback mechanisms, by just dialing in ahead of time all the stops and starts and turns it would need. Formally possible, but bearing no real relationship to how anything that works has ever been built.
I am not a mathematician about these things, I am an engineer or a physicist in the sense of Feynman.
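The Los Angeles analogy can be simulated. In this toy sketch (all numbers arbitrary), an open-loop controller replays pre-computed moves while a feedback controller corrects based on where the car actually is; under random disturbances only the latter reliably lands near the target:

```python
import random

TARGET, STEPS = 100.0, 200

def drive(controller, rng):
    """Advance a 1-D 'car' under random disturbances (wind, slope, traffic)."""
    pos = 0.0
    for _ in range(STEPS):
        pos += controller(pos) + rng.uniform(-0.5, 0.5)
    return pos

def open_loop(pos):
    return TARGET / STEPS          # every move dialed in ahead of time

def feedback(pos):
    return 0.1 * (TARGET - pos)    # steer toward the target from where you are

rng = random.Random(0)
fb_err = sum(abs(drive(feedback, rng) - TARGET) for _ in range(100)) / 100
ol_err = sum(abs(drive(open_loop, rng) - TARGET) for _ in range(100)) / 100
print(fb_err, ol_err)  # feedback error is far smaller under disturbance
```

The open-loop plan is "formally possible" in a disturbance-free world; with any noise at all, its errors accumulate while the feedback controller's do not.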
A few points:

1) Initial mind uploading will probably be lossy, because it needs to convert analog to digital.
2) I don’t know if even lossless compression of the whole input-output map is going to preserve everything. Let’s say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn’t contain many interesting statements about consciousness, but that doesn’t mean you’re allowed to compress away consciousness. And even on longer timescales, people don’t seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map. Or at least we can’t say that input-output map is large, unless we figure out more about consciousness in the first place!
3) Even if consciousness plays a large causal role, I agree with crazy88′s point that consciousness might not be the smallest possible program that can fill that role.
4) I’m not sure that consciousness is just about the input-output map. Doesn’t it feel more like internal processing? I seem to have consciousness even when I’m not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.
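Point 1 is easy to illustrate: analog-to-digital conversion merges distinct analog values into the same digital code, so the exact original can never be recovered. A minimal sketch:

```python
def quantize(x, bits=8):
    """Map an 'analog' value in [0, 1) to one of 2**bits digital levels."""
    levels = 2 ** bits
    return min(int(x * levels), levels - 1)

a, b = 0.500001, 0.500002    # two distinct analog values
assert a != b
assert quantize(a) == quantize(b)   # the digital copy cannot tell them apart
```

Whether any of the information lost this way matters for consciousness is, of course, the open question.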
I don’t know if even lossless compression of the whole input-output map is going to preserve everything. Let’s say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn’t contain many interesting statements about consciousness, but that doesn’t mean you’re allowed to compress away consciousness.
It is not your actual input-output map that matters, but your potential one. What is uploaded must be information about your functional organization, not some abstracted mapping function. If I have 10 s left to live and I am uploaded, my upload should type this comment in response to your comment above even if well more than 10 s have passed since I was uploaded.
And even on longer timescales, people don’t seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map.
If with years of intense and expert schooling I could say more about consciousness, then that is part of my input-output map. My upload would need to have the same property.
Even if consciousness plays a large causal role, I agree with crazy88′s point that consciousness might not be the smallest possible program that can fill that role.
Might not be, but probably is. Biological function seems to be very efficient; most bio features are not equalled in efficiency by human-manufactured systems even now. The chances that evolution would have created consciousness if it didn’t need to seem slim to me. So as an engineer trying to plan an attack on the problem, I’d expect consciousness to show up in any successful upload. If it did not, that would be a very interesting result. But of course, we need a way to measure consciousness to tell whether it is there in the upload or not.
To the best of my knowledge, no one anywhere has ever said how you go about distinguishing between a conscious being and a p-zombie.
I’m not sure that consciousness is just about the input-output map. Doesn’t it feel more like internal processing? I seem to have consciousness even when I’m not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.
I mean your input-output map writ broadly. But again, since you don’t even know how to distinguish a conscious me from a p-zombie me, we are not in a position yet to worry about the input-output map and compression, in my opinion.
If a simulation of me can be complete, able to attend graduate school and get 13 patents doing research afterwards, able to carry on an obsessive relationship with a married woman for a decade, able to enjoy a convertible he has owned for 8 years, able to comment on LessWrong posts much like this one, then I would be shocked if it wasn’t conscious. But I would never know whether it was conscious, nor for that matter will I ever know whether you are conscious, until somebody figures out how to tell the difference between a p-zombie and a conscious person.
Even if that’s true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.
I mean your input-output map writ broadly.
Can you expand what you mean by “writ broadly”? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?
That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we’re in agreement.
Even if that’s true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.
I was thinking of uploads in the Hansonian sense, a shortcut to “building” AI. Instead of understanding AI/consciousness from the ground up and designing an AI de novo, we simply copy an actual person. Copying the person, if successful, produces a computer-run person which seems to do the things the person would have done under similar conditions.
The person is much simpler than the potential input-output map. The human system has memory, so a semi-complete input-output map could not be generated unless you started with a myriad of fresh copies of the person and ran them through all sorts of conceivable lifetimes.
You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce that, or in another metaphor, try to optimally compress that input-output map. I don’t think this is at all how an upload would work.
Consider duplicating or uploading a car. Would you drive the car back and forth over every road in the world under every conceivable traffic and weather condition, and then take that very large input-output map and try to compress and upload that? Or would you take each part of the car and upload it, along with its relationship, when assembled, to each other part in the car? You would do the second; there are too many possible inputs to imagine the input-output approach could be even vaguely as efficient.
So I am thinking of Hansonian uploads for Hansonian reasons, and it is fair to insist we do the more efficient thing: upload a copy of the machine rather than a compressed input-output map, especially if the efficiency ratio is > 10^100:1.
Can you expand what you mean by “writ broadly”? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?
I think I have explained that above. To characterize the machine by its input-output map, you need to consider every possible input. In the case of a person with memory, that means every possible lifetime: the input-output map is gigantic, much bigger than the machine itself, which is the brain/body.
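A toy model of this size gap: a machine with a few bits of memory is described in a handful of lines, but tabulating its input-output map requires one row per possible input history, which grows exponentially with lifetime length. (The machine below is invented purely for illustration.)

```python
def machine(inputs):
    """A tiny machine with memory: each output depends on the whole input history."""
    state, out = 0, []
    for bit in inputs:
        state = (state + bit) % 8   # 3 bits of state carry the past forward
        out.append(state % 2)
    return out

# One row per possible binary input history of a given length:
for length in (10, 20, 30):
    print(length, 2 ** length)      # 1024, then 1048576, then 1073741824

# The mechanism is ~6 lines; at length 30 its map already has over 10^9 rows,
# and a human lifetime of inputs is incomparably longer than 30 steps.
```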
That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we’re in agreement.
What I think is that we don’t know whether or not consciousness has been thrown away, because we don’t even have a method for determining whether the original is conscious. To the extent you believe I am conscious, why do you? Until you can answer that, until you can build a consciousness-meter, how do we even check an upload for consciousness? What we could check is whether it SEEMS to act like the person uploaded: our sort of fuzzy opinion.
What I would say is: IF a consciousness-meter is even possible, and I think it is but I don’t know, then any optimization that accidentally threw away consciousness would have changed other behaviors as well, and would be a measurably inferior simulation compared to a conscious one.
If on the other hand there is NO measure of consciousness that could be developed into a consciousness-meter (or consciousness-evaluating program, if you prefer), then consciousness is supernatural, which for all intents and purposes means it is make-believe. Literally, you make yourself believe something for reasons which by definition have nothing to do with anything that happened in the real, natural, measurable world.

Do we agree on either of these last two paragraphs?
You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce that, or in another metaphor, try to optimally compress that input-output map. I don’t think this is at all how an upload would work.
Well, presumably you don’t want an atom-by-atom simulation. You want to at least compress each neuron to an approximate input-output map for that neuron, observed in practice, and then use that. Also you might want to take some implementation shortcuts to make the thing run faster. You seem to think that all these changes are obviously harmless. I also lean toward that, but not as strongly as you, because I don’t know where to draw the line between harmless and harmful optimizations.
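The neuron-level compression step might look like the following sketch (the sigmoid response and grid spacing are invented for illustration): record the neuron's response on a grid, interpolate between samples, and accept a small approximation error. Where such errors stop being harmless is exactly the open question.

```python
import math

def neuron(x):
    """'True' neuron: a sigmoid response to summed input current."""
    return 1 / (1 + math.exp(-x))

# Compressed stand-in: responses observed on a grid, linearly interpolated.
GRID = [i / 4 for i in range(-24, 25)]            # samples on [-6, 6]
TABLE = {g: neuron(g) for g in GRID}

def approx(x):
    lo = max(g for g in GRID if g <= x)
    hi = min(g for g in GRID if g >= x)
    if lo == hi:
        return TABLE[lo]
    t = (x - lo) / (hi - lo)
    return (1 - t) * TABLE[lo] + t * TABLE[hi]

# Close, but not exact: some of the original's behavior is discarded.
err = max(abs(neuron(x / 100) - approx(x / 100)) for x in range(-500, 501))
print(err)  # small but strictly positive
```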
Right; with lossless compression you’re not going to lose anything. So cousin_it probably means lossy compression, like JPEGs and MP3s: smaller versions that are very similar to what you had before.
Well, initial mind uploading is going to be lossy because it will convert analog to digital.
That said, I don’t know if even lossless compression of the whole input-output map is going to preserve everything. Let’s say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn’t contain many interesting statements about consciousness, but that doesn’t mean you’re allowed to compress away consciousness...
And even on longer timescales, people don’t seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map. Or at least we can’t say that input-output map is large, unless we figure out more about consciousness in the first place.
(Also I agree with crazy88′s point that consciousness might play a large causal role but still be compressible to a smaller non-conscious program.)
More generally, I’m not sure that consciousness is just about the input-output map. Doesn’t it feel more like internal processing? I seem to have consciousness even when I’m not talking about it, and I would still have it even if my religion prohibited me from talking about it, or something.