I don’t know if even lossless compression of the whole input-output map is going to preserve everything. Let’s say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn’t contain many interesting statements about consciousness, but that doesn’t mean you’re allowed to compress away consciousness.
It is not your actual input-output map that matters, but your potential one. What is uploaded must be information about your functional organization, not some abstracted mapping function. If I have 10 seconds left to live and am uploaded, my upload should still type this comment in response to your comment above, even if well more than 10 seconds have passed since I was uploaded.
And even on longer timescales, people don’t seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map.
If with years of intense and expert schooling I could say more about consciousness, then that is part of my input-output map. My upload would need to have the same property.
Even if consciousness plays a large causal role, I agree with crazy88’s point that consciousness might not be the smallest possible program that can fill that role.
Might not be, but probably is. Biological systems seem to be very efficient; most biological features are still not equalled in efficiency by human-manufactured systems even now. The chance that evolution would have created consciousness if it didn’t need to seems slim to me. So, as an engineer planning an attack on the problem, I’d expect consciousness to show up in any successful upload. If it did not, that would be a very interesting result. But of course, we need a way to measure consciousness to tell whether it is there in the upload or not.
To the best of my knowledge, no one anywhere has ever said how you go about distinguishing between a conscious being and a p-zombie.
I’m not sure that consciousness is just about the input-output map. Doesn’t it feel more like internal processing? I seem to have consciousness even when I’m not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I were mute.
I mean your input-output map writ broadly. But again, since you don’t even know how to distinguish a conscious me from a p-zombie me, we are not in a position yet to worry about the input-output map and compression, in my opinion.
If a simulation of me can be complete, able to attend graduate school and get 13 patents doing research afterwards, able to carry on an obsessive relationship with a married woman for a decade, able to enjoy a convertible he has owned for 8 years, able to post comments on LessWrong much like this one, then I would be shocked if it wasn’t conscious. But I would never know whether it was conscious, nor for that matter will I ever know whether you are conscious, until somebody figures out how to tell the difference between a p-zombie and a conscious person.
Even if that’s true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.
> I mean your input-output map writ broadly.
Can you expand what you mean by “writ broadly”? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?
That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we’re in agreement.
> Even if that’s true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.
I was thinking of uploads in the Hansonian sense: a shortcut to “building” AI. Instead of understanding AI/consciousness from the ground up and designing an AI de novo, we simply copy an actual person. Copying the person, if successful, produces a computer-run person who seems to do the things the person would have done under similar conditions.
The person is much simpler than the potential input-output map. The human system has memory, so a semi-complete input-output map could not be generated unless you started with a myriad of fresh copies of the person and ran them through all sorts of conceivable lifetimes.
You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce that, or in another metaphor, try to optimally compress that input-output map. I don’t think this is at all how an upload would work.
Consider duplicating or uploading a car. Would you drive the car back and forth over every road in the world under every conceivable traffic and weather condition, and then take that very large input-output map and try to compress and upload it? Or would you take each part of the car and upload it, along with its relationship, when assembled, to each other part in the car? You would do the second; there are too many possible inputs for the input-output approach to be even vaguely as efficient.
So I am thinking of Hansonian uploads for Hansonian reasons, and so it is fair to insist that we do the more efficient thing: upload a copy of the machine rather than a compressed input-output map, especially if the efficiency ratio is greater than 10^100:1.
> Can you expand what you mean by “writ broadly”? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?
I think I have explained that above. To characterize the machine by its input-output map, you need to consider every possible input. In the case of a person with memory, that means every possible lifetime: the input-output map is gigantic, much bigger than the machine itself, which is the brain/body.
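The blow-up can be made concrete with a toy stateful machine (a hypothetical illustration of the argument, not a model of a brain): even one bit of memory makes the full input-output map grow exponentially with the length of the input history, while the machine’s own description stays constant in size.

```python
from itertools import product

def machine(inputs):
    """Toy machine with one bit of memory: each output is the
    current input XOR the previous input (initial memory = 0)."""
    memory, outputs = 0, []
    for bit in inputs:
        outputs.append(bit ^ memory)
        memory = bit
    return tuple(outputs)

# The machine itself is a few lines of code, regardless of T.
# Its full input-output map over length-T input histories is not:
for T in (4, 8, 16):
    io_map = {seq: machine(seq) for seq in product((0, 1), repeat=T)}
    print(T, len(io_map))  # 2**T entries: 16, 256, 65536
```

For a person, whose “inputs” are entire lifetimes of sensory data, the analogous map is astronomically larger than the brain/body itself, which is where figures like 10^100:1 come from.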
> That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we’re in agreement.
What I think is that we don’t know whether or not consciousness has been thrown away, because we don’t even have a method for determining whether the original is conscious or not. To the extent you believe I am conscious, why is it? Until you can answer that, until you can build a consciousness-meter, how do we even check an upload for consciousness? What we could check is whether it SEEMS to act like the person uploaded, which is a fuzzy judgment at best.
What I would say is: IF a consciousness-meter is even possible, and I think it is but I don’t know, then any optimization that accidentally threw away consciousness would also have changed other behaviors, and would be a measurably inferior simulation compared to a conscious one.
If on the other hand there is NO measure of consciousness that could ever be developed into a consciousness-meter (or a consciousness-evaluating program, if you prefer), then consciousness is supernatural, which for all intents and purposes means it is make-believe. Literally: you make yourself believe something for reasons which, by definition, have nothing to do with anything that happened in the real, natural, measurable world.

Do we agree on either of these last two paragraphs?
> You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce that, or in another metaphor, try to optimally compress that input-output map. I don’t think this is at all how an upload would work.
Well, presumably you don’t want an atom-by-atom simulation. You want to at least compress each neuron to an approximate input-output map for that neuron, observed in practice, and then use that. Also, you might want to take some implementation shortcuts to make the thing run faster. You seem to think that all these changes are obviously harmless. I also lean toward that, but not as strongly as you, because I don’t know where to draw the line between harmless and harmful optimizations.
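One way to picture where the harmless/harmful line might sit: replacing a component with an input-output fit is only safe over the inputs that were actually observed. A minimal sketch, assuming a sigmoid “neuron” and a straight-line fit purely for illustration (neither is claimed to be how an upload would really work):

```python
import math

def neuron(x):
    """Toy 'true' neuron: a sigmoid response (illustrative assumption)."""
    return 1.0 / (1.0 + math.exp(-(x - 1.0)))

def linear_fit(f, lo, hi):
    """Replace f with a straight line through its values at lo and hi --
    the kind of observed input-output shortcut an optimizer might take."""
    slope = (f(hi) - f(lo)) / (hi - lo)
    return lambda x: f(lo) + slope * (x - lo)

approx = linear_fit(neuron, 0.5, 1.5)

# Inside the sampled range, the shortcut looks harmless...
print(abs(neuron(1.0) - approx(1.0)))  # tiny error
# ...but outside it, the approximation breaks down badly:
print(abs(neuron(5.0) - approx(5.0)))  # large error; the line keeps rising
```

The worry in the paragraph above is exactly this: an optimization that matches all observed behavior can still diverge on inputs the original system would eventually encounter.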