All this talk of P-zombies. Is there even a hint of a mechanism that anybody can think of to detect whether something else is conscious, or to measure its degree of consciousness, assuming consciousness admits of degrees?
I have spent my life figuring other humans are probably conscious purely on an Occam’s razor kind of argument: I am conscious, and the most straightforward explanation for my similarities and grouping with all these other people is that they are in relevant respects just like me. But I have always thought that increasingly complex simulations of humans could be “obviously” not conscious and yet be mistaken by others for conscious. Is every human on the planet who reaches “voice mail jail” or an interactive voice-response system aware that they have not reached a consciousness? Do even those of us who are aware forget sometimes when we are not being careful? Is this going to become an even harder distinction to make as tech continues to get better?
I have been enjoying the television show “Almost Human.” In this show there are androids, most of which have been designed NOT to be too much like humans, although what they are really like is boring, rule-following humans. It is clear in this show that the value placed on an android “life” is a tiny fraction of the value placed on a human life; in the first episode a human cop kills his android partner in order to get another one. The partner he does get is much more like a human, but is still considered the property of the police department for which he works, and nobody really has much of a problem with this. Ironically, this “almost human” android partner is African American.
Is this going to become an even harder distinction to make as tech continues to get better?
Wei once described an interesting scenario in that vein. Imagine you have a bunch of human uploads, computer programs that can truthfully say “I’m conscious”. Now you start optimizing them for space, compressing them into smaller and smaller programs that have the same outputs. Then at some point they might start saying “I’m conscious” for reasons other than being conscious. After all, you can have a very small program that outputs the string “I’m conscious” without being conscious.
So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them. It’s not clear where the cutoff happens, or even whether it’s meaningful to talk about the cutoff happening at some point. And this is something that could happen in reality, if we ask a future AI to optimize the universe for more humans or something.
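To make the compression intuition concrete, here is a minimal, purely illustrative sketch (nothing here is anyone’s actual upload code): two Python programs whose observable output is identical, even though one does far more internal work than the other. An observer who only compares outputs cannot tell which one they are talking to.

```python
# Two programs with identical observable output; only their internals differ.

def rich_report():
    """Stand-in for a large simulation that 'introspects' before answering."""
    memories = ["red sunsets", "toothache", "the smell of rain"]
    # Pretend this is some elaborate self-modelling computation.
    self_model = {m: len(m) for m in memories}
    if self_model:  # the simulation "checks" its own state before replying
        return "I'm conscious"

def tiny_report():
    """The maximally compressed version: a constant string."""
    return "I'm conscious"

# An outside observer who only sees outputs cannot tell these apart.
assert rich_report() == tiny_report()
print(tiny_report())
```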
Also this scenario reopens the question of whether uploads are conscious in the first place! After all, the process of uploading a human mind to a computer can also be viewed as a compression step, which can fold constant computations into literal constants, etc. The usual justification says that “it preserves behavior at every step, therefore it preserves consciousness”, but as the above argument shows, that justification is incomplete and could easily be wrong.
So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them.
Suppose you mean lossless compression. The compressed program has ALL the same outputs to the same inputs as the original program.
Then if running the uncompressed program involved consciousness and running the compressed program did not, you have either proved or defined consciousness to be something which is not an output. If it is possible to do what you are suggesting, then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.
From an evolutionary point of view, can a feature with no output, with absolutely zero effect on the creature’s interaction with its environment, ever evolve? There would be no mechanism for it to evolve; there is no basis on which to select for it. It seems to me that to believe in the possibility of p-zombies is to believe in the supernatural: a world of phenomena, such as consciousness, that for some reason is not allowed to be listed as a phenomenon of the natural world.
At the moment, I can’t really distinguish how a belief that p-zombies are possible is any different from a belief in the supernatural.
Also this scenario reopens the question of whether uploads are conscious in the first place!
Years ago I thought an interesting experiment in artificial consciousness would be to build an increasingly complex verbal simulation of a human, to the point where you could have conversations involving reflection with the simulation. At that point you could ask it if it was conscious and see what it had to say. Would it say “Not so far as I can tell”?
The p-zombie assumption is that it would say “yeah, I’m conscious, duh, what kind of question is that?” But the way a simulation actually gets built is that you have a list of requirements and you keep accreting code until all the requirements are met. If your requirements included a vast array of features but NOT the feature that it answer this question one way or another, conceivably you could elicit an “honest” answer from your sim. If all such sims answered “yes,” you might conclude that somehow, in the collection of features you HAD required, consciousness emerged. You could then do other experiments where you removed features from the sim and kept statistics on how those sims answered the question (a rough harness is sketched below). You might see the sims saying “no, I don’t think so” and conclude that whatever it is in us that makes us function as conscious, we hadn’t yet found that thing and put it in our list of requirements.
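A rough sketch of that ablation experiment, under heavy assumptions: FakeSim, its ask method, and the feature names are all made-up stand-ins for whatever verbal-simulation framework you actually had.

```python
# Hypothetical ablation harness: remove one feature at a time and tally answers.

ALL_FEATURES = {"memory", "self_model", "language", "emotion"}  # illustrative names

class FakeSim:
    """Placeholder sim: answers 'yes' only if one arbitrary feature is present."""
    def __init__(self, features):
        self.features = features
    def ask(self, question):
        return "yes" if "self_model" in self.features else "no, I don't think so"

def ablation_study(build_sim, trials=50):
    """For each removed feature, report how often the sims claim consciousness."""
    results = {}
    for removed in sorted(ALL_FEATURES):
        kept = ALL_FEATURES - {removed}
        yes = sum(build_sim(kept).ask("Are you conscious?").lower().startswith("yes")
                  for _ in range(trials))
        results[removed] = yes / trials
    return results

# Whichever feature's removal flips the answer is the interesting one.
print(ablation_study(FakeSim))
```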
Then if running the uncompressed program involved consciousness and running the compressed program did not, you have either proved or defined consciousness to be something which is not an output. If it is possible to do what you are suggesting, then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.
I haven’t thought about this stuff for a while and my memory is a bit hazy in relation to it, so I could be getting things wrong here, but this comment doesn’t seem right to me.
First, my p-zombie is not just a duplicate of me in terms of my input-output profile. Rather, it’s a perfect physical duplicate of me. So one can deny the possibility of zombies while still holding that a computer with the same input-output profile as me is not conscious. For example, one could hold that only carbon-based life can be conscious, and hence deny that an identical input-output profile implies consciousness, while still denying the possibility of zombies (that is, denying that a physical duplicate of a conscious carbon-based lifeform could lack consciousness).
Second, if it could be shown that the same input-output profile could exist even with consciousness removed, this doesn’t show that consciousness can’t play a causal role in guiding behaviour. Rather, it shows that the same input-output profile can exist without consciousness. That doesn’t mean that consciousness can’t cause that input-output profile in one system while something else causes it in another system.
Third, it seems that one can deny the possibility of zombies while accepting that consciousness has no causal impact on behaviour (contra the last sentence of the quoted fragment): one could hold that the behaviour causes the conscious experience (or that the thing which causes the behaviour also causes the conscious experience). One could then deny that something could be physically identical to me but lack consciousness (that is, deny the possibility of zombies) while still accepting that consciousness lacks causal influence on behaviour.
Am I confused here or do the three points above seem to hold?
Am I confused here or do the three points above seem to hold?
I think formally you are right.
But if consciousness is essential to how we get important aspects of our input-output map, then I think the chances of there being another mechanism that produces the same input-output map are about equal to the chances that you could program a car to drive from here to Los Angeles without using any feedback mechanisms, just by dialing in ahead of time all the stops and starts and turns it would need. Formally possible, but bearing absolutely no relationship to how anything that works has ever been built.
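To make the feedback analogy concrete, here is a toy sketch (all numbers arbitrary): a one-dimensional “car” driven toward a target under noisy dynamics, once with a pre-computed open-loop command sequence and once with simple feedback. The open-loop plan is formally capable of reaching the target, but noise wrecks it, while feedback keeps correcting.

```python
# Toy comparison of open-loop vs feedback control under noise.
import random

random.seed(0)
TARGET, STEPS, NOISE = 100.0, 200, 0.5

def drive(controller):
    pos = 0.0
    for t in range(STEPS):
        pos += controller(t, pos) + random.gauss(0, NOISE)  # noisy motion
    return abs(pos - TARGET)

def open_loop(t, pos):
    return TARGET / STEPS        # pre-planned step, ignores where the car actually is

def feedback(t, pos):
    return 0.5 * (TARGET - pos)  # corrects based on where the car actually is

print("open-loop final error:", round(drive(open_loop), 2))
print("feedback final error: ", round(drive(feedback), 2))
```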
I am not a mathematician about these things; I am an engineer, or a physicist in the sense of Feynman.
A few points:

1) Initial mind uploading will probably be lossy, because it needs to convert analog to digital (see the quantization sketch after this list).
2) I don’t know if even lossless compression of the whole input-output map is going to preserve everything. Let’s say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn’t contain many interesting statements about consciousness, but that doesn’t mean you’re allowed to compress away consciousness. And even on longer timescales, people don’t seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map. Or at least we can’t say that input-output map is large, unless we figure out more about consciousness in the first place!
3) Even if consciousness plays a large causal role, I agree with crazy88′s point that consciousness might not be the smallest possible program that can fill that role.
4) I’m not sure that consciousness is just about the input-output map. Doesn’t it feel more like internal processing? I seem to have consciousness even when I’m not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.
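The quantization sketch promised under point 1, purely illustrative: digitizing an analog quantity rounds it to a grid, and no later processing can recover what was rounded away. The voltages and resolution here are made-up numbers, not claims about real scanners.

```python
# Analog-to-digital conversion discards information irreversibly.
import numpy as np

rng = np.random.default_rng(0)
analog = rng.uniform(-70.0, -50.0, size=5)   # e.g. membrane voltages in mV (made up)
step = 0.5                                    # assumed scanner resolution
digital = np.round(analog / step) * step      # quantize to the nearest step

print(np.column_stack([analog, digital]))
print("max rounding error:", np.max(np.abs(analog - digital)))  # <= step / 2
```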
I don’t know if even lossless compression of the whole input-output map is going to preserve everything. Let’s say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn’t contain many interesting statements about consciousness, but that doesn’t mean you’re allowed to compress away consciousness.
It is not your actual input-output map that matters, but your potential one. What is uploaded must be information about your functional organization, not some abstracted mapping function. If I have 10 s left to live and I am uploaded, my upload should type this comment in response to your comment above even if it is well more than 10 s after I was uploaded.
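A small sketch of the actual-versus-potential distinction, with made-up stand-ins: a recorded trace of past (input, output) pairs only covers inputs that actually occurred, while the functional organization (here, just a Python function) also fixes what the system would do with inputs it never received.

```python
# Actual trace vs potential input-output map.

def organism(stimulus: str) -> str:
    """Stand-in for the person's functional organization."""
    return "approach" if "food" in stimulus else "ignore"

# Ten seconds of actual life only exercises a tiny slice of that map.
actual_trace = {s: organism(s) for s in ["saw food", "heard noise"]}

print(actual_trace.get("smelled food"))   # None: the recorded trace doesn't know
print(organism("smelled food"))           # 'approach': the organization does
```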
And even on longer timescales, people don’t seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map.
If with years of intense and expert schooling I could say more about consciousness, then that is part of my input-output map. My upload would need to have the same property.
Even if consciousness plays a large causal role, I agree with crazy88′s point that consciousness might not be the smallest possible program that can fill that role.
Might not be, but probably is. Biological function seems to be very efficient, with most biological features not equalled in efficiency by human-manufactured systems even now. The chances that evolution would have created consciousness if it didn’t need to seem slim to me. So as an engineer trying to plan an attack on the problem, I’d expect consciousness to show up in any successful upload. If it did not, that would be a very interesting result. But of course, we need a way to measure consciousness to tell whether it is there in the upload or not.
To the best of my knowledge, no one anywhere has ever said how you go about distinguishing between a conscious being and a p-zombie.
I’m not sure that consciousness is just about the input-output map. Doesn’t it feel more like internal processing? I seem to have consciousness even when I’m not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.
I mean your input-output map writ broadly. But again, since you don’t even know how to distinguish a conscious me from a p-zombie me, we are not in a position yet to worry about the input-output map and compression, in my opinion.
If a simulation of me can be complete, able to attend graduate school and get 13 patents doing research afterwards, able to carry on an obsessive relationship with a married woman for a decade, able to enjoy a convertible he has owned for 8 years, able to post comments on LessWrong much like this one, then I would be shocked if it wasn’t conscious. But I would never know whether it was conscious, nor for that matter will I ever know whether you are conscious, until somebody figures out how to tell the difference between a p-zombie and a conscious person.
Even if that’s true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.
I mean your input-output map writ broadly.
Can you expand what you mean by “writ broadly”? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?
That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we’re in agreement.
Even if that’s true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.
I was thinking of uploads in the Hansonian sense, as a shortcut to “building” AI. Instead of understanding AI/consciousness from the ground up and designing an AI de novo, we simply copy an actual person. Copying the person, if successful, produces a computer-run person which seems to do the things the person would have done under similar conditions.
The person is much simpler than the potential input-output map. The human system has memory, so a semi-complete input-output map could not be generated unless you started with a myriad of fresh copies of the person and ran them through all sorts of conceivable lifetimes.
You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce it, or, in another metaphor, trying to optimally compress that input-output map. I don’t think this is at all how an upload would work.
Consider duplicating or uploading a car. Would you drive the car back and forth over every road in the world under every conceivable traffic and weather condition, and then take that very large input-output map and try to compress and upload it? Or would you take each part of the car and upload it, along with its relationship, when assembled, to each other part of the car? You would do the second; there are too many possible inputs to imagine the input-output approach could be even vaguely as efficient.
So I am thinking of Hansonian uploads for Hansonian reasons, and so it is fair to insist we do the more efficient thing: upload a copy of the machine rather than a compressed input-output map, especially if the ratio of efficiency is > 10^100:1.
Can you expand what you mean by “writ broadly”? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?
I think I have explained that above. To characterize the machine by its input-output map, you need to consider every possible input. In the case of a person with memory, that means every possible lifetime: the input-output map is gigantic, much bigger than the machine itself, which is the brain/body.
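Some back-of-envelope arithmetic for that size claim, with deliberately cartoonish numbers: even one binary input per second over an 80-year lifetime gives an input-output map with an astronomically larger description than the brain that generates it.

```python
# Rough size comparison: full input-output map vs the machine itself.
import math

SYMBOLS_PER_STEP = 2                 # cartoon world: one binary input per second
SECONDS = 60 * 60 * 24 * 365 * 80    # one 80-year "lifetime" of inputs

# Number of distinct possible lifetimes is 2**SECONDS; just report its magnitude.
digits = SECONDS * math.log10(SYMBOLS_PER_STEP)
print(f"distinct input histories: about 10^{digits:,.0f}")   # ~10^759,000,000
print("synapses in a brain:      about 10^14 (rough order of magnitude)")
```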
That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we’re in agreement.
What I think is that we don’t know whether or not consciousness has been thrown away, because we don’t even have a method for determining whether the original is conscious or not. To the extent that you believe I am conscious, why is it? Until you can answer that, until you can build a consciousness-meter, how do we even check an upload for consciousness? What we could check it for is whether it SEEMS to act like the person uploaded, which is a sort of fuzzy opinion.
What I would say is that IF a consciousness-meter is even possible, and I think it is but I don’t know, then any optimization that accidentally threw away consciousness would have changed other behaviors as well, and would be a measurably inferior simulation compared to a conscious one.
If on the other hand there is NO measure of consciousness that could be developed into a consciousness-meter (or a consciousness-evaluating program if you prefer), then consciousness is supernatural, which for all intents and purposes means it is make-believe. Literally, you make yourself believe something for reasons which by definition have nothing to do with anything that happened in the real, natural, measurable world.

Do we agree on either of these last two paragraphs?
You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce it, or, in another metaphor, trying to optimally compress that input-output map. I don’t think this is at all how an upload would work.
Well, presumably you don’t want an atom-by-atom simulation. You want to at least compress each neuron to an approximate input-output map for that neuron, observed in practice, and then use that. Also you might want to take some implementation shortcuts to make the thing run faster. You seem to think that all these changes are obviously harmless. I also lean toward that, but not as strongly as you, because I don’t know where to draw the line between harmless and harmful optimizations.
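Here is a sketch of the per-neuron compression being described, under obvious simplifications: the “detailed model” is just a made-up rate function standing in for an expensive biophysical simulation, and the “compressed” version is a small table of observed input-rate to output-rate samples, interpolated between them.

```python
# Replace a (stand-in) detailed neuron model with an interpolated lookup table.
import numpy as np

def detailed_neuron(input_current):
    """Stand-in for an expensive biophysical simulation (not real neuroscience)."""
    return 100.0 / (1.0 + np.exp(-(input_current - 5.0)))   # firing rate in Hz

# "Observe it in practice" at a handful of input levels...
sample_inputs = np.linspace(0.0, 10.0, 11)
sample_rates = detailed_neuron(sample_inputs)

def compressed_neuron(input_current):
    """Approximate I/O map: linear interpolation over the observed samples."""
    return np.interp(input_current, sample_inputs, sample_rates)

test = 4.3
print(detailed_neuron(test), compressed_neuron(test))   # close, but not identical
```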
Right; with lossless compression you’re not going to lose anything. So cousin_it probably means lossy compression, like with JPEGs and MP3s: smaller versions that are very similar to what you had before.
Well, initial mind uploading is going to be lossy because it will convert analog to digital.
That said, I don’t know if even lossless compression of the whole input-output map is going to preserve everything. Let’s say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn’t contain many interesting statements about consciousness, but that doesn’t mean you’re allowed to compress away consciousness...
And even on longer timescales, people don’t seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map. Or at least we can’t say that input-output map is large, unless we figure out more about consciousness in the first place.
(Also I agree with crazy88′s point that consciousness might play a large causal role but still be compressible to a smaller non-conscious program.)
More generally, I’m not sure that consciousness is just about the input-output map. Doesn’t it feel more like internal processing? I seem to have consciousness even when I’m not talking about it, and I would still have it even if my religion prohibited me from talking about it, or something.
It depends on whether you subscribe to materialism. If you do, then there is nothing to measure. Consciousness might even be a tricky illusion, as Dennett suggests.
If on the other hand you do believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.
If on the other hand you do believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.
OMG then someone should get busy! Tell me what I can measure and if it makes any kind of sense I will start working on it!
I do have a qualia for perceiving whether someone else is present in a meditation or is absent-minded. It could be that it’s some mental reaction that picks up microgestures or some other thing that I don’t consciously perceive and summarizes that information into a qualia for mental presence.
Investigating how such a qualia works is what I would personally do if I wanted to investigate consciousness.
But you probably have no such qualia, so you either need to find someone who has it or develop it yourself. In both cases that probably means seeking out a good meditation teacher.
It’s a difficult subject to talk about in a medium like this, where the people who are into a spiritual framework that has some model of what consciousness happens to be have phenomenological primitives that the audience I’m addressing doesn’t have. In my experience, most of the people who I consider capable in that regard are very unwilling to talk about details with people who don’t have the phenomenological primitives to make sense of them. Instead of answering a question directly, a Zen teacher might give you a koan and tell you to come back in a month when you have built the phenomenological primitives to understand it, except that he doesn’t tell you it’s about phenomenological primitives.
I don’t know of a human-independent definition of consciousness, do you? If not, how can one say that “something else is conscious”? So the statement
increasingly complex simulations of humans could be “obviously” not conscious and yet be mistaken by others for conscious
will only make sense once there is a definition of consciousness not relying on being a human or using one to evaluate it. (I have a couple ideas about that, but they are not firm enough to explicate here.)
I don’t know of ANY definition of consciousness which is testable, human-independent or not.

Integrated Information Theory is one attempt at a definition. I read about it a little, but not enough to determine if it is completely crazy.

IIT provides a mathematical approach to measuring consciousness. It is not crazy, and has a significant number of good papers on the topic. It is human-independent.
I don’t understand it, but from reading the Wikipedia summary it seems to me that it measures the complexity of the system. Complexity is not necessarily consciousness.
According to this theory, what is the key difference between a human brain, and… let’s say a hard disk of the same capacity, connected to a high-resolution camera? Let’s assume that the data from the camera are being written in real time to pseudo-random parts of the hard disk. The pseudo-random parts are chosen by calculating a checksum of the whole hard disk. This system obviously is not conscious, but seems complex enough.
IIT proposes that consciousness is integrated information.
The key difference between the brain and the hard disk is that the disk has no way of knowing what it is actually sensing. The brain can tell the difference between many more senses, and can receive and use more forms of information. The camera is not conscious of the fact that it is sensing light and colour.
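For what it’s worth, here is a toy of the flavor of “integration”, with the loud caveat that this is NOT the actual Φ defined in the IIT papers: it just estimates how much one half of a tiny binary system tells you about the other half’s next state, beyond what that half already knows about itself. A coupled system scores well above zero; a system whose parts run independently scores near zero.

```python
# Crude "integration" toy: conditional mutual information I(next_x ; y | x),
# estimated from simulated trajectories of a two-bit system. Not IIT's Phi.
import math, random
from collections import Counter

random.seed(1)

def simulate(coupled, steps=200_000):
    x, y = 0, 0
    triples = Counter()
    for _ in range(steps):
        if coupled:
            nx = y if random.random() < 0.9 else 1 - y   # x listens to y
        else:
            nx = x if random.random() < 0.9 else 1 - x   # x only listens to itself
        ny = 1 - y if random.random() < 0.5 else y       # y flips at random
        triples[(x, y, nx)] += 1
        x, y = nx, ny
    return triples

def conditional_mi(triples):
    """Estimate I(next_x ; y | x) in bits from the observed counts."""
    n = sum(triples.values())
    px, pxy, pxn = Counter(), Counter(), Counter()
    for (x, y, nx), c in triples.items():
        px[x] += c; pxy[(x, y)] += c; pxn[(x, nx)] += c
    mi = 0.0
    for (x, y, nx), c in triples.items():
        mi += (c / n) * math.log2((c * px[x]) / (pxy[(x, y)] * pxn[(x, nx)]))
    return mi

print("coupled:    ", round(conditional_mi(simulate(True)), 3))   # clearly > 0
print("independent:", round(conditional_mi(simulate(False)), 3))  # ~ 0
```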
This article is a good introduction to the topic, and the photodiode example in the paper is the simple version of your question: http://www.biolbull.org/content/215/3/216.full
Thanks! The article was good. At this moment, I am… not convinced, but also not able to find an obvious error.