Here’s a reductio ad absurdum against computers being capable of consciousness at all. It’s probably wrong, and I’d appreciate feedback on why.
Suppose a consciousness-producing computer program which experiences its own isolated, deterministic world. There must be some critical instruction in the program which causes consciousness to occur; an instruction such that, if we halt the program immediately before it is executed, consciousness will not occur, and if we halt immediately after it is executed, consciousness will occur.
If we halt the program before executing the critical instruction, but save its state, consciousness should still not occur; and if we load the state back up again, and compute the results of executing the critical instruction, consciousness should then occur. It seems obvious enough that this shouldn’t stop the program’s consciousness, since it is still executed fully, just with a delay in between.
What if we subsequently load the state taken immediately prior to executing the critical instruction onto another computer? Will it produce a second conscious experience, identical to the first? The second computer is executing precisely the same code on precisely the same data as the first, so it seems reasonable to conclude that it will have the same effects. If the second computer doesn’t produce consciousness, that would seem to imply the universe has an eternally-persistent memory of every conscious experience which has ever occurred, and uses it to prevent recurrences – a pretty bizarre implication.
However, if the second computer does produce consciousness, this means that once you’ve executed a conscious program, causing its conscious experiences to occur a second time has essentially no processing requirement: you just have to execute one instruction in the simplest instruction set you like.
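The determinism claim doing the work here can be made concrete with a toy sketch. The machine, program, and register names below are entirely hypothetical illustrations (and nothing about the code bears on consciousness itself); the point is only that a deterministic machine snapshotted just before an instruction, and restored on a "second machine", yields a bit-for-bit identical result from that one step:

```python
import copy

class Machine:
    """A tiny deterministic register machine with snapshot/restore."""

    def __init__(self, program):
        self.program = program  # list of (op, register, operand) tuples
        self.regs = {}          # register file
        self.pc = 0             # program counter

    def step(self):
        op, dst, val = self.program[self.pc]
        if op == "add":
            self.regs[dst] = self.regs.get(dst, 0) + val
        elif op == "mul":
            self.regs[dst] = self.regs.get(dst, 0) * val
        self.pc += 1

    def snapshot(self):
        # Save the complete machine state.
        return copy.deepcopy(self.__dict__)

    @classmethod
    def restore(cls, state):
        # Load a saved state onto a "fresh computer".
        m = cls.__new__(cls)
        m.__dict__ = copy.deepcopy(state)
        return m

program = [("add", "x", 2), ("mul", "x", 3), ("add", "x", 1)]
m1 = Machine(program)
m1.step()              # run up to, but not past, the "critical" step
state = m1.snapshot()  # halt and save state just before instruction 1

m1.step()                    # first machine executes the critical step
m2 = Machine.restore(state)  # second machine loads the saved state...
m2.step()                    # ...and executes the same single step
assert m1.regs == m2.regs    # identical results, as determinism requires
```

Replaying the "critical" step on the second machine costs exactly one `step()` call, however much computation produced the snapshot, which is the asymmetry the argument turns on.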
If that doesn’t seem weird to you, consider the practical implication: you could print out the memory dump of a conscious program and produce consciousness by simulating the critical instruction by hand. If the program suffers, you could produce real, morally-relevant suffering by performing a single operation on a sheet of paper – and then erase your pencil marks and do it again, producing more suffering. Can consciousness really be so easy to create?
There must be some critical instruction in the program which causes consciousness to occur; an instruction such that, if we halt the program immediately before it is executed, consciousness will not occur, and if we halt immediately after it is executed, consciousness will occur.
I don’t see why. Consciousness occurs in lesser and greater amounts in humans.
Maybe there are degrees of consciousness. I read something by Daniel Dennett where he said he thought that a refrigerator light (a light bulb that is turned on when you open the refrigerator door and thus close a switch) had a very primitive form of consciousness.
Can consciousness really be so easy to create?
I too think this is a head-scratcher, yet on balance I am still a reductionist. Maybe this is not a reductio ad absurdum, but a reductio ad weirdam – a demonstration of how weird existence is. (Obligatory “reality is not weird” link.)
If you have not yet read Greg Egan’s Permutation City, you should. It will really bake your noodle.
Suppose a consciousness-producing computer program which experiences its own isolated, deterministic world. There must be some critical instruction in the program which causes consciousness to occur; an instruction such that, if we halt the program immediately before it is executed, consciousness will not occur, and if we halt immediately after it is executed, consciousness will occur.
There needn’t be any such thing. Consciousness is not an all or nothing thing, as is already evident from ordinary experience. As well ask how many atoms make life.
You’re assuming consciousness (or personhood or whatever) is binary; I’ve always assumed there’s a continuum. Then again, I’m vegetarian. If you assume that, say, the experiences of a chimpanzee or a dolphin or a dog cannot have moral weight then yes, crossing that boundary has some odd effects. And that does seem to be a common position on LW [citation needed], even if it’s not often articulated, let alone exposed to a reductio like this one, so … well done, more people should see this.
If I accept all of your suppositions, your conclusion doesn’t seem particularly difficult to accept. Sure, after doing all the prep work you describe, executing a conscious experience (albeit an entirely static, non-environment-dependent one) requires a single operation… even the conscious experience of suffering. As does any other computation you might ever wish to perform, no matter how complicated.
That said, your suppositions do strike me as revealing some confusion between an isolated conscious experience (whatever that is) and the moral standing of a system (whatever that is).
Well, this post heavily hints that a system’s moral standing is related to whether it is conscious. Eliezer mentions a need to tackle the hard problem of consciousness in order to figure out whether the simulations performed by our AI cause immoral suffering. Those simulations would be basically isolated; their inputs may be chosen based on our real-world requirements, but they don’t necessarily correspond to what’s actually going on in the real world; and their outputs would presumably be used in aggregate to make decisions, but not pushed directly into the outside world.
Maybe moral standing requires something else too, like self-awareness, in addition to consciousness. But wouldn’t there still be a critical instruction in a self-aware and conscious program, where a conscious experience of being self-aware was produced? Wouldn’t the same argument apply to any criteria given for moral standing in a deterministic program?
It’s not clear to me that whether a system is conscious (whatever that means) and whether it’s capable of a single conscious experience (whatever that means) are the same thing.