I have problems with the “Giant look-up table” post.
“The problem isn’t the levers,” replies the functionalist, “the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling… Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it’s possible to program a conscious being in Haskell.”
If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human’s behaviour is dependent not just on the present state of the environment, but also on previous states. I don’t see how you can successfully emulate a human without that. So the GLUT’s entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.
Note that “creation of beliefs” (including about beliefs) is just a special case of memory. It’s all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn’t have this ability, it can’t emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.
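To make that concrete, here is a minimal sketch (in Haskell, with the names Input, Action, GLUT and step invented purely for illustration) of a table keyed on the entire input history: nothing is ever written to internal state, yet an input at t1 changes the key and therefore restricts which entries can be looked up at any later t2.

```haskell
import qualified Data.Map as Map

type Input   = String                  -- stand-in for one time-step's sensory input
type Action  = String                  -- stand-in for one time-step's output
type History = [Input]                 -- everything seen so far, oldest first

-- The GLUT itself: one entry per possible input history.
type GLUT = Map.Map History Action

-- One look-up cycle. The only "memory" is the history used as the key:
-- an input at t1 changes the key, and therefore restricts which entries
-- can ever be looked up at t2 > t1.
step :: GLUT -> History -> Input -> (Action, History)
step table past now =
  let past' = past ++ [now]
      act   = Map.findWithDefault "no-op" past' table
  in (act, past')
```

On this reading, beliefs (including beliefs about beliefs) are just more structure in the growing key, which is the sense in which they are a special case of memory.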
So I don’t see how the non-consciousness of the GLUT is established by this argument.
But in this case, the origin of the GLUT matters; and that’s why it’s important to understand the motivating question, “Where did the improbability come from?”
The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (...)
In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.
But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.
If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human’s behaviour is dependent not just on the present state of the environment, but also on previous states. I don’t see how you can successfully emulate a human without that. So the GLUT’s entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.
Memory is input too. The “GLUT” is just fed everything it has seen so far back in as input, along with the current state of its external environment. A copy is made and added to the rest of the memory, and the next cycle it is fed in again together with the next new state.
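A minimal sketch of that cycle, again in Haskell and again with an invented name (runGLUT is hypothetical), treating the table as a pure function from the accumulated history to an action:

```haskell
-- Each cycle the table is handed everything it has seen so far plus the new
-- environment state; the appended copy becomes next cycle's "memory".
runGLUT :: ([input] -> action) -> [input] -> [action]
runGLUT table = go []
  where
    go _    []           = []
    go seen (now : rest) =
      let seen' = seen ++ [now]   -- copy of the old memory plus the new state
      in table seen' : go seen' rest
```

Each pass appends the new state to a copy of everything seen so far and feeds that back in, which is all the memory the scheme needs.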
This is basically just the Chinese Room argument. There is a room in China, and someone slips a few symbols underneath the door every so often. The symbols are given to a computer running an artificial intelligence, which composes an appropriate response and slips it back through the door. Does the computer actually understand Chinese? Well, what if a human carried out exactly the same process the computer does, manually, even though the operator only speaks English? No matter how long he does it, he will never truly understand Chinese, even if he memorizes the entire process and runs it in his head. So how could the computer “understand”?