ADDED: Many people are replying by saying, “Obviously, option 2 is correct,” then listing arguments for it, without addressing the problems with option 2. That’s cheating.
Phil, I have to say that I don’t think the problems with option 2 are actually presented in your post. But that does not mean that we are allowed to dodge the question implicit in your post: how to formally distinguish between two computational processes, one conscious, the other not. Let me start my attempt with a quote:
“Consciousness is overrated. What we call consciousness now is a very imperfect summary in one part of the brain of what the rest is doing.”—Marvin Minsky
I believe with Minsky that consciousness is a very anthropocentric concept, inheriting much of the complexity of its originators. I actually have no problem with an anthropocentric approach to consciousness, so I like the following intuitive “definition”: X is conscious if it is not silly to ask “what is it like to be X?”. The subtle source of anthropocentrism here, of course, is that it is humans who do the asking. As materialists, we just can’t formalize this intuitive definition without mapping specific human brain functions to processes of X. In short, we inherently need human neuroscience. So it is not too surprising that we will not find a nice, clean decision procedure to distinguish between two computational processes, one conscious, the other not.
Most probably you are not happy with this anthropocentric approach. Then you will have to distill some clean, mathematically tractable concept from the messy concept of consciousness. If you agree with Hofstadter and Minsky, then you will probably reach something related to self-reflection. This may or may not work, but I believe that you will lose the spirit of the original concept during such a formalization. Your decision procedure will probably give unexpected results for many things: various simple, very unintelligent computer programs, hive minds, and maybe even rooms full of people.