Hi Matt, thanks for dropping by. Here is an older comment of mine that tries to directly address what I consider the hardest of your questions: How to distinguish from the outside between two computational processes, one conscious, the other not. I’ll copy it here for convenience. Most of the replies to you here can be safely considered Less Wrong consensus opinion, but I am definitely not claiming that about my reply.
I start my answer with a Minsky quote:
“Consciousness is overrated. What we call consciousness now is a very imperfect summary in one part of the brain of what the rest is doing.”—Marvin Minsky
I believe, with Minsky, that consciousness is a very anthropocentric concept, inheriting much of the complexity of its originators. I actually have no problem with an anthropocentric approach to consciousness, so I like the following intuitive “definition”: X is conscious if it is not silly to ask “what is it like to be X?”. The subtle source of anthropocentrism here, of course, is that it is humans who do the asking. As materialists, we just can’t formalize this intuitive definition without mapping specific human brain functions to processes of X. In short, we inherently need human neuroscience. So it is not too surprising that we will not find a nice, clean decision procedure to distinguish between two computational processes, one conscious, the other not.
Most probably you are not happy with this anthropocentric approach. In that case you will have to distill some clean, mathematically tractable concept from the messy concept of consciousness. If you agree with Hofstadter and Minsky, you will probably arrive at something related to self-reflection. This may or may not work, but I believe you will lose the spirit of the original concept during such a formalization. Your decision procedure will probably give unexpected results for many things: various simple, very unintelligent computer programs, hive minds, and maybe even rooms full of people.
This ends my old comment, and I will just add a footnote related to ethical implications. With HonoreDB, I can in principle imagine a world with cooperating and competing agents, some conscious, others not, but otherwise having similar negotiating power. I believe that the ethical norms emerging in this imagined world would not even mention consciousness. If you want to build an ethical system for humans, you can “arbitrarily” decide that protecting consciousness is a terminal value. Why not? But if you want to build a non-anthropocentric ethical system, you will see that the question of consciousness is orthogonal to its issues.