Each time a question like this comes up it seems to get downvoted as a bad question. I think it's a great question, just one for which there are no obviously satisfactory answers. Dennett's approach seems to be to say that if you just word things differently it's all fine, nothing to see here. But to me this is a strange way of avoiding the question.
We feel there is a difference between living things and inanimate ones. We believe that other people and some animals have feelings similar to our own. Many people would find it absurd to think that devices or machines feel anything. Yet whatever computational model of our minds we create, it is hard to identify the point at which it starts to feel. It is easy to create a virtual character that appears to feel, but most people doubt that it is doing any more than simulating feelings, similar to the inauthentic patterns of behaviour we adopt when we are acting or lying. One can imagine what life would feel like if we were constantly acting, performing reasoned interactions without sincere emotion. If at heart we are computational, why does all interaction not feel this way?
To me this distinction is what makes consciousness special. I think it is a fascinating consequence of a certain pattern of interacting systems, which implies that conscious feelings occur all over the place; perhaps every feedback system is feeling something (see the sketch below for the kind of system I mean).
My justification for this theory is that it offers a simple explanation of the origin of conscious experience; I believe explanations should be simple and free of special cases (I don't find the idea that human beings are fundamentally distinct from other structures particularly elegant).
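To be concrete about what I mean by a feedback system, here is a minimal toy loop, a thermostat-style sense, compare, correct cycle. It is only an illustration of the kind of structure I have in mind, not a claim that this particular loop feels anything.

```python
# A minimal feedback loop (a toy thermostat), purely to illustrate the kind
# of "feedback system" referred to above. Nothing here is claimed to feel
# anything; it just senses a value, compares it to a target, and corrects.

def thermostat_step(temperature, target):
    """One cycle of sense -> compare -> act."""
    error = target - temperature             # sense the gap to the target
    heater_power = max(0.0, 0.5 * error)     # act: heat in proportion to the gap
    return temperature + heater_power - 0.1  # environment: heating minus heat loss

temperature = 15.0
for _ in range(20):
    temperature = thermostat_step(temperature, target=21.0)
print(round(temperature, 1))  # settles just below the 21.0 target
```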
This sounds like the point Pinker makes in How the Mind Works—that apart from the problem of consciousness, concepts like “thinking” and “knowing” and “talking” are actually very simple:
(...) Ryle and other philosophers argued that mentalistic terms such as "beliefs," "desires," and "images" are meaningless and come from sloppy misunderstandings of language, as if someone heard the expression "for Pete's sake" and went around looking for Pete. Simpatico behaviorist psychologists claimed that these invisible entities were as unscientific as the Tooth Fairy and tried to ban them from psychology.

And then along came computers: fairy-free, fully exorcised hunks of metal that could not be explained without the full lexicon of mentalistic taboo words. "Why isn't my computer printing?" "Because the program doesn't know you replaced your dot-matrix printer with a laser printer. It still thinks it is talking to the dot-matrix and is trying to print the document by asking the printer to acknowledge its message. But the printer doesn't understand the message; it's ignoring it because it expects its input to begin with '%!' The program refuses to give up control while it polls the printer, so you have to get the attention of the monitor so that it can wrest control back from the program. Once the program learns what printer is connected to it, they can communicate." The more complex the system and the more expert the users, the more their technical conversation sounds like the plot of a soap opera.

Behaviorist philosophers would insist that this is all just loose talk. The machines aren't really understanding or trying anything, they would say; the observers are just being careless in their choice of words and are in danger of being seduced into grave conceptual errors. Now, what is wrong with this picture? The philosophers are accusing the computer scientists of fuzzy thinking? A computer is the most legalistic, persnickety, hard-nosed, unforgiving demander of precision and explicitness in the universe. From the accusation you'd think it was the befuddled computer scientists who call a philosopher when their computer stops working rather than the other way around. A better explanation is that computation has finally demystified mentalistic terms. Beliefs are inscriptions in memory, desires are goal inscriptions, thinking is computation, perceptions are inscriptions triggered by sensors, trying is executing operations triggered by a goal.
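To make the last sentence of that passage concrete, here is a toy sketch of my own (not Pinker's, and only an analogy, not a claim about minds): a program whose "beliefs" are entries in memory, whose "desires" are goal entries, whose "perception" is a sensor-triggered update, whose "thinking" is computation over the two, and whose "trying" is executing an operation aimed at an unmet goal.

```python
# Toy sketch (my own illustration, not Pinker's code): the glossary at the
# end of the quote, taken literally as program state.

class TinyAgent:
    def __init__(self):
        self.beliefs = {"printer": "dot-matrix"}   # beliefs: inscriptions in memory
        self.desires = ["document_printed"]        # desires: goal inscriptions

    def perceive(self, sensor_reading):
        # Perceptions: inscriptions triggered by sensors.
        self.beliefs.update(sensor_reading)

    def think(self):
        # Thinking: computation over beliefs and desires, picking out unmet goals.
        return [goal for goal in self.desires if not self.beliefs.get(goal)]

    def act(self, goals):
        # Trying: executing operations triggered by a goal.
        for goal in goals:
            print(f"executing an operation aimed at: {goal}")


agent = TinyAgent()
agent.perceive({"printer": "laser"})  # the belief about the printer is updated
agent.act(agent.think())              # runs the operation for the unmet goal
```

None of this settles the question the original poster is asking; it only shows how naturally the mentalistic vocabulary maps onto ordinary program structure, which is Pinker's point.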