Is there no middle ground between “aware” and “not aware”, then? This is like asking “Is a boulder a chair?”, “Is a tree stump a chair?”, “Is a stool a chair?” Words are fuzzy like that.
That we have subjective experience is an objective fact.
Rather, that you have it is an objective fact to you. The empirical questions involved here are applied to other minds, not your own.
Yes, there’s a whole range. Maybe a worm has a microconsciousness, or a nanoconsciousness, or maybe it has none at all, relative to a human. Or maybe it’s like asking about the temperature of a cluster of a few atoms. The concept is indeed fuzzy.
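The temperature analogy can be made concrete with a toy simulation (this sketch is my own illustration, not part of the original exchange; the atom mass and ensemble temperature are arbitrary choices). Temperature is a statistical property of many particles: inferring it from the kinetic energy of just a few atoms gives an estimate that fluctuates so wildly that “the temperature of three atoms” is barely a meaningful quantity, whereas for tens of thousands of atoms it is sharply defined.

```python
import random
import statistics

K_B = 1.380649e-23   # Boltzmann constant, J/K
MASS = 6.63e-26      # mass of one argon atom, kg (illustrative choice)
T_TRUE = 300.0       # the ensemble temperature we sample from, K

def sample_temperature(n_atoms, rng):
    """Estimate T from the kinetic energy of n_atoms randomly drawn atoms."""
    # Std dev of each velocity component in a Maxwell-Boltzmann distribution.
    sigma = (K_B * T_TRUE / MASS) ** 0.5
    ke = 0.0
    for _ in range(n_atoms):
        vx, vy, vz = (rng.gauss(0.0, sigma) for _ in range(3))
        ke += 0.5 * MASS * (vx * vx + vy * vy + vz * vz)
    # Equipartition: <KE> = (3/2) N k_B T, so T = 2 KE / (3 N k_B).
    return 2.0 * ke / (3.0 * n_atoms * K_B)

rng = random.Random(0)
for n in (3, 30000):
    estimates = [sample_temperature(n, rng) for _ in range(200)]
    spread = statistics.stdev(estimates) / statistics.mean(estimates)
    print(f"N={n:>6}: relative spread of T estimate = {spread:.3f}")
```

With three atoms the estimate scatters by tens of percent from run to run; with thirty thousand it is stable to well under one percent. The concept only sharpens as the system grows, which is the point of the analogy.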
Other people seem to be the same sort of thing as me, and they report awareness of things. That’s good enough for me to believe that they have consciousness. When robots get good enough that they no longer sound like spam when they pretend to be people, that criterion will have to be reexamined.
As Scott Aaronson points out in his discussion of IIT (Integrated Information Theory), experiences of oneself and intuitions about other creatures based on their behaviour are all we have to go on at present. If an explanation of consciousness doesn’t more or less match up to those intuitions, it’s a problem for the explanation, not the intuitions.