I’ll chime in with the same thing I always say: the label “consciousness” is a convenient lumping-together of a bunch of different properties that co-occur in humans, but don’t have to all go together a priori.
For example, the way I can reason via subverbal arguments that I can later remember and elaborate if I attend to them. Or what it’s like to remember things via associations and vague indications of how long ago they happened, rather than e.g. via a discrete system of tags and timestamps. Or my sense of aversion to noxious stimuli, which can in turn be broken down into many subroutines.
Future AIs are simply going to have some of these properties, to some extent, but not all of them. Asking whether that counts as consciousness is somewhat useless; what matters is to what extent we want to treat them as moral patients, based on a more fine-grained consideration of their properties.