presumably given a sufficiently advanced cognitive science, we could look at its inner workings and say whether it’s conscious.
Can we please stop discussing consciousness as though it’s some sort of binary option? As though passing a Turing test somehow imbues a system with some magical quality that changes everything?
An AI won’t suddenly go ‘ping’ and become self-aware, any more than a baby suddenly becomes a self-aware entity on its second birthday. Deciding whether boxing an AI amounts to slavery is akin to the debate over animal rights, in that it turns on the slippery, quantitative question of how much moral weight we give to ‘consciousness’. It’s definitely not a yes/no question, and we shouldn’t treat it as such.
I think that’s right.
Yet, two things:
1. It’s very hard for me to imagine half a quale. Perhaps this is a failure of imagination?
2. How do we detect even quantitative levels of consciousness? Surely it’s not enough just to have processing power; you must actually be doing the right sort of thing (computations, behaviors, chemical reactions, something). But then… are our computers conscious, even a little bit? If so, does this change our moral relationship to them? If not, how do we know that?