One philosophical insight suggesting that the inside of the system doesn’t matter to conscious states is this: we can describe our conscious states to an outside observer, so what-we-call-consciousness has no parts unconnected to the output of the entire system.
Believing that an artificial consciousness has to conform to the computational architecture of the human brain (at a specific level of abstraction) would be unjustified anthropocentrism, no different from believing that a system can’t be conscious without neural tissue.
What about a large look-up table that mapped (the conversation so far) → (what to say next) and was able to pass the Turing test? This program would have all the external signs of consciousness, but would you really describe it as a conscious being in the same way that you are?
That wouldn’t fit into our universe (by about 2 metaorders of magnitude). But yes, that simple software would indeed have an equivalent consciousness, with the complexity almost completely moved from the algorithm to the data. There is no other option.
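To make the thought experiment concrete, here is a minimal sketch of such a look-up-table program, assuming (purely for illustration) that the conversation so far is represented as a tuple of utterance strings; the table shown is a toy stand-in for the astronomically large one the argument imagines.

```python
# A toy look-up-table "chatbot": the algorithm is a single dictionary lookup,
# and all of the apparent intelligence lives in the table's data.

TABLE = {
    (): "Hello.",
    ("Hello.",): "How are you feeling today?",
    ("Hello.", "I have a headache."): "That sounds unpleasant.",
    # ... the thought experiment imagines an entry for every possible
    # conversation prefix, which is what makes the table astronomically large.
}

def reply(conversation):
    """Return the next utterance for the conversation so far, or a fallback."""
    return TABLE.get(tuple(conversation), "I have no entry for that.")

if __name__ == "__main__":
    history = []
    print(reply(history))        # "Hello."
    history.append("Hello.")
    print(reply(history))        # "How are you feeling today?"
```

The point about not fitting into our universe follows from the table needing an entry for every possible conversation prefix, so its size grows exponentially with conversation length.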
What would it be conscious of, though? Could it feel a headache when you gave it a difficult riddle? I don’t think a look-up table can be conscious of anything except for matching bytes to bytes. Perhaps that corresponds to our experience of recognizing that two geometric forms are identical.
We’re not conscious of internal computational processes at that level of abstraction (like matching bits). We’re conscious of outside inputs, and of the transformations of the state-machine-which-is-us from one state to the next.
Recognizing that two geometric forms are identical would correspond to giving whatever output we’d give in reaction to that recognition.
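As a rough illustration of the "state-machine-which-is-us" framing, here is a minimal sketch, assuming (hypothetically) that the system can be modelled as a machine whose internal state is transformed by each outside input and which emits an output at each step; the state names and inputs are invented for the example.

```python
# One transformation of a hypothetical state machine: an outside input
# moves the machine from one state to the next and produces an output.

def step(state, observation):
    """Return (next_state, output) for one transformation of the machine."""
    if state == "attending" and observation == "two identical triangles":
        return "recognized-match", "Those two forms are identical."
    return state, "..."

state = "attending"
state, output = step(state, "two identical triangles")
print(state, "->", output)   # recognized-match -> Those two forms are identical.
```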