The reason few people argue for the possibility of artificial consciousness is that it seems obviously possible.
The brain is built of matter. “Artificial” systems are built of matter. Make the two similar enough, and you’re going to get all of the same properties.
There’s just no benefit to assuming consciousness doesn’t arise from information processing in the brain. If dualism were true, it would explain absolutely nothing. So it’s probably false, and artificial systems can be conscious.
This is one question, and I agree that they “can” be. The other question is whether they “must” be, especially when the mechanisms are not identical to human wetware. I’m more uncertain here.
Note that my uncertainty starts with a lack of operational/measurable definitions. I don’t know where to draw the line (or how steep the gradient is, if it’s not a binary feature) between “not sentient” and “sentient” (terms I find a lot more important than “conscious”, which often gets redefined to things I don’t care much about). This uncertainty definitely applies to animals, and even to some other humans: I give them the benefit of the doubt, but the doubt remains.
There’s just no benefit to assuming consciousness doesn’t arise from information processing in the brain.
Nor is there an explanation of how it (in the Hard Problem sense) does arise. It’s still true that...
Make the two similar enough, and you’re going to get all of the same properties.
...but it’s a different line of reasoning. Everyone expects a quark-by-quark duplicate to be conscious, but that’s not the interesting case of “artificial consciousness”.
Just to be clear, I am not arguing for or against dualism. However, it is not true that if dualism were true, it would explain nothing: it is certainly an explanation of consciousness (something like “it arises out of immaterial minds”), just an unpopular one, or one that suffers from too many problems according to some. Secondly, while I may agree with what you are saying about AC being obvious, that does not really address any part of my argument: many things that seemed obvious in the past turned out to be wrong, so relying on our intuitions rather than arguments does not seem valid. And since there may be reasons the two cannot turn out to be similar enough (this is the crux of my argument), this may contest your thesis that AC is simply obvious.
Everyone expects a quark-by-quark duplicate to be conscious, but that’s not the interesting case of “artificial consciousness”.
It’s not, I agree. But then the question becomes not whether it’s possible but how. Which IMO is a very worthwhile question.