To my mind all such questions are related to arguments about solipsism, i.e. the notion that even other humans don’t, or may not, have minds/consciousness/qualia. The basic argument is that I can only see behavior (not mind) in anyone other than myself. Almost everyone rejects solipsism, but I don’t know if there are actually many very good arguments against it, except that it is morally unappealing (if anyone knows of any, please point them out). I think the same questions hold regarding emulations, only even more so (at least with other humans we know they are physically similar, suggesting some possibility that they are mentally similar as well—not so with emulations*). In particular, I don’t see how there can ever be empirical evidence that anything is conscious or experiences qualia (or that anything is not conscious!): behavior isn’t strictly relevant, and other minds are non-perceptible. I think this is the most common objection to Turing tests as a standard, as well.
*Maybe this is the logic of the biological position you mention—essentially, the more something seems like the one thing I know is conscious (me), the more likelihood I assign to it also being conscious. Thus other humans > other complex animals > simple animals > other organisms > abiotics.
Arguments against solipsism? Well, the most obvious one would be that everyone else appears to be implemented on very nearly the same hardware I am, and so they should be conscious for the same reasons I am.
Admittedly I don’t know what those reasons are.
It’s possible that there are none, and that the only reason I’m conscious is that I’m not implemented on a brain but in fact on some unknown substrate by a Dark Lord of the Matrix. Although this recovers the solipsism option as not quite as impossible as it would be if you buy the previous argument, it doesn’t seem likely. Even in matrix scenarios.