My issue with consciousness involves p-zombies. Any experiment that wanted to understand consciousness would have to be able to detect it, which seems to me to be philosophically impossible. To be more specific, any scientific investigation of the cause of consciousness would have (to simplify) an independent variable that we could manipulate to see whether consciousness is present or not, depending on the manipulated variable. We assume that those around us are conscious, and we have good reason to do so, but we can’t rely on that assumption in any experiment in which we are investigating consciousness.
As Eliezer points out, that an individual says he’s conscious is a pretty good signal of consciousness, but we can’t necessarily rely on that signal for non-human minds. A conscious AI may never talk about its internal states, depending on its structure (humans gain a survival advantage from the social sharing of internal realities). On the flip side, a savvy but non-conscious AI may talk about its “internal states” because it is guessing the teacher’s password in the realest way imaginable: it has no understanding whatsoever of what those states are, but computes that aping them will accomplish its goals. I don’t know how we could possibly know whether the AI is aping consciousness for its own ends or whether it actually is conscious. If consciousness is thus undetectable, I can’t see how science can investigate it.
That said, I am very well aware that “throughout history, every mystery ever solved has turned out to be not magic,” and that every single time something has seemed inscrutable to science, a reductionist explanation has eventually surfaced. Knowing this, I have to seriously downgrade my confidence that “No, really, this time it is different. Science really can’t pierce this veil.” I look forward to someone coming forward with something clever that dissolves the question, but even so, it does seem inscrutable.
Thank you, A little bit more informed.