an AI system passing the ACT—demonstrating sophisticated reasoning about consciousness and qualia—should be considered conscious. [...] if a system can reason about consciousness in a sophisticated way, it must be implementing the functional architecture that gives rise to consciousness.
This is provably wrong. This route will never offer any test of consciousness:
Suppose for a second that xAI in 2027, a very large LLM, stuns you by uttering C, where C = musings about your and its own consciousness more profound than anything you've ever imagined!
For a given set of random draws R used in the randomized generation of xAI's output, S the xAI structure you've designed (the arrangement of transformer neurons, say), and T the training you've given it:
What is P(C | {xAI conscious, R, S, T})? It’s 100%.
What is P(C | {xAI not conscious, R, S, T})? It's of course also 100%. Schneider's claims you refer to don't change that. You know you can readily track what each element within xAI is mathematically doing, how the bits propagate, and, if you examined it in enough detail, you'd find exactly the output you observe, without resorting to any concept of consciousness or whatever.
As the probability of what you observe is exactly the same with or without consciousness in the machine, there's no way to infer from xAI's utterance whether it's conscious or not.
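To make the Bayesian step explicit (a minimal sketch; the hypothesis labels H_c = "xAI is conscious" and H_n = "xAI is not conscious" are mine, not the commenter's):

\[
\frac{P(H_c \mid C, R, S, T)}{P(H_n \mid C, R, S, T)}
= \underbrace{\frac{P(C \mid H_c, R, S, T)}{P(C \mid H_n, R, S, T)}}_{=\,1}
\cdot \frac{P(H_c \mid R, S, T)}{P(H_n \mid R, S, T)}
\]

With a likelihood ratio of 1, the posterior odds equal the prior odds: observing C leaves your credence in machine consciousness exactly where it started.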
Combining this with the fact that, as you write, biological essentialism seems odd too, of course, creates a rather unbearable tension, one that many may still be ignoring. When we embrace this tension, some see illusionism-type questions arise, however strange those may feel (and if I dare guess, illusionist-type thinking may already be, or may grow to be, more popular than the biological essentialism you point out, although on that point I'm merely speculating).
Thanks for your response! It’s my first time posting on LessWrong so I’m glad at least one person read and engaged with the argument :)
Regarding the mathematical argument you’ve put forward, I think there are a few considerations:
1. The same argument could be run for human consciousness. Given a fixed brain state and inputs, the laws of physics would produce identical behavioural outputs regardless of whether consciousness exists (see the sketch after this list). Yet we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousness in humans.
2. Under functionalism, there’s no formal difference between “implementing consciousness-like functions” and “being conscious.” If consciousness emerges from certain patterns of information processing, then a system implementing those patterns is conscious by definition.
3. The mathematical argument seems (at least to me) to implicitly assume that consciousness is an additional property over and above the computational/functional architecture, which is precisely what functionalism rejects. On functionalism, consciousness is not an “additional ingredient” that could be present or absent, all else being equal.
4. I think your response hints at something like the “Audience Objection” from Udell & Schwitzgebel, which critiques Schneider’s argument:
“The tests thus have an audience problem: If a theorist is sufficiently skeptical about outward appearances of seeming AI consciousness to want to employ one of these tests, that theorist should also be worried that a system might pass the test without being conscious. Generally speaking, liberals about attributing AI consciousness will reasonably regard such stringent tests as unnecessary, while skeptics about AI consciousness will doubt that the tests are sufficiently stringent to demonstrate what they claim.”
5. I haven’t thought about this very carefully, but I’d argue the Illusionist has to respond to claims of machine consciousness on the ACT in much the same way as a Functionalist. If consciousness is “just” the story that a complex system tells itself, then LLMs passing the ACT would seem to be conscious in precisely the way Illusionism suggests. The Illusionist wouldn’t be able to coherently maintain that systems telling sophisticated stories about their own consciousness are not actually conscious.
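To spell out point 1 in the same notation (a minimal sketch under the point's own assumption of deterministic physics; B for the fixed brain state and I for the inputs are my labels):

\[
P(\text{behaviour} \mid \text{conscious}, B, I) = P(\text{behaviour} \mid \text{not conscious}, B, I) = 1
\]

The likelihood ratio is again 1, so by the argument's own logic a human's sophisticated reports about consciousness would carry no evidential weight either, unless, as point 3 suggests, consciousness just is part of the functional facts rather than something added to them.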