Just clarifying something important: Schneider’s ACT is proposed as a sufficient test of consciousness, not a necessary one. So the fact that young children, dementia patients, animals, etc. would fail the test isn’t a problem for the argument. Failing the test doesn’t show these entities lack consciousness; it just means they are conscious for other reasons, or in other ways, than typically functioning adults.
I agree with your points about the multiple meanings of “consciousness,” the potential for equivocation, and the gap between evidence and “intuition.”
Importantly, the claim here concerns phenomenal consciousness, i.e. the subjective experience and “what it feels like” to be conscious, rather than just cognitive consciousness. So if we seriously entertain the idea that AI systems have phenomenal consciousness, this does have serious ethical implications regardless of our own intuitive emotional response.
Thanks for your response! It’s my first time posting on LessWrong, so I’m glad at least one person read and engaged with the argument :)
Regarding the mathematical argument you’ve put forward, I think there are a few considerations:
1. The same argument could be run for human consciousness. Given a fixed brain state and inputs, the laws of physics would produce identical behavioural outputs regardless of whether consciousness exists (see the sketch after this list). Yet we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousness in humans.
2. Under functionalism, there’s no formal difference between “implementing consciousness-like functions” and “being conscious.” If consciousness emerges from certain patterns of information processing, then a system implementing those patterns is conscious by definition.
3. The mathematical argument seems (at least to me) to implicitly assume that consciousness is an additional property beyond the computational/functional architecture, which is precisely what functionalism rejects. On functionalism, consciousness is not an “additional ingredient” that could be present or absent, all else being equal.
4. I think your response hints at something like the “Audience Objection” from Udell & Schwitzgebel, which critiques Schneider’s argument:
“The tests thus have an audience problem: If a theorist is sufficiently skeptical about outward appearances of seeming AI consciousness to want to employ one of these tests, that theorist should also be worried that a system might pass the test without being conscious. Generally speaking, liberals about attributing AI consciousness will reasonably regard such stringent tests as unnecessary, while skeptics about AI consciousness will doubt that the tests are sufficiently stringent to demonstrate what they claim.”
5. I haven’t thought about this very carefully, but I’d challenge the Illusionist to respond to claims of machine consciousness on the ACT in the same way a Functionalist would. If consciousness is “just” the story that a complex system tells itself, then LLMs passing the ACT would seem to be conscious in precisely the way Illusionism suggests. The Illusionist wouldn’t be able to coherently maintain that systems telling sophisticated stories about their own consciousness are not actually conscious.
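To make point 1 concrete, here is a minimal formal sketch (my own notation, not anything from your comment): suppose a system’s behavioural output $B$ is fully determined by its physical state $S$ and inputs $I$, so $B = F(S, I)$. If a consciousness variable $C$ appears nowhere in $F$, then

$$P(B \mid S, I, C = 1) = P(B \mid S, I, C = 0),$$

so no behavioural observation can distinguish the conscious from the non-conscious hypothesis. The point of 1) is that this screening-off holds just as well when $F$ is the physics of a human brain as when it is an LLM’s forward pass, so the argument proves too much unless (per point 3) one already assumes consciousness is an extra ingredient over and above $F$.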
Thank you for the comment.
I take your point about substrate independence being a conclusion of computationalism rather than independent evidence for it; this is a fair criticism.
If I’m interpreting your argument correctly, there are two possibilities:
1. Biological structures happen to implement some function which produces consciousness. [Functionalism]
2. Biological structures have some physical property X which produces consciousness. [Biological Essentialism or non-Computationalist Physicalism]
Your argument seems to be that 2) has more explanatory power because it has access to all of the potential physical processes underlying biology to explain consciousness, whereas 1) is restricted to the functions that biological systems implement. Have I captured the argument correctly? (Please let me know if I haven’t.)
Imagine that we could successfully implement a functional isomorph of the human brain in silicon. A proponent of 2) would need to explain why this functional isomorph, which has all the same functional properties as an actual brain, does not in fact have consciousness. A proponent of 1) could simply assert that it does. It’s not clear to me what property X the biological brain has which induces consciousness but couldn’t be captured by a functional isomorph in silicon. I know there’s been some recent work by Anil Seth where he tries to pin down the properties X that biological systems may require for consciousness: https://osf.io/preprints/psyarxiv/tz6an. His argument suggests that extremely complex biological systems may implement functions which are non-Turing-computable. However, while he identifies biological properties that would be difficult to implement in silicon, I didn’t find this sufficient evidence for the claim that brains perform non-Turing-computable functions. Do you have any ideas?
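For concreteness, here is one way to state the “functional isomorph” assumption formally; this is my own idealization (treating the brain as an abstract machine with state-transition function $\delta$ and output function $\omega$), not Seth’s formulation:

$$\forall s, x: \quad \delta_{\mathrm{Si}}(s, x) = \delta_{\mathrm{bio}}(s, x) \quad \text{and} \quad \omega_{\mathrm{Si}}(s, x) = \omega_{\mathrm{bio}}(s, x).$$

On this idealization, any property X that 2) appeals to must be a property not fixed by $(\delta, \omega)$, e.g. the substrate realizing them, and Seth’s non-computability argument amounts to denying that the biological $(\delta, \omega)$ is realizable by any digital system in the first place.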
I’ll admit that modern-day LLMs are nowhere near functional isomorphs of the human brain, so it could be that there’s some “functional gap” between their implementation and the human brain. It could indeed be that LLMs are not conscious because they are missing some “important function” required for consciousness. My contention in this post is that if they’re able to reason about their internal experience and qualia in a sophisticated manner, then this is at least circumstantial evidence that they’re not missing the “important function.”