The authors give three arguments for the view that “present-day AI systems do not have consciousness”, which I will quote in full here as they first present them:
“First, in mammalian brains, consciousness is supported by a highly interconnected thalamocortical system.”
“Second, consciousness is tied to the sensory streams that are meaningful for the organism.”
“And third, consciousness might be bound to complex ‘skin in the game’ processes characteristic of living systems.”
This is irrelevant to the question of whether biology is necessary for consciousness. In principle, machines can also be highly interconnected and have sensory streams. Some kind of “skin in the game” can be achieved by uploading algorithms to robotic bodies (which can potentially be destroyed, or may need to find energy in the environment), or, even more easily, by using simulated environments.
Generally, it seems to me that the authors are playing the following game: “Find some property X that humans have and machines do not have (yet). Claim that X is necessary for consciousness (without explaining why).” In other words, as long as we cannot clearly describe what exactly consciousness is, we can make up arbitrary claims about what its necessary ingredients are and use them to win debates. I am not impressed.
“Claim that X is necessary for consciousness (without explaining why).”
Note that this still works as a “might be” claim. Abstract computationalism isn’t a necessary truth, so dependence on physics, chemistry, or biology is a possible truth.
“In other words, as long as we cannot clearly describe what exactly consciousness is, we can make up arbitrary claims about what its necessary ingredients are and use them to win debates. I am not impressed.”
I don’t see why being able to define “consciousness” would tell you how it works.
Yes, that’s exactly the game the authors are playing—I too was pretty unimpressed tbh.
To be fair to them, though, “X = thalamocortical networks” or “X = sensory streams that are meaningful to the organism” aren’t claims with literally zero evidence (even though the evidence to date is contentious). They are claims based on contemporary neuroscience, e.g., studies which show that conscious (as opposed to non-conscious) processing appears to involve thalamocortical networks in some special way. It is also worth noting that the authors fully acknowledge that, yes, machines can be given these “sensory streams” or relevant forms of “interconnection”.
I do also think that one could argue we don’t need an exact description of what consciousness is to get an idea of the sorts of information processing that might generate it. The most widely accepted paradigm in neuroscience is basically just to ask someone whether they consciously experienced something, and then look at the neural correlates of that experience. If you accept that this approach makes sense (and there are ofc good reasons not to), then you do end up with a non-arbitrary reason for saying something is a necessary ingredient of consciousness.
Wrt the possibility of creating “skin in the game” by uploading algorithms to robotic bodies: I agree that this is possible in the normal sense in which you or I might conceive of “skin in the game”. But the authors of the paper are arguing that this is literally impossible, because they use “skin in the game” to describe a system whose existence is underpinned by biological processes at every single level, from the intracellular upwards. They don’t, however, provide much of an argument for why this makes consciousness a product only of systems with “skin in the game”. I was kinda just trying to get to the bottom of why the paper thought this conception of “skin in the game” uniquely leads to consciousness, since variants of “X = biology” are pretty commonly offered as reasons why AI consciousness is impossible.
How in the world are we ever supposed to know if machines can be conscious?
Start by asking how you know exactly when a human becomes conscious (presuming an embryo isn’t, and the adult it becomes is). Then ask how you know whether a whale is conscious, or a slime mold.
If you can’t do this, you probably don’t have a solid definition of “conscious”, and your question is kind of meaningless.