Yes, that’s exactly the game the authors are playing—I too was pretty unimpressed tbh.
To be fair to them, though, “X = thalamocortical networks” or “X = sensory streams that are meaningful to the organism” aren’t claims with literally 0 evidence (even though the evidence to date is contentious). They’re claims based on contemporary neuroscience—eg, studies showing that conscious (as opposed to non-conscious) processing appears to involve thalamocortical networks in some special way. Also worth noting that the authors fully acknowledge that, yes, machines can be given these “sensory streams” or relevant forms of “interconnection”.
I also think one could argue that we don’t need an exact description of consciousness to get an idea of the sorts of information processing that might generate it. The most widely accepted paradigm in neuroscience is basically just to ask someone whether they consciously experienced something, and then look at the neural correlates of that experience. If you accept that this approach makes sense (and there are ofc good reasons not to), then you do end up with a non-arbitrary reason for saying something is a necessary ingredient of consciousness.
Wrt the possibility of creating “skin in the game” by uploading algorithms to robotic bodies—I agree that this is possible in the normal sense in which you or I might conceive of “skin in the game”. But the authors of the paper are arguing that this is literally impossible, because they use “skin in the game” to describe a system whose existence is underpinned by biological processes at every single level—from the intracellular upwards. They don’t, however, provide much of an argument for why this makes consciousness a product only of systems with “skin in the game”. I was kinda just trying to get to the bottom of why the paper thought this conception of “skin in the game” uniquely leads to consciousness, since variants of “X = biology” are pretty commonly offered as reasons why AI consciousness is impossible.