I started in on the precis, but a serious problem with his first three constraints popped up for me right away: a thermostat implements “minimal consciousness” by those rules, as it has a global world-model that cannot be seen by the thermostat to be a world-model.
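To make that concrete, here is a toy sketch of my own (the names are invented, and this is nothing from Metzinger's text, just an illustration of how little the device has):

    # A hypothetical bang-bang thermostat. Its entire "world-model" is one
    # number, and nothing in the device can treat that number *as* a model.
    class Thermostat:
        def __init__(self, setpoint):
            self.setpoint = setpoint  # its only "goal"
            self.reading = None       # its only "model" of the world

        def update(self, sensed_temp):
            self.reading = sensed_temp            # "perceive"
            return self.reading < self.setpoint   # heater on/off

    heater_on = Thermostat(20.0).update(18.5)  # True: turn the heater on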
That seems like a good minimal case. This has to be the closest there is to no consciousness at all; your ‘selective’ would seem to exclude many lower animals. It might be better to think of minimal as being unconscious—a dog has no choice but to react mentally to a whistle, say, but neither does the thermostat have a choice.
Actually, it does have a choice; dogs can be trained to ignore stimuli, and you can only be trained to do something that you can do anyway. Either that, or humans have no choice but to “react mentally” either, and the distinction is meaningless.
Either way, “choice” is less meaningful than “selection”—we can argue how much choice there is in the selection later.
In fact, selectivity itself means there’s always something not being “reacted to mentally” by the “observer” of the model. Whether this selectivity has anything to do with choice is another matter. I can direct where my attention goes, but I can also feel it “drawn” to things, so clearly selectivity is a mixed bag with respect to choice.
It seems we disagree on what ‘reacting mentally’ is—I’d say a dog so trained may be an organism too high up on the power/consciousness scale (surely something lower than a dog—lower even than gerbils or rats—is where we ought to be looking), and that even if it is taking no physical action, its mind is still reacting (it knows about the stimulus), while humans truly can ‘tune out’ stimuli.
But an example may help. What would you have to add to a thermostat to make it non-‘minimal’, do you think? Another gauge, like a humidity gauge, which has no electrical connection to the binary output circuit?
We seem to be talking past each other; AFAIK the ability to attend selectively to components of a perceptual model is present in all the vertebrates, and probably in anything else worthy of being considered to have a brain at all.
No, in order to have selective attention you’d need something that could, say, choose which of six thermal input sensors to “pay attention to” (i.e., use to drive outputs) based on which sensor had more “interesting” data.
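Roughly something like this, say (a made-up sketch, with “interesting” cashed out crudely as “changed the most since the last reading”):

    # Hypothetical six-sensor thermostat with crude selective attention:
    # it "pays attention to" whichever sensor changed most since last time,
    # and only that sensor drives the output.
    class AttentiveThermostat:
        def __init__(self, setpoint, n_sensors=6):
            self.setpoint = setpoint
            self.last = [None] * n_sensors  # previous reading per sensor

        def step(self, readings):
            # "Interestingness" = size of change since the previous reading.
            deltas = [abs(r - l) if l is not None else 0.0
                      for r, l in zip(readings, self.last)]
            focus = max(range(len(readings)), key=lambda i: deltas[i])
            self.last = list(readings)
            # Attended sensor index, plus heater on/off from that sensor alone.
            return focus, readings[focus] < self.setpoint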
I’m not sure what to add to give it a self-model—unless it were something like an efficiency score, or various statistics about how it’s been paying attention, with the attention system allowed to use those as part of its attention-selection and output.
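Purely as a guess at what that might look like, you could bolt statistics about its own attending onto the sketch above and let them bias the next selection:

    # Hypothetical extension: the device keeps a record of how it has been
    # paying attention and feeds that back into attention-selection, so
    # "interestingness" is now partly a function of its own past behaviour --
    # the crudest possible gesture at a "self-model".
    class SelfModelingThermostat(AttentiveThermostat):
        def __init__(self, setpoint, n_sensors=6):
            super().__init__(setpoint, n_sensors)
            self.attended = [0] * n_sensors  # how often each sensor was attended to

        def step(self, readings):
            deltas = [abs(r - l) if l is not None else 0.0
                      for r, l in zip(readings, self.last)]
            # Sensors it has fixated on score lower, so the selection now
            # depends on a record of the system's own attending.
            scores = [d / (1 + self.attended[i]) for i, d in enumerate(deltas)]
            focus = max(range(len(readings)), key=lambda i: scores[i])
            self.attended[focus] += 1
            self.last = list(readings)
            return focus, readings[focus] < self.setpoint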
Anyway, my point was that the language of the model in the Being No One precis is sufficiently vague to allow quite trivial mechanical systems to pass as “minimally conscious”… and then too hand-wavy to specify how to get past that point. That is, I think the self-model concept is too much of an intuitive projection, and not sufficiently reduced.
In other words, I think it’s provocative but thoroughly unsatisfying.
(I also think you’re doing a similar intuitive anthropomorphic projection on the notions of “reacting mentally” and “tune out”, which would explain our difficulty in communicating.)