It seems we disagree on what ‘reacting mentally’ is. I’d say a dog so trained may be an organism too high up on the power/consciousness scale (surely something lower than a dog, lower even than gerbils or rats, is where we ought to be looking), and that even if it makes no physical response, its mind is still reacting (it registers the stimulus), whereas humans can truly ‘tune out’ stimuli.
We seem to be talking past each other; AFAIK the ability to attend selectively to components of a perceptual model is present in all vertebrates, and probably in anything else worthy of being considered to have a brain at all.
What would you have to add to a thermostat to make it non-‘minimal’, do you think? Another gauge, like a humidity gauge, which has no electrical connection to the binary output circuit?
No, in order to have selective attention you’d need something that could, say, choose which of six thermal input sensors to “pay attention to” (i.e., use to drive its outputs) based on which sensor had the most “interesting” data.
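To make that concrete, here’s a toy sketch in Python (entirely my own construction for illustration: the class name, the six-sensor default, the setpoint, and the variance-based “interestingness” score are all arbitrary stand-ins):

```python
from collections import deque

SETPOINT = 20.0  # hypothetical target temperature, in arbitrary units

class AttentiveThermostat:
    def __init__(self, n_sensors=6, history=10):
        # keep a short reading history per sensor, used to score "interestingness"
        self.histories = [deque(maxlen=history) for _ in range(n_sensors)]

    def interest(self, i):
        # "interestingness" = recent variance: a sensor whose readings are
        # changing a lot beats one that is sitting still
        h = self.histories[i]
        if len(h) < 2:
            return 0.0
        mean = sum(h) / len(h)
        return sum((x - mean) ** 2 for x in h) / len(h)

    def step(self, readings):
        # record the new readings, attend to the most interesting sensor,
        # and drive the binary heater output from that sensor alone
        for h, r in zip(self.histories, readings):
            h.append(r)
        attended = max(range(len(readings)), key=self.interest)
        heater_on = readings[attended] < SETPOINT
        return attended, heater_on
```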
I’m not sure what you’d add to give it a self-model, unless it were something like an efficiency score, or various statistics about how it’s been paying attention, which the attention system could then use as part of its attention-selection and output.
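Building on the sketch above, that bolt-on might look like this (again purely illustrative; penalizing fixation is just one arbitrary way to feed attention statistics back into attention-selection):

```python
class SelfModelingThermostat(AttentiveThermostat):
    def __init__(self, n_sensors=6, history=10):
        super().__init__(n_sensors, history)
        self.attention_counts = [0] * n_sensors  # how often each sensor "won"
        self.steps = 0

    def interest(self, i):
        # the "self-model": statistics about the system's own attention,
        # here used to penalize sensors it has been fixating on, so its
        # record of how it's been paying attention biases future selection
        base = super().interest(i)
        if self.steps == 0:
            return base
        fixation = self.attention_counts[i] / self.steps
        return base * (1.0 - fixation)

    def step(self, readings):
        attended, heater_on = super().step(readings)
        self.attention_counts[attended] += 1
        self.steps += 1
        return attended, heater_on
```

The details don’t matter; the point is that “statistics about its own attention, fed back into attention-selection” amounts to a few lines of bookkeeping.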
Anyway, my point was that the language of the model in the Being No One précis is vague enough to let quite trivial mechanical systems pass as “minimally conscious”, and then too hand-wavy to specify how to get past that point. That is, I think the self-model concept is too much of an intuitive projection, and not sufficiently reduced.
In other words, I think it’s provocative but thoroughly unsatisfying.
(I also think you’re doing a similar intuitive anthropomorphic projection on the notions of “reacting mentally” and “tune out”, which would explain our difficulty in communicating.)