Being No One, Metzinger. Review and overview here. Precis here.
It’s heavy cognitive neurology, but it does attempt to find minimal sets of properties needed for subjectivity and consciousness. It also suggests that the fundamental problem in monist/dualist debates is that the processes of cognition are invisible to self-inspection.
I started in on the precis, but a serious problem with his first three constraints popped up for me right away: a thermostat implements “minimal consciousness” by those rules, as it has a global world-model that cannot be seen by the thermostat to be a world model.
I don’t see this as a problem with the ideas presented, mind you; it’s more of a problem in the statement of the constraints. I think what he meant was to require that a conscious system have a subsystem which can selectively observe a limited subset of a nonconscious model of the world. (In which case a thermostat would fail, since it has only a single, non-reflective level of modeling.)
Much of the precis (or at least the 20% I got through before getting tired of wading through vague and ambiguous language full of mind-projections) seems to have similar problems. It’s definitely not an implementation specification for consciousness, as far as I can tell, but at the same time I have found little fault with what the author appears to be pointing towards. The answers given seem vaguely helpful, but tend to raise new questions.
That seems like a good minimal case. This has to be the closest there is to no consciousness at all; your ‘selective’ would seem to exclude many lower animals. It might be better to think of minimal as being unconscious—a dog has no choice but to react mentally to a whistle, say, but neither does the thermostat have a choice.
Actually, it does have a choice; dogs can be trained to ignore stimuli, and you can only be trained to do something that you can do anyway. Either that, or humans also have no choice but to “react mentally”, and the distinction is meaningless.
Either way, “choice” is less meaningful than “selection”—we can argue how much choice there is in the selection later.
In fact, the mere fact of selectivity means there’s always something not being “reacted to mentally” by the “observer” of the model. Whether this selectivity has anything to do with choice is another matter. I can direct where my attention goes, but I can also feel it “drawn” to things, so clearly selectivity is a mixed bag with respect to choice.
It seems we disagree on what ‘reacting mentally’ is—I’d say a dog so trained may be an organism too high up on the power/consciousness scale (surely something lower than a dog—lower than gerbils or rats even—is where we ought to be looking), and that even if it is not making any physical actions, its mind is reacting (it knows about it) while humans truly can ‘tune out’ stimuli.
But an example may help. What would you have to add to a thermostat to make it non-‘minimal’, do you think? Another gauge, like a humidity gauge, which has no electrical connection to the binary output circuit?
We seem to be talking past each other; AFAIK the ability to have selective attention to components of a perceptual model is present in all the vertebrates, and probably anything else worthy of being considered to have a brain at all.
What would you have to add to a thermostat to make it non-‘minimal’, do you think? Another gauge, like a humidity gauge, which has no electrical connection to the binary output circuit?
No, in order to have selective attention you’d need something that could, say, choose which of six thermal input sensors to “pay attention to” (i.e., use to drive outputs) based on which sensor had more “interesting” data.
I’m not sure what to add to give it a self-model, unless it were something like an efficiency score, or various statistics about how it has been paying attention, with the attention system allowed to use those as part of its attention-selection and output.
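To make that concrete, here is a minimal sketch (my own toy example, not anything Metzinger or the precis specifies) of such a six-sensor thermostat, assuming that “interesting” just means the largest recent change in a reading, and that the “self-model” is nothing more than running statistics about where attention has been going:

```python
import random

class AttentiveThermostat:
    """Toy thermostat that 'attends' to one of several temperature sensors.

    'Interest' is operationalized, arbitrarily, as the size of the most
    recent change in a sensor's reading; the rudimentary 'self-model' is
    just a tally of where attention has gone, fed back into selection.
    """

    def __init__(self, n_sensors=6, setpoint=20.0):
        self.setpoint = setpoint
        self.last_readings = [setpoint] * n_sensors
        self.attention_counts = [0] * n_sensors  # statistics about its own attending

    def step(self, readings):
        # Interest = magnitude of change since the last reading, mildly
        # discounted for sensors already attended to often (the self-statistics
        # feeding back into attention selection).
        interest = [
            abs(new - old) / (1 + 0.01 * self.attention_counts[i])
            for i, (new, old) in enumerate(zip(readings, self.last_readings))
        ]
        focus = max(range(len(readings)), key=lambda i: interest[i])
        self.attention_counts[focus] += 1
        self.last_readings = list(readings)
        # The output is still just a binary heater signal, driven only by
        # the currently attended sensor.
        heater_on = readings[focus] < self.setpoint
        return focus, heater_on

# Feed it six noisy readings per tick and watch where its attention goes.
thermo = AttentiveThermostat()
for _ in range(5):
    readings = [20 + random.gauss(0, 1) for _ in range(6)]
    print(thermo.step(readings))
```

The attention-count feedback is the only part that even gestures at a self-model, and it is a very thin gesture; whether that buys anything toward Metzinger’s constraints is exactly what seems underspecified.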
Anyway, my point was that the language of the model in the Being No One precis is sufficiently vague to allow quite trivial mechanical systems to pass as “minimally conscious”… and then too hand-wavy to specify how to get past that point. That is, I think the self-model concept is too much of an intuitive projection, and not sufficiently reduced.
In other words, I think it’s provocative but thoroughly unsatisfying.
(I also think you’re doing a similar intuitive anthropomorphic projection on the notions of “reacting mentally” and “tune out”, which would explain our difficulty in communicating.)
The precis is, by its nature, shorter than it should be; the book gives more precise definitions and a defence of that set of constraints over others. I don’t have the book on hand at the moment, as it’s in the university library.
The book itself is more concerned with the neurology; the precis is more a quick overview of claimed results for other philosophers.
Thanks, that’s a perfect example of what I’m talking about. Metzinger’s theory doesn’t seem to rely on any noncomputable physics, so in principle the book could have contained a program we could type in and run, but it doesn’t. Insta-fail.
So you demand AGI-level projects be completed before admitting even in principle that consciousness might be a solvable problem?
Do you apply similar standards to evolution by means of natural selection?
Metzinger identifies a plausible set of minimal properties, and justifies that selection on the basis of neurological work and thinking. It’s as much philosophising as reverse engineering “mind” based on failure modes.
I readily admit it’s solvable, but declaring it solved calls for higher standards. And consciousness isn’t the same as intelligence: a “dreamer” device that just watched a movie (or a bitstring, whatever) without analyzing it would make a perfectly fine demonstration.
It’s a fine, bold title, but I don’t think that many feel that Consciousness Explained actually explains consciousness. What I got from it wasn’t the feeling that there was no work left to do, but that the back of the problem had been broken—that it had been moved from something so hard to think about that it made your brain spin into something that we could apply ordinary reductionistic methods to without getting mired.