Outside view tells me Dennett didn’t solve the problem of consciousness, because philosophers don’t solve problems...
Any purported explanation of consciousness had better clearly identify the minimum stuff required for consciousness to exist. For example, if someone claims computer programs can be conscious, they should provide a 100-line conscious program. If it has to be some kind of quantum voodoo machine, ditto: how do we build the simplest possible one? I don’t think that’s setting the bar too high!
If Dennett’s book contains an answer to this question, I’d like to hear it right away before wading into the philosophical gook. If it doesn’t, case closed.
This “outside view abuse” is getting a little extreme. Next it will tell you that Barack Obama isn’t President, because people don’t become President.
Being No One, Metzinger. Review and overview here. Precis here.
It’s heavy cognitive neurology, but it does attempt to find minimal sets of properties needed for subjectivity and consciousness. It also suggests that the fundamental problem in monist/dualist debates is that the processes of cognition are invisible to self-inspection.
I started in on the precis, but a serious problem with his first three constraints popped up for me right away: a thermostat implements “minimal consciousness” by those rules, as it has a global world-model that cannot be seen by the thermostat to be a world model.
I don’t see this as a problem with the ideas presented, mind you; it’s more of a problem in the statement of the constraints. I think what he meant to require is that a conscious system have a subsystem which can selectively observe a limited subset of a nonconscious model of the world. (In which case a thermostat would fail, since it has only a single, non-reflective level of modeling.)
Much of the precis (or at least the 20% I got through before getting tired of wading through vague and ambiguous language full of mind-projections) seems to have similar problems. It’s definitely not an implementation specification for consciousness, as far as I can tell, but at the same time I have found little fault with what the author appears to be pointing towards. The answers given seem vaguely helpful, but tend to raise new questions.
That seems like a good minimal case. This has to be the closest there is to no consciousness at all; your ‘selective’ would seem to exclude many lower animals. It might be better to think of minimal as being unconscious—a dog has no choice but to react mentally to a whistle, say, but neither does the thermostat have a choice.
Actually, it does have a choice; dogs can be trained to ignore stimuli, and you can only be trained to do something that you can do anyway. Either that, or humans also have no choice but to “react mentally”, and the distinction is meaningless.
Either way, “choice” is less meaningful than “selection”—we can argue how much choice there is in the selection later.
In fact, selectivity itself means there’s always something not being “reacted to mentally” by the “observer” of the model. Whether this selectivity has anything to do with choice is another matter. I can direct where my attention goes, but I can also feel it “drawn” to things, so clearly, selectivity is a mixed bag with respect to choice.
It seems we disagree on what ‘reacting mentally’ is—I’d say a dog so trained may be an organism too high up on the power/consciousness scale (surely something lower than a dog—lower than gerbils or rats even—is where we ought to be looking), and that even if it makes no physical response, its mind is reacting (it knows about the stimulus), while humans truly can ‘tune out’ stimuli.
But an example may help. What would you have to add to a thermostat to make it non-‘minimal’, do you think? Another gauge, like a humidity gauge, which has no electrical connection to the binary output circuit?
We seem to be talking past each other; AFAIK the ability to have selective attention to components of a perceptual model is present in all the vertebrates, and probably anything else worthy of being considered to have a brain at all.
No, in order to have selective attention you’d need something that could, say, choose which of six thermal input sensors to “pay attention to” (i.e., use to drive outputs) based on which sensor had more “interesting” data.
I’m not sure what to add to give it a self-model—unless it were something like an efficiency score, or various statistics about how it’s been paying attention, with the attention system allowed to use those as part of its attention-selection and output.
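To make that concrete, here’s a rough toy sketch of the sort of thing I mean (the names and numbers are invented for illustration; this isn’t anything from the precis): the attention system picks whichever of six sensors shows the most novel reading, drives the output from that sensor alone, and keeps crude statistics about its own attention history that feed back into the next selection.

```python
import random

class AttentiveThermostat:
    """Toy illustration only: a thermostat with a crude attention mechanism
    and an even cruder 'self-model' of that attention."""

    def __init__(self, n_sensors=6, setpoint=20.0):
        self.setpoint = setpoint
        self.last_readings = [setpoint] * n_sensors
        # "Self-model": running statistics about its own attention behaviour.
        self.attention_counts = [0] * n_sensors

    def step(self, readings):
        # Score each sensor by how "interesting" it is: how much its reading
        # changed, discounted if it has already been attended to a lot.
        scores = []
        for i, r in enumerate(readings):
            novelty = abs(r - self.last_readings[i])
            habituation = self.attention_counts[i] / (1 + sum(self.attention_counts))
            scores.append(novelty - habituation)

        focus = max(range(len(readings)), key=lambda i: scores[i])
        self.attention_counts[focus] += 1
        self.last_readings = list(readings)

        # Drive the output from the attended sensor only.
        heater_on = readings[focus] < self.setpoint
        return focus, heater_on


if __name__ == "__main__":
    t = AttentiveThermostat()
    for _ in range(5):
        readings = [20 + random.uniform(-3, 3) for _ in range(6)]
        focus, heater_on = t.step(readings)
        print(f"attending to sensor {focus}, heater {'on' if heater_on else 'off'}")
```

A plain thermostat has none of this: one sensor, one reading, no selection, and no record of its own behaviour.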
Anyway, my point was that the language of the model in the Being No One precis is sufficiently vague to allow quite trivial mechanical systems to pass as “minimally conscious”… and then too hand-wavy to specify how to get past that point. That is, I think the self-model concept is too much of an intuitive projection, and not sufficiently reduced.
In other words, I think it’s provocative but thoroughly unsatisfying.
(I also think you’re doing a similar intuitive anthropomorphic projection on the notions of “reacting mentally” and “tune out”, which would explain our difficulty in communicating.)
The precis is, by its nature, shorter than it should be; the book gives more precise definitions and a defence of that set of constraints over others. I don’t have the book on hand at the moment, as it’s in the university library.
The book itself is more concerned with the neurology; the precis is more a quick overview of claimed results for other philosophers.
Thanks, that’s a perfect example of what I’m talking about. Metzinger’s theory doesn’t seem to rely on any noncomputable physics, so in principle the book could have contained a program we could type in and run, but it doesn’t. Insta-fail.
So you demand AGI-level projects be completed before admitting even in principle that consciousness might be a solvable problem?
Do you apply similar standards to evolution by means of natural selection?
Metzinger identifies a plausible set of minimal properties, and justifies that selection on the basis of neurological work and thinking. It’s as much a reverse engineering of “mind” from its failure modes as it is philosophising.
I readily admit it’s solvable, but declaring it solved calls for higher standards. And consciousness isn’t the same as intelligence: a “dreamer” device that just watched a movie (or a bitstring, whatever) without analyzing it would make a perfectly fine demonstration.
It’s a fine, bold title, but I don’t think that many feel that Consciousness Explained actually explains consciousness. What I got from it wasn’t the feeling that there was no work left to do, but that the back of the problem had been broken—that it had been moved from something so hard to think about that it made your brain spin into something that we could apply ordinary reductionistic methods to without getting mired.
I think Dennett is more into showing that the naive view of consciousness is inconsistent and that “being conscious” is not a legit property of things.
What does that mean?
I meant that perhaps consciousness cannot be consistently and meaningfully defined as a property of things, so as to enable us to say: a man is conscious, a rock is not.
What is consciousness, anyway? It comes to something when we need a whole book (Consciousness Explained) to tell us what a word means, instead of a simple definition. And even then we don’t agree. I certainly sympathize with those who’d prefer to abolish the whole idea of consciousness, instead.
Why do you say “philosophers don’t solve problems”? That seems rather harsh!
I can’t name offhand any important problem that philosophers posed and other philosophers later solved. From Zeno’s paradox to Newcomb’s problem, solutions always seem to come from other fields.
Noticing a problem seems an important contribution to solving it.
Agreed, and a lot of modern fields, including many of the natural sciences and social sciences, derive from philosophers’ framework-establishing questions. The trick is that we then credit the derived fields, rather than philosophy, with solving the original questions.
Philosophy doesn’t really solve questions in itself; instead, it allows others to solve them.
--Wittgenstein
Take David Hume’s correct refutation of the design argument, for example:
http://en.wikipedia.org/wiki/David_Hume#The_design_argument
This argument is still used today—though we know a bit more about the subject now.
Refute it then.
http://www.philosophyetc.net/2008/02/examples-of-solved-philosophy.html
...has one guy’s list.
One might also point to the philosophy of science (Popper, Kuhn, Hull) to see philosophers making definite progress on the problems in their field.
“Red herrings may (and black ravens may not) constitute evidence that all ravens are black.”

Most of his other points rely on loose definitions, IMO (“rational”, “justified”, “selfish”, “cat”), but this one seems plainly wrong to me, since he seems to attach the same meaning to the word “evidence” as LW does (though less formally).
I’m not saying philosophers do not contribute to problem-solving, far from it. It may be that he is wrong and this is not “at least as well-established as most scientific results” in philosophy. It may also be that a significant number of philosophers disregard (or have no knowledge of) Bayesian inference.
http://www.philosophyetc.net/2005/09/raven-paradox-essay.html
Fair enough, I think. I too would generally regard observations of black ravens as being weak evidence that all ravens are black.
Weak evidence, but evidence nonetheless. I read the essay again, and it appears that what the author means is that there exists a case where observing a black raven is not evidence that all ravens are black; the case he specifies is one where the raven is picked from a population already known to consist of black ravens only. In some sense, he is correct. Then again, this is not a new observation.
He does present a case where observing a red herring constitutes weak probabilistic evidence that all ravens are black.
So, my disagreement comes from my misinterpretation of the word “may”.
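For anyone who wants the “weak probabilistic evidence” made concrete, here is a toy version of the usual Bayesian treatment; the population sizes below are invented purely for illustration and aren’t taken from the essay.

```python
# Toy numbers, purely illustrative: a world of 1,000,000 objects containing
# 100 ravens and 10,000 non-black objects.
total = 1_000_000
ravens = 100
non_black = 10_000

# H1: all ravens are black, so every non-black object is a non-raven.
# H2: all ravens are non-black, so 100 of the non-black objects are ravens.
red_herrings_h1 = non_black            # non-black non-ravens under H1
red_herrings_h2 = non_black - ravens   # non-black non-ravens under H2

# Draw one object uniformly at random and observe a red herring
# (a non-black non-raven).
p_obs_h1 = red_herrings_h1 / total
p_obs_h2 = red_herrings_h2 / total

print(p_obs_h1 / p_obs_h2)  # likelihood ratio ~1.01: weak evidence for H1
```

The likelihood ratio barely favours “all ravens are black”, which is the “weak” part; and if the raven is instead drawn from a pool already known to contain only black ravens, seeing that it is black has probability 1 under any hypothesis, so it tells you nothing, which is the “black ravens may not” case.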
I would find this list more convincing if the author weren’t himself a philosopher.
I agree that the philosophy of science is a different category entirely. I would also suggest that the current sorry state of AI is due primarily to limitations in our current understanding of scientific philosophy (as opposed to limitations of our mathematical or neurological understanding).