Not having read the book from which this mini-sequence stems, I raise three points here, hoping they won’t overlap with some future post.
The first one pertains to this quote:
As long as information is “walled off”, many, many contradictions can be maintained within one head.
Strictly speaking, this is not true. Having separate modules for different pieces of information is certainly a sufficient condition for the brain to be able to hold many contradictory pieces of information, but it is not a necessary one: a trivial counter-example is a database holding, in different rows, contradictory statements about the same fact.
A more striking piece of evidence is callosotomy, which shows that two modules continue to exist even when there are no longer any connections between them. However, the presence of contradictory information is, by itself, evidence of modularity only under the unlikely assumption that every module tries to achieve internal consistency.
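To make the database counter-example above concrete, here is a minimal sketch (the table name and rows are invented purely for illustration): a single flat table, with no module boundaries at all, happily stores two contradictory claims about the same fact, simply because nothing ever checks the rows against each other.

```python
import sqlite3

# One flat, non-modular store: a single table with no consistency constraints.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE beliefs (subject TEXT, claim TEXT)")

# Two directly contradictory statements about the same fact, side by side.
conn.execute("INSERT INTO beliefs VALUES ('the keys', 'are in the car')")
conn.execute("INSERT INTO beliefs VALUES ('the keys', 'are NOT in the car')")

# Both rows come back: the contradiction is maintained without any
# "walling off" -- only because no consistency check is ever run.
for subject, claim in conn.execute("SELECT subject, claim FROM beliefs"):
    print(subject, claim)
```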
The second point concerns the connectivity of the whole “brain graph” (with modules mapped to vertices and accessibility relationships to edges): while complete connectedness seems highly unlikely, it is appealing to think of the brain as a strongly connected graph, i.e. a directed graph in which there is a path from every node to every other node.
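As a toy illustration of that distinction (module names invented, not drawn from any actual anatomy): the small directed graph below is strongly connected, because every module can reach every other through some chain of accesses, yet it is far from complete, because most pairs of modules have no direct edge.

```python
from itertools import permutations

# Toy "brain graph": module -> modules whose activity it can access directly.
brain_graph = {
    "perception": ["memory"],
    "memory": ["planning"],
    "planning": ["motor"],
    "motor": ["perception"],
}

def reachable(graph, start):
    """Nodes reachable from `start` by following directed edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

# Strongly connected: every module reaches every other (possibly indirectly).
strongly_connected = all(
    set(brain_graph) <= reachable(brain_graph, node) for node in brain_graph
)

# Complete: every ordered pair of distinct modules has a direct edge.
complete = all(b in brain_graph[a] for a, b in permutations(brain_graph, 2))

print(strongly_connected)  # True  -- a path exists between any two modules
print(complete)            # False -- only 4 of the 12 possible direct edges exist
```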
Third, we must not forget that these modules are a ‘software’ or ‘cognitive’ reduction of the brain. Evidence from neurofeedback, or simply the possibility of controlling one’s blinking frequency, points to the creation and destruction of non-innate connections between separate modules. It would be fun if you could learn, through neurofeedback, to send false anticipatory brain activity for a wrist movement.
So there are many modules in your brain, some of them conscious, some of them not.
This seems plainly wrong to me. I would say that none of them is conscious; otherwise you have just moved and fragmented the problem. But maybe I’m misinterpreting here, and you really meant “some of them produce consciousness and others don’t”.
Yes, that’s what I meant.
I assume by “conscious modules” Kaj Sotala means those modules whose activity one is conscious of.
This formulation also seems problematic. If the brain really is so many agents (and I think there’s no reason to think otherwise), there is no “one” who can be “conscious” of the activity of some module, unless consciousness is explained as “what happens when this very special module accesses the activities of other modules”. But then you have to explain why that special agent has consciousness and the others don’t. You have just moved the problem.
If consciousness has any hope of being explained through modularity, it ought (in my opinion) to be explained by deconstructing it into the shared activity of certain modules, none of which is itself effectively describable as conscious.
If the formulation is problematic, that points to a problem with the theory rather than with the formulation. Presuming wildly that your mental experience is similar to mine, there is a very distinct sense of being conscious of some activities (performed by modules) and not of others. I am, for example, quite conscious of writing this letter, but nearly oblivious to the beating of my heart. There is distinctly “one” that is “conscious” of this activity. Letting that go temporarily in order to better investigate some cognitive theory may be productive, but eventually you have to come back to it. Trying to explain it away via theory is like trying to find a theory of gravity which states that dropped apples don’t really hit the ground. It may be wonderfully constructed, but it doesn’t describe the world we exist in.