I assume by “conscious modules” Kaj Sotala means those modules whose activity one is conscious of.
This formulation also seems problematic. If the brain really is so many agents (and I see no reason to think otherwise), there's no "one" who can be "conscious" of the activity of some module, unless consciousness is explained as "it's when this very special module accesses the activities of other modules". But then you have to explain why that special agent has consciousness and the others don't. You have just moved the problem.
If consciousness has any hope of being explained through modularity, it ought (in my opinion) to be by deconstructing it into the shared activity of various modules, none of which is itself effectively describable as conscious.
If problematic, it points to a problem with the theory, rather than the formulation. Presuming wildly that your mental experience is similar to mine, then there is a very distinct notion of being conscious of some activities (performed by modules) and not others. I am, for example, quite conscious of writing this letter, but nearly oblivious to the beating of my heart. There is distinctly “one” that is “conscious” of this activity. Letting that go temporarily in order to better investigate some cognitive theory may be productive, but eventually you have to come back to it. Trying to explain it away via theory is like trying to find a theory of gravity that states dropped apples don’t really hit the ground. It may be wonderfully constructed, but doesn’t describe the world we exist in.