The suggestion that the integration of new sense-data into a model is at least partly driven by the state of the model is further supported by images with multiple interpretations (classically the Necker cube or shadows of rotating objects). Data consistent with multiple models is integrated into the currently held one. Inattentional blindness is a similar phenomenon.
… consciousness is a big, strange problem. Not intelligence, not even assigning meaning to representations, but consciousness.
Why?
Mitchell Porter hasn’t explained this either. What do you deem consciousness to be? Is this typical minds at another level? To me, at least, the argument seems analogous to prime mover arguments, in that it is asserted that no finite regress of physical causes could account for (consciousness/the universe), and thus we must extend our ontology.
It would be a little bit analogous to a prime mover argument if we actually were the prime movers.