I sometimes look at human conscious thought as software which is running on partially re-programmable hardware.
The hardware can be reprogrammed by two actors—the conscious one, mostly indirectly, and the unconscious one, which seems to have direct access to the wiring of the whole mechanism (including the bits that represent the conscious actor).
I haven’t yet seen a coherent discussion of this kind of model—maybe it exists and I’m missing it. Is there already such a discussion on this site, or somewhere else?
I look at conscious thought like a person trying to simultaneously ride multiple animals. Each animal can manage itself: left to its own devices, it’ll keep walking in some direction, perhaps even a good one. The rider can devote different levels of attention to any given animal, but his level of control bottoms out at some point: he can’t control the animals’ muscles, only their trajectories (and not always even that).
One animal might be vision: it’ll go on recognizing and paying attention to things unspurred, but the rider can rein the animal in and make it focus on one particular object, or even one point on that object.
The animals all interact with each other, and sometimes it’s impossible to control one after it’s been incited by another. And of course, the rider only has so much attention to devote to the numerous beasts, and often can only wrangle one or two at a time.
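If it helps to make the picture concrete, here is a toy sketch of the metaphor in Python. It is purely illustrative (all the names and numbers are mine, not a claim about how cognition actually works): each "animal" is a subsystem that drifts on its own, and the rider has only a fixed attention budget with which to nudge trajectories, never to control the "muscles" directly.

```python
import random

class Animal:
    """A subsystem that keeps going on its own unless reined in."""
    def __init__(self, name):
        self.name = name
        self.heading = random.uniform(0, 360)  # current trajectory, in degrees

    def wander(self):
        # Left to its own devices, it keeps walking, drifting a little.
        self.heading = (self.heading + random.uniform(-10, 10)) % 360

    def rein(self, target, strength):
        # Attention only biases the trajectory toward a target;
        # it never sets the heading outright.
        diff = (target - self.heading + 180) % 360 - 180  # shortest signed turn
        self.heading = (self.heading + strength * diff) % 360

class Rider:
    """Spreads a limited attention budget over many animals."""
    def __init__(self, animals, attention_budget=2.0):
        self.animals = animals
        self.attention_budget = attention_budget

    def step(self, intentions):
        # intentions: {animal_name: desired_heading}
        share = self.attention_budget / max(len(intentions), 1)
        for a in self.animals:
            a.wander()
            if a.name in intentions:
                a.rein(intentions[a.name], strength=min(share, 1.0) * 0.5)

herd = [Animal(n) for n in ("vision", "inner-speech", "posture")]
rider = Rider(herd)
for _ in range(10):
    rider.step({"vision": 90.0})  # focus vision on one point; the rest drift
print({a.name: round(a.heading, 1) for a in herd})
```

The attention budget is the point of the sketch: the more animals the rider tries to steer at once, the weaker the pull on each, so usually only one or two can really be wrangled at a time.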
Some riders even have reins on themselves.
It’s a little old, but there’s always The Multiple Self.
I think that’s a part of PJEby’s theories.