Wouldn’t decisions about e.g. which objects get selected and broadcast to the global workspace be made by a majority or plurality of subagents? “Committee requiring unanimous agreement” feels more like what would be the case in practice for a unified mind, to use a TMI term. I guess the unanimous agreement is only required because we’re looking for strict/formal coherence in the overall system, whereas e.g. suboptimally-unified/coherent humans with lots of akrasia can have tug-of-wars between groups of subagents for control.
The way I’d think of it, it’s not that you literally need unanimous agreement, but that in some situations there may be subagents that are strong enough to block a given decision. And then if you only look at the subagents that are strong enough to exert a major influence on that particular decision (and ignore the ones either who don’t care about it or who aren’t strong enough to make a difference), it kind of looks like a committee requiring unanimous agreement.
It gets a little handwavy and metaphorical but so does the concept of a subagent. :)
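Not something from the thread itself, but the "strong enough to block" dynamic can be sketched as a toy voting rule. All names, numbers, and the veto threshold below are illustrative assumptions, not a claim about how minds actually work:

```python
# Toy model: a "committee" of subagents where any subagent whose influence
# over a given decision exceeds a veto threshold can block it. Restricted to
# the influential subagents, this behaves like a unanimity rule, even though
# weak or indifferent subagents are effectively ignored.

from dataclasses import dataclass

@dataclass
class Subagent:
    name: str
    strength: float   # influence over this particular decision
    approves: bool    # stance on the proposed action

VETO_THRESHOLD = 0.5  # arbitrary cutoff for "strong enough to block"

def decision_passes(subagents):
    """A proposal passes unless some sufficiently strong subagent objects."""
    influential = [s for s in subagents if s.strength >= VETO_THRESHOLD]
    return all(s.approves for s in influential)

committee = [
    Subagent("planner", 0.9, approves=True),
    Subagent("comfort-seeker", 0.7, approves=False),  # strong dissenter
    Subagent("curiosity", 0.1, approves=True),        # too weak to matter
]

print(decision_passes(committee))  # False: blocked by the strong dissenter
```

The point of the sketch is just that majority rule and "unanimity among the strong" come apart: here the approving subagents outnumber the dissenter two to one, yet the decision is still blocked.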
The way I’d think of it, it’s not that you literally need unanimous agreement, but that in some situations there may be subagents that are strong enough to block a given decision.
Ah, I think that makes sense. Is this somehow related to the idea that consciousness is more of a “last stop for a veto from the collective mind system” for already-subconsciously-initiated thoughts and actions? Struggling to remember where I read this, though.
It gets a little handwavy and metaphorical but so does the concept of a subagent.
Yeah, considering that subagents are only “agents” insofar as it makes sense to apply the intentional stance (the thing we’d like to avoid having to apply to the whole system, because it seems fundamentally limited) to the individual parts, I’m not surprised. It seems like it’s either “agents all the way down” or abandoning the concept of agency altogether (although posing that dichotomy feels like a suspicious presumption of agency, itself!).