Sorry for misrepresenting the third post. That said, does shard theory agree with the third post's implication that the shards/sub-agents are themselves utility maximizers?
Sorta? I mean, if you construct an agent via learning, then for a long time the shards within the agent will be much more like reflexes than like full sub-agents/utility maximizers. But in the limit of sophistication, yes, there will be some pressure pushing those shards toward individual coherence (EUM-ness, i.e. behaving like expected utility maximizers), though it's hard to say how that balance shakes out against coalitional & other pressures.