Strongly upvoted.
Humans, at least, do not satisfy completeness / don't admit a total order over their preferences.
See also:
This Answer
Why The Focus on Expected Utility Maximisers?
Why Subagents?
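As a toy illustration of the completeness point (the dimensions and labels here are mine, not the linked posts'): a Pareto-dominance relation over multi-dimensional outcomes is a strict partial order, and some outcome pairs are simply incomparable, which is exactly the failure of the completeness axiom:

```python
# Hypothetical two-dimensional "values" for three outcomes; labels are
# illustrative only.
outcomes = {
    "career": (9, 2),   # (achievement, leisure)
    "family": (3, 8),
    "burnout": (2, 1),
}

def dominates(a, b):
    """Pareto dominance: a is at least as good on every dimension and
    strictly better on at least one. Transitive and irreflexive, i.e. a
    strict partial order."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def comparable(a, b):
    """Completeness would require every pair of outcomes to be comparable."""
    return a == b or dominates(a, b) or dominates(b, a)

print(dominates(outcomes["career"], outcomes["burnout"]))  # True
print(comparable(outcomes["career"], outcomes["family"]))  # False: incomparable
```

Both "career" and "family" beat "burnout", but neither dominates the other, so any ranking that forces a comparison between them is adding information the preference relation doesn't contain.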
Yes, I generally view human values as partially ordered, not totally ordered.
However, the third post answers your second question well. Humans don't have complete preferences, but they are still expected utility maximizers. Their preferences form a partial order, not a total order, but that picture still disagrees with shard theory on relevant details.
Where are you seeing that conclusion in the 3rd post? AFAICT the message is that for an agent made up of parts that want different things / agent with incomplete preferences, there is no corresponding utility function that would uniquely correspond to its preferences, so humans (having incomplete preferences) are not EUMs. At best, such an agent is more like a market / committee of internal EUMs whose utility functions differ, which accords very well with the mainline “shard”-based picture.
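The market/committee picture can be sketched in a few lines (a toy model with assumptions of my own, not the post's formalism): aggregate by unanimity across internal EUMs, and the resulting preference relation is incomplete even though every individual member is a perfectly coherent utility maximizer:

```python
# Each internal "subagent" is a full EUM with its own utility function over
# outcomes (utilities are made up for illustration).
members = [
    {"apple": 1.0, "banana": 0.0, "nothing": 0.0},  # subagent 1: wants apples
    {"apple": 0.0, "banana": 1.0, "nothing": 0.0},  # subagent 2: wants bananas
]

def expected_utility(util, lottery):
    """Expected utility of a lottery (outcome -> probability) for one member."""
    return sum(p * util[o] for o, p in lottery.items())

def committee_prefers(a, b):
    """The whole agent prefers a to b only if every member does (unanimity).
    The aggregate relation is transitive but incomplete."""
    return all(expected_utility(u, a) > expected_utility(u, b) for u in members)

sure_apple  = {"apple": 1.0}
sure_banana = {"banana": 1.0}
fair_flip   = {"apple": 0.5, "banana": 0.5}
worse_flip  = {"apple": 0.25, "banana": 0.25, "nothing": 0.5}

print(committee_prefers(fair_flip, worse_flip))    # True: unanimous
print(committee_prefers(sure_apple, sure_banana))  # False
print(committee_prefers(sure_banana, sure_apple))  # False: incomparable pair
```

The committee unanimously prefers the fair flip to a lottery that wastes probability mass, but a sure apple and a sure banana are incomparable, so no single utility function over outcomes can represent the aggregate agent.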
Sorry for misrepresenting the third post.
Though does shard theory agree with the implication of the third post that the shards/sub-agents are utility maximizers themselves?
Sorta? I mean, if you construct an agent via learning, then for a long time the shards within the agent will be much more like reflexes than like full sub-agents/utility maximizers. But in the limit of sophistication, yes there will be some pressure pushing those shards towards individual coherence (EUM-ness), though it’s hard to say how the balance shakes out compared to coalitional & other pressures.