Humans have a theory of mind that makes certain types of modularization easier. That doesn't mean the same modularization is simple for an agent that doesn't share that theory of mind.
Then again, it might be; this is worth digging into empirically. See my post on the optimistic and pessimistic scenarios: in the optimistic scenario, preferences, human theory of mind, and all the other elements are easy to deduce (there's an informal equivalence result: if one of them is easy to deduce, all the others are).
So we need to figure out if we’re in the optimistic or the pessimistic scenario.