I think this is a communication issue: people don’t actually assume this, but they may use language that makes it sound like the AI has only one goal, because that framing is simpler and the simplification doesn’t actually make a difference.
The reason it doesn’t make a difference is that what really matters is that the AI has a preference over possible future worlds. A single goal and a complex set of goals both give you that. In fact, a “goal” is just an abstraction anyway, since the AI may not literally have such a thing explicitly represented in its code. But there are coherence arguments which say that it will have something that amounts to an ordering of future states (though these arguments are not uncontroversial either).
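To make that concrete, here is a minimal sketch (the goal functions, weights, and toy future states are made up purely for illustration, not taken from any real system): whether the agent optimizes one goal or a weighted bundle of goals, it ends up with the same kind of object, a ranking over candidate futures that it can maximize.

```python
# Hypothetical sub-goals: each scores a candidate future state.
def paperclips(state):
    return state["paperclips"]

def staples(state):
    return state["staples"]

def combined_utility(state, weights=(0.7, 0.3)):
    """Several goals folded into a single preference ordering."""
    return weights[0] * paperclips(state) + weights[1] * staples(state)

# Toy candidate futures (made up for the example).
futures = [
    {"paperclips": 10, "staples": 0},
    {"paperclips": 4,  "staples": 8},
    {"paperclips": 0,  "staples": 12},
]

# Either way -- one goal or a weighted bundle of goals -- the agent can
# rank possible futures and pick the top one; the structure is identical.
best_single   = max(futures, key=paperclips)
best_combined = max(futures, key=combined_utility)
print(best_single, best_combined)
```

The point of the sketch is only that the single-goal and multi-goal cases look the same from the outside: both reduce to “sort the futures and take the best one.”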
If you’re curious, this exact issue (multiple vs. single goals) came up in the debate between Stuart Russell and Steven Pinker (podcast episode; the part on x-risk starts at 01:09:38).