The Orthogonality Thesis: Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.
It makes no claim about how likely intelligence and final goals are to diverge; it only claims that it is in principle possible to combine any level of intelligence with any set of final goals. Later in the paper, Bostrom discusses ways of actually predicting the behavior of a superintelligence, but that is beyond the scope of the thesis itself.
What’s your source for this definition?
See, for example, Bostrom's original paper, "The Superintelligent Will" (pdf):