What matters for Bostrom’s definition of intelligence is whether the agent is getting what it wants.
This brings up another way, comparable to the idea that complex goals may require high intelligence, in which the orthogonality thesis might be limited. I think that the very having of wants itself requires a certain amount of intelligence. Consider the animal kingdom and phenomena like sphexishness: to get behavior that clearly demonstrates what most people would confidently call “goals” or “wants”, you have to get to animals with fairly substantial brains.
The third point Bostrom makes is that a superintelligent machine could be created with no functional analogues of what we call “beliefs” and “desires”.
This contradicts the definition of intelligence in terms of the agent getting what it wants: if a machine has no functional analogue of desires, there is nothing it wants, and so nothing against which to measure whether it is getting it.