Superintelligences can have nearly any type of motivation (at least, nearly any utility-function-based motivation).
Sure they can, but will they?
The weaker “in-theory” orthogonality thesis is probably true, almost trivially, but it doesn’t matter much.
We don’t care about all possible minds or all possible utility functions for the same reason we don’t care about all possible programs. What actually matters is the narrow subset of superintelligences and utility functions that are likely to be built and to exist in the future.
And in this light it is clear that there will be some correlation between the population distributions over intelligences and utility functions/motivations. Since the strongest form of the orthogonality thesis asserts that the two vary independently, any such correlation means it fails.
Human intelligence necessarily evolved in the context of language and the formation of social meta-organisms, and we thus have specific features, such as altruistic punishment (moral justice) and empathy, that are critical to the meta-organism.
AGI systems will likewise develop from this foundation and evolve in our economy. This environment will select for AGI systems that either fulfill our needs or are like us (or both). The rest will be culled.
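To see how selection alone can produce that correlation, here is a toy simulation (entirely my own sketch: the capability and “fit” scores, the 0.5 thresholds, and the `survives` rule are made-up stand-ins for economic selection pressure, not anything claimed above). It draws candidate systems whose capability and motivational fit are independent at design time, culls the ones the environment would not keep, and measures the correlation that remains in the deployed population.

```python
import random

random.seed(0)

# Candidate AGI designs: (capability, fit-to-our-needs), drawn independently,
# so at "design time" there is no correlation between the two traits.
candidates = [(random.random(), random.random()) for _ in range(100_000)]

def survives(capability: float, fit: float) -> bool:
    # Hypothetical selection rule: only fairly capable systems matter at all,
    # and the more capable a system is, the better it must serve our needs
    # (or resemble us) to avoid being culled.
    return capability > 0.5 and fit > 0.5 * capability

deployed = [(c, f) for c, f in candidates if survives(c, f)]

def corr(pairs):
    # Pearson correlation between the two coordinates of a list of pairs.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(f"correlation at design time:  {corr(candidates):+.3f}")  # approximately zero
print(f"correlation after selection: {corr(deployed):+.3f}")    # clearly positive
```

The point of the sketch is only that a population subject to joint selection on capability and motivation ends up with correlated traits, which is all that is needed for the strong (independence) reading of the orthogonality thesis to fail for the systems that will actually exist.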