As written there, the strong form of the orthogonality thesis states ‘there’s no extra difficulty or complication in creating an intelligent agent to pursue a goal, above and beyond the computational tractability of that goal.’
I don’t know whether that’s intended to mean the same as ‘there are no types of goals that are more “natural”, that are easier to build agents to pursue, or that you’re more likely to get if you have some noisy process for creating agents’.
I feel like I haven’t seen a good argument for the latter statement, and it seems intuitively wrong to me.