The orthogonality thesis says that an AI can have any combination of intelligence and goals, not that P(goal = x | intelligence = y) = P(goal = x) for all x and y; in other words, it claims possibility, not statistical independence. The actual distribution of goals depends entirely on how the AI is built. People like Rohin Shah assign significant probability to alignment by default, at least as of the last I heard.