I’m sympathetic. I think I should have said “instrumental convergence seems like a moot point when deciding whether to be worried about AI disempowerment scenarios”; instrumental convergence isn’t a moot point for alignment discussion and within lab strategy, of course.
But I do consider the “give AIs power” step to be a substantial part of the risk we face, such that not doing it would be quite helpful. I think it’s quite possible that GPT-6 won’t be autonomously power-seeking, but I feel pretty confused about the issue.