Different views about the fundamental difficulty of inner alignment seem to be a (the?) major driver of differences in views about how likely AI X risk is overall.
I strongly disagree that inner alignment is the correct crux. It does seem to be a crux for many people in practice, but I think that is a mistake; inner alignment is certainly a significant consideration, just not the decisive one.
Instead, I think optimism about outer alignment and about global coordination ("Catch-22 vs. Saving Private Ryan") is a much bigger factor, and optimists are badly wrong on both points.