I think a good understanding of point 1 would be really helpful for advocacy. If I don't understand why AI alignment is a big issue, I can't explain it to anybody else, and they won't be convinced by my saying that I trust the people who say AI alignment is a big issue.
And I sloppily merged the two together in point 8, which, thanks to FinalFormal2's and others' comments, I no longer believe is a necessary belief for AGI pessimists.
Agreed. It’s just a separate question.