I think if we ran a poll, it would become clear that the strong majority of readers interpreted Nate’s post as “If you don’t solve alignment, you shouldn’t expect that some LDT/simulation mumbo-jumbo will let you and your loved ones survive this,” and not in the more reasonable way you are interpreting it. I certainly interpreted the post that way.
You can run the argument past a poll of LLM models of humans and show their interpretations.
I strongly agree with your second paragraph.