Yeah, the fact that responses to the optimistic arguments sometimes consist of simply not engaging with them in any detail has really dimmed my hopes for productive outreach, and makes me think more poorly of the AI doom case, epistemically.
This actually happened before with Eliezer's comment here:
https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky#YYR4hEFRmA7cb5csy