I agree. It’s rare enough to get reasonable arguments for optimistic outlooks, so it seems worthwhile for someone to openly engage with this one in some detail.
Yeah, the fact that responses to the optimistic arguments sometimes amount to simply not engaging with them in any detail has really dimmed my prospects of reaching out, and causes me to think more poorly of the AI doom case, epistemically.
This actually happened before with Eliezer’s comment here:
https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky#YYR4hEFRmA7cb5csy