“I arrogantly think I could write a broadly compelling and accessible case for AI risk”
Please do so. Your current essay is very good, so chances are your “arrogant” thought is correct.
Edit: I think the post linked below is too pessimistic about human nature, but maybe we should think its argument over before publishing a “broadly compelling and accessible case for AI risk”.
https://www.lesswrong.com/posts/xAzKefLsYdFa4SErg/accurate-models-of-ai-risk-are-hyperexistential-exfohazards