I didn’t downvote, but besides the obvious explanation of people being anti-anti-AI risk, I’ve seen you write these sorts of one-liner comments that express your antipathy towards AI risk and do nothing else. Some people probably feel that their time is being wasted, and some probably find it improbable that you can simultaneously be thinking your own thoughts and agreeing with an entire index of critiques. On the other hand, I can see from your perspective that there is a selection effect favoring people who take AI risk seriously, and that you might think it prudent to represent your position whenever you can.
Let’s just take one of his most recent critiques. It’s an uncharitable interpretation of the standard position on why AGIs will not automatically do what you mean. The reason there is not already UFAI is not that AIs share our goals; it’s that, goals aside, they lack optimization power. If I can trivially find a misinterpretation of the standard position, that lowers my estimate that you are examining his arguments critically or engaging in this debate charitably, and that kind of engagement is behavior that tends to draw social punishment in this community.