Let’s steelman his argument into “Which is more likely to succeed: actually stopping all research associated with existential risk, or inventing a Friendly AI?” If you find another reason why the first option wouldn’t work, include the desperate effort needed to overcome that problem in the calculation.
Me, minutes after writing that: “I precommit to post this at most a week from now. I predict someone will give a clever answer along the lines of driving humanity extinct in order to stop existential risk research.”
I don’t think “existential risk research” and “research associated with existential risks” are the same thing.
Yes, that’s what I meant. Let me edit that.