I’ve seen plenty of AI x-risk skeptics present their object-level arguments, and I’m not interested in paying out a bounty for stuff I already have. I’m most interested in arguments from this specific school of thought, and that’s why I’ve set the terms the way I have.
I see. Maybe you could address it to “DAIR and related researchers”? I know that’s a clunkier name for the group you’re trying to describe, but I don’t think more succinct wording is worth fostering a tribal dynamic between researchers who care about x-risk and s-risk and those who care about less extreme risks.