Policy is hard and complex, and details matter. I also worry that the community is in the middle of failing horribly at basic coordination, because it is trying to be democratic but abstract in its deliberations, and unilateralist and concrete in its decision making.
Abstractions like “don’t ally with X on topic Y” are disconnected from the actual decision-making processes that might lead to concrete advances, and I think that even the pseudo-concrete “block progress in Y,” for Y being compute, data, or whatever, fails horribly at the concreteness criterion needed for actual decision making.

What the post does do is push for social condemnation of “collaboration with the enemy” without concrete criteria for when such collaboration is good or bad. This attempts to constrain the action spaces of various actors the AI safety community is allied with, or pushes them to disassociate themselves from the community because of perceived or actual norms against certain classes of policy work.
I think that even the pseudo-concrete “block progress in Y,” for Y being compute, data, or whatever, fails horribly at the concreteness criterion needed for actual decision making. [...] What the post does do is push for social condemnation of “collaboration with the enemy” without concrete criteria for when such collaboration is good or bad
There are quite specific things I would not endorse that I think follow relatively smoothly from the post. Funding the lobbying group mentioned in the introduction is one example.
I do agree, though, that I was a bit vague in my suggestions. Mostly, I’m asking people to be careful, and not to rush into something hasty because it seems “better than nothing”. I’m certainly not asking people to refuse to collaborate or associate with anyone I might consider a “neo-luddite”.
I edited the title (back) to “Slightly against aligning with neo-luddites” to better reflect my mixed feelings on this matter.
Yeah, definitely agree that donating to the Concept Art Association isn’t effective—and their tweet tagline “This is the most solid plan we have yet” is standard crappy decision making on its own.