I think there’s a major internal tension in the picture you present (though the tension is only there with further assumptions). You write:
Obstruction doesn’t need discernment
[...] I don’t buy it. If all you want is to slow down a broad area of activity, my guess is that ignorant regulations do just fine at that every day (usually unintentionally). In particular, my impression is that if you mess up regulating things, a usual outcome is that many things are randomly slower than hoped. If you wanted to speed a specific thing up, that’s a very different story, and might require understanding the thing in question.
The same goes for social opposition. Nobody need understand the details of how genetic engineering works for its ascendancy to be seriously impaired by people not liking it. Maybe by their lights it still isn’t optimally undermined yet, but just not liking anything in the vicinity does go a long way.
And you write:
Technological choice is not luddism
Some technologies are better than others [citation not needed]. The best pro-technology visions should disproportionately involve awesome technologies and avoid shitty technologies, I claim. If you think AGI is highly likely to destroy the world, then it is the pinnacle of shittiness as a technology. Being opposed to having it into your techno-utopia is about as luddite as refusing to have radioactive toothpaste there. Colloquially, Luddites are against progress if it comes as technology. Even if that’s a terrible position, its wise reversal is not the endorsement of all ‘technology’, regardless of whether it comes as progress.
From “obstruction doesn’t need discernment”, you’re proposing a fairly broad dampening. While this may be worth it from an X-risk perspective, it’s (1) far more Luddite than targeted technological choice, (2) far more likely to cause real harm (though we’d agree it’s still worth it), and therefore (3) far more objectionable, and (4) far more legitimately opposed.
The tension would go away if the reason obstruction isn’t bottlenecked on discernment is that discernment is easy / not too hard, but I don’t think you made that argument.
If discernment is not too hard, that’s potentially a dangerous thing: by being discerning in a very noticeable way, you’re painting a big target on “here’s the dangerous [cool!] research”, which is what seems to have already happened with AGI.
This is also a general problem with “just make better arguments about AI X-risk”. You can certainly make such arguments without spreading ideas about how to advance capabilities, but still, the most pointed arguments are like “look, in order to transform the world you have to do XYZ, and XYZ is dangerous because ABC”. You could maybe take the strategy of, whenever a top researcher makes a high-level proposal for how to make AGI, criticizing it along the lines of “leaving aside whether or not that leads to AGI, if it did lead to AGI, here’s how that would go poorly”.
(I acknowledge that I’m being very “can’t do” in emphasis, but again, I think this pathway is crucial and worth thinking through… and therefore I want to figure out the best ways to do it!)