Any chance that you could make this more concrete by specifying such a proposal? I expect it’d be possible to make up an example if you want to avoid criticising any specific project.
I have seen several people say that EAs should focus on promoting stupid legislation that slows down AI incidentally, since that’s “our best hope” to make sure things go well. In one of the footnotes, I cited an example of someone making this argument.
While this example could be dismissed as a weakman, I’ve also seen more serious proposals that I believe share its theme and tone. This is how I currently perceive some of the “AI pause” proposals, especially those that fail to specify a mechanism for adjusting regulatory strictness in response to new evidence. Nonetheless, I acknowledge that my disagreement with these proposals often comes down to a more fundamental disagreement about the difficulty of alignment, rather than any beliefs about the social response to AI risk.