I agree we should not be complacent. I think there’s a difference between being complacent and moving our focus to problems that are least likely to be solved by default. My primary message here is that we should re-evaluate which problems need concerted effort now, and potentially move resources to different parts of the problem—or different problems entirely—after we have reassessed. I am asking people to raise the bar for what counts as “concerted effort to actually try to govern AI”, which I think pushes against some types of blanket advocacy that merely raise awareness, and some proposals that (in my opinion) lack nuance.
Any chance that you could make this more concrete by specifying such a proposal? I expect it’d be possible to make up an example if you want to avoid criticising any specific project.
I have seen several people say that EAs should focus on promoting stupid legislation that slows down AI incidentally, since that’s “our best hope” to make sure things go well. In one of the footnotes, I cited an example of someone making this argument.
While this example could be dismissed as a weakman, I’ve also seen more serious proposals that I believe share this theme and tone. This is how I currently perceive some of the “AI pause” proposals, especially those that fail to specify a mechanism for adjusting regulatory strictness in response to new evidence. Nonetheless, I acknowledge that my disagreement with these proposals often comes down to a more fundamental disagreement about the difficulty of alignment, rather than to any beliefs about the social response to AI risk.
Right now it seems to me that one of the highest-impact things not likely to be done by default is substantially increased funding for AI safety.
And another interesting one from the summit:
“There was almost no discussion around agents—all gen AI & model scaling concerns.
It’s perhaps because agent capabilities are mediocre today and thus hard to imagine, similar to how regulators couldn’t imagine GPT-3’s implications until ChatGPT.”—https://x.com/kanjun/status/1720502618169208994?s=46&t=D5sNUZS8uOg4FTcneuxVIg