I’m not sure who you’ve spoken to, but at least among the AI policy people who I talk to regularly (which admittedly is a subset of people who I think are doing the most thoughtful/serious work), I think nearly all of them have thought about ways in which regulation + regulatory capture could be net negative. At least to the point of being able to name the relatively “easy” ways (e.g., governments being worse at alignment than companies).
I continue to think people should be forming alliances with those who share similar policy objectives, rather than simply with those who belong to the “I believe xrisk is a big deal” camp. I’ve seen many instances in which the “everyone who believes xrisk is a big deal belongs to the same camp” mentality has been used to dissuade people from sharing their beliefs, communicating with policymakers, brainstorming ideas that involve coordination with other groups in the world, disagreeing with the mainline views held by a few AIS leaders, etc.
The cultural pressures against policy advocacy have been so strong that it’s not surprising to see folks say things like “perhaps our groups are no longer natural allies” now that some of the xrisk-concerned people are beginning to say things like “perhaps the government should have more of a say in how AGI development goes than under the status quo, where the government has played ~0 role and ~all decisions have been made by private companies.”
Perhaps there’s a multiverse out there in which the AGI community ended up attracting govt natsec folks instead of Bay Area libertarians, and the cultural pressures are flipped. Perhaps in that world, the default cultural incentives pushed people toward heavily brainstorming ways that markets and companies could contribute meaningfully to the AGI discourse, and the default position for the “AI risk is a big deal” camp was “well obviously the government should be able to decide what happens and it would be ridiculous to get companies involved; don’t be unilateralist by going and telling VCs about this stuff.”
I bring up this (admittedly kinda weird) hypothetical to point out just how skewed the status quo is. Someone might generally be wary of government overinvolvement in regulating emerging technologies yet still recognize that some degree of regulation is useful, and that position would likely still put them in the “we need more regulation than we currently have” camp.
As a final note, I’ll point out to readers less familiar with the AI policy world that serious people are proposing many forms of regulation that sit in between “status quo with virtually no regulation” and “full-on pause.” Some of my personal favorite examples include: emergency preparedness (akin to the OPPR), licensing (see Romney), reporting requirements, mandatory technical standards enforced via regulators, and public-private partnerships.
I’m not sure who you’ve spoken to, but at least among the people who I talk to regularly who I consider to be doing “serious AI policy work” (which admittedly is not everyone who claims to be doing AI policy work), I think nearly all of them have thought about ways in which regulation + regulatory capture could be net negative. At least to the point of being able to name the relatively “easy” ways (e.g., governments being worse at alignment than companies).
I don’t disagree with this; when I say “thought very much” I mean e.g. to the point of writing papers about it, or even blog posts, or analyzing it in talks, or basically anything more than cursory brainstorming. Maybe I just haven’t seen that stuff, idk.