Well, I'm going to speak broadly: if you look at the history of PauseAI, it is marked by the belief that the measures proposed by others are insufficient for Actually Stopping AI. For instance, the kind of policy measures proposed by people working at AI companies aren't enough; the kind of measures proposed by people funded by OpenPhil are often not enough; and so on.
They are correct as far as I can tell. Can you identify a policy measure proposed by an AI company or an OpenPhil-funded org that you think would be sufficient to stop unsafe AI development?
I think there is indeed exactly one such policy measure: SB 1047, supported by the Center for AI Safety, which is OpenPhil-funded (IIRC). Most big AI companies lobbied against it, and Anthropic opposed the original stronger version and got it reduced to a weaker and probably less safe version.
When I wrote up where I was donating in 2024, I went through a bunch of orgs' policy proposals and explained why they appeared deeply inadequate. Some specific relevant parts: 1, 2, 3, 4
Edit: Adding some color so you don't have to click through: when I say the proposals I reviewed were inadequate, I mean they said things like (paraphrasing) “safety should be done on a completely voluntary basis with no government regulations” and “companies should have safety officers, but those officers should not have final say on anything”, would simply not address x-risk at all, or would make harmful proposals like “the US Department of Defense should integrate more AI into its weapon systems” or “we need to stop worrying about x-risk because it’s distracting from the real issues”.
“sufficient to stop unsafe AI development? I think there is indeed exactly one such policy measure, which is SB 1047,”
I think it's obviously untrue that this would stop unsafe AI. It is as close as any measure I've seen, and would provide some material reduction in risk in the very near term, but (even if applied universally, and even if no one tried to circumvent it) it would not stop future unsafe AI.
Yeah, I actually agree with that. I don't think it was sufficient; I just think it was pretty good. I wrote the comment too quickly without thinking about my wording.
The EU AI Code of Practice is better; it's a little closer to stopping AI development.
Disagree that it could stop dangerous work, and doubly disagree given the way things are headed, especially with the removal of whistleblower protections and the lack of useful metrics for compliance. I don't think it would even be as good as SB 1047, even in the amended weaker form.
I was previously more hopeful that if the EU COP were a strong enough code, then when things inevitably went poorly anyway we could say “look, doing pretty good isn't enough, we need to actually regulate specific parts of this dangerous technology,” but I worry that it's not even going to be strong enough to make that argument.