Below, I’ve segmented the proposals into x-risk and non-x-risk categories, excluding those geared towards promoting AI’s use and focusing solely on those aimed at risk.
Thanks for the work put into the distillation! But I think the ratio of acceleration proposals to safety proposals is highly relevant. British PM Rishi Sunak’s speech, for example, was in large part an announcement that the UK would not regulate AI anytime soon. I’ve argued previously that governments have strong short-term incentives to accelerate AI and even to lie about it, so my prediction is that omitting the pro-acceleration points entirely, and with them the safety-to-acceleration ratio, is net harmful.
Hmm, I get that people value succinctness a lot with these sorts of things, since there’s so much AI information to take in now, so I’m not sure about the net effect. But maybe I could get at your concern by mocking up a percentage (i.e. what percentage of the proposals were risk-oriented vs progress-oriented)? It wouldn’t tell you the type of stuff the Biden administration is pushing, but it would tell you the ratio, which seems to be what you’re most concerned with.
[Edit] This is now included.