Epistemic status: at least somewhat rant-mode.
I find it pretty ironic that many in AI risk mitigation would make asks for if-then commitments/RSPs from the top AI capabilities labs, but they won't make the same asks of AI safety orgs/funders. E.g.: if you're an AI safety funder, what kind of evidence ('if') would make you accelerate how much funding you deploy per year ('then')?
One of these types of orgs is developing a technology with the potential to kill literally all of humanity. The other type of org is funding research that, if it goes badly, mostly just wastes its own money. Of course the demands for legibility and transparency should be different.