There is a fuzzy line between “let’s slow down AI capabilities” and “let’s explicitly, adversarially sabotage AI research”. While I am all for the former, I don’t support the latter; it creates worlds in which AI safety and capabilities groups are pitted head to head, and capabilities orgs become explicitly more incentivized to ignore safety proposals. These aren’t worlds I personally wish to be in.
While I understand the motivation behind this message, I think the actions described in this post cross that fuzzy boundary and push way too far towards that style of adversarial messaging.
I see your point, and I agree. But I’m not advocating for sabotaging research.
I’m talking about admonishing a corporation for cutting corners and rushing a launch that turned out to be net negative.
Did you retweet this tweet like Eliezer did? https://twitter.com/thegautamkamath/status/1626290010113679360
If not, is it because you didn’t want to publicly sabotage research?
Do you agree or disagree with this twitter thread? https://twitter.com/nearcyan/status/1627175580088119296?t=s4eBML752QGbJpiKySlzAQ&s=19