Adversarial mindset. Adversarial communication is to some extent necessary to signal clearly that Conjecture pushes back against AGI orthodoxy. Inside the company, however, this can foster a soldier mindset and poor epistemics. Over time, adversariality can also insulate the organization from mainstream attention, until it is eventually ignored.
This post has a battle-of-narratives framing and uses language to support Conjecture’s narrative, but it doesn’t actually argue why Conjecture’s preferred narrative would put the world in a less risky position.
There are an inside-view and an outside-view reason why I argue the cooperative route is better. First, being cooperative is likely to make navigating AI risks and explosive growth smoother, and is less likely to lead to unexpected bad outcomes. Second is the unilateralist’s curse. Conjecture’s empirical views on the risks posed by AI, and on the difficulty of solving them via prosaic means, are in the minority, probably even within the safety community. A minority actor shouldn’t take unilateral action whose negative tail is disproportionately large according to the majority of reasonable people.
Part of me wants to let the Conjecture vision separate itself from the more cooperative side of the AI safety world, as it has already started doing, and let the cooperative side continue their efforts. I’m fairly optimistic about these efforts (scalable oversight, evals-informed governance, most empirical safety work happening at AGI labs). However, the unilateral action supported by Conjecture’s vision works in opposition to these cooperative efforts. For example, a group affiliated with Conjecture ran ads, in a rather uncooperative way, against AI safety efforts they see as insufficiently ambitious. As things heat up, I expect the uncooperative strategy to become substantially riskier.
One call I’ll make is for those pushing Conjecture’s view to invest more in making sure they’re right about the empirical pessimism that motivates their actions. Run empirical tests of your threat models, and talk frequently with reasonable people who hold different views.