We have heard that Conjecture misrepresent themselves in engagement with the government, presenting themselves as experts with stature in the AIS community, when in reality they are not.
What does it mean for Conjecture to be “experts with stature in the AIS community”? Can you clarify what metrics constitute expertise in AIS—are you dissatisfied with their demonstrated grasp of alignment work, or perhaps their research output, or maybe something a little more qualitative?
Basically, this excerpt reads like a crisp claim of common knowledge (“in reality”) but the content seems more like a personal judgment call by the author(s).
Hi TurnTrout, thanks for asking this question. We’re happy to clarify:
‘experts’: We do not consider Conjecture at the same level of expertise as [edit] alignment leaders and researchers at other organizations such as Redwood, ARC, researchers at academic labs like CHAI, and the alignment teams at Anthropic, OpenAI and DeepMind. This is primarily because we believe their research quality is low.
‘with stature in the AIS community’: Based on our impression of the TAIS community (formed through conversations with many senior TAIS researchers at a range of organizations, including a handful who reviewed this post and didn’t disagree with this point), Conjecture is not considered a top alignment research organization within the community.
We do not consider Conjecture at the same level of expertise as other organizations such as Redwood, ARC, researchers at academic labs like CHAI, and the alignment teams at Anthropic, OpenAI and DeepMind. This is primarily because we believe their research quality is low.
This isn’t quite the right thing to look at IMO. In the context of talking to governments, an “AI safety expert” should have thought deeply about the problem, have intelligent things to say about it, know the range of opinions in the AI safety community, have a good understanding of AI more generally, etc. Based mostly on his talks and podcast appearances, I’d say Connor does decently well along these axes. (If I had to make things more concrete, there are a few people I’d personally call more “expert-y”, but closer to 10 than 100. The AIS community just isn’t that big and the field doesn’t have that much existing content, so it seems right that the bar for being an “AIS expert” is lower than for a string theory expert.)
I also think it’s weird to split this so strongly along organizational lines. As an extreme case, researchers at CHAI range on a spectrum from “fully focused on existential safety” to “not really thinking about safety at all”. Clearly the latter group aren’t better AI safety experts than most people at Conjecture. (And FWIW, I belong to the former group and I still don’t think you should defer to me over someone from Conjecture just because I’m at CHAI.)
One thing that would be bad is presenting views that are very controversial within the AIS community as commonly agreed-upon truths. I have no special insight into whether Conjecture does that when talking to governments, but it doesn’t sound like that’s your critique at least?
Hi Erik, thanks for your points. We meant to say “at the same level of expertise as alignment leaders and researchers at other organizations such as...”. This was a typo on our part.