I agree with Conjecture’s reply that this reads more like a hit piece than an even-handed evaluation.
I don’t think your recommendations follow from your observations, and such strong claims certainly don’t follow from the evidence you actually provide. Your criticisms can, I think, be summarized as follows:
1. Conjecture was publishing unfinished research directions for a while.
2. Conjecture does not publicly share details of their current CoEm research direction, and that research direction seems hard.
3. Conjecture told the government they were AI safety experts.
4. Some people (who?) say Conjecture’s governance outreach may be net-negative and upsetting to politicians.
5. Conjecture’s CEO Connor used to work on capabilities.
6. One time during college, Connor said that he had replicated GPT-2, then found out he had a bug in his code.
7. Connor has at times said that open-source models were good for alignment, then changed his mind.
8. Conjecture’s infohazard policy can be overturned by Connor or their owners.
9. They’re trying to scale when it is common wisdom for startups to try to stay small.
10. It is unclear how they will balance profit and altruistic motives.
11. Some people (who?) have told you they’ve had bad interactions with Conjecture staff or leadership when trying to tell them what they’re doing wrong.
12. Conjecture seems like they don’t talk with ML people.
I’m actually curious about why they’re doing 9 (scaling fast), and would welcome further discussion of 8 and 10. But I don’t think any of the other points matter, at least to the depth you’ve covered them here, and I don’t know why you’re spending so much time on material that doesn’t matter or that you can’t support. This could have been so much better if you had taken the research time spent on everything that wasn’t 8, 9, or 10, used it to do deeper analyses of 8, 9, and 10, and then actually had a conversation with Conjecture about your disagreements with them.
I especially don’t think your arguments support your suggestions that:
1. Don’t work at Conjecture.
2. Conjecture should be more cautious when talking to media, because Connor seems unilateralist.
3. Conjecture should not receive more funding until they reach levels of organizational competence similar to those of OpenAI or Anthropic.
4. Rethink whether or not you want to support Conjecture’s work non-monetarily: for example, consider not inviting them to table at EAG career fairs, not inviting Conjecture employees to events or workspaces, and not taking money from them when doing field-building.
(1) seems like a pretty strong claim, which is left unsupported. I know of many people who would be excited to work at Conjecture, and I don’t think your points support the claim that they would be doing net-negative research, given that they would be doing alignment work at Conjecture.
For (2), I don’t know why you’re saying Connor is unilateralist. Are you saying this because he used to work on capabilities?
(3) is just absurd! OpenAI will perhaps be the most destructive organization to date. I do not think your above arguments make the case that Conjecture is less organizationally responsible than OpenAI. Even having an infohazard document puts them leagues above both OpenAI and Anthropic in my book. Add to that that their primary way of getting funded isn’t building extremely large models… In what way do Anthropic or OpenAI have better corporate governance structures than Conjecture?
(4) is just… what? OK, I’ve thought about it, and concluded that it makes no sense given your previous arguments. Maybe there’s a case to be made here: if they are less organizationally competent than OpenAI, then yeah, you probably don’t want to support their work. That seems pretty unlikely to me, though, and you definitely don’t provide anything close to the level of analysis needed to elevate such a hypothesis.
Edit: I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with reporters about this type of stuff! So mostly I think they should continue doing what they’re doing.
I’m not myself an expert on PR (I’m skeptical anyone is), so maybe my impressions of the articles are naive and backwards in some way. But if you do think this is important, it would be good to explain somewhere why you think their media outreach is net-negative, ideally pointing to particular things you think they did wrong rather than making vague and menacing criticisms of unilateralism.
From my perspective, 9 (scaling fast) makes perfect sense, since Conjecture is aiming to stay “slightly behind state of the art”, and that requires engineering power.
I’m pretty skeptical they can achieve that right now using CoEm, given the limited progress I expect them to have made on it. And in my opinion, security culture likely matters more than staying “slightly behind state of the art”, and it is a common finding in the startup world that scaling too fast degrades the founding culture. So one fear would be that fast scaling leads to worse infosec.
However, I don’t know to what extent this is an issue. I can certainly imagine a world where because of EA and LessWrong, many very mission-aligned hires are lining up in front of their door. I can also imagine a lot of other things, which is why I’m confused.
(cross-posted from the EA Forum)
Regarding your specific concerns about our recommendations:
1) We address this point in our response to Marius (5th paragraph).
2) As we note in the relevant section: “We think there is a reasonable risk that Connor and Conjecture’s outreach to policymakers and media is alarmist and may decrease the credibility of x-risk.” This kind of relationship-building is unilateralist when it can decrease goodwill amongst policymakers.
3) To be clear, we do not expect Conjecture to have the same level of “organizational responsibility” or “organizational competence” (we aren’t sure what you mean by those phrases and don’t use them ourselves) as OpenAI or Anthropic. Our recommendation was for Conjecture to have a robust corporate governance structure. For example, they could change their corporate charter to implement a “springing governance” structure such that voting equity (but not political equity) shifts to an independent board once they cross a certain valuation threshold. As we note in another reply, Conjecture’s infohazard policy has no legal force, and is therefore not as strong as either OpenAI’s or Anthropic’s corporate governance models. As we’ve noted already, we have concerns about both OpenAI and Anthropic despite those models being in place; Conjecture doesn’t even have such models, which makes us more concerned.
I responded to a very similar comment of yours on the EA Forum.
To respond to the new content: I don’t know whether changing the board of Conjecture once a certain valuation threshold is crossed would make the organization more robust (now that I think of it, I don’t even really know what you mean by “strong” or “robust” here; depending on what you mean, I can see myself disagreeing about whether that even tracks positive qualities of a corporation). You should justify claims like these, and at least include them in the original post. Is it sketchy that they don’t have this?