(cross-posted from the EA Forum)
Regarding your specific concerns about our recommendations:
1) We address this point in our response to Marius (5th paragraph).
2) As we note in the relevant section: “We think there is a reasonable risk that Connor and Conjecture’s outreach to policymakers and media is alarmist and may decrease the credibility of x-risk.” This kind of relationship-building is unilateralist when it can decrease goodwill amongst policymakers.
3) To be clear, we do not expect Conjecture to have the same level of “organizational responsibility” or “organizational competence” (we aren’t sure what you mean by those phrases and don’t use them ourselves) as OpenAI or Anthropic. Our recommendation was for Conjecture to adopt a robust corporate governance structure. For example, they could change their corporate charter to implement a “springing governance” structure such that voting equity (but not economic equity) shifts to an independent board once they cross a certain valuation threshold. As we note in another reply, Conjecture’s infohazard policy has no legal force, and is therefore not as strong as either OpenAI’s or Anthropic’s corporate governance models. As we’ve noted already, we have concerns about both OpenAI and Anthropic despite their having these models in place; Conjecture doesn’t even have that much, which makes us more concerned.
I responded to a very similar comment of yours on the EA Forum.
To respond to the new content: I don’t know whether shifting Conjecture’s board once a certain valuation threshold is crossed would make the organization more robust. (Now that I think of it, I don’t even really know what you mean by “strong” or “robust” here; depending on what you mean, I can see myself disagreeing about whether that even tracks positive qualities of a corporation.) You should justify claims like these, and at least include them in the original post. Is it sketchy that they don’t have this?